1 Introduction

We consider a probability measure \(\pi (\hbox {d}x)\) with a density \(\pi (x)\propto \hbox {e}^{U(x)}\) on \(\mathbb {R}^d\) with an unknown normalising constant. A typical task is the computation of the following quantity

$$\begin{aligned} \pi (g):=\mathbb {E}_\pi g = \int _{\mathbb {R}^d} g(x) \pi (\hbox {d}x), \quad g\in L^1(\pi ). \end{aligned}$$
(1)

Even if \(\pi (\hbox {d}x)\) is given in an explicit form, quadrature methods are, in general, inefficient in high dimensions. On the other hand, probabilistic methods scale very well with the dimension and are often the method of choice. With this in mind, we explore the connection between the dynamics of stochastic differential equations (SDEs)

$$\begin{aligned} \hbox {d}X_{t}= \nabla U(X_{t})\hbox {d}t+\sqrt{2}\hbox {d}W_{t}, \quad X_{0} \in \mathbb {R}^{d}, \end{aligned}$$
(2)

and the target probability measure \(\pi (\hbox {d}x)\). The key idea is that under appropriate assumptions on U(x) one can show that the solution to (2) is ergodic and has \(\pi (\hbox {d}x)\) as its unique invariant measure (Has’minskiĭ 1980). However, there exist only a limited number of cases where analytical solutions to (2) are available and typically some form of approximation needs to be employed.

The numerical analysis approach (Kloeden and Platen 1992) is to discretise (2) and run the corresponding Markov chain over a long time interval. One drawback of this approach is that even though (2) is geometrically ergodic, the corresponding numerical discretisation might not be (Roberts and Tweedie 1996), and extra care is required when \(\nabla U\) is not globally Lipschitz (Mattingly et al. 2002; Talay 2002; Roberts and Tweedie 1996; Shardlow and Stuart 2000; Hutzenthaler et al. 2011). The numerical analysis approach also introduces bias, because the numerical invariant measure does not, in general, coincide with the exact one (Abdulle et al. 2014; Talay and Tubaro 1990), hence resulting in a biased estimate of \(\pi (g)\) in (1). Furthermore, if one uses the Euler–Maruyama method to discretise (2), then a computational complexity of \(\mathcal {O}(\varepsilon ^{-3})\) is required to achieve a root mean square error of order \(\mathcal {O}(\varepsilon )\) in the approximation of (1). Even if one mitigates the discretisation bias by a series of decreasing time steps combined with an appropriately weighted time average of the quantity of interest (Lamberton and Pagès 2002), the computational complexity still remains \(\mathcal {O}(\varepsilon ^{-3})\) (Teh et al. 2016).

An alternative way of sampling from \(\pi (\hbox {d}x)\) exactly, so that it does not face the bias issue introduced by pure discretisation of (2), is by using the Metropolis–Hastings algorithm (Hastings 1970). We will refer to this as the computational statistics approach. The fact that the Metropolis–Hastings algorithm leads to asymptotically unbiased samples of the probability measure is one of the reasons why it has been the method of choice in computational statistics. Moreover, unlike the numerical analysis approach, a computational complexity of \(\mathcal {O}(\varepsilon ^{-2})\) now suffices for achieving a root mean square error of order \( \mathcal {O}(\varepsilon )\) in the (asymptotically unbiased) approximation of (1). We notice that MLMC (Giles 2015) and the unbiasing scheme (Rhee and Glynn 2012, 2015; Glynn and Rhee 2014) are able to achieve the \(\mathcal {O}(\varepsilon ^{-2})\) complexity for computing expectations of SDEs on a fixed time interval [0, T], despite using biased numerical discretisations. We are interested in extending this approach to the case of ergodic SDEs on the time interval \([0, \infty )\); see also the discussion in Giles (2015).

A particular application of (2) is when one is interested in approximating the posterior expectations for a Bayesian inference problem. More precisely, if for a fixed parameter x the data \(\left\{ y_{i}\right\} _{i=1,\ldots ,N}\) are i.i.d. with densities \({\pi (y_i|x)}\), then \(\nabla U(x)\) becomes

$$\begin{aligned} \nabla U(x)=\nabla \log {\pi _{0}(x)}+\sum _{i=1}^{N}\nabla \log {\pi (y_{i}|x)}, \end{aligned}$$
(3)

with \(\pi _{0}(x)\) being the prior distribution of x. When dealing with problems where the number of data items N is large, both the standard numerical analysis and MCMC approaches suffer from the high computational cost of evaluating the likelihood terms \(\nabla \log {\pi (y_{i}|x)}\) over all data items \(y_{i}\). One way to circumvent this problem is the stochastic gradient Langevin dynamics (SGLD) algorithm introduced in Welling and Teh (2011), which replaces the sum of the N likelihood terms by an appropriately reweighted sum of \(s \ll N\) terms. This leads to the following recursion formula

$$\begin{aligned} x_{k+1}= & {} x_{k}+h\left( \nabla \log {\pi _{0}(x_{k})}+\frac{N}{s}\sum _{i=1}^{s}\nabla \log {\pi (y_{\tau _{i}^{k}}\vert x_{k})}\right) \nonumber \\&+\, \sqrt{2h}\xi _{k} \end{aligned}$$
(4)

where \(\xi _{k}\) is a standard Gaussian random variable on \(\mathbb {R}^{d}\) and \(\tau ^{k}=(\tau _{1}^{k},\ldots ,\tau _{s}^{k})\) is a random subset of \([N]=\{1,\ldots ,N\}\), generated, for example, by sampling with or without replacement from [N]. Notice that this corresponds to a noisy Euler discretisation, which for fixed N and s still has computational complexity \(\mathcal {O}(\varepsilon ^{-3})\), as discussed in Teh et al. (2016) and Vollmer et al. (2016). In this article, we show that a careful coupling between fine and coarse paths allows the application of the MLMC framework and hence a reduction of the computational complexity of the algorithm to \(\mathcal {O}(\varepsilon ^{-2}|\log {\varepsilon }|^{3})\). We also remark that coupling in time has recently been further developed in Fang and Giles (2016), Fang and Giles (2017) and Fang and Giles (2019) for Euler schemes.
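To fix ideas, a minimal Python sketch of one step of the recursion (4) is given below; the callables grad_log_prior and grad_log_lik, standing for \(\nabla \log {\pi _{0}}\) and \(\nabla \log {\pi (y_{i}|\cdot )}\), are hypothetical placeholders.

```python
import numpy as np

def sgld_step(x, h, data, grad_log_prior, grad_log_lik, s, rng):
    """One step of the SGLD recursion (4): subsample s of the N data items
    (here with replacement) and rescale their contribution by N/s."""
    N = len(data)
    idx = rng.choice(N, size=s, replace=True)          # the random subset tau^k
    drift = grad_log_prior(x)
    for i in idx:
        drift = drift + (N / s) * grad_log_lik(data[i], x)
    xi = rng.standard_normal(np.shape(x))              # standard Gaussian on R^d
    return x + h * drift + np.sqrt(2.0 * h) * xi
```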

We would like to stress that in our analysis of the computational complexity of MLMC for SGLD, we treat N and s as fixed parameters. Hence, our results show that in cases in which one is forced to consider \(s \ll N\) samples (e.g. in the big data regime, where the cost of taking into account all N samples is prohibitively large) MLMC for SGLD can indeed reduce the computational complexity in comparison with the standard MCMC. However, recently the authors of Nagapetyan et al. (2017) have argued that for the standard MCMC the gain in complexity of SGLD due to the decreased number of samples can be outweighed by the increase in the variance caused by subsampling. We believe that an analogous analysis for MLMC would be highly non-trivial and we leave it for future work.

In summary, the main contributions of this paper are:

  1. Extension of the MLMC framework to the time interval \([0,\infty )\) for (2) when U is strongly concave.

  2. A convergence theorem that allows the estimation of the MLMC variance using uniform-in-time estimates in the 2-Wasserstein metric for a variety of different numerical methods.

  3. A new method for estimating expectations with respect to the invariant measure without the need for accept/reject steps (as in MCMC). The methods we propose can be better parallelised than MCMC, since computations on all levels can be performed independently.

  4. The application of this scheme to stochastic gradient Langevin dynamics (SGLD), which reduces the complexity from \(\mathcal {O}(\varepsilon ^{-3})\) to \( \mathcal {O}(\varepsilon ^{-2}\left| \log \varepsilon \right| ^{3})\), much closer to the standard \(\mathcal {O}(\varepsilon ^{-2})\) complexity of MCMC.

The rest of the paper is organised as follows: In Sect. 2 we describe the standard MLMC framework, discuss the contracting properties of the true trajectories of (2) and describe an algorithm for applying MLMC with respect to time T for the true solution of (2). In Sect. 3 we present the new algorithm, as well as a framework that allows proving its convergence properties for a numerical method of choice. In Sect. 4 we present two examples of suitable numerical methods, while in Sect. 5 we describe a new version of SGLD with complexity \(\mathcal {O}(\varepsilon ^{-2}\left| \log \varepsilon \right| ^{3})\). We conclude in Sect. 6 where a number of relevant numerical experiments are described.

2 Preliminaries

In Sect. 2.1 we review the classical, finite-time MLMC framework, while in Sect. 2.2 we state the key asymptotic properties of solutions of (2) when U is strongly concave.

2.1 MLMC with fixed terminal time

Fix \(T>0\) and consider the problem of approximating \(\mathbb {E}[g(X_T)]\), where \(X_T\) is a solution of the SDE (2) and \(g:\mathbb {R}^d\rightarrow \mathbb {R}\). A classical approach to this problem consists of constructing a biased estimator (with the bias arising from time discretisation) of the form

$$\begin{aligned} \frac{1}{N}\sum _{i=1}^N g\big (\big (x_T^{M}\big )^{(i)}\big ), \end{aligned}$$
(5)

where \((x_T^{M})^{(i)}\) for \(i = 1, \ldots , N\) are independent copies of the random variable \(x_T^M\), with \(\{ x_{kh}^M \}_{k=0}^{M}\) being a discrete time approximation of (2) over [0, T] with discretisation parameter h and M time steps, i.e. \(Mh = T\). A central limit theorem for the estimator (5) has been derived in Duffie and Glynn (1995), where it was shown that its computational complexity is \(\mathcal {O}(\varepsilon ^{-3})\) for a root mean square error of \(\mathcal {O}(\varepsilon )\) (as opposed to the \(\mathcal {O}(\varepsilon ^{-2})\) that could be obtained if we could sample \(X_{T}\) without bias). The more recently developed MLMC approach allows recovering the optimal complexity \(\mathcal {O}(\varepsilon ^{-2})\), despite the fact that the estimator builds on biased samples. This is achieved by exploiting the following identity (Giles 2015; Kebaier 2005)

$$\begin{aligned} \mathbb {E}[g_L] = \mathbb {E}[g_0] + \sum _{\ell =1}^{L} \mathbb {E}[ g_{\ell } - g_{\ell -1} ], \end{aligned}$$
(6)

where \(g_\ell :=g(x_T^{M_\ell })\) and for any \(\ell =0,\ldots ,L\) the Markov chain \(\{x_{kh_{\ell }}^{M_\ell }\}_{k=0}^{M_{\ell }}\) is the discrete time approximation of (2) over [0, T], with discretisation parameter \(h_\ell \) and \(M_{\ell }\) time steps (hence \(M_\ell h_\ell = T\)). This identity leads to an unbiased estimator of \(\mathbb {E}[g_L]\) given by

$$\begin{aligned} \frac{1}{N_0} \sum _{i=1}^{N_0} g_0^{(i,0)} + \sum _{\ell =1}^{L}\left\{ \frac{1}{N_\ell } \sum _{i=1}^{N_\ell } \big (g_{\ell }^{(i,\ell )} - g_{\ell -1}^{(i,\ell )} \big ) \right\} , \end{aligned}$$

where \(g_{\ell }^{(i,\ell )}= g\big (\big (x_T^{M_\ell }\big )^{(i)}\big )\) and \(g_{\ell - 1}^{(i,\ell )}= g((x_T^{M_{\ell - 1}})^{(i)})\) are independent samples at level \(\ell \). The inclusion of the level \(\ell \) in the superscript \((i,\ell )\) indicates that independent samples are used at each level \(\ell \). The efficiency of MLMC lies in the coupling of \(g_{\ell }^{(i,\ell )}\) and \(g_{\ell -1}^{(i,\ell )}\) that results in small \(\mathrm {Var}[g_{\ell } - g_{\ell -1} ]\). In particular, for the SDE (2) one can use the same Brownian path to simulate \(g_{\ell }\) and \(g_{\ell -1}\) which, through the strong convergence property of the underlying numerical scheme used, yields an estimate for \(\mathrm {Var}[g_{\ell } - g_{\ell -1} ]\).
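Once coupled samples of \(g_{\ell }-g_{\ell -1}\) are available, the multi-level estimator above is assembled mechanically. A minimal sketch is given below; the helper sample_diff(level, rng) (returning one sample of \(g_0\) for level 0 and one coupled sample of \(g_{\ell }-g_{\ell -1}\) for \(\ell \ge 1\)) and the per-level sample sizes N_levels are hypothetical placeholders.

```python
import numpy as np

def mlmc_estimator(sample_diff, N_levels, rng):
    """Estimator of E[g_L] built from the telescoping identity (6):
    level 0 uses samples of g_0, level l >= 1 uses coupled samples of
    g_l - g_{l-1}, with independent samples across levels."""
    estimate = 0.0
    for level, n_samples in enumerate(N_levels):
        samples = [sample_diff(level, rng) for _ in range(n_samples)]
        estimate += float(np.mean(samples))
    return estimate
```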

By solving a constrained optimisation problem (balancing cost and accuracy), one can see that the reduction in computational complexity arises because the MLMC method efficiently combines many simulations on low-accuracy grids (at a correspondingly low cost) with relatively few simulations computed with high accuracy and high cost on very fine grids. It is shown in Giles (2015) that under the assumptions

$$\begin{aligned} \bigl |\mathbb {E}[g- g_{\ell } ]\bigr |\le c_1h_\ell ^{\alpha },\quad \mathrm {Var}[g_{\ell } - g_{\ell -1} ]\le c_2 h_\ell ^{\beta }, \end{aligned}$$
(7)

for some \(\alpha \ge 1/2\), \(\beta >0\), \(c_1>0\) and \(c_2>0\), the computational complexity of the resulting multi-level estimator with accuracy \(\varepsilon \) is proportional to

$$\begin{aligned} \mathcal {C}= {\left\{ \begin{array}{ll} \varepsilon ^{-2}, &{}\quad \beta >\gamma , \\ \varepsilon ^{-2}\log ^2(\varepsilon ), &{}\quad \beta =\gamma , \\ \varepsilon ^{-2-(1-\beta )/\alpha }, &{}\quad 0<\beta <\gamma \end{array}\right. } \end{aligned}$$

where the cost of simulating a single path at level \(\ell \) is of order \(h_\ell ^{-\gamma }\). Typically, the constants \(c_{1},c_{2}\) grow exponentially in the time T, as they follow from classical finite time weak and strong convergence analysis of the numerical schemes. The aim of this paper is to establish the bounds (7) uniformly in time, i.e. to find constants \(\widetilde{c}_1\), \(\widetilde{c}_2 > 0\) independent of T such that

$$\begin{aligned} \sup _{T>0}\bigl |\mathbb {E}[g - g_{\ell } ]\bigr |\le \widetilde{c}_1 h_\ell ^{\alpha },\quad \sup _{T>0}\mathrm {Var}[g_{\ell } - g_{\ell -1} ]\le \widetilde{c}_2 h_\ell ^{\beta }. \end{aligned}$$
(8)

Remark 2.1

The reader may notice that in the regime \(\beta >\gamma \), the computational complexity of \(\mathcal {O}(\varepsilon ^{-2})\) coincides with that of an unbiased estimator. Nevertheless, the MLMC estimator as defined here is still biased, with the bias controlled by the choice of the final level parameter L. However, in this setting it is possible to eliminate the bias by a clever randomisation trick (Rhee and Glynn 2015).

2.2 Properties of ergodic SDEs with strongly concave drifts

Consider the SDE (2) and let U satisfy the following condition

  • HU0 There exists a positive constant m such that for any \(x,y \in \mathbb {R}^d\)

    $$\begin{aligned} \left\langle \nabla U(y) - \nabla U(x),y-x\right\rangle \le - m | x-y| ^{2}, \end{aligned}$$
    (9)

which is also known as a one-sided Lipschitz condition. Condition HU0 is satisfied for strongly concave potentials, i.e. when there exists a constant \(m>0\) such that for any \(x,y \in \mathbb {R}^d\)

$$\begin{aligned} U(y) \le U(x)+\left\langle \nabla U(x),y-x\right\rangle - \frac{m}{2}| x-y| ^{2}. \end{aligned}$$

In addition, setting \(y=0\) in HU0 and applying Young's inequality shows that HU0 implies

$$\begin{aligned} \left\langle \nabla U(x),x\right\rangle \le - \frac{m}{2} | x| ^{2} + \frac{1}{2m} | \nabla U(0)|^2, \quad \forall x\in \mathbb {R}^d \end{aligned}$$
(10)

which in turn implies that

$$\begin{aligned} \left\langle \nabla U(x), x \right\rangle \le - m' | x| ^{2} + 2b | \nabla U(0) |^2, \quad \forall x\in \mathbb {R}^d \end{aligned}$$
(11)

for some \(m'>0\), \(b\ge 0\) (e.g. \(m'=m/2\) and \(b=1/(4m)\)). Condition HU0 ensures the contraction needed to establish uniform-in-time estimates for the solutions of (2). For transparency of the exposition we introduce the following flow notation for the solution to (2), starting at \(X_{0}=x\)

$$\begin{aligned} \psi _{s,t,W}(x) := x + \int _s^t\nabla U(X_{r})\hbox {d}r + \int _s^t \sqrt{2}\hbox {d}W_{r},\quad x\in \mathbb {R}^{d}. \end{aligned}$$
(12)

The theorem below demonstrates that solutions to (2) driven by the same Brownian motion, but with different initial conditions, enjoy an exponential contraction property.

Theorem 2.2

Let \((W_t)_{t\ge 0}\) be a standard Brownian motion in \(\mathbb {R}^d\). We fix random variables \(Y_0\), \(X_0\in \mathbb {R}^{d}\) and define \(X_T=\psi _{0,T,W}(X_0)\) and \(Y_T=\psi _{0,T,W}(Y_0)\). If HU0 holds, then

$$\begin{aligned} \mathbb {E}|X_{T}-Y_{T}|^{2} \le \mathbb {E}|X_{0}-Y_{0}|^{2} \hbox {e}^{-2mT} \end{aligned}$$
(13)

Proof

The result follows from Itô’s formula. Indeed, we have

$$\begin{aligned}&\frac{1}{2}\hbox {e}^{2mt} |X_{t}-Y_{t}|^{2}= \frac{1}{2}|X_{0}-Y_{0}|^{2}+\int _{0}^{t}m \hbox {e}^{2ms } |X_{s}-Y_{s}|^{2}\hbox {d}s\\&\quad + \int _{0}^{t} \hbox {e}^{2ms}\left\langle \nabla U(X_{s}) - \nabla U(Y_{s}),X_{s}-Y_{s}\right\rangle \hbox {d}s. \end{aligned}$$

Since both processes are driven by the same Brownian motion, no stochastic integral term appears above. Taking expectations and applying Assumption HU0 yields

$$\begin{aligned} \mathbb {E}|X_{T}-Y_{T}|^{2} \le \hbox {e}^{-2mT}\mathbb {E}|X_{0}-Y_{0}|^{2}, \end{aligned}$$

as required. \(\square \)

Remark 2.3

The 2-Wasserstein distance between probability measures \(\nu _{1}\) and \(\nu _{2}\) defined on a Polish metric space E, is given by

$$\begin{aligned} \mathcal {W}_{2}(\nu _{1},\nu _{2})= & {} \left( \inf _{\pi \in \Gamma (\nu _{1},\nu _{2})}\int _{E\times E}|x-y|^{2}\pi (\hbox {d}x,\hbox {d}y)\right) ^{\frac{1}{2}}, \end{aligned}$$

with \(\Gamma (\nu _{1},\nu _{2})\) being the set of couplings of \(\nu _{1}\) and \(\nu _{2}\) (all probability measures on \(E\times E\) with marginals \(\nu _{1}\) and \(\nu _{2}\)). We denote \(\mathcal {L}(\psi _{0,t,W}(x))= P_{t}(x,\cdot )\). That is, \(P_{t}\) is the transition kernel of the SDE (2). Since the choice of the same driving Brownian motion in Theorem 2.2 is an example of a coupling, Equation (13) implies

$$\begin{aligned} \mathcal {W}_{2}\left( P_{t}(x,\cdot ),P_{t}(y,\cdot )\right) \le | x-y| \exp \left( -mt\right) \end{aligned}$$
(14)

Consequently, \(P_{t}\) has a unique invariant measure, and thus, the process is ergodic (Hairer et al. 2011). In the present paper we are not concerned with determining couplings that are optimal; for practical considerations one should only consider couplings that are feasible to implement [see also discussion in Agapiou et al. (2018) and Giles and Szpruch (2014)].

2.3 Coupling in time T

For the MLMC method with different discretisation parameters on different levels, coupling with the same Brownian motion is not enough to obtain good upper bounds on the variance, as, in general, solutions to the SDE (2) are \(1/2\)-Hölder continuous (Krylov 2009), i.e. for any \(t>s>0\) there exists a constant \(C>0\) such that

$$\begin{aligned} \mathbb {E}|X_t - X_s |^2 \le C |t-s| \end{aligned}$$
(15)

and it is well known that this bound is sharp. As we shall see later, this bound will not lead to an efficient MLMC implementation. However, by suitable coupling of the SDE solutions on time intervals of length T and S, \(T > S\), respectively, we will be able to take advantage of the exponential contraction property obtained in Theorem 2.2.

Let \((T_\ell )_{\ell \ge 0}\) be an increasing sequence of positive real numbers. To couple processes with different terminal times \(T_i\) and \(T_j\), \(i\ne j\), we exploit the time-homogeneous Markov property of the flow (12). More precisely, for each \(\ell \ge 0\) one would like to construct a pair \((\mathcal {X}^{(f,\ell )},\mathcal {X}^{(c,\ell )})\) of solutions to (2), which we refer to as fine and coarse paths, such that

$$\begin{aligned} \mathcal {L}(\mathcal {X}^{(f,\ell )}(T_{\ell - 1}))&=\mathcal {L}(X_{T_{\ell }}), \nonumber \\ \mathcal {L}(\mathcal {X}^{(c,\ell )}(T_{\ell - 1}))&=\mathcal {L}(X_{T_{\ell -1}}), \quad \forall \ell \ge 0, \end{aligned}$$
(16)

and

$$\begin{aligned} \mathbb {E}|\mathcal {X}^{(f,\ell )}(T_{\ell - 1}) - \mathcal {X}^{(c,\ell )}(T_{\ell - 1}) |^2 \le \mathbb {E}|X_{T_\ell } - X_{T_{\ell -1}} |^2.\quad \end{aligned}$$
(17)

Following Rhee and Glynn (2012, 2015); Agapiou et al. (2018); Giles (2015), we propose a particular coupling denoted by \((X^{(f,\ell )},X^{(c,\ell )})\), and constructed in the following way (see also Fig. 1a)

  • First obtain a solution to (2) over \([0,T_\ell - T_{\ell -1}]\). We fix an \(\mathbb {R}^d\)-valued random variable X(0) and take \(X^{(f,\ell )}(0) = \psi _{0,(T_\ell - T_{\ell -1}),\tilde{W}}(X(0)) \).

  • Next couple fine and coarse paths on the remaining time interval \([0, T_{\ell -1}]\) using the same Brownian motion W, i.e.

    $$\begin{aligned} X^{(f,\ell )}(T_{\ell -1}) = \psi _{0,T_{\ell -1},W}(X^{(f,\ell )}(0)), \end{aligned}$$

    and

    $$\begin{aligned} X^{(c,\ell )}(T_{\ell -1}) = \psi _{0,T_{\ell -1},W}(X(0)). \end{aligned}$$
Fig. 1: Shifted couplings (figure not reproduced)

We note here that \(\nabla U(\cdot )\) in (2) is time homogeneous; hence, the same applies to the corresponding transition kernel \(\mathcal {L}(\psi _{0,t,W}(x))= P_{t}(x,\cdot )\), which implies that condition (16) holds. Now Theorem 2.2 yields

$$\begin{aligned} \mathbb {E}|X^{(f,\ell )}(T_{\ell -1}) - X^{(c,\ell )}(T_{\ell -1})|^{2} \le \mathbb {E}|X^{(f,\ell )}(0)-X(0)|^{2} \hbox {e}^{-2mT_{\ell -1}}. \end{aligned}$$
(18)

implying that condition (17) is also satisfied. We now take \(\rho >1\) and define

$$\begin{aligned} T_{\ell } := \frac{\log {2}}{2m} \rho (\ell +1) \quad \forall \ell \ge 0. \end{aligned}$$
(19)
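With this choice of \(T_{\ell }\), the contraction factor appearing in (18) becomes an explicit power of two; indeed,

$$\begin{aligned} T_{\ell -1} = \frac{\log {2}}{2m}\,\rho \,\ell \quad \Longrightarrow \quad \hbox {e}^{-2mT_{\ell -1}} = \hbox {e}^{-\rho \ell \log {2}} = 2^{-\rho \ell }, \end{aligned}$$

which is the factor used in the chain of inequalities below.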

In our case \(g_{\ell }^{(i,\ell )}=g((X^{(f,\ell )}(T_{\ell -1}))^{(i)})\) and \(g_{\ell -1}^{(i,\ell )} =g((X^{(c,\ell )}(T_{\ell -1}))^{(i)})\) and we assume that g is globally Lipschitz with Lipschitz constant K. Hence,

$$\begin{aligned}&\mathbb {E}|g(X^{(f,\ell )}(T_{\ell -1})) - g(X^{(c,\ell )}(T_{\ell -1}))|^{2}\\&\quad \le K^{2}\mathbb {E}|X^{(f,\ell )}(T_{\ell -1}) - X^{(c,\ell )}(T_{\ell -1})|^{2} \\&\quad \le K^{2}\mathbb {E}|X^{(f,\ell )}(0)-X(0)|^{2} \hbox {e}^{-2mT_{\ell -1}}\\&\quad \le K^{2}\mathbb {E}|X^{(f,\ell )}(0)-X(0)|^{2} 2^{-\rho \ell } \\&\quad \le K^{2}C | T_{\ell }-T_{\ell -1} |2^{-\rho \ell }, \end{aligned}$$

where the last inequality follows from (15).

3 MLMC in T for approximation of SDEs

Having described a coupling algorithm with good contraction properties, we now present the main algorithm in Sect. 3.1, while in Sect. 3.2 we present a general numerical analysis framework for proving the convergence of our algorithm.

3.1 Description of the general algorithm

We now focus on the numerical discretisation of the Langevin equation (2). In particular, we are interested in coupling the discretisations of (2) based on step size \(h_{\ell }\) and \(h_{\ell -1}\) with \(h_{\ell }=h_{0}2^{-\ell }\). Furthermore, as we are interested in computing expectations with respect to the invariant measure \(\pi (\hbox {d}x)\), we also increase the time endpoint \(T_{\ell }\uparrow \infty \) which is chosen such that \(\frac{T_{\ell }}{h_{0}}, \frac{T_{\ell }}{h_{\ell }}\in \mathbb {N}\). We illustrate the main idea using two generic discrete time stochastic processes \((x_k)_{k\in \mathbb {N}},(y_k)_{k\in \mathbb {N}}\) which can be defined as

$$\begin{aligned} x_{k+1}= S^f_{h,\xi _k}(x_k), \quad y_{k+1}=S^c_{h,\tilde{\xi }_k}(y_k), \end{aligned}$$
(20)

where \(S_{h,\xi _k}(x_k) = S(x_k,h,\xi _k)\) and the operators \(S^{f},S^{c}: \mathbb {R}^d \times \mathbb {R}_+ \times \mathbb {R}^{d\times m} \rightarrow \mathbb {R}^d \) are Borel measurable, whereas \(\xi ,\tilde{\xi }\) are random inputs to the algorithms. The operators \(S^f\) and \(S^c\) in (20) need not be the same. This extra flexibility allows analysing various coupling ideas.

For example, for the Euler discretisation we have

$$\begin{aligned} S_{h,\xi }(x)=x+h\nabla U(x)+\sqrt{2h} \xi , \end{aligned}$$

where \(\xi \sim \mathcal {N}(0,I)\). We will also use the notation \(P_{h}(x,\cdot )=\mathcal {L}\left( S_{h,\xi }(x)\right) \) for the corresponding Markov kernel.

For MLMC algorithms one evolves both fine and coarse paths jointly, over a time interval of length \(T_{\ell -1}\), by doing two steps for the finer level (with the time step \(h_\ell \)) and one on the coarser level (with the time step \(h_{\ell -1}\)). We will use the notation \((x^f_{\cdot }),(x^c_{\cdot })\) for

$$\begin{aligned} x_{k+\frac{1}{2}}^{f}&=S_{\frac{h}{2},\xi _{k+\frac{1}{2}}}^{f}\left( x_{k}^{f}\right) , \quad x_{k+1}^{f} =S_{\frac{h}{2},\xi _{k+1}}^{f}\left( x_{k+\frac{1}{2}}^{f}\right) \end{aligned}$$
(21)
$$\begin{aligned} x_{k+1}^{c}&=S_{h,\tilde{\xi }_{k+1}}^{c}\big (x_{k}^{c}\big ) . \end{aligned}$$
(22)

The algorithm generating \((x^{f}_{k})_{k\in \mathbb {N}/2}\) and \((x^{c}_{k})_{k\in \mathbb {N}}\) is presented in Algorithm 1.

[Algorithm 1 (pseudocode figure not reproduced)]
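A minimal Python sketch of one level of the coupled simulation is given below; it combines the shifted coupling of Sect. 2.3 with the step coupling (21)–(22), assumes Euler–Maruyama for both \(S^{f}\) and \(S^{c}\), and its exact bookkeeping may differ from that of Algorithm 1.

```python
import numpy as np

def coupled_level_sample(x0, grad_U, T_fine, T_coarse, h, rng):
    """Sketch of one level-l sample: the fine path uses step h/2 over time
    T_fine = T_l, the coarse path uses step h over time T_coarse = T_{l-1};
    over the final T_coarse units of time the two paths share their
    Brownian increments (two fine increments sum to one coarse increment)."""
    xf = np.array(x0, dtype=float)
    xc = np.array(x0, dtype=float)
    # initial stretch of length T_fine - T_coarse: fine path only
    for _ in range(int(round((T_fine - T_coarse) / (h / 2.0)))):
        xi = rng.standard_normal(xf.shape)
        xf = xf + (h / 2.0) * grad_U(xf) + np.sqrt(h) * xi    # sqrt(2*(h/2)) = sqrt(h)
    # coupled stretch of length T_coarse
    for _ in range(int(round(T_coarse / h))):
        xi1 = rng.standard_normal(xf.shape)
        xi2 = rng.standard_normal(xf.shape)
        xf = xf + (h / 2.0) * grad_U(xf) + np.sqrt(h) * xi1   # fine step 1
        xf = xf + (h / 2.0) * grad_U(xf) + np.sqrt(h) * xi2   # fine step 2
        xc = xc + h * grad_U(xc) + np.sqrt(h) * (xi1 + xi2)   # sqrt(2h)*(xi1+xi2)/sqrt(2)
    return xf, xc
```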

3.2 General convergence analysis

We will now present a general theorem for estimating the bias and the variance in the MLMC setup. We refrain from prescribing the exact dynamics of \((x_k)_{k\ge 0}\) and \((y_k)_{k\ge 0}\) in (20), as we seek general conditions allowing the construction of uniform-in-time approximations of (2) in the \(L^2\)-Wasserstein norm. The advantage of working in this general setting is that if we wish to work with more advanced numerical schemes than the Euler method (e.g. implicit, projected, adapted or randomised schemes) or more general noise terms (e.g. \(\alpha \)-stable processes), it will be sufficient to verify relatively simple conditions to assess the performance of the complete algorithm. To give the reader some intuition behind the abstract assumptions, we discuss specific methods in Sect. 4.

3.2.1 Uniform estimates in time

Definition 3.1

(Bias) We say that a process \((x_k)_{k\in \mathbb {N}}\) converges weakly uniformly in time with order \(\alpha >0\) to the solution \((X_t)_{t \ge 0}\) of the SDE (2), if there exists a constant \(c>0\) such that for any \(h > 0\),

$$\begin{aligned} \sup _{t\ge 0}|\mathbb {E}[g(X_t)] - \mathbb {E}[g(x_{\lfloor t/h \rfloor })]|\le c h^{\alpha }\, \quad g\in C^r_K(\mathbb {R}). \end{aligned}$$

We define MLMC variance as follows:

Definition 3.2

(MLMC variance) Let the operators in (21)–(22) satisfy that for all x

$$\begin{aligned} \mathcal {L}\left( S^{f}_{h,\xi }(x)\right) =\mathcal {L}\left( S_{h,\tilde{\xi }}^{c}(x)\right) . \end{aligned}$$
(23)

We say that the MLMC variance is of order \(\beta >0\) if there exists a constant \(c_V>0\) s.t. for any \(h > 0\),

$$\begin{aligned} \sup _{t\ge 0}\mathbb {E}|g(x^{c}_{\lfloor t/h\rfloor }) - g(x^{f}_{\lfloor t/h\rfloor })|^2 \le c_V h^{\beta }. \end{aligned}$$
(24)

3.2.2 Statement of sufficient conditions

We now discuss the necessary conditions imposed on a generic numerical method (20) to estimate MLMC variance. We decompose the global error into the one step error and the regularity of the scheme. To proceed we introduce the notation \(x^h_{k,x_s} \) for the process at time k with initial condition \(x_s\) at time \(s<k\). If it is clear from the context what initial condition is used, we just write \(x^h_{k}\). We also define the conditional expectation operator as \(\mathbb {E}_n[\cdot ]:=\mathbb {E}[\cdot | \mathcal {F}_{n}]\), where \(\mathcal {F}_n := \sigma \left( x^h_k : k \le n \right) \) .

We now have the following definition.

Definition 3.3

(\(L^2\)-regularity). We will say that the one step operator \(S : \mathbb {R}^d \times \mathbb {R}_+ \times \mathbb {R}^{d\times m} \rightarrow \mathbb {R}^d \) is \(L^2\)-regular uniformly in time if for any \(\mathcal {F}_n\)-measurable random variables \(x_n\), \(y_n\in \mathbb {R}^d\) there exist constants K, \(C_{\mathcal {R}}\), \(C_{\mathcal {H}}\), \(\widetilde{\beta }>0\) and random variables \(Z_{n+1}\), \(\mathcal {R}_{n+1} \in \mathcal {F}_{n+1}\) and \(\mathcal {H}_{n} \in \mathcal {F}_{n}\), such that for all \(h \in (0,1)\)

$$\begin{aligned} S_{h,\xi _{n+1}}(x_n) - S_{h,\xi _{n+1}}(y_n) = x_n - y_n + Z_{n+1} \end{aligned}$$

and

$$\begin{aligned}&\mathbb {E}_n[ |S_{h,\xi _{n+1}}(x_n) - S_{h,\xi _{n+1}}(y_n) |^2] \le (1 - Kh)|x_n - y_n |^2 \nonumber \\&\quad + \mathcal {R}_{n} \nonumber \\&\mathbb {E}_n[|Z_{n+1}|^2] \le \mathcal {H}_{n} |x_n - y_n |^2 h, \end{aligned}$$
(25)

where

$$\begin{aligned}&\sup _{n\ge 1}\mathbb {E}\left[ \sum _{i=0}^{n-1} \hbox {e}^{(i-(n-1))hK/2} \mathcal {R}_i \right] \le C_{\mathcal {R}} h^{\widetilde{\beta }}, \nonumber \\&\quad \sup _{n\ge 1}\mathbb {E}\left[ |\mathcal {H}_n|^2\right] \le C_{\mathcal {H}}. \end{aligned}$$
(26)

We now introduce the set of the assumptions needed for the proof of the main convergence theorem.

Assumption 1

Consider two processes \((x^{f}_{k})_{2k\in \mathbb {N}}\) and \((x^{c}_{k})_{k\in \mathbb {N}}\) obtained from the recursive application of the operators \(S^f_{h,\xi }(\cdot )\) and \(S^c_{h,\xi }(\cdot )\) as defined in (20). We assume that

H0:

There exists a constant \(H>0\) such that for all \(q>1\)

$$\begin{aligned} \sup _k \mathbb {E}|x^{f}_{k}|^{q} \le H \quad \text {and} \quad \sup _k \mathbb {E}|x^{c}_{k}|^q \le H. \end{aligned}$$
H1:

For any \(x\in \mathbb {R}^d\)

$$\begin{aligned} \mathcal {L}\left( S^{f}_{h,\xi }(x)\right) =\mathcal {L}\left( S_{h,\tilde{\xi }}^{c}(x)\right) . \end{aligned}$$
H2:

The operator \(S^f_{h,\xi }(\cdot )\) is \(L^2\)-regular uniformly in time.

Below, we present the main convergence result of this section. By analogy to (21)–(22), we use the notation

$$\begin{aligned} x^{f}_{{n,x^{c}_{{n-1}}}}&= S^f_{\frac{h}{2},\xi _{n}}\left( x^{f}_{{n-\frac{1}{2},x^{c}_{{n-1}}}} \right) \\ x^{f}_{{n-\frac{1}{2},x^{c}_{{n-1}}}}&= S^f_{\frac{h}{2},\xi _{n-\frac{1}{2}}}\left( x^{c}_{{n-1}} \right) \\ x^{c}_{{n,x^{c}_{{n-1}}}}&= S^c_{h,\tilde{\xi }_n}\left( x^{c}_{{n-1}} \right) . \end{aligned}$$

Using the estimates derived here, we can immediately estimate the rate of decay of MLMC variance.

Theorem 3.4

Take \((x^{f}_{n})_{n\in \mathbb {N}/2}\) and \((x^{c}_{n})_{n\in \mathbb {N}}\) with \(h\in (0,1]\) and assume that H0–H2 hold. Moreover, assume that there exist constants \(c_{s}>0,c_{w}>0\) and \(\alpha \ge \frac{1}{2}\), \(\beta \ge 0\), \(p \ge 1\) with \(\alpha \ge \frac{\beta }{2}\) such that for all \(n\ge 1\)

$$\begin{aligned} |\mathbb {E}_{n-1}\big (x^{c}_{{n,x^{c}_{{n-1}}}} - x^{f}_{{n,x^{c}_{{n-1}}}}\big )| \le c_{w}(1+ |x^{c}_{{n-1}}|^p)h^{\alpha +1}, \end{aligned}$$
(27)

and

$$\begin{aligned} \mathbb {E}_{n-1}\big [|x^{c}_{{n,x^{c}_{{n-1}}}} - x^{f}_{{n,x^{c}_{{n-1}}}} |^2\big ] \le c_{s}(1+|x^{c}_{{n-1}}|^{2p})h^{\beta +1}. \end{aligned}$$
(28)

Then, the global error is bounded by

$$\begin{aligned} \mathbb {E}[|x^{c}_{{T/h,x_{0}}} - x^{f}_{{T/h,y_0}} |^2]&\le |x_0-y_0 |^2 \hbox {e}^{- K/2 T} + 2\Gamma h^{\beta }/K \\&\quad + \sum _{j=0}^{n-1}\hbox {e}^{ (j-(n-1))h K /2 }\mathbb {E}(\mathcal {R}_{j})\,, \end{aligned}$$

where \(T/h = n\) and \(\Gamma \) is given by (29).

Proof

We begin with the following identity

$$\begin{aligned} x^{c}_{{n,x_{0}}} - x^{f}_{{n,y_{0}}}&= x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{f}_{{n-1}}} } \\&= \big (x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } \big ) + \big ( x^{f}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{f}_{{n-1}} } }\big ). \end{aligned}$$

We will be able to deal with the first term in the sum by using Equations (27) and (28), while the second term will be controlled because of the \(L^{2}\) regularity of the numerical scheme. Indeed, by squaring both sides in the equality above, we have

$$\begin{aligned}&|x^{c}_{{n,y_{0}}} - x^{f}_{{n,x_{0}}}|^2 = | x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } |^2 \\&\quad + |x^{f}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{f}_{{n-1}} } }|^2 \\&\quad +2 \big \langle x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } , x^{c}_{{n-1}} - x^{f}_{{n-1}}+ Z_{n}\big \rangle , \end{aligned}$$

where in the last line we have used Assumption H2. Applying the conditional expectation operator to both sides of the above equality, we obtain

$$\begin{aligned}&\mathbb {E}_{n-1}[|x^{c}_{{n,y_{0}}} - x^{f}_{{n,x_{0}}}|^2] = \mathbb {E}_{n-1}\big [|x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } |^2\big ] \\&\quad + \mathbb {E}_{n-1}\big [| x^{f}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{f}_{{n-1}} } }|^2\big ] \\&\quad + 2 \langle x^{c}_{{n-1}} -x^{f}_{{n-1}}, \mathbb {E}_{n-1}\big [x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } \big ] \rangle \\&\quad +2 \mathbb {E}_{n-1} \big \langle Z_{n}, x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } \big \rangle . \end{aligned}$$

Applying the Cauchy–Schwarz inequality and using the weak error estimate (27) leads to

$$\begin{aligned}&\mathbb {E}_{n-1}\big [|x^{c}_{{n,y_{0}}} - x^{f}_{{n,x_{0}}}|^2\big ] \le \mathbb {E}_{n-1}\big [|x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } |^2\big ] \\&\quad + \mathbb {E}_{n-1}\big [| x^{f}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{f}_{{n-1}} } }|^2\big ] \\&\quad + 2 c_{w} h^{\alpha +1} | x^{c}_{{n-1}} -x^{f}_{{n-1}}| \big (1+ |x^{c}_{{n-1}}|^p\big ) \\&\quad + 2 (\mathbb {E}_{n-1}[ |Z_{n}|^2])^{1/2} \big (\mathbb {E}_{n-1}[| x^{c}_{{n, x^{c}_{{n-1}} } } - x^{f}_{{n, x^{c}_{{n-1}} } } |^2]\big )^{1/2}. \end{aligned}$$

By Assumptions H0–H2 and the strong error estimate (28), we have

$$\begin{aligned}&\mathbb {E}_{n-1}[|x^{c}_{{n,y_{0}}} - x^{f}_{{n,x_{0}}}|^2] \le c_{s}(1+|x^{c}_{{n-1}}|^{2p})h^{\beta +1} \\&\qquad + |x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 (1 - K h ) + \mathcal {R}_{n-1}\\&\qquad + 2 c_{w} h^{\alpha +1} | x^{c}_{{n-1}} -x^{f}_{{n-1}}| (1+ |x^{c}_{{n-1}}|^p) \\&\qquad + 2 \Big ( \mathbb {E}_{n-1}[\mathcal {H}_{n}] |x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 h \Big )^{1/2} \Big ( c_{s}(1+|x^{c}_{{n-1}}|^{2p})h^{\beta +1}\Big )^{1/2} \\&\quad \le c_{s}(1+|x^{c}_{{n-1}}|^{2p})h^{\beta +1} + |x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 (1 - K h ) + \mathcal {R}_{n-1}\\&\qquad + 2 c_{w} h^{\alpha +1} | x^{c}_{{n-1}} -x^{f}_{{n-1}}| (1+ |x^{c}_{{n-1}}|^p) \\&\qquad + 2 \Big ( |x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 h \Big )^{1/2} \Big ( c_{s} \mathbb {E}_{n-1}\big [\mathcal {H}_{n} (1+|x^{c}_{{n-1}}|^{2p}) \big ] h^{\beta +1}\Big )^{1/2}, \end{aligned}$$

Taking expected values, applying the Cauchy–Schwarz inequality and using the fact that \(\alpha \ge \frac{\beta }{2} \) and \(h<1\) (and hence \(h^{\alpha +1}\le h^{\frac{\beta }{2} +1} \)) gives

$$\begin{aligned}&\mathbb {E}[|x^{c}_{{n,y_{0}}} - x^{f}_{{n,x_{0}}}|^2] \le c_{s}(1+\mathbb {E}[|x^{c}_{{n-1}}|^{2p}])h^{\beta +1} \\&\quad + \mathbb {E}[|x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2] (1 - K h ) + \mathbb {E}[\mathcal {R}_{n-1}]\\&\quad + 2\sqrt{2} c_{w} \big (\mathbb {E}\big [| x^{c}_{{n-1}} -x^{f}_{{n-1}}|^2 h \big ]\big )^{1/2} \big (\mathbb {E}\big [(1+ |x^{c}_{{n-1}}|^{2p}) h^{\beta +1}\big ]\big )^{1/2} \\&\quad + 2 \mathbb {E}\Big [ |x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 h \Big ]^{1/2} \mathbb {E}\Big [ c_{s}\mathcal {H}_{n-1}\big (1+|x^{c}_{{n-1}}|^{2p}\big ) h^{\beta +1} \Big ]^{1/2}. \end{aligned}$$

Now Young’s inequality gives that for any \(\varepsilon >0\)

$$\begin{aligned}&\mathbb {E}\big [|x^{c}_{{n-1}} -x^{f}_{{n-1}}|^2 h \big ]^{1/2} \mathbb {E}\big [\big (1+|x^{c}_{{n-1}}|^{2p}\big )h^{\beta +1}\big ]^{1/2}\\&\quad \le \varepsilon \mathbb {E}\big [\big (x^{c}_{{n-1}} -x^{f}_{{n-1}}\big )^2\big ]h + \frac{1}{4\varepsilon }\mathbb {E}\big [\big (1+|x^{c}_{{n-1}}|^{2p}\big )\big ]h^{\beta +1} \end{aligned}$$

and

$$\begin{aligned}&\mathbb {E}\Big [ |x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 h \Big ]^{1/2} \mathbb {E}\Big [ c_{s}\mathcal {H}_{n-1}\big (1+|x^{c}_{{n-1}}|^{2p}\big ) h^{\beta +1} \Big ]^{1/2}\\&\quad \le \varepsilon \mathbb {E}\Big [|x^{c}_{{n-1}}-x^{f}_{{n-1}}|^2 \Big ] h + \frac{1}{4\varepsilon } \mathbb {E}\Big [ c_{s}\mathcal {H}_{n-1}\big (1+|x^{c}_{{n-1}}|^{2p}\big )\Big ]h^{\beta +1} \,, \end{aligned}$$

while

$$\begin{aligned} \mathbb {E}\Big [ \mathcal {H}_{n-1}\big (1+|x^{c}_{{n-1}}|^{2p}\big )\Big ] \le \frac{1}{2} \mathbb {E}\Big [ |\mathcal {H}_{n-1}|^2\Big ] + \mathbb {E}\Big [ \big (1+|x^{c}_{{n-1}}|^{4p}\big )\Big ]. \end{aligned}$$

Let \(\gamma _n:=\mathbb {E}[|x^{c}_{{n,y_{0}}} - x^{f}_{{n,x_{0}}}|^2]\). Since \((1+\mathbb {E}[|x^{c}_{{n-1}}|^{2p}]) \le (1+\mathbb {E}[|x^{c}_{{n-1}}|^{4p}])\), we have

$$\begin{aligned} \gamma _n&\le \left( c_{s}H+\frac{2\sqrt{2}c_{w}H+c_{s}\big (\mathbb {E}[|\mathcal {H}_{n-1}|^2]+ 2 H\big )}{4\varepsilon } \right) h^{\beta +1} \\&\quad +\mathbb {E}[\mathcal {R}_{n-1}] + \gamma _{n-1} \big (1 -[K-(2\sqrt{2}c_{w}+2) \varepsilon ] h \big )\\ \end{aligned}$$

Fix \(\varepsilon =\frac{ K}{2(2\sqrt{2}c_{w}+2)}\), and define

$$\begin{aligned} \Gamma&:= \Bigg (c_{s}H + \big (2\sqrt{2}c_{w}+2\big )\nonumber \\&\quad \times \frac{\big (2\sqrt{2}c_{w}H+c_{s}\big (\mathbb {E}[\sup _n|\mathcal {H}_{n-1}|^2]+2 H\big )\big )}{2 K}\Bigg ). \end{aligned}$$
(29)

We have

$$\begin{aligned} \gamma _n \le \left( 1-K h/2 \right) \gamma _{n-1} + \Gamma h^{\beta +1}+\mathbb {E}[\mathcal {R}_{n-1}]. \end{aligned}$$
(30)

We complete the proof by Lemma 3.5. \(\square \)
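For completeness, a sketch of how Lemma 3.5 below is applied here: taking \(a_n=\gamma _n\), \(\lambda =-Kh/2\), \(g_n=\mathbb {E}[\mathcal {R}_n]\) and \(c=\Gamma h^{\beta +1}\), and using \(nh=T\) together with \((\hbox {e}^{n\lambda }-1)/\lambda \le -1/\lambda =2/(Kh)\), the lemma yields

$$\begin{aligned} \gamma _n \le \gamma _0\, \hbox {e}^{-KT/2} + \frac{2\Gamma }{K}\, h^{\beta } + \sum _{j=0}^{n-1} \mathbb {E}[\mathcal {R}_{j}]\, \hbox {e}^{(j-(n-1))hK/2}, \end{aligned}$$

which is the bound stated in the theorem.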

Lemma 3.5

Let \(g_n, c \ge 0\), \(n \in \mathbb {N}\), be given and assume that \(1+\lambda >0\). Then, if \(a_n \ge 0\), \(n \in \mathbb {N}\), satisfies

$$\begin{aligned} a_{n+1} \le a_n(1+ \lambda ) + g_{n} + c, \quad n \ge 0 \,, \end{aligned}$$

then

$$\begin{aligned} a_n \le a_0 \hbox {e}^{n\lambda } + c \frac{\hbox {e}^{n\lambda }-1}{\lambda } + \sum _{j=0}^{n-1} g_{j} \hbox {e}^{((n-1) - j)\lambda }, \qquad n \ge 1. \end{aligned}$$

Remark 3.6

Note that if we can choose \(\widetilde{\beta } > \beta \) in (26) (which, as we will see in Sect. 4, is the case, for example, for Euler and implicit Euler schemes), then from Theorem 3.4 we get

$$\begin{aligned} \mathbb {E}[|x^{c}_{{T/h,x_{0}}} - x^{f}_{{T/h,y_0}} |^2] \le |x_0-y_0|^2\hbox {e}^{-K/2T} + (2\Gamma /K + C_{\mathcal {R}})h^{\beta }. \end{aligned}$$

3.2.3 Optimal choice of parameters

Theorem 3.4 is fundamental for applying MLMC, as it guarantees that the variance estimate in (7) holds. In particular, we have the following lemma.

Lemma 3.7

Assume that all the assumptions from Theorem 3.4 hold. Let \(g(\cdot )\) be a Lipschitz function. Define

$$\begin{aligned} h_\ell =2^{-\ell }, \quad T_{\ell } \sim -\frac{2\beta }{K} \left( \log {h_{0}}+\ell \log {2} \right) , \quad \forall \ell \ge 0. \end{aligned}$$

Then the resulting MLMC variance satisfies

$$\begin{aligned} \text {Var}[\Delta _{\ell }] \le 2^{-\beta \ell }, \quad \Delta _{\ell }=g\left( x_{\frac{T_{\ell -1}}{h_{\ell -1}}}^{(f,\ell )}\right) -g\left( x_{\frac{T_{\ell -1}}{h_{\ell -1}}}^{(c,\ell )}\right) \end{aligned}$$

Proof

Since g is a Lipschitz function and

$$\begin{aligned} \mathbb {E}\left| x_{\frac{T_{\ell }-T_{\ell -1}}{h_{\ell }}}^{h_{\ell }}-x_{0}\right| ^{2}<\infty , \end{aligned}$$

the proof is a direct consequence of Theorem 3.4. \(\square \)

Remark 3.8

Unlike in the standard MLMC complexity theorem (Giles 2015), where the cost of simulating a single path is of order \(\mathcal {O}(h_\ell ^{-1})\), here we have \(\mathcal {O}(h_\ell ^{-1}|\log {h_\ell }|)\). This is due to the fact that the terminal times increase with the levels. For the case \(h_\ell =2^{-\ell }\) this results in a cost per path of \(\mathcal {O}(2^{\ell }\ell )\) and does not exactly fit the complexity theorem in Giles (2015). Clearly, in the case when the MLMC variance decays with \(\beta >1\), we still recover the optimal complexity of order \(\mathcal {O}(\varepsilon ^{-2})\). However, in the case \(\beta =1\), following the proof in Giles (2015), one can see that the complexity becomes \(\mathcal {O}(\varepsilon ^{-2}|\log {\varepsilon }|^{3})\).
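For illustration, a sketch (not part of the algorithms above) of the standard sample-size allocation of Giles (2015), with the per-path cost taken as \(\mathcal {O}(2^{\ell }(\ell +1))\) as in this remark and the variance \(V_{\ell }\propto 2^{-\beta \ell }\) as in Lemma 3.7; the constant V0 is a hypothetical input that would be estimated in practice.

```python
import numpy as np

def mlmc_sample_sizes(eps, beta, V0, L):
    """Standard MLMC allocation N_l ~ sqrt(V_l / C_l) * sum_k sqrt(V_k C_k)
    (Giles 2015), with V_l = V0 * 2^(-beta*l) and per-path cost
    C_l = 2^l * (l + 1) reflecting the growing terminal times T_l."""
    levels = np.arange(L + 1)
    V = V0 * 2.0 ** (-beta * levels)
    C = 2.0 ** levels * (levels + 1)
    total = np.sum(np.sqrt(V * C))
    return np.ceil(2.0 * eps ** (-2) * np.sqrt(V / C) * total).astype(int)
```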

Remark 3.9

In the proof above we have assumed that K is independent of h, while we have also used crude bounds in order not to deal directly with all the individual constants, since these would be dependent on the numerical schemes used.

Example 3.10

In the case of the Euler–Maruyama method, as we see from the analysis in Sect. 4.1, \(K=2m'-L^{2}h_{\ell }\) and \(\beta =2\), while \(\mathcal {R}_{n}=0\) and \(\mathcal {H}_{n}=L\). Here L is the Lipschitz constant of the drift \(\nabla U(x)\).

4 Examples of suitable methods

In this section we present two (out of many) numerical schemes that fulfil the conditions of Theorem 3.4. In particular, we need to verify that the scheme is \(L^2\)-regular uniformly in time, that it has bounded numerical moments as in \(\mathbf H0 \), and that it satisfies the one-step error estimates (27)–(28). Note that for both methods discussed in this section we verify condition (25) with \(h^2\) instead of h. However, since in (25) we consider \(h \in (0,1)\), both (35) and (42) imply (25).

4.1 Euler–Maruyama method

We start by considering the explicit Euler scheme

$$\begin{aligned} S_{h,\xi }^{f}(x)=x+h\nabla U(x) +\sqrt{2h}\xi , \end{aligned}$$
(31)

with \(S^{f}=S^{c}\), that is, we use the same numerical method for the fine and coarse paths. In order to recover the integrability and regularity conditions, we need to impose further assumptions on the potential U. In particular, in addition to Assumption HU0, we assume that

HU1:

There exists a constant L such that for any \(x,y \in \mathbb {R}^d\)

$$\begin{aligned} | \nabla U(x)-\nabla U(y)| \le L| x-y| \end{aligned}$$

As a consequence of this assumption, we have

$$\begin{aligned} | \nabla U(x)| \le L| x| + | \nabla U(0) | \end{aligned}$$
(32)

We can now prove the \(L^{2}\)-regularity in time of the scheme.

\(L^2\)-regularity Since regularity is a property of the numerical scheme itself and does not relate to the coupling between fine and coarse levels, for simplicity of notation we work directly with

$$\begin{aligned} x_{{n+1,x_{n}}}= S^{f}_{h,\xi _{n+1}}(x_{n}) . \end{aligned}$$
(33)

In particular, the following lemma holds.

Lemma 4.1

(\(L^2\)-regularity) Let HU0 and HU1 hold. Then, the explicit Euler scheme is \(L^2\)-regular, i.e.

$$\begin{aligned} \mathbb {E}_{n-1}[|x_{n,x_{n-1}} - x_{n,y_{n-1}}|^2]&\le (1 - (2m-L^2h)h ) |x_{n-1}-y_{n-1} |^2 \end{aligned}$$
(34)
$$\begin{aligned} \mathbb {E}_{n-1}[|Z_n|^2]&\le h^2 L^2 |x_{n-1} - y_{n-1} |^2 \end{aligned}$$
(35)

Proof

The difference between the Euler schemes taking values \(x_{n-1}\) and \(y_{n-1}\) at time \(n-1\) is given by

$$\begin{aligned} x_{n,x_{n-1}} - x_{n,y_{n-1}}&= x_{n-1} - y_{n-1} \\&\quad + h( \nabla U(x_{n-1}) - \nabla U(y_{n-1})). \end{aligned}$$

This, along with HU0 and HU1, leads to

$$\begin{aligned}&\mathbb {E}_{n-1}[|x_{n,x_{n-1}} - x_{n,y_{n-1}}|^2]= |x_{n-1} - y_{n-1}|^2\\&\qquad + 2h \left\langle \nabla U(x_{n-1}) - \nabla U(y_{n-1}),x_{n-1}-y_{n-1}\right\rangle \\&\qquad + | \nabla U(x_{n-1}) - \nabla U(y_{n-1}) |^{2} h^2 \\&\quad \le |y_{n-1}-x_{n-1}|^2 (1 - 2mh + L^2 h^2) \\&\quad = |y_{n-1}-x_{n-1}|^2 (1 - (2m-L^2h)h ). \end{aligned}$$

This proves the first part of the lemma. Next, due to HU1

$$\begin{aligned} \mathbb {E}_{n-1}[|Z_n|^2]&= h^2 \mathbb {E}_{n-1}[| \nabla U(x_{n-1}) - \nabla U(y_{n-1}) |^2] \\&\le h^2 L^2 |x_{n-1} - y_{n-1} |^2. \end{aligned}$$

\(\square \)

Integrability In the Lipschitz case we only require mean square integrability. This will become apparent when we analyse the one-step error: (27) and (28) will hold with \(p=1\).

Lemma 4.2

(Integrability) Let HU0 and HU1 hold. Then,

$$\begin{aligned} \mathbb {E}[|x_{n}|^2]&\le \mathbb {E}|x_{0}|^2 \exp \{-(2 m' - L^2h) nh \} \\&\quad + 2 (b | \nabla U(0) |^2+dh) \frac{1-\exp \{-(2 m' - L^2h) nh \}}{(2 m' - L^2h) h } \end{aligned}$$

Proof

We have

$$\begin{aligned} |x_{n}|^2&= |x_{n-1}|^2 + |\nabla U(x_{n-1})|^2 h^2 +2h |\xi _{n}|^{2} \\&\quad + 2 h x^\mathrm{T}_{n-1} \nabla U(x_{n-1}) +2\sqrt{2h}\,x^\mathrm{T}_{n-1} \xi _{n} \\&\quad + 2\sqrt{2}\,h^{3/2} \xi _{n}^\mathrm{T} \nabla U(x_{n-1}) \end{aligned}$$

and hence, using (11)

$$\begin{aligned} \mathbb {E}|x_{n}|^2 \le \mathbb {E}|x_{n-1}|^2 (1 - 2 m' h + L^2 h^2) + 2b | \nabla U(0) |^2+2dh. \end{aligned}$$

We can now use Lemma 3.5

$$\begin{aligned} \mathbb {E}|x_{n}|^2&\le \mathbb {E}|x_{0}|^2 \exp \{-(2 m' - L^2h) nh \} \\&\quad + 2 (b | \nabla U(0) |^2+dh) \frac{1-\exp \{-(2 m' - L^2h) nh \}}{(2 m' - L^2h) h } \end{aligned}$$

The proof for \(q>2\) can be done in a similar way by using the binomial theorem. \(\square \)

One-step error estimates Having proved \(L^{2}\)-regularity and integrability for the Euler scheme, we are now left with the task of proving inequalities (27) and (28) for Euler schemes coupled as in Algorithm 1. It is enough to prove the results for \(n=1\). We note that \(x^{f}_{0}=x^{c}_{0}=x\), and we have the following lemma.

Lemma 4.3

(One-step errors) Let HU0 and HU1 hold. Then, the weak one-step distance between Euler schemes with time steps \(\frac{h}{2}\) and h, respectively, is bounded by

$$\begin{aligned}&| \mathbb {E}[ x^{f}_{{1,x}}- x^{c}_{{1,x}} ] | \nonumber \\&\quad \le \frac{h^{3/2}}{2} L \left( \mathbb {E}\left[ \frac{\sqrt{h}}{2} \left( L|x| + | \nabla U(0) |\right) \right] + \sqrt{\frac{2d}{\pi }} \right) . \end{aligned}$$
(36)

The one-step \(L^2\) distance can be estimated as

$$\begin{aligned} \mathbb {E}| x^{f}_{{1,x}}- x^{c}_{{1,x}} | ^{2} \le h^3 \frac{L^2}{4}\left( \frac{h}{2} ( |x|^{2} + | \nabla U(0) |^2 ) + d \right) \end{aligned}$$
(37)

If, in addition to HU0 and HU1, \(U\in C^3\) and

$$\begin{aligned} | \partial ^{2}U(x) | + | \partial ^{3} U(x) | \le C, \quad \forall x\in \mathbb {R}^d, \end{aligned}$$

then the order in h of the weak error bound can be improved, i.e.

$$\begin{aligned} | \mathbb {E}[ x^{f}_{{1,x}}- x^{c}_{{1,x}} ] |&\le C h^2 \mathbb {E}\big [ |x | + h|x |^2 \nonumber \\&\quad + |\nabla U(0)| + h |\nabla U(0)|^2 + d \big ]. \end{aligned}$$
(38)

Proof

We calculate

$$\begin{aligned}&x^{f}_{{1,x}}- x^{c}_{{1,x}} \nonumber \\&\quad = x+\frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}+\frac{h}{2}\nabla U\left( x+\frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}\right) \nonumber \\&\qquad +\sqrt{h}\xi _{2} -\left( x+h\nabla U(x)+\sqrt{h}\left( \xi _{1}+\xi _{2}\right) \right) \nonumber \\&\quad = \frac{h}{2}\nabla U \left( x+\frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}\right) -\frac{h}{2}\nabla U(x). \end{aligned}$$
(39)

It then follows from HU1 that

$$\begin{aligned} | \mathbb {E}[ x^{f}_{{1,x}}- x^{c}_{{1,x}} ] | \le \frac{h^{3/2}}{2} L \mathbb {E}| \frac{\sqrt{h}}{2}\nabla U(x)+\xi _{1}|. \end{aligned}$$

Furthermore, if we use (32), the triangle inequality and the fact that \(\mathbb {E}| \xi _{1} |=\sqrt{\frac{2d}{\pi }}\), we obtain (36). If we now assume that \(U\in C^3\), then for \(\delta _t = x + t(\frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1})\), \(t \in [0,1]\), we write

$$\begin{aligned}&\nabla U \left( x+\frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}\right) = \nabla U(x) \\&\quad + \sum _{|\alpha |=1}\partial ^{\alpha } \nabla U(x) \left( \frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}\right) ^{\alpha } \\&\quad + \sum _{|\alpha |=2}\int _0^1 (1-t) \partial ^{\alpha } \nabla U(\delta _t)\, dt \, \left( \frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}\right) ^{\alpha }, \end{aligned}$$

where we used multi-index notation. Consequently,

$$\begin{aligned}&\left| \mathbb {E}\left[ \nabla U \left( x+\frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1} \right) -\nabla U(x) \right] \right| \\&\quad \le C h^2 \mathbb {E}\left[ |x | + h|x |^2 + |\nabla U(0)| + h |\nabla U(0)|^2 + |\xi _{1}|^{2} \right] , \end{aligned}$$

which, together with \(\mathbb {E}[|\xi _{1} |^2]=d\), gives (38). Equation (37) trivially follows from (39) by observing that

$$\begin{aligned}&\mathbb {E}| x^{f}_{{1,x}}- x^{c}_{{1,x}} | ^{2}\\&\quad \le L^{2}\frac{h^{2}}{4}\mathbb {E}| \frac{h}{2}\nabla U(x)+\sqrt{h}\xi _{1}| ^{2}\\&\quad \le h^3 \frac{L^2}{4}\left( \frac{h}{2} ( |x|^{2} + | \nabla U(0) |^2 ) + d \right) \end{aligned}$$

\(\square \)

Remark 4.4

In the case of a log-concave target, the bias of MLMC using the Euler method can be explicitly quantified using the results from Durmus and Moulines (2016).

4.2 Non-Lipschitz setting

In the previous subsection we found that, in order to analyse the regularity and the one-step error of the explicit Euler approximation, we had to impose the additional assumption that \(\nabla U(x)\) is globally Lipschitz. This is necessary since, in the absence of this condition, the Euler method can be transient or even divergent (Roberts and Tweedie 1996; Hutzenthaler et al. 2018). However, in many applications of interest this is a rather restrictive condition. An example is the potential

$$\begin{aligned} U(x) = -\frac{x^4}{4} -\frac{x^2}{2}. \end{aligned}$$

A standard way to deal with this is to use either an implicit scheme or specially designed explicit schemes (Hutzenthaler and Jentzen 2015; Szpruch and Zhang 2018). Here we will study only the implicit Euler scheme.

4.2.1 Implicit Euler method

Here we will focus on the implicit Euler scheme

$$\begin{aligned} x_{n}=x_{n-1}+h\nabla U(x_{n})+\sqrt{2h}\xi _{n} \end{aligned}$$

We will assume that Assumption HU0 holds and moreover replace HU1 with

HU1’:

Let \(k\ge 1\). There exists a constant L such that for any \(x,y \in \mathbb {R}^d\)

$$\begin{aligned} | \nabla U(x)-\nabla U(y)| \le L\big (1+ |x|^{k-1} + |y|^{k-1} \big )| x-y| \end{aligned}$$

As a consequence of this assumption, we have

$$\begin{aligned} | \nabla U(x)| \le L| x|^k + | \nabla U(0) | \end{aligned}$$
(40)
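Each step of the implicit scheme requires solving a nonlinear equation in \(x_{n}\). A minimal sketch for the one-dimensional example potential \(U(x)=-x^{4}/4-x^{2}/2\) above, using Newton's method (an illustration only, not a prescribed solver), is as follows.

```python
import numpy as np

def implicit_euler_step(x_prev, h, rng, max_iter=50, tol=1e-12):
    """One implicit Euler step x_n = x_{n-1} + h*gradU(x_n) + sqrt(2h)*xi_n
    for gradU(x) = -x**3 - x, solved with Newton's method.  The map
    F(x) = x - h*gradU(x) - rhs is strictly increasing, so the root is unique."""
    rhs = x_prev + np.sqrt(2.0 * h) * rng.standard_normal()
    x = x_prev                                    # initial guess
    for _ in range(max_iter):
        F = x + h * (x ** 3 + x) - rhs            # F(x) = x - h*gradU(x) - rhs
        dF = 1.0 + h * (3.0 * x ** 2 + 1.0)       # F'(x) > 0 for all x
        step = F / dF
        x = x - step
        if abs(step) < tol:
            break
    return x
```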

Integrability Uniform-in-time bounds on the pth moments of \(x_n\) for all \(p \ge 1\) can be easily deduced from the results in Mao and Szpruch (2013a, b). Nevertheless, for the convenience of the reader, we present the analysis of the regularity of the scheme, where the effect of the implicitness on the regularity quickly becomes apparent.

\(L^2\)-regularity

Lemma 4.5

(\(L^2\)-regularity) Let HU0 and HU1’ hold. Then, an implicit Euler scheme is \(L^2\)-regular, i.e.

$$\begin{aligned} \mathbb {E}_{n-1}[|x_{n,x_{n-1}} - x_{n,y_{n-1}}|^2]&\le (1 - 2mh ) |x_{n-1} - y_{n-1} |^2 \nonumber \\&\quad + \mathcal {R}_{n-1}, \end{aligned}$$
(41)

and

$$\begin{aligned} \sum _{k=0}^{\infty }\mathcal {R}_k \le 0. \end{aligned}$$

Moreover,

$$\begin{aligned} \mathbb {E}_{n-1} [ |Z_n|^2 ] \le h^2 \mathcal {H}_{n-1} | x_{n-1}-y_{n-1} |^2 \,, \end{aligned}$$
(42)

where \(\mathcal {H}_{n-1}\) is defined by (43).

Proof

The difference between the implicit Euler schemes taking values \(x_{n-1}\) and \(y_{n-1}\) at time \(n-1\) is given by

$$\begin{aligned} x_{n,x_{n-1}} - x_{n,y_{n-1}} = x_{n-1} - y_{n-1} + h( \nabla U(x_{n}) - \nabla U(y_{n})). \end{aligned}$$

This, along with HU0, leads to

$$\begin{aligned}&| x_{n,x_{n-1}} - x_{n,y_{n-1}} |^2= | x_{n-1} - y_{n-1} |^2 \\&\qquad + 2h \left\langle \nabla U(x_{n}) - \nabla U(y_{n}),x_{n}-y_{n}\right\rangle \\&\qquad - | \nabla U(x_{n}) - \nabla U(y_{n}) |^{2} h^2 \\&\quad \le | x_{n-1} - y_{n-1} |^2 - 2mh | x_{n,x_{n-1}}-x_{n,y_{n-1}} |^2 \\ \end{aligned}$$

This implies

$$\begin{aligned} | x_{n,x_{n-1}} - x_{n,y_{n-1}} |^2&\le | x_{n-1}-y_{n-1}|^2 \frac{1}{ 1+ 2mh} \\&\le | x_{n-1}-y_{n-1}|^2 \left( 1 - \frac{2mh}{1+2mh}\right) . \end{aligned}$$

Next, we write

$$\begin{aligned} | x_{n,x_{n-1}} - x_{n,y_{n-1}} |^2&\le | x_{n-1} - y_{n-1} |^2 - 2mh | x_{n}-y_{n} |^2 \\&= ( 1 - 2mh )| x_{n-1} - y_{n-1} |^2 \\&\quad - 2mh( | x_{n}-y_{n} |^2 - | x_{n-1}-y_{n-1} |^2 ). \end{aligned}$$

In view of Definition 3.3, we define

$$\begin{aligned} \mathcal {R}_k := - 2mh( | x_{k}-y_{k} |^2 - | x_{k-1}-y_{k-1} |^2 ), \end{aligned}$$

and notice that

$$\begin{aligned} \sum _{k=1}^{n}\mathcal {R}_k = -2mh | x_{n}-y_{n} |^2 \le 0. \end{aligned}$$

Hence, the proof of the first statement in the lemma is completed. Now, due to HU1’

$$\begin{aligned}&|Z_n|^2 = h^2 | \nabla U(x_{n}) - \nabla U(y_{n}) |^2 \\&\quad \le h^2 L^2(1+ |x_{n}|^{k-1} + |y_n|^{k-1} )^2| x_n-y_n |^2 \\&\quad \le h^2\left( 1 - \frac{2mh}{1+2mh} \right) L^2(1+ |x_{n}|^{k-1} + |y_n|^{k-1} )^2| x_{n-1}-y_{n-1} |^2. \end{aligned}$$

Observe that

$$\begin{aligned} \mathbb {E}_{n-1}[| x_{n} |^2]&= |x_{n-1}|^2 \\&\quad + \mathbb {E}_{n-1}[ 2h \left\langle \nabla U(x_{n}) ,x_{n} \right\rangle - | \nabla U(x_{n}) |^{2} h^2 ] + h\\&\le | x_{n-1} |^2 - m h | x_{n}|^2 + h(| \nabla U (0)|^2 + 1). \end{aligned}$$

Consequently,

$$\begin{aligned} \mathbb {E}_{n-1}[| x_{n} |^2] \le&\ \frac{1}{1+mh}\left( |x_{n-1} |^2 + h(| \nabla U (0)|^2 + 1) \right) . \end{aligned}$$

Similarly, it can be shown that \(\mathbb {E}_{n-1}[| x_{n} |^k]\) can be expressed as a function of \(|x_{n-1}|^k\) for \(k > 2\), cf. Mao and Szpruch (2013a, b). This in turn implies that there exists a constant \(C > 0\) s.t.

$$\begin{aligned} \mathcal {H}_{n-1}&= \mathbb {E}_{n-1} \left[ L^2 \left( 1 - \frac{2mh}{1+2mh} \right) (1+ |x_{n}|^{k-1} + |y_n|^{k-1})^2 \right] \nonumber \\&\le C \big (1 + |x_{n-1}|^{2(k-1)} + |y_{n-1}|^{2(k-1)}\big ). \end{aligned}$$
(43)

Due to uniform integrability of the implicit Euler scheme, (26) holds. \(\square \)

One-step error estimates Having established integrability, estimating the one-step error follows exactly the same line of argument as in Lemma 4.3, and we therefore skip it.

5 MLMC for SGLD

In this section we discuss the multi-level Monte Carlo method for Euler schemes with inaccurate (randomised) drifts. Namely, we consider

$$\begin{aligned} S_{h,\xi ,\tau }(x) = x + hb(x,\tau ) + \sqrt{2h}\xi \,, \end{aligned}$$
(44)

where \(b: \mathbb {R}^d \times \mathbb {R}^k \rightarrow \mathbb {R}^d\) and an \(\mathbb {R}^k\)-valued random variable \(\tau \) are such that

$$\begin{aligned} \mathbb {E}[b(x,\tau )] = \nabla U(x) \text { for any } x \in \mathbb {R}^d. \end{aligned}$$
(45)
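For instance, for the subsampled drift of (4), i.e. \(b(x,\tau )=\nabla \log {\pi _{0}(x)}+\frac{N}{s}\sum _{i=1}^{s}\nabla \log {\pi (y_{\tau _{i}}|x)}\) with \(\tau \) sampled uniformly with replacement from [N], condition (45) can be checked directly:

$$\begin{aligned} \mathbb {E}[b(x,\tau )]&= \nabla \log {\pi _{0}(x)}+\frac{N}{s}\sum _{i=1}^{s}\mathbb {E}\big [\nabla \log {\pi (y_{\tau _{i}}|x)}\big ] \\&= \nabla \log {\pi _{0}(x)}+\frac{N}{s}\cdot s\cdot \frac{1}{N}\sum _{j=1}^{N}\nabla \log {\pi (y_{j}|x)} = \nabla U(x). \end{aligned}$$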

Our main application to Bayesian inference will be discussed in Sect. 5.1. Let us now take a sequence \((\tau _n)_{n=1}^{\infty }\) of mutually independent random variables satisfying (45). We assume that \((\tau _n)_{n=1}^{\infty }\) are also independent of the i.i.d. random variables \((\xi _n)_{n=1}^{\infty }\) with \(\xi _n \sim \mathcal {N}(0,I)\). By analogy to the notation we used for the Euler scheme in (33), we will denote

$$\begin{aligned} \bar{x}_{n+1, \bar{x}_n} = S^f_{h,\xi _{n+1},\tau _{n+1}}(\bar{x}_n). \end{aligned}$$
(46)

In the sequel we will perform a one-step analysis of the scheme defined in (46) by considering the random variables

$$\begin{aligned} \bar{x}^f_{1,x}&=S^f_{\frac{h}{2},\xi _{2},\tau ^{f,2}}\circ S^f_{\frac{h}{2},\xi _{1},\tau ^{f,1}}(x) \nonumber \\ \bar{x}^{c}_{1,x}&=S^c_{h,\frac{1}{\sqrt{2}}\left( \xi _{1}+\xi _{2}\right) ,\tau ^c}(x) \,, \end{aligned}$$
(47)

where \(\xi _1\), \(\xi _2 \sim \mathcal {N}(0,I)\) and \(\tau ^{f,1}\), \(\tau ^{f,2}\) and \(\tau ^{c}\) are \(\mathbb {R}^k\)-valued random variables satisfying (45). In particular, \(\tau ^{f,1}\) and \(\tau ^{f,2}\) are assumed to be independent, but \(\tau ^c\) is not necessarily independent of \(\tau ^{f,1}\) and \(\tau ^{f,2}\). We note that in (47) we have coupled the noise between the fine and the coarse paths synchronously, i.e. as in Algorithm 1. A natural question is how one should couple the random variables \(\tau \) at different levels. In particular, in order for the telescoping sum identity (6) to remain valid, one needs to have

$$\begin{aligned} \mathcal {L}\left( \tau ^{f,1}\right) =\mathcal {L}\left( \tau ^{f,2}\right) =\mathcal {L}\left( \tau ^{c}\right) . \end{aligned}$$
(48)

We can of course just take \(\tau ^c\) independent of \(\tau ^{f,1}\) and \(\tau ^{f,2}\), but other choices are also possible; see Sect. 5.1 for the discussion in the context of the SGLD applied to Bayesian inference.

[Algorithm 2 (pseudocode figure not reproduced)]
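A minimal Python sketch of one coupled step of (47), as used in Algorithm 2, is given below; the drift estimator b(x, idx), standing for \(b(x,\tau )\) in (44), is a hypothetical placeholder (e.g. the reweighted subsampled gradient of (4)), and here \(\tau ^{c}\) is simply drawn independently of \(\tau ^{f,1},\tau ^{f,2}\), which is one admissible choice.

```python
import numpy as np

def coupled_sgld_step(xf, xc, h, b, N, s, rng):
    """One coupled step of (47): two fine SGLD steps of size h/2 and one
    coarse step of size h, sharing the Gaussian noise; the subsample on the
    coarse level is drawn independently (see Sect. 5.1 for other couplings)."""
    xi1 = rng.standard_normal(np.shape(xf))
    xi2 = rng.standard_normal(np.shape(xf))
    tau_f1 = rng.choice(N, size=s, replace=True)
    tau_f2 = rng.choice(N, size=s, replace=True)
    tau_c = rng.choice(N, size=s, replace=True)        # independent choice of tau^c
    # fine path: noise sqrt(2*(h/2)) = sqrt(h)
    xf = xf + (h / 2.0) * b(xf, tau_f1) + np.sqrt(h) * xi1
    xf = xf + (h / 2.0) * b(xf, tau_f2) + np.sqrt(h) * xi2
    # coarse path: noise sqrt(2h) * (xi1 + xi2) / sqrt(2) = sqrt(h) * (xi1 + xi2)
    xc = xc + h * b(xc, tau_c) + np.sqrt(h) * (xi1 + xi2)
    return xf, xc
```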

In order to bound the global error for our algorithm, we make the following assumptions on the function b in (44).

Assumption 2

  (i)

    There is a constant \(\bar{L} > 0\) such that for any \(\mathbb {R}^k\)-valued random variable \(\tau \) satisfying (45) and for any x, \(y \in \mathbb {R}^d\) we have

    $$\begin{aligned} \mathbb {E} [|b(x,\tau ) - b(y,\tau )|] \le \bar{L} |x-y|. \end{aligned}$$
    (49)
  (ii)

    There exist constants \(\alpha _c\), \(\sigma \ge 0\) such that for any \(\mathbb {R}^k\)-valued random variable \(\tau \) satisfying (45) and for any \(h > 0\), \(x \in \mathbb {R}^d\) we have

    $$\begin{aligned} \mathbb {E} [|b(x,\tau ) - \nabla U(x)|^2] \le \sigma ^2 (1 + |x|^2)h^{\alpha _c}. \end{aligned}$$
    (50)

Observe that conditions (49), (50) and Assumption HU1 imply that for all random variables \(\tau \) satisfying (45) and for all \(x \in \mathbb {R}^d\) we have

$$\begin{aligned} \mathbb {E}[|b(x,\tau )|^2] \le \bar{L}_0(1 + |x|^2) \end{aligned}$$
(51)

with \(\bar{L}_0 := \sigma ^2 h^{\alpha _c} + 2 \max \left( L^2 , |\nabla U(0)|^2 \right) \), cf. Section 2.4 in Majka et al. (2018). For a discussion on how to verify condition (50) for a subsampling scheme, see Example 2.15 in Majka et al. (2018). By following the proofs of Lemmas 4.1 and 4.2, we see that the \(L^2\) regularity and integrability conditions proved therein hold for the randomised drift scheme given by (44) as well, under Assumptions HU0 and (49). Hence, in order to be able to apply Theorem 3.4 to bound the global error for (46), we only have to estimate the one step errors, i.e. we need to verify conditions (27) and (28) in an analogous way to Lemma 4.3 for Euler schemes.

Lemma 5.1

Under Assumptions 2 and HU1, there is a constant \(C_1 = C_1(h,x)> 0\) given by

$$\begin{aligned} C_1 := \frac{1}{4} L \bar{L}_0^{1/2} h^{1/2} (1+|x|^2)^{1/2} + \frac{1}{2}L \sqrt{d} \end{aligned}$$

such that for all \(h > 0\) we have

$$\begin{aligned} \big |\mathbb {E}[\bar{x}_{1,x}^f - \bar{x}_{1,x}^c]\big | \le C_1 h^{3/2}. \end{aligned}$$
(52)

Moreover, under the same assumptions there is a constant \(C_2 = C_2(h,x)> 0\) given by

$$\begin{aligned} C_2&:= \frac{1}{4}\bar{L}^2 \bar{L}_0 h^{1 + (1-\alpha _c)^{+}}(1+|x|^2) + d\bar{L}^2 h^{(1-\alpha _c)^{+}} \\&\quad + 8 \sigma ^2(1+|x|^2)h^{(\alpha _c - 1)^{+}} \end{aligned}$$

such that for all \(h > 0\) we have

$$\begin{aligned} \mathbb {E} [|\bar{x}_{1,x}^f - \bar{x}_{1,x}^c|^2] \le C_2 h^{2 + \min (1, \alpha _c)}. \end{aligned}$$
(53)

Proof

Note that we have

$$\begin{aligned}&\bar{x}_{1,x}^f - \bar{x}_{1,x}^c = x + \frac{h}{2} b\big (x, \tau _1^f\big ) + \sqrt{h} \xi _1 \nonumber \\&\qquad + \frac{h}{2} b\left( x + \frac{h}{2} b\big (x, \tau _1^f\big ) + \sqrt{h} \xi _1 , \tau _2^f\right) \nonumber \\&\qquad + \sqrt{h} \xi _2 - x - hb\big (x, \tau _1^c\big ) -\sqrt{h} (\xi _1 + \xi _2) \nonumber \\&\quad = \frac{h}{2} b\big (x, \tau _1^f\big ) + \frac{h}{2} b\left( x + \frac{h}{2} b\big (x, \tau _1^f\big ) + \sqrt{h} \xi _1 , \tau _2^f\right) \nonumber \\&\qquad - hb\big (x, \tau _1^c\big ). \end{aligned}$$
(54)

By conditioning on all the sources of randomness except for \(\tau _2^f\) and using its independence from \(\tau _1^f\) and \(\xi _1\), we obtain

$$\begin{aligned}&\mathbb {E}[\bar{x}_{1,x}^f - \bar{x}_{1,x}^c] \\&\quad = \frac{h}{2} \mathbb {E}\left[ \nabla U\left( x + \frac{h}{2}b(x,\tau _1^f) + \sqrt{h} \xi _1 \right) \right] -\frac{h}{2} \nabla U(x). \end{aligned}$$

Hence, we have

$$\begin{aligned} \left| \mathbb {E}[\bar{x}_{1,x}^f - \bar{x}_{1,x}^c]\right| \le \frac{h}{2}L \mathbb {E} \left[ \left| \frac{h}{2}b(x,\tau _1^f) + \sqrt{h} \xi _1 \right| \right] \end{aligned}$$

and thus, using (51) and Jensen’s inequality, we obtain (52). We now use (54) to compute

$$\begin{aligned}&\mathbb {E} [|\bar{x}_{1,x}^f - \bar{x}_{1,x}^c|^2] \nonumber \\&\quad = h^2 \mathbb {E} \Big |\frac{1}{2}b(x,\tau ^f_1) + \frac{1}{2}b(x,\tau ^f_2) - \frac{1}{2}b(x,\tau ^f_2) \nonumber \\&\qquad + \frac{1}{2}b\left( x + \frac{h}{2} b\big (x, \tau _1^f\big ) + \sqrt{h} \xi _1 , \tau _2^f\right) - b(x,\tau _1^c)\Big |^2 \nonumber \\&\quad \le 2h^2 \mathbb {E}\left| \frac{1}{2}b(x,\tau ^f_1) + \frac{1}{2}b(x,\tau ^f_2) - b(x,\tau _1^c)\right| ^2 \nonumber \\&\qquad + \frac{1}{2} h^2 \mathbb {E} \left| b\left( x + \frac{h}{2} b\big (x, \tau _1^f\big ) + \sqrt{h} \xi _1 , \tau _2^f\right) - b(x,\tau ^f_2)\right| ^2 \end{aligned}$$
(55)

Observe now that due to condition (49) the second term above can be bounded by

$$\begin{aligned}&\frac{1}{2}h^2 \bar{L}^2 \mathbb {E}\left| \frac{h}{2} b\big (x, \tau _1^f\big ) + \sqrt{h} \xi _1 \right| ^2 \\&\quad \le \frac{1}{2}h^2 \bar{L}^2 \left( \frac{h^2}{2}\mathbb {E}|b(x,\tau _1^f)|^2 + 2h \mathbb {E}|\xi _1|^2 \right) \\&\quad \le \frac{1}{2}h^2 \bar{L}^2 \left( \frac{h^2}{2}\bar{L}_0(1 + |x|^2) + 2hd \right) \,, \end{aligned}$$

where in the last inequality we used (51). Moreover, the first term on the right-hand side of (55) is equal to

$$\begin{aligned}&2h^2 \mathbb {E}\Big |\frac{1}{2}b(x,\tau ^f_1) - \frac{1}{2}\nabla U(x) + \frac{1}{2}\nabla U(x) - \frac{1}{2}b(x,\tau _1^c) \\&\qquad +\frac{1}{2}b(x,\tau ^f_2) - \frac{1}{2}\nabla U(x) + \frac{1}{2}\nabla U(x) - \frac{1}{2}b(x,\tau _1^c)\Big |^2 \\&\quad \le 2h^2 \Big ( \mathbb {E}|b(x,\tau _1^f) - \nabla U(x)|^2 + 2\mathbb {E}|b(x,\tau _1^c) - \nabla U(x)|^2 \\&\qquad + \mathbb {E}|b(x,\tau _2^f) - \nabla U(x)|^2 \Big ) \\&\quad \le 8 \sigma ^2 (1+ |x|^2) h^{2 + \alpha _c} \,, \end{aligned}$$

where in the last inequality we used (50). Combining the bounds on both terms and factoring out \(h^{2 + \min (1, \alpha _c)}\) gives (53), which finishes the proof. \(\square \)

Corollary 5.2

If \(\alpha _c = 0\) in (50), then Algorithm 2 based on the coupling given in Eq. (47) with appropriately chosen \(t_{i}\) has complexity \(\mathcal {O}\left( \varepsilon ^{-2}|\log (\varepsilon )|^3\right) \). If \(\alpha _c > 0\), then the algorithm has complexity \(\mathcal {O}(\varepsilon ^{-2})\).

Proof

Because of Lemma 5.1 we can apply the results of Sect. 3.2. In particular, if we choose \(T_{\ell }\) according to Lemma 3.7, then for \(\alpha _c = 0\) we have \(\beta =1\) in Theorem 3.4 and the complexity follows from Remark 3.8. Similarly, for \(\alpha _c > 0\) we have \(\beta > 1\) and Remark 3.8 concludes the proof. \(\square \)

5.1 Bayesian inference using MLMC for SGLD

The main computational task in Bayesian statistics is the approximation of expectations with respect to the posterior. The a priori uncertainty in a parameter x is modelled using a probability density \(\pi _{0}(x)\) called the prior. Here, we consider the case where for a fixed parameter x the data \(\left\{ y_{i}\right\} _{i=1,\ldots ,N}\) are assumed to be i.i.d. with density \({\pi (y|x)}\). By Bayes’ rule the posterior is given by

$$\begin{aligned} \pi (x)\propto \pi _{0}(x)\prod _{i=1}^{N}{\pi (y_{i}|x)}. \end{aligned}$$

This distribution is invariant for the Langevin equation (2) with

$$\begin{aligned} \nabla U(x)=\nabla \log {\pi _{0}}(x)+\sum _{i=1}^{N}\nabla \log {\pi (y_{i}|x)}. \end{aligned}$$
(56)

Provided that appropriate assumptions are satisfied for U, we can thus use Algorithm 1 with Euler or implicit Euler schemes to approximate expectations with respect to \(\pi (\hbox {d}x)\). For large N the sum in Eq. (56) becomes a computational bottleneck. One way to deal with this is to replace the gradient by a lower cost stochastic approximation. In the following we apply our MLMC for SGLD framework to the recursion in Eq. (4)

$$\begin{aligned} x_{k+1}= & {} x_{k}+h\left( \nabla \log {\pi _{0}(x_{k})}+\frac{N}{s}\sum _{i=1}^{s}\nabla \log {\pi (y_{\tau _{i}^{k}}\vert x_{k})}\right) \\&+ \sqrt{2h}\xi _{k} \,, \end{aligned}$$

where we take \(\tau _{i}^{k}\overset{\text {i.i.d.}}{\sim }\mathcal {U}\left( \{1,\ldots ,N\}\right) \) for \(i=1,\ldots ,s\), with \(\mathcal {U}\left( \{1,\ldots ,N\}\right) \) denoting the uniform distribution on \(\{1,\ldots ,N\}\); this corresponds to sampling s items with replacement from \(\{1,\ldots ,N\}\). Notice that each step costs only s likelihood gradient evaluations instead of N. We make the following assumptions on the densities \({\pi (y|x)}\) and \(\pi _0(x)\).

Assumption 3

  1. (i)

    Lipschitz conditions for prior and likelihood: There exist constants \(L_0\), \(L_1 > 0\) such that for all i, x, y

    $$\begin{aligned}&| \nabla \log \pi \left( y_{i}\mid x\right) -\nabla \log \pi \left( y_{i}\mid y\right) | \le L_1| x-y| \\&| \nabla \log \pi _{0}\left( x\right) -\nabla \log \pi _{0}\left( y\right) | \le L_{0}| x-y|. \end{aligned}$$
  2. (ii)

    Convexity conditions for prior and likelihood: There exist constants \(m_{0} \ge 0\) and \(m_{y_i}\ge 0\) for \(i = 1, \ldots , N\) such that for all i, x, y

    $$\begin{aligned}&\log \pi _{0}(y) \le \log \pi _{0}(x)+\left\langle \nabla \log \pi _{0}\left( x\right) ,y-x\right\rangle \\&\quad -\frac{m_{0}}{2}| x-y| ^{2}\\&\log \pi \left( y_{i}\mid y\right) \le \log \pi \left( y_{i}\mid x\right) \\&\quad +\left\langle \nabla \log \pi \left( y_{i}\mid x\right) ,y-x\right\rangle -\frac{m_{y_{i}}}{2}| x-y| ^{2} \end{aligned}$$

    with \(\inf _{i}(m_{0}+m_{y_{i}})>0.\)
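For instance, for the binary Bayesian logistic regression model of Sect. 6.3 below, with Gaussian prior \(\mathcal {N}(0,C_{0})\) and likelihood (61), these conditions can be checked directly: writing \(f(z)=\frac{1}{1+\exp (-z)}\), one computes

$$\begin{aligned} \nabla ^2_x \log {\pi (y_{i}\vert x)} = -f\left( y_{i}x^{T}\iota _{i}\right) f\left( -y_{i}x^{T}\iota _{i}\right) \iota _{i}\iota _{i}^{T}, \end{aligned}$$

so each likelihood term is concave (\(m_{y_{i}}=0\)) with gradient Lipschitz constant \(L_{1}\le \frac{1}{4}\max _{i}|\iota _{i}|^{2}\), while the Gaussian prior gives \(L_{0}=\lambda _{\max }(C_{0}^{-1})\) and \(m_{0}=\lambda _{\min }(C_{0}^{-1})>0\), so that \(\inf _{i}(m_{0}+m_{y_{i}})>0\).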

We note that these conditions imply that the scheme given by (47) with

$$\begin{aligned} b(x,\tau ) := \nabla \log {\pi _{0}(x)}+\frac{N}{s}\sum _{i=1}^{s}\nabla \log {\pi (y_{\tau _{i}}\vert x)} \end{aligned}$$

for \(x \in \mathbb {R}^d\), \(\tau \in \mathbb {R}^s\), satisfies Assumptions HU0, HU1 and (49). The value of the variance parameter \(\sigma \) in (50) for this drift estimator depends on the number of subsamples s; cf. Example 2.15 in Majka et al. (2018).
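For concreteness, the following is a minimal sketch of this drift estimator and of one step of the subsampled recursion above. The function names are ours, and the callables `grad_log_prior` and `grad_log_lik` stand for \(\nabla \log {\pi _{0}}\) and \(\nabla \log {\pi (y_{i}\vert \cdot )}\).

```python
import numpy as np

def drift(x, y, tau, grad_log_prior, grad_log_lik):
    """Subsampled drift b(x, tau) from Eq. (56): gradient of the log prior plus
    (N/s) times the sum of likelihood gradients over the s indices in tau."""
    N, s = len(y), len(tau)
    return grad_log_prior(x) + (N / s) * sum(grad_log_lik(y[i], x) for i in tau)

def sgld_step(x, h, y, s, grad_log_prior, grad_log_lik, rng):
    """One step of the SGLD recursion above: subsample s indices with
    replacement, move along the estimated drift and add Gaussian noise."""
    tau = rng.integers(0, len(y), size=s)
    xi = rng.standard_normal(x.shape[0])
    return (x + h * drift(x, y, tau, grad_log_prior, grad_log_lik)
            + np.sqrt(2.0 * h) * xi)
```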

Regarding the coupling of \(\tau ^{f,1}\), \(\tau ^{f,2}\) and \(\tau ^c\), we have several possible choices. We first take s independent samples \(\tau ^{f,1}\) (drawn with replacement) for the first fine step and another s independent samples \(\tau ^{f,2}\) for the second fine step. The following three choices of \(\tau ^c\) ensure that Eq. (48) holds; a sketch is given after the list.

  1. (i)

an independent sample of \(\left\{ 1,\ldots ,N\right\} \) with replacement denoted as \(\tau ^c_{\mathrm{ind}}\) and called independent coupling;

  2. (ii)

a draw of s samples without replacement from \((\tau ^{f,1},\tau ^{f,2})\) denoted as \(\tau ^c_{\mathrm{union}}\) and called union coupling;

  3. (iii)

    the concatenation of a draw of \(\frac{s}{2}\) samples without replacement from \(\tau ^{f,1}\) and a draw of \(\frac{s}{2}\) samples without replacement from \(\tau ^{f,2}\) (provided that s is even) denoted as \(\tau ^c_{\mathrm{strat}}\) and called stratified coupling.

We stress that any of these couplings can be used in Algorithm 2. The problem of coupling the random variables \(\tau \) between different levels in an optimal way will be further investigated in our future work.
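The following is a minimal sketch of these three couplings (the names and interface are ours; the fine indices are drawn with replacement, as in the recursion above). Each choice returns a \(\tau ^{c}\) whose law coincides with that of \(\tau ^{f,1}\) and \(\tau ^{f,2}\), as required by (48).

```python
import numpy as np

def sample_taus(N, s, coupling, rng):
    """Draw the subsampling indices for one coupled coarse step.
    coupling is one of 'independent', 'union' or 'stratified'."""
    tau_f1 = rng.integers(0, N, size=s)      # s i.i.d. uniform indices
    tau_f2 = rng.integers(0, N, size=s)
    if coupling == "independent":
        tau_c = rng.integers(0, N, size=s)   # a fresh i.i.d. draw
    elif coupling == "union":
        # s draws without replacement from the 2s fine indices
        tau_c = rng.choice(np.concatenate([tau_f1, tau_f2]), size=s, replace=False)
    elif coupling == "stratified":
        # s/2 draws without replacement from each fine block (s assumed even)
        tau_c = np.concatenate([rng.choice(tau_f1, size=s // 2, replace=False),
                                rng.choice(tau_f2, size=s // 2, replace=False)])
    else:
        raise ValueError(coupling)
    return tau_f1, tau_f2, tau_c
```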

6 Numerical investigations

In this section we perform numerical simulations that illustrate our theoretical findings. We start by studying an Ornstein–Uhlenbeck process in Sect. 6.1 using the explicit Euler method, then an SDE with a non-Lipschitz drift in Sect. 6.2 using the implicit Euler method, while in Sect. 6.3 we study a Bayesian logistic regression model using the SGLD.

Fig. 2 MLMC results for (57) for \(g(x)=x^{2}\) and \(\kappa =0.4\)

Fig. 3 MLMC results for (59) for \(g(x)=x^{2}\)

6.1 Ornstein–Uhlenbeck process

We consider the SDE

$$\begin{aligned} \hbox {d}X_{t}=-\kappa X_{t}\hbox {d}t+\sqrt{2}\hbox {d}W_{t}, \end{aligned}$$
(57)

and its discretisation using the Euler method

$$\begin{aligned} x_{n+1}=S_{h,\xi }(x_{n}), \quad S_{h,\xi }(x)=x-h\kappa x+\sqrt{2h}\xi . \end{aligned}$$
(58)

The solution of (57) is ergodic with invariant measure \(\mathcal {N}(0,\kappa ^{-1})\). Furthermore, it is possible to show (Zygalakis 2011) that the Euler method (58) is also ergodic, with invariant measure \(\mathcal {N}\left( 0,\frac{2}{2\kappa -\kappa ^{2}h}\right) \). In Fig. 2, we plot the outputs of our numerical simulations using Algorithm 1. The parameter of interest here is the variance of the invariant measure, \(\kappa ^{-1}\), which we approximate for different mean square error tolerances \(\varepsilon \).
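As a quick sanity check (a sketch for illustration only; the step count, burn-in and seed are arbitrary choices), one can compare the empirical stationary variance of the Euler chain (58) with the biased value \(2/(2\kappa -\kappa ^{2}h)\) and the exact value \(\kappa ^{-1}\).

```python
import numpy as np

def ou_euler_variances(kappa=0.4, h=0.1, n_steps=200_000, burn_in=10_000, seed=0):
    """Simulate the Euler chain (58) for the OU process (57) and return the
    empirical stationary variance, the Euler-biased variance 2/(2k - k^2 h)
    and the exact variance 1/k."""
    rng = np.random.default_rng(seed)
    x, second_moments = 0.0, []
    for n in range(n_steps):
        x = x - h * kappa * x + np.sqrt(2.0 * h) * rng.standard_normal()
        if n >= burn_in:
            second_moments.append(x * x)
    return np.mean(second_moments), 2.0 / (2.0 * kappa - kappa**2 * h), 1.0 / kappa

print(ou_euler_variances())
```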

More precisely, in Fig. 2a we see the allocation of samples for various levels with respect to \(\varepsilon \), while in Fig. 2b we compare the computational cost of the algorithm as a function of the parameter \(\varepsilon \). As we can see, the computational complexity grows as \(\mathcal {O}(\varepsilon ^{-2})\) as predicted by our theory [here \(\alpha =\beta =2\) in (27) and (28)].

Finally, in Fig. 2c we plot the approximation of the variance \(\kappa ^{-1}\) obtained from our algorithm. Note that this corresponds to the choice \(g(x)=x^{2}\) since the mean of the invariant measure is 0. As \(\varepsilon \) becomes smaller, even though the estimator is in principle biased, we obtain excellent agreement with the true value of the variance.

6.2 Non-Lipschitz

We consider the SDE

$$\begin{aligned} \hbox {d}X_{t}=-\left( X^{3}_{t} + X_{t} \right) \hbox {d}t+\sqrt{2}\hbox {d}W_{t}, \end{aligned}$$
(59)

and its discretisation using the implicit Euler method

$$\begin{aligned} x_{n+1}=x_{n}-h\left( x^{3}_{n+1}+x_{n+1} \right) +\sqrt{2h}\xi _{n}. \end{aligned}$$
(60)
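Since (60) is implicit in \(x_{n+1}\), each step requires solving the scalar equation \(y+h\left( y^{3}+y\right) =x_{n}+\sqrt{2h}\,\xi _{n}\) for \(y=x_{n+1}\). A minimal sketch using a Newton iteration (the iteration count and tolerance are our choices, not part of the scheme):

```python
import numpy as np

def implicit_euler_step(x, h, xi, newton_iters=20, tol=1e-12):
    """One step of (60): solve y + h * (y**3 + y) = x + sqrt(2h) * xi for y."""
    rhs = x + np.sqrt(2.0 * h) * xi
    y = x                                           # initial guess: previous state
    for _ in range(newton_iters):
        residual = y + h * (y**3 + y) - rhs
        derivative = 1.0 + h * (3.0 * y**2 + 1.0)   # strictly positive
        step = residual / derivative
        y -= step
        if abs(step) < tol:
            break
    return y
```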

In Fig. 3, we plot the outputs of our numerical simulations using Algorithm 1. The parameter of interest here is the second moment of the invariant measure, \(\int _{\mathbb {R}}x^{2}\,\pi (\hbox {d}x)\) with \(\pi (\hbox {d}x)\propto \exp \left( -\frac{1}{4}x^{4}-\frac{1}{2}x^{2}\right) \hbox {d}x\), which we approximate for different mean square error tolerances \(\varepsilon \).

More precisely, in Fig. 3a we see the allocation of samples for various levels with respect to \(\varepsilon \), while in Fig. 3b we compare the computational cost of the algorithm as a function of the parameter \(\varepsilon \). As we can see, the computational complexity grows as \(\mathcal {O}(\varepsilon ^{-2})\) as predicted by our theory [here \(\alpha =\beta =2\) in (27) and (28)].

Finally, in Fig. 3c we plot the approximation of the second moment of the invariant measure from our algorithm. As \(\varepsilon \) becomes smaller, even though the estimator is in principle biased, we obtain excellent agreement with the true value of the second moment.

6.3 Bayesian logistic regression

In the following we present numerical simulations for a binary Bayesian logistic regression model. In this case the data \(y_{i}\in \{-1,1\}\) are modelled by

$$\begin{aligned} p(y_{i}\vert \iota _{i},x)=f(y_{i}x^{T}\iota _{i}) \end{aligned}$$
(61)

where \(f(z)=\frac{1}{1+\exp (-z)}\in (0,1)\) and \(\iota _{i}\in \mathbb {R}^{d}\) are fixed covariates. We put a Gaussian prior \(\mathcal {N}(0,C_{0})\) on x; for simplicity, we use \(C_{0}=I\) in what follows. By Bayes’ rule the posterior density \(\pi (x)\) satisfies

$$\begin{aligned} \pi (x)\propto \exp \left( -\frac{1}{2}| x|_{C_{0}}^{2}\right) \prod _{i=1}^{N}f(y_{i}x^{T}\iota _{i}). \end{aligned}$$

We consider \(d=3\) and \(N=100\) data points and choose the covariate matrix to be

$$\begin{aligned} \iota =\left( \begin{array}{c@{\quad }c@{\quad }c} \iota _{1,1} &{} \iota _{1,2} &{} 1\\ \iota _{2,1} &{} \iota _{2,2} &{} 1\\ \vdots &{} \vdots &{} \vdots \\ \iota _{100,1} &{} \iota _{100,2} &{} 1 \end{array}\right) \end{aligned}$$

for a fixed sample of \(\iota _{i,j}\overset{\text {i.i.d.}}{\sim }\mathcal {N}\left( 0,1\right) \) for \(i=1,\ldots , 100\) and \(j = 1, 2\).
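A sketch of this setup (the seed, the ground-truth parameter and the helper names are our choices; the text does not specify how the labels \(y_{i}\) were generated, so drawing them from the model (61) at a fixed parameter is only one possibility):

```python
import numpy as np

def f(z):
    """Logistic function f(z) = 1 / (1 + exp(-z)) from Eq. (61)."""
    return 1.0 / (1.0 + np.exp(-z))

def make_synthetic_data(N=100, d=3, x_true=None, seed=1):
    """Covariates as described in the text: d-1 i.i.d. N(0,1) columns plus an
    intercept column of ones; labels y_i in {-1, 1} drawn from Eq. (61) at a
    ground-truth parameter x_true (our choice)."""
    rng = np.random.default_rng(seed)
    iota = np.column_stack([rng.standard_normal((N, d - 1)), np.ones(N)])
    if x_true is None:
        x_true = np.ones(d)
    p_plus = f(iota @ x_true)                 # P(y_i = +1 | iota_i, x_true)
    y = np.where(rng.uniform(size=N) < p_plus, 1.0, -1.0)
    return iota, y
```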

In Algorithm 2 we can choose the starting position \(x_0\). It is reasonable to start the individual SGLD trajectories at the mode of the target distribution (heuristically, this keeps the distance \(\mathbb {E}| x_{0}^{(c,\ell )}-x_{0}^{(f,\ell )} |\) in step 2 of Algorithm 2 small). That is, we set \(x_0\) to be the maximum a posteriori (MAP) estimator

$$\begin{aligned} x_{0}=\text {argmax}_{x}\,\exp \left( -\frac{1}{2}| x |_{C_{0}} ^{2}\right) \prod _{i=1}^{N}f(y_{i}x^{T}\iota _{i}) \end{aligned}$$

which is approximated using the Newton–Raphson method. Our numerical results are described in Fig. 4. In particular, in Fig. 4a we illustrate the behaviour of the coupling by plotting an estimate of the average distance during the joint evolution in step 2 of Algorithm 2. The behaviour in this figure agrees qualitatively with the statement of Theorem 3.4: as T grows, there is an initial exponential decay up to an additive constant. For the simulation we used \(h_0=0.02\), \(T_\ell =3(\ell +1)\) and \(s=20\). Furthermore, in Fig. 4b we plot \(\text {CPU-time}\times \varepsilon ^2 \) against \(\varepsilon \). The quantity being estimated is the mean square distance between the MAP estimator \(x_0\) and the posterior, that is, \(\int | x-x_0 |^2 \pi (x)\hbox {d}x\). Again, after an initial transient in which \(\text {CPU-time}\times \varepsilon ^2\) decreases, we obtain quantitative agreement with our theory, since \(\text {CPU-time}\times \varepsilon ^2\) grows only logarithmically as \(\varepsilon \) tends to zero.
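A minimal sketch of such a Newton–Raphson approximation of the MAP (the stopping rule and the iteration count are our choices); it uses the log-posterior gradient \(\nabla \log \pi (x)=-C_{0}^{-1}x+\sum _{i=1}^{N}\left( 1-f(y_{i}x^{T}\iota _{i})\right) y_{i}\iota _{i}\) and the corresponding Hessian.

```python
import numpy as np

def map_newton_raphson(iota, y, C0_inv=None, iters=50, tol=1e-10):
    """Newton-Raphson ascent on the log-posterior of the Bayesian logistic
    regression model (Gaussian prior N(0, C0), C0 = I by default, and
    likelihood (61)). Returns an approximation of the MAP used as x_0."""
    N, d = iota.shape
    if C0_inv is None:
        C0_inv = np.eye(d)
    x = np.zeros(d)
    for _ in range(iters):
        z = y * (iota @ x)                               # y_i * x^T iota_i
        fz = 1.0 / (1.0 + np.exp(-z))
        grad = -C0_inv @ x + iota.T @ (y * (1.0 - fz))   # grad of log posterior
        w = fz * (1.0 - fz)                              # f(z_i) f(-z_i) >= 0
        hess = -C0_inv - (iota * w[:, None]).T @ iota    # negative definite
        step = np.linalg.solve(hess, grad)
        x = x - step                                     # Newton update
        if np.linalg.norm(step) < tol:
            break
    return x
```

The returned point can then serve as the common starting position \(x_{0}\) in Algorithm 2.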

Fig. 4 a Illustration of the joint evolution in step 2 of Algorithm 2 for the union coupling. b Cost of MLMC (sequential CPU time) SGLD for Bayesian logistic regression for decreasing accuracy parameter \(\varepsilon \) and different couplings