Non-stationary phase of the MALA algorithm

The Metropolis-Adjusted Langevin Algorithm (MALA) is a Markov Chain Monte Carlo method which creates a Markov chain reversible with respect to a given target distribution, $\pi^N$, with Lebesgue density on $\mathbb{R}^N$; it can hence be used to approximately sample from the target distribution. When the dimension $N$ is large, a key question is to determine the computational cost of the algorithm as a function of $N$. The measure of efficiency that we consider in this paper is the expected squared jumping distance (ESJD), introduced in Roberts et al. (Ann Appl Probab 7(1):110–120, 1997). To determine how the cost of the algorithm (in terms of ESJD) increases with the dimension $N$, we adopt the widely used approach of deriving a diffusion limit for the Markov chain produced by the MALA algorithm. We study this problem for a class of target measures which is not of product form and we address the situation of practical relevance in which the algorithm is started out of stationarity. We thereby significantly extend previous works, which consider either measures of product form, when the Markov chain is started out of stationarity, or non-product measures (defined via a density with respect to a Gaussian), when the Markov chain is started in stationarity. In order to work in this non-stationary and non-product setting, significant new analysis is required.
In particular, our diffusion limit comprises a stochastic PDE coupled to a scalar ordinary differential equation which gives a measure of how far from stationarity the process is. The family of non-product target measures that we consider in this paper arises from discretization of a measure on an infinite dimensional Hilbert space; the discretised measure is defined by its density with respect to a Gaussian random field. The results of this paper demonstrate that, in the non-stationary regime, the cost of the algorithm is of order $\mathcal{O}(N^{1/2})$, in contrast to the stationary regime, where it is of order $\mathcal{O}(N^{1/3})$.


Context
Metropolis-Hastings algorithms are Markov Chain Monte Carlo (MCMC) methods used to sample from a given probability measure, referred to as the target measure. The basic mechanism consists of employing a proposal transition density $q(x,y)$ in order to produce a reversible Markov chain $\{x^k\}_{k=0}^{\infty}$ for which the target measure $\pi$ is invariant [11]. At step $k$ of the chain, a proposal move $y^k$ is generated by using $q(x,y)$, i.e. $y^k \sim q(x^k, \cdot)$. Such a move is then accepted with probability $\alpha(x^k, y^k)$:
$$\alpha(x^k, y^k) = \min\left\{1, \frac{\pi(y^k)\, q(y^k, x^k)}{\pi(x^k)\, q(x^k, y^k)}\right\}. \quad (1.1)$$
The computational cost of this algorithm when the state space has high dimension $N$ is of practical interest in many applications. The measure of computational cost considered in this paper is the expected squared jumping distance (ESJD), introduced in [19] and related works. Roughly speaking [we will be more precise about this in Sect. 1.2, see the comments before (1.8)], if the size of the proposed moves is too large, i.e. if we propose moves which are too far away from the current position, then such moves tend to be frequently rejected; on the other hand, if the algorithm proposes moves which are too close to the current position, then such moves will most likely be accepted, but the chain will not have moved very far. In either extreme case the chain tends to get stuck and exhibits slow mixing, and this becomes more pronounced as the dimension $N$ of the state space increases. It is therefore clear that one needs to strike a balance between these two opposite scenarios; in particular, the optimal size of the proposed moves (i.e., the proposal variance) will depend on $N$. If the proposal variance scales with $N$ like $N^{-\zeta}$, for some $\zeta > 0$, then we say that the cost of the algorithm, in terms of ESJD, is of order $N^{\zeta}$.
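The accept-reject mechanism just described can be sketched in a few lines of code. The following is an illustrative implementation (not taken from the paper; all names are ours) of a generic Metropolis-Hastings step with acceptance probability (1.1), instantiated as a Random Walk Metropolis chain targeting a standard one-dimensional Gaussian:

```python
import numpy as np

def metropolis_hastings(log_pi, propose, log_q, x0, n_steps, rng):
    """Generic Metropolis-Hastings chain targeting pi ∝ exp(log_pi).

    log_q(x, y) is the log transition density of proposing y from x;
    for a symmetric proposal it cancels in the ratio and may return 0.
    """
    x = x0
    chain = [x]
    n_accept = 0
    for _ in range(n_steps):
        y = propose(x, rng)
        # log of the ratio in (1.1): pi(y) q(y, x) / (pi(x) q(x, y))
        log_alpha = log_pi(y) - log_pi(x) + log_q(y, x) - log_q(x, y)
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            x = y
            n_accept += 1
        chain.append(x)
    return np.array(chain), n_accept / n_steps

# Random Walk Metropolis for a standard Gaussian target:
rng = np.random.default_rng(0)
chain, acc_rate = metropolis_hastings(
    log_pi=lambda x: -0.5 * x**2,
    propose=lambda x, rng: x + rng.normal(),   # symmetric proposal
    log_q=lambda x, y: 0.0,                    # cancels for RWM
    x0=0.0, n_steps=20000, rng=rng)
```

The trade-off described above is visible here: making the proposal standard deviation very large or very small degrades the chain's mixing, which is precisely what the choice of the exponent $\zeta$ controls in high dimension.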
A widely used approach to tackle this problem is to study diffusion limits for the algorithm. Indeed, the scaling used to obtain a well defined diffusion limit corresponds to the optimal scaling of the proposal variance (see Remark 1.1). This problem was first studied in [19] for the Random Walk Metropolis algorithm (RWM); in that work it is assumed that the algorithm is started in stationarity and that the target measure is of product form. In the case of the MALA algorithm, the same problem was considered in [20,21], again in the stationary regime and for product measures. In this setting, the cost of RWM has been shown to be $O(N)$, while the cost of MALA is $O(N^{1/3})$. The same $O(N^{1/3})$ scaling for MALA, in the stationary regime, was later obtained in the setting of non-product measures defined via density with respect to a Gaussian random field [17]. In the paper [6] extensions of these results to non-stationary initializations were considered, although only for Gaussian targets. For Gaussian targets, RWM was shown to scale the same in and out of stationarity, whilst MALA scales like $O(N^{1/2})$ out of stationarity. In [12,13] the RWM and MALA algorithms were studied out of stationarity for quite general product measures, and the RWM method was again shown to scale the same in and out of stationarity. For MALA the appropriate scaling was shown to differ in and out of stationarity and, crucially, the scaling out of stationarity was shown to depend on a certain moment of the potential defining the product measure. In this paper we contribute further understanding of the MALA algorithm when initialized out of stationarity by considering non-product measures defined via density with respect to a Gaussian random field. Considering such a class of measures has proved fruitful, see e.g. [15,17]. Also relevant to this strand of literature is the work [5].
In this paper our primary contribution is the study of diffusion limits for the MALA algorithm, out of stationarity, in the setting of general non-product measures defined via density with respect to a Gaussian random field. Significant new analysis is needed for this problem because the work of [17] relies heavily on stationarity in analyzing the acceptance probability, whilst the work of [13] uses propagation of chaos techniques, which are unsuitable for non-product settings.
The challenging diffusion limit obtained in this paper is relevant both to the picture just described and, more generally, to the widespread practical use of the MALA algorithm. The understanding we obtain about the MALA algorithm when applied to realistic non-product targets is one of the main motivations for the analysis that we undertake in this paper. The diffusion limit we find is given by an SPDE coupled to a one-dimensional ODE. The evolution of this ODE can be taken as an indicator of how close the chain is to stationarity (see Remark 1.1 for more details on this). The scaling adopted to obtain such a diffusion limit shows that the cost of the algorithm is of order $N^{1/2}$ in the non-stationary regime, as opposed to what happens in the stationary phase, where the cost is of order $N^{1/3}$. It is important to recognize that, for measures absolutely continuous with respect to a Gaussian random field, algorithms exist which require $O(1)$ steps in and out of stationarity; see [7] for a review. Such methods were suggested by Radford Neal in [16], and developed by Alex Beskos for conditioned stochastic differential equations in [4], building on the general formulation of Metropolis-Hastings methods in [23]; these methods are analyzed from the point of view of diffusion limits in [18]. It thus remains open and interesting to study the MALA algorithm out of stationarity for non-product measures which are not defined via density with respect to a Gaussian random field; however, the results in [12] demonstrate the substantial technical barriers that will exist in trying to do so. An interesting starting point for such work might be the study of non-i.i.d. product measures, as pioneered by Bédard [2,3].

Setting and the main result
Let $(H, \langle\cdot,\cdot\rangle, \|\cdot\|)$ be an infinite dimensional separable Hilbert space and consider the measure $\pi$ on $H$ defined as in (1.2); that is, $\pi$ is absolutely continuous with respect to a Gaussian measure $\pi_0$ with mean zero and covariance operator $C$, and $\Psi$ is a real valued functional with domain $\mathcal{H} \subseteq H$, $\Psi : \mathcal{H} \to \mathbb{R}$. Measures of the form (1.2) naturally arise in Bayesian nonparametric statistics and in the study of conditioned diffusions [10,22]. In Sect. 2 we will give the precise definition of the space $\mathcal{H}$ and identify it with an appropriate Sobolev-like subspace of $H$ (denoted by $H^s$ in Sect. 2). The covariance operator $C$ is a positive, self-adjoint, trace class operator on $H$, with eigenpairs $\{\lambda_j^2, \phi_j\}$ as in (1.3), and we assume that the set $\{\phi_j\}_{j\in\mathbb{N}}$ is an orthonormal basis for $H$. We will analyse the MALA algorithm designed to sample from the finite dimensional projections $\pi^N$ of the measure (1.2) on the space $X^N$ spanned by the first $N$ eigenvectors of the covariance operator. Notice that the space $X^N$ is isomorphic to $\mathbb{R}^N$. To clarify this further, we need to introduce some notation. Given a point $x \in H$, $P^N(x) := \sum_{j=1}^N \langle\phi_j, x\rangle\, \phi_j$ is the projection of $x$ onto the space $X^N$, and we define the approximations $\Psi^N$ and $C^N$ of the functional $\Psi$ and of the covariance operator $C$ as in (1.5). With this notation in place, our target measure is the measure $\pi^N$ (on $X^N \cong \mathbb{R}^N$) defined in (1.6), where $M_{\Psi}^N$ is a normalization constant. Notice that the sequence of measures $\{\pi^N\}_{N\in\mathbb{N}}$ approximates the measure $\pi$ (in particular, the sequence $\{\pi^N\}_{N\in\mathbb{N}}$ converges to $\pi$ in the Hellinger metric; see [22, Section 4] and references therein). In order to sample from the measure $\pi^N$ in (1.6), we will consider the MALA algorithm with proposal (1.7), where $\delta > 0$ is a positive parameter, and we rewrite the proposal $y^{k,N}$ accordingly. The proposal defines the kernel $q$ and enters the accept-reject criterion $\alpha$, which is added to preserve detailed balance with respect to $\pi^N$ (more details on the algorithm will be given in Sect. 2.2).
The proposal is a discretization, with time step $\delta$, of a $\pi^N$-invariant diffusion process; in the MCMC literature $\delta$ is often referred to as the proposal variance. The accept-reject criterion compensates for the discretization, which destroys the $\pi^N$-reversibility. A crucial parameter, to be appropriately chosen in order to optimize the performance of the algorithm, is $\delta$; such a choice will depend on the dimension $N$ of the state space. To be more precise, set $\delta = \ell N^{-\zeta}$, where $\ell, \zeta$ are two positive parameters, the latter being, for the time being, the most relevant to this discussion. As explained when outlining the context of this paper, if $\zeta$ is too large (so that $\delta$ is too small) then the algorithm will tend to move very slowly; if $\zeta$ is too small (so that $\delta$ is too large), then the proposed moves will be very large and the algorithm will tend to reject them very often. In this paper we show that, if the algorithm is started out of stationarity then, in the non-stationary regime, the optimal choice is $\zeta = 1/2$. In particular, if $\delta = \ell N^{-1/2}$ then the acceptance probability is $O(1)$. Furthermore, starting from the Metropolis-Hastings chain $\{x^{k,N}\}_{k\in\mathbb{N}}$, we define the continuous interpolant (1.9). This process converges weakly to a diffusion process, given by the SPDE (1.10) coupled to the ODE (1.11).

In the above, the initial datum $S(0)$ is assumed to be finite and $W(t)$ is an $H^s$-valued Brownian motion with covariance $C_s$.
The functions $\alpha_\ell, h_\ell, b_\ell : \mathbb{R} \to \mathbb{R}$ in the previous statement are defined in (1.12)-(1.14).

Remark 1.1 We make several remarks concerning the main result.
- Since the effective time-step implied by the interpolation (1.9) is $N^{-1/2}$, the main result implies that the number of steps required by the Markov chain in its non-stationary regime is $O(N^{1/2})$. A more detailed discussion of this fact can be found in Sect. 4.
- Notice that Eq. (1.11) evolves independently of Eq. (1.10). Once the MALA algorithm (2.14) is introduced and an initial state $x_0 \in \mathcal{H}$ is given such that $S(0)$ is finite, the real valued (double) sequence $S^{k,N}$ is defined as in (1.16). In Theorem 4.1 we prove that $S^{(N)}(t)$ converges in probability in $C([0,T];\mathbb{R})$ to the solution of the ODE (1.11) with initial condition $S_0 := \lim_{N\to\infty} S_0^N$. Once such a result is obtained, we can prove that $x^{(N)}(t)$ converges to $x(t)$. We stress that the convergence of $S^{(N)}(t)$ to $S(t)$ can be obtained independently of the convergence of $x^{(N)}(t)$ to $x(t)$.
- Let $S(t)$ be the solution of the ODE (1.11). We will prove (see Theorem 3.1) that $S(t) \to 1$ as $t \to \infty$; this is also consistent with the fact that, in stationarity, $S^{k,N}$ converges to 1 as $N \to \infty$ (for every $k > 0$), see Remark 4.1. In view of this and the above comment, $S(t)$ (or $S^{k,N}$) can be taken as an indication of how close the chain is to stationarity. Moreover, in view of the value taken by $h_\ell(1)$, one can heuristically argue that the asymptotic behaviour of the law of $x(t)$, the solution of (1.10), is described by the law of the infinite dimensional SDE (1.17). It was proved in [9,10] that (1.17) is ergodic with unique invariant measure given by (1.2). Our deduction concerning computational cost is made on the assumption that the law of (1.10) does indeed tend to the law of (1.17), although we will not prove this here, as it would take us away from the main goal of the paper, which is to establish the diffusion limit of the MALA algorithm.
- In [12,13] the diffusion limit for the MALA algorithm started out of stationarity and applied to i.i.d. product target measures is given by a non-linear equation of McKean-Vlasov type. This is in contrast with our diffusion limit, which is an infinite-dimensional SDE. The reason why this is the case is discussed in detail in [14, Section 1.2]. The discussion in the latter paper is in the context of the Random Walk Metropolis algorithm, but it is conceptually analogous to what holds for the MALA algorithm, and for this reason we do not spell it out here.
- In this paper we make stronger assumptions on $\Psi$ than are required to prove a diffusion limit in the stationary regime [17]. In particular, we assume that the first derivative of $\Psi$ is bounded, whereas [17] requires only boundedness of the second derivative. Removing this assumption on the first derivative, or showing that it is necessary, would be of interest, but it would require different techniques from those employed in this paper and we do not address the issue here.

Remark 1.2
The proposal we employ in this paper is the standard MALA proposal. It can be seen as a particular case of the more general proposal introduced in [4, equation (4.2)], see also [1]; in our notation this proposal can be written as (1.18), where $\theta \in [0,1]$ is a parameter. The choice $\theta = 0$ corresponds to our proposal. When $\theta = 1/2$, the resulting algorithm is well posed in infinite dimensions; as a consequence, a diffusion limit is obtained, in and out of stationarity, without scaling $\delta$ with respect to $N$; see Remark 4.3. When $\theta \neq 1/2$ the algorithms all suffer from the curse of dimensionality: it is necessary to scale $\delta$ inversely with a power of $N$ to obtain an acceptable acceptance probability. In this paper we study how the efficiency decreases with $N$ when $\theta = 0$; results analogous to the ones we prove here will hold for any $\theta \neq 1/2$, but proving them at this level of generality would lengthen the article without adding insight. Furthermore, for non-Gaussian priors practitioners might use the algorithm with $\theta = 0$, and so our results shed light on that case; if the prior is actually Gaussian, practitioners should use the algorithm with $\theta = 1/2$. There is no reason to use any other value of $\theta$ in practice, as far as we are aware.
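The special role of $\theta = 1/2$ can be checked numerically in the Gaussian case $\Psi \equiv 0$. The sketch below is ours, not the paper's: we read (1.18) as a semi-implicit (theta-method) Euler step for the preconditioned Ornstein-Uhlenbeck dynamics, written coordinate-wise in the eigenbasis of $C$, and verify that for $\theta = 1/2$ the log Metropolis-Hastings ratio vanishes identically (so every move is accepted, for any $\delta$), while for $\theta = 0$ it does not:

```python
import numpy as np

def theta_proposal(x, lam2, delta, theta, rng):
    """Assumed semi-implicit form of (1.18), coordinate-wise:
    (1 + delta*theta) y = (1 - delta*(1-theta)) x + sqrt(2*delta*lam2) * xi."""
    xi = rng.normal(size=x.shape)
    return ((1 - delta * (1 - theta)) * x
            + np.sqrt(2 * delta * lam2) * xi) / (1 + delta * theta)

def log_accept_ratio(x, y, lam2, delta, theta):
    """log of pi(y) q(y,x) / (pi(x) q(x,y)) for the Gaussian target (Psi = 0)."""
    a = (1 - delta * (1 - theta)) / (1 + delta * theta)   # AR(1) coefficient
    s2 = 2 * delta * lam2 / (1 + delta * theta) ** 2      # proposal variance
    log_pi = lambda z: -0.5 * np.sum(z**2 / lam2)
    log_q = lambda frm, to: -0.5 * np.sum((to - a * frm) ** 2 / s2)
    return log_pi(y) - log_pi(x) + log_q(y, x) - log_q(x, y)

rng = np.random.default_rng(1)
lam2 = 1.0 / np.arange(1, 101) ** 2          # illustrative eigenvalues j^(-2)
x = rng.normal(size=100) * np.sqrt(lam2)
delta = 0.3                                  # deliberately large time step
y_half = theta_proposal(x, lam2, delta, 0.5, rng)
Q_half = log_accept_ratio(x, y_half, lam2, delta, 0.5)   # identically 0
y_zero = theta_proposal(x, lam2, delta, 0.0, rng)
Q_zero = log_accept_ratio(x, y_zero, lam2, delta, 0.0)   # generically nonzero
```

The reason is that for $\theta = 1/2$ the proposal variance equals $\lambda_j^2(1 - a^2)$ exactly, so the proposal kernel is already reversible with respect to $N(0, C)$; for $\theta = 0$ this identity fails and the discretization error must be removed by rejections.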

Structure of the paper
The paper is organized as follows. In Sect. 2 we introduce the notation and the assumptions that we use throughout this paper. In particular, Sect. 2.1 introduces the infinite dimensional setting in which we work, Sect. 2.2 discusses the MALA algorithm, and Sect. 2.3 the assumptions we make on the functional Ψ and on the covariance operator C. Section 3 contains the proof of existence and uniqueness of solutions for the limiting Eqs. (1.10) and (1.11). With these preliminaries in place, we give, in Sect. 4, the formal statement of the main results of this paper, Theorems 4.1 and 4.2. In this section we also provide heuristic arguments outlining how the main results are obtained. The complete proof of these results builds on a continuous mapping argument presented in Sect. 5. The heuristics of Sect. 4 are made rigorous in Sects. 6-8. In particular, Sect. 6 contains estimates of the size of the chain's jumps and the growth of its moments, as well as the study of the acceptance probability. In Sects. 7 and 8 we use these estimates and approximations to prove Theorems 4.1 and 4.2, respectively. Readers interested in the structure of the proofs of Theorems 4.1 and 4.2, but not in the technical details, may wish to skip the ensuing two sections (Sects. 2 and 3) and proceed directly to the statement of these results and the relevant heuristics discussed in Sect. 4.

Notation, algorithm, and assumptions
In this section we detail the notation and the assumptions (Sects. 2.1 and 2.3, respectively) that we will use in the rest of the paper.

Notation
Let $(H, \langle\cdot,\cdot\rangle, \|\cdot\|)$ denote a real separable infinite dimensional Hilbert space, with the canonical norm induced by the inner product. Let $\pi_0$ be a zero-mean Gaussian measure on $H$ with covariance operator $C$. By the general theory of Gaussian measures [8], $C$ is a positive, trace class operator. Let $\{\phi_j, \lambda_j^2\}_{j\ge1}$ be the eigenfunctions and eigenvalues of $C$, respectively, so that (1.3) holds. We assume a normalization under which $\{\phi_j\}_{j\ge1}$ forms a complete orthonormal basis of $H$. Recalling (1.4), we specify the notation that will be used throughout this paper:

- $x$ and $y$ are elements of the Hilbert space $H$;
- the letter $N$ is reserved to denote the dimensionality of the space $X^N$ where the target measure $\pi^N$ is supported;
- $x^N$ is an element of $X^N \cong \mathbb{R}^N$ (similarly for $y^N$ and the noise $\xi^N$);
- for any fixed $N \in \mathbb{N}$, $x^{k,N}$ is the $k$th step of the chain $\{x^{k,N}\}_{k\in\mathbb{N}} \subseteq X^N$ constructed to sample from $\pi^N$;
- $x_i^{k,N}$ is the $i$th component of the vector $x^{k,N}$, that is $x_i^{k,N} := \langle x^{k,N}, \phi_i \rangle$ (with abuse of notation).
For every $x \in H$, we have the representation $x = \sum_{j\ge1} x_j \phi_j$, where $x_j := \langle x, \phi_j \rangle$. Using this expansion, we define the Sobolev-like spaces $H^s$, $s \in \mathbb{R}$, with inner products and norms
$$\langle x, y \rangle_s := \sum_{j\ge1} j^{2s} x_j y_j, \qquad \|x\|_s^2 := \sum_{j\ge1} j^{2s} x_j^2.$$
The space $(H^s, \langle\cdot,\cdot\rangle_s)$ is also a Hilbert space. Notice that $H^0 = H$. Furthermore, $H^s \subset H \subset H^{-s}$ for any $s > 0$. The norm $\|\cdot\|_C$ associated with the covariance operator $C$ is the Cameron-Martin norm of the Gaussian measure $N(0,C)$, $\|x\|_C^2 := \langle x, C^{-1}x \rangle$, and it is induced by the scalar product $\langle x, y \rangle_C := \langle x, C^{-1} y \rangle$. Similarly, $C^N$ defines a norm $\|\cdot\|_{C^N}$ on $X^N$, induced by the scalar product $\langle x, y \rangle_{C^N} := \langle x, (C^N)^{-1} y \rangle$. For $s \in \mathbb{R}$, let $L^s : H \to H$ denote the operator which is diagonal in the basis $\{\phi_j\}_{j\ge1}$ with diagonal entries $j^{2s}$. The operator $L^s$ lets us alternate between the Hilbert space $H$ and the interpolation spaces $H^s$ via the identity $\langle x, y \rangle_s = \langle L^{s/2} x, L^{s/2} y \rangle$. If $\sum_j \lambda_j^2 j^{2s} < \infty$, then a draw $y$ from $N(0,C)$ can be equivalently expanded in this basis as an element of $H^s$. For a positive, self-adjoint operator $D : H \to H$, its trace in $H$ is defined as $\operatorname{Trace}_H(D) := \sum_{j\ge1} \langle D\phi_j, \phi_j \rangle$; we stress that in the above $\{\phi_j\}_{j\in\mathbb{N}}$ is an orthonormal basis for $(H, \langle\cdot,\cdot\rangle)$. Analogously, if $\widetilde{D} : H^s \to H^s$, its trace in $H^s$ is defined using an orthonormal basis of $H^s$. Since $\operatorname{Trace}_{H^s}(\widetilde{D})$ does not depend on the orthonormal basis, the operator $\widetilde{D}$ is said to be trace class in $H^s$ if $\operatorname{Trace}_{H^s}(\widetilde{D}) < \infty$ for some, and hence any, orthonormal basis of $H^s$. While the covariance operator $C$ is defined on $H$, the associated operator $C_s$ is defined on $H^s$. Thus, for all values of $s$ such that $\operatorname{Trace}_{H^s}(C_s) = \sum_j \lambda_j^2 j^{2s} < \infty$, we can think of $y$ as a mean zero Gaussian random variable with covariance operator $C$ in $H$ and $C_s$ in $H^s$ [see (2.2) and (2.3)]. In the same way, if $\operatorname{Trace}_{H^s}(C_s) < \infty$, then $W(t) := \sum_{j\ge1} \lambda_j w_j(t)\, \phi_j$, where $\{w_j(t)\}_{j\ge1}$ is a collection of i.i.d. standard Brownian motions on $\mathbb{R}$, can be equivalently understood as an $H$-valued $C$-Brownian motion or as an $H^s$-valued $C_s$-Brownian motion.
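The spectral description of $H^s$ and of the trace condition is easy to explore numerically. The following sketch (ours, with an illustrative eigenvalue decay $\lambda_j^2 = j^{-2\kappa}$; all names are assumptions) works with truncated coefficient sequences:

```python
import numpy as np

# Truncated spectral picture of the spaces H^s: a vector x is stored through
# its coefficients x_j = <x, phi_j>, j = 1..J.
J = 10000
j = np.arange(1, J + 1, dtype=float)
kappa = 1.0                       # illustrative decay: lambda_j^2 = j^(-2*kappa)
lam2 = j ** (-2.0 * kappa)

def norm_Hs(x, s):
    """The norm ||x||_s = (sum_j j^{2s} x_j^2)^{1/2}, truncated at J terms."""
    return np.sqrt(np.sum(j ** (2 * s) * x**2))

def trace_Hs_Cs(s):
    """Truncated Trace_{H^s}(C_s) = sum_j lambda_j^2 j^{2s}."""
    return np.sum(lam2 * j ** (2 * s))

x = np.sin(j) / j                 # an arbitrary test vector
# s = 0 recovers the usual H-norm of the coefficient sequence; the trace
# sum_j j^{2(s - kappa)} is finite precisely when kappa > s + 1/2, which is
# the eigenvalue-decay condition appearing in Assumption 2.1.
```

For instance, with $\kappa = 1$ the truncated trace at $s = 0.4$ stays moderate (the series $\sum_j j^{-1.2}$ converges), whereas as $s \uparrow \kappa - 1/2$ the partial sums grow without bound.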
We will make use of the elementary inequality (2.6). Throughout this paper we study sequences of real numbers, random variables and functions, indexed by either (or both) the dimension $N$ of the space on which the target measure is defined and the chain's step number $k$. In doing so, we find the following notation convenient: given two such sequences $A^{k,N}$ and $B^{k,N}$, we write $A^{k,N} \lesssim B^{k,N}$ if there exists a constant $K > 0$, independent of $k$ and $N$, such that $A^{k,N} \le K B^{k,N}$; if $A^{k,N} = A^{k,N}(x)$ and $B^{k,N} = B^{k,N}(x)$, the same inequality must hold with $K$ independent of $x$, for all $x$ where the $A^{k,N}$s and $B^{k,N}$s are defined.
As is customary, $\mathbb{R}^+ := \{s \in \mathbb{R} : s \ge 0\}$, and for all $b \in \mathbb{R}^+$ we let $[b] = n$ if $n \le b < n+1$ for some integer $n$. Finally, for time dependent functions we will use the notations $S(t)$ and $S_t$ interchangeably.

The algorithm
A natural variant of the MALA algorithm stems from the observation that $\pi^N$ is the unique stationary measure of the SDE (2.7), where $W^N$ is an $X^N$-valued Brownian motion with covariance operator $C^N$. The algorithm consists of discretising (2.7) using the Euler-Maruyama scheme and adding a Metropolis accept-reject step so that the invariance of $\pi^N$ is preserved. The variant of MALA which we study is therefore a Metropolis-Hastings algorithm with proposal (2.8). We stress that the Gaussian random variables $\xi_i^{k,N}$ are independent of each other and of the current position $x^{k,N}$. Motivated by the considerations made in the introduction (and which will be made more explicit in Sect. 4.1), in this paper we fix the choice $\delta = \ell N^{-1/2}$. If at step $k$ the chain is at $x^{k,N}$, the algorithm proposes a move to $y^{k,N}$ defined by Eq. (2.8). The move is then accepted with probability $\alpha^N(x^N, y^N)$, defined in (2.10) for any $x^N, y^N \in X^N \cong \mathbb{R}^N$. If the move to $y^{k,N}$ is accepted then $x^{k+1,N} = y^{k,N}$; if it is rejected, the chain remains where it was, i.e. $x^{k+1,N} = x^{k,N}$. In short, the MALA chain is defined as in (2.12), equivalently written with a Bernoulli accept variable independent of $x^{k,N}$ and $\xi^{k,N}$. For fixed $N$, the chain $\{x^{k,N}\}_{k\ge1}$ lives in $X^N \cong \mathbb{R}^N$ and samples from $\pi^N$. However, in view of the fact that we want to study the scaling limit of such a chain as $N \to \infty$, the analysis is cleaner if it is carried out in $H$; therefore, the chain that we analyse is the chain $\{x^k\}_k \subseteq H$ defined as follows: the first $N$ components of the vector $x^k \in H$ coincide with $x^{k,N}$ as defined above; the remaining components are not updated and remain equal to their initial value. More precisely, using (2.8) and (2.12), the chain $x^k$ can be written in component-wise notation as in (2.14). From this it is clear that the update rule (2.14) only updates the first $N$ coordinates (with respect to the eigenbasis of $C$) of the vector $x^k$. Therefore the algorithm evolves in the finite-dimensional subspace $X^N$.
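The full loop just described can be sketched compactly in the eigenbasis of $C$. The code below is an illustrative reading of the algorithm, not the paper's implementation: we assume the Euler-Maruyama proposal takes the preconditioned form $y = x - \delta\,(x + C\nabla\Psi(x)) + \sqrt{2\delta}\,C^{1/2}\xi$, pick the toy functional $\Psi(x) = \sqrt{1+\|x\|^2}$ (whose gradient is bounded and globally Lipschitz, in the spirit of Assumption 2.1), and use the illustrative eigenvalues $\lambda_j^2 = j^{-2\kappa}$:

```python
import numpy as np

def grad_Psi(x):
    """Gradient of the toy change of measure Psi(x) = sqrt(1 + ||x||^2):
    bounded by 1 and globally Lipschitz."""
    return x / np.sqrt(1.0 + np.dot(x, x))

def mala_chain(N, ell, n_steps, rng, kappa=1.0):
    """Preconditioned MALA for pi^N ∝ exp(-Psi) N(0, C^N), written in the
    eigenbasis of C (so C is diagonal with entries lam2)."""
    j = np.arange(1, N + 1, dtype=float)
    lam2 = j ** (-2.0 * kappa)
    delta = ell * N ** (-0.5)              # the scaling delta = l N^{-1/2}

    def log_pi(x):
        return -np.sqrt(1.0 + np.dot(x, x)) - 0.5 * np.sum(x**2 / lam2)

    def log_q(frm, to):                    # log proposal density, up to constants
        mean = frm - delta * (frm + lam2 * grad_Psi(frm))
        return -0.25 / delta * np.sum((to - mean) ** 2 / lam2)

    x = np.zeros(N)                        # deterministic start, out of stationarity
    S_hist, n_acc = [], 0
    for _ in range(n_steps):
        mean = x - delta * (x + lam2 * grad_Psi(x))
        y = mean + np.sqrt(2.0 * delta * lam2) * rng.normal(size=N)
        log_alpha = log_pi(y) - log_pi(x) + log_q(y, x) - log_q(x, y)
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            x, n_acc = y, n_acc + 1
        S_hist.append(np.mean(x**2 / lam2))   # the statistic S^{k,N}
    return np.array(S_hist), n_acc / n_steps

S_hist, acc = mala_chain(N=50, ell=0.5, n_steps=3000,
                         rng=np.random.default_rng(2))
```

Tracking `S_hist` exhibits the behaviour discussed in Remark 1.1: starting from $S^{0,N} = 0$, the statistic drifts towards 1 as the chain approaches stationarity, while the accept-reject step keeps $\pi^N$ exactly invariant.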
From now on we will avoid using the notation $\{x^k\}_k$ for the "extended chain" defined in $H$, as it can be confused with the notation $x^N$, which is instead used throughout to denote a generic element of the space $X^N$. We conclude this section by remarking that, if $x^{k,N}$ is given, the proposal $y^{k,N}$ depends only on the Gaussian noise $\xi^{k,N}$. Therefore the acceptance probability will be interchangeably denoted by $\alpha^N(x^N, y^N)$ or $\alpha^N(x^N, \xi^N)$.

Assumptions
In this section, we describe the assumptions on the covariance operator $C$ of the Gaussian measure $\pi_0 \sim N(0, C)$ and those on the functional $\Psi$. We fix a distinguished exponent $s \ge 0$ and assume that $\Psi : H^s \to \mathbb{R}$ and $\operatorname{Trace}_{H^s}(C_s) < \infty$. In other words, $H^s$ is the space that we were denoting by $\mathcal{H}$ in the introduction. Since $\operatorname{Trace}_{H^s}(C_s) = \sum_j \lambda_j^2 j^{2s} < \infty$, in particular $\lambda_j^2 j^{2s} \le C$ for some constant $C > 0$ independent of $j$.
For each $x \in H^s$ the derivative $\nabla\Psi(x)$ is an element of the dual $L(H^s, \mathbb{R})$ of $H^s$, comprising the linear functionals on $H^s$. However, we may identify $L(H^s, \mathbb{R}) = H^{-s}$ and view $\nabla\Psi(x)$ as an element of $H^{-s}$ for each $x \in H^s$. With this identification, the duality pairing between $H^s$ and $H^{-s}$ is expressed through the inner product of $H$. To avoid technical complications, we assume that the gradient of $\Psi$ is bounded and globally Lipschitz. More precisely, throughout this paper we make the following assumptions.

Assumption 2.1
The functional $\Psi$ and the covariance operator $C$ satisfy the following:

1. Decay of the eigenvalues $\lambda_j^2$ of $C$: there exists a constant $\kappa > s + \frac{1}{2}$ such that $\lambda_j \asymp j^{-\kappa}$.
2. Domain of $\Psi$: the functional $\Psi$ is defined everywhere on $H^s$.
3. Derivatives of $\Psi$: the derivative of $\Psi$ is bounded and globally Lipschitz.

The exponent $s$ is assumed to satisfy all of the above.

Remark 2.2
Our assumptions on the change of measure (that is, on $\Psi$) are less general than those adopted in [14,17] and the related literature (see references therein). This is for purely technical reasons. In this paper we assume that $\Psi$ grows at most linearly. If $\Psi$ were assumed to grow quadratically, which is the case in the mentioned works, then finding bounds on the moments of the chain $\{x^{k,N}\}_{k\ge1}$ (much needed in all of the analysis) would become more involved than it already is; see Remark C.1. However, under our assumptions, the measure $\pi$ (or $\pi^N$) is still, generically, of non-product form.
We now explore the consequences of Assumption 2.1. The proofs of the following lemmas can be found in Appendix A.

The function $\Psi(x)$ is globally Lipschitz, and therefore $\Psi^N$ is globally Lipschitz as well. Before stating the next lemma, we record the identities which follow from the definition of the projection operator $P^N$. We stress that in (2.24)-(2.26) the constant implied by the use of the notation "$\lesssim$" (see end of Sect. 2.1) is independent of $N$. Lastly, in what follows we will need a further bound which is a consequence of the assumptions on the covariance operator.

Existence and uniqueness for the limiting diffusion process
The main results of this section are Theorems 3.1, 3.2 and 3.3. Theorems 3.1 and 3.2 are concerned with establishing existence and uniqueness for Eqs. (1.11) and (1.10), respectively. Theorem 3.3 states the continuity of the Itô maps associated with Eqs. (1.10) and (1.11). The proofs of the main results of this paper (Theorems 4.1 and 4.2) rely heavily on the continuity of such maps, as we illustrate in Sect. 5. Once Lemma 3.1 below is established, the proofs of the theorems in this section are completely analogous to the proofs of those in [14, Section 4]. For this reason we omit them and refer the reader to [14]. In what follows, recall that the definitions of the functions $\alpha_\ell$, $h_\ell$ and $b_\ell$ have been given in (1.12), (1.13) and (1.14), respectively. In the case of (1.11) we have the following. Consider the associated deterministic equations, in which $S$ is the solution of (1.11), $z_0 \in H^s$, $S_0 \in \mathbb{R}$, and $\zeta$ and $w$ are given continuous paths; the corresponding Itô maps are continuous.

Main theorems and heuristics of proofs
In order to state the main results, we first set
$$S^{k,N} := \frac{1}{N} \sum_{i=1}^{N} \frac{|x_i^{k,N}|^2}{\lambda_i^2},$$
where we recall that $x_i := \langle x, \phi_i \rangle$.

Theorem 4.1 Let Assumption 2.1 hold and let
For the following theorem, recall that the solution of (1.10) is interpreted precisely through Theorem 3.2 as a process driven by an $H^s$-valued Brownian motion with covariance $C_s$, with solution in $C([0, T]; H^s)$.
Both Theorems 4.1 and 4.2 assume that the initial datum of the chain $\{x^{k,N}\}$ is assigned deterministically. From our proofs it will be clear that the same statements also hold for random initial data, as long as (i) $x^{0,N}$ is not drawn from the target measure $\pi^N$, or from any other measure which is a change of measure from $\pi^N$ (i.e. we need to be starting out of stationarity), and (ii) $S^{0,N}$ and $x^{0,N}$ have moments of sufficiently high order, bounded uniformly in $N$, and are independent of all the other sources of noise present in the algorithm. Notice moreover that the convergence in probability of Theorem 4.1 is equivalent to weak convergence, as the limit is deterministic.
The rigorous proofs of the above results are contained in Sects. 5-8. In the remainder of this section we give heuristic arguments to justify our choice of scaling $\delta \propto N^{-1/2}$, and we explain how one can formally obtain the (fluid) ODE limit (1.11) for the double sequence $S^{k,N}$ and the diffusion limit (1.10) for the chain $x^{k,N}$. We stress that the arguments of this section are only formal; therefore, we often use the notation "$\simeq$" to mean "approximately equal". That is, we write $A \simeq B$ when $A = B\,+$ "terms that are negligible" as $N$ tends to infinity; we then justify these approximations, and the resulting limit theorems, in Sects. 5-8.

Heuristic analysis of the acceptance probability
As observed in [17, equation (2.21)], the acceptance probability (2.10) can be expressed as where, using the notation (2.1), the function Q N (x, ξ) can be written as We do not give here a complete expression for the terms r N (x N , ξ N ) and r N Ψ (x N , ξ N ). For the time being it is sufficient to point out that where I N 2 and I N 3 will be defined in (6.10) and (6.11), respectively. Because I N 2 and I N 3 depend on Ψ , r N Ψ contains all the terms where the functional Ψ appears; moreover r N Ψ vanishes when Ψ = 0. The analysis of Sect. 6 (see Lemma 6.4) will show that with our choice of scaling, δ = /N 1/2 , the terms r N and r N Ψ are negligible (for N large). Let us now illustrate the reason behind our choice of scaling. To this end, set δ = /N ζ and observe the following two simple facts: the latter fact being true by the Law of Large Numbers. Neglecting the terms containing Ψ , at step k of the chain we have, formally, (4.10) The above approximation (which, we stress again, is only formal and will be made rigorous in subsequent sections) has been obtained from (4.4) by setting δ = /N ζ and using (4.6) and (4.7), as follows: Looking at the decomposition (4.8)-(4.10) of the function Q N , we can now heuristically explain the reason why we are lead to choose ζ = 1/2 when we start the chain out of stationarity, as opposed to the scaling ζ = 1/3 when the chain is started in stationarity. This is explained in the following remark. -If we start the chain in stationarity, i.e. x N 0 ∼ π N (where π N has been defined in (1.6)), then x k,N ∼ π N for every k ≥ 0. As we have already observed, π N is absolutely continuous with respect to the Gaussian measure π N 0 ∼ N (0, C N ); because all the almost sure properties are preserved under this change of measure, in the stationary regime most of the estimates of interest need to be shown only for x N ∼ π N 0 . In particular if x N ∼ π N 0 then x N can be represented as N (0, 1). 
Therefore we can use the law of large numbers and observe that x N 2 -Suppose we want to study the algorithm in stationarity and we therefore make the choice ζ = 1/3. With the above point in mind, notice that if we start in stationarity then by the Law of Large numbers N −1 N i=1 |ρ i | 2 = S k,N → 1 (as N → ∞, with speed of convergence N −1/2 ). Moreover, if x N ∼ π N 0 , by the Central Limit Theorem the term and converges to a standard Gaussian. With these two observations in place we can then heuristically see that, with the choice ζ = 1/3 the term in (4.10) are negligible as N → ∞ while the terms in (4.9) are O(1). The term in (4.8) can be better understood by looking at the LHS of (4.11) which, with ζ = 1/3 and x N ∼ π N 0 , can be rewritten as (4.12) The expected value of the above expression is zero.  .9); therefore one has the heuristic approximation For more details on the stationary case see [17]. -If instead we start out of stationarity the choice ζ = 1/3 is problematic. Indeed in [6, Lemma 3] the authors study the MALA algorithm to sample from an Ndimensional isotropic Gaussian and show that if the algorithm is started at a point x 0 such that S(0) < 1, then the acceptance probability degenerates to zero. Therefore, the algorithm stays stuck in its initial state and never proceeds to the next move, see [6, Figure 2] (to be more precise, as N increases the algorithm will take longer and longer to get unstuck from its initial state; in the limit, it will never move with probability 1). Therefore the choice ζ = 1/3 cannot be the optimal one (at least not irrespective of the initial state of the chain) if we start out of stationarity. This is still the case in our context and one can heuristically see that the root of the problem lies in the term (4.8). Indeed if out of stationarity we still choose ζ = 1/3 then, like before, (4.9) is still order one and (4.10) is still negligible. 
However, looking at (4.8), if x_0 is such that S(0) < 1 then, when k = 0, (4.8) tends to minus infinity; recalling (4.2), this implies that the acceptance probability of the first move tends to zero. To overcome this issue and make Q^N of order one irrespective of the initial datum, so that the acceptance probability neither degenerates to 0 nor to 1 as N → ∞, we take ζ = 1/2; with this choice the terms in (4.8) are O(1) and all the others are small. Therefore, the intuition guiding the analysis of the non-stationary regime hinges on the fact that, with our scaling, the approximations (4.13) and (4.14) hold; roughly, 1 ∧ e^{Q^N(x^{k,N}, ξ^{k,N})} ≈ α(S^{k,N}), where the function α on the RHS of (4.14) is the one defined in (1.12). The approximation (4.13) is made rigorous in Lemma 6.4, while (4.14) is formalized in Sect. 6.1 (see in particular Proposition 6.1).
- Finally, we mention for completeness that, arguing similarly to the above, if ζ < 1/2 then the acceptance probability of the first move tends to zero when S(0) < 1. If ζ > 1/2 then Q^N → 0, so the acceptance probability tends to one; however, the size of the moves is then small and the algorithm explores the state space slowly.
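The collapse of the first-move acceptance probability under the stationary scaling ζ = 1/3, and its cure under ζ = 1/2, can be reproduced numerically in the isotropic Gaussian example of [6]. The sketch below is illustrative only: it uses the plain (unpreconditioned) MALA proposal y = (1 − h/2)x + √h ξ with h = ℓ/N^ζ for the target N(0, I_N), not the proposal (2.8) of the paper, and the function name and parameter values are ours.

```python
import numpy as np

def first_move_acceptance(N, zeta, S0=0.5, ell=1.0, n_samples=50, seed=0):
    """Mean acceptance probability of the first MALA move for the target
    N(0, I_N), started at a deterministic point with S(0) = |x0|^2/N = S0
    (out of stationarity whenever S0 != 1).  Step size h = ell / N^zeta."""
    rng = np.random.default_rng(seed)
    h = ell * N ** (-zeta)
    x = np.full(N, np.sqrt(S0))                     # |x|^2 / N = S0
    log_q = lambda frm, to: -np.sum((to - (1 - h / 2) * frm) ** 2) / (2 * h)
    accs = []
    for _ in range(n_samples):
        xi = rng.standard_normal(N)
        y = (1 - h / 2) * x + np.sqrt(h) * xi       # MALA proposal for N(0, I_N)
        # Q = log pi(y) - log pi(x) + log q(y -> x) - log q(x -> y)
        Q = -0.5 * (y @ y - x @ x) + log_q(y, x) - log_q(x, y)
        accs.append(min(1.0, float(np.exp(Q))))
    return float(np.mean(accs))

for N in (100, 10_000, 250_000):
    print(f"N={N:>7}: zeta=1/3 -> {first_move_acceptance(N, 1/3):.3f}, "
          f"zeta=1/2 -> {first_move_acceptance(N, 1/2):.3f}")
```

With S(0) = 1/2, the ζ = 1/3 column decays towards zero as N grows, while the ζ = 1/2 column stays bounded away from 0 and 1, consistent with the discussion of the term (4.8).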

Remark 4.2
Notice that in stationarity the function Q^N is, to leading order, independent of ξ; that is, Q^N and ξ are asymptotically independent (see [17, Lemma 4.5]). Intuitively, this is because in stationarity the leading-order term in the expression for Q^N is the term of order δ³. We will show that Q^N and ξ are asymptotically independent out of stationarity as well. In this case the asymptotic independence can, roughly speaking, be motivated by the approximation (4.13), as the interpolation of the chain S^{k,N} converges to a deterministic limit. The asymptotic correlation of Q^N and the noise ξ is analysed in Lemma 6.5.

Remark 4.3
When one employs the more general proposal (1.18), assuming Ψ ≡ 0, the expression for Q^N changes accordingly. In particular, if θ = 1/2 the acceptance probability is exactly one (for every N), i.e. the algorithm samples exactly from the prior; hence there is no need to rescale δ with N.
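As a sanity check of the θ = 1/2 case, consider the simplest instance in which the prior is the standard Gaussian N(0, I_N) and Ψ ≡ 0. The θ = 1/2 (Crank–Nicolson) discretisation of the Ornstein–Uhlenbeck dynamics gives the proposal y = a x + b ξ with a = (1 − δ/2)/(1 + δ/2) and b = √(2δ)/(1 + δ/2); since a² + b² = 1 this autoregression is reversible with respect to the prior, so the log acceptance ratio vanishes identically, for any δ. (The explicit coefficients a, b are our rendering of this standard discretisation, not copied from (1.18).)

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta = 1_000, 0.3                        # no N-dependent rescaling of delta
a = (1 - delta / 2) / (1 + delta / 2)        # theta = 1/2 proposal: y = a*x + b*xi
b = np.sqrt(2 * delta) / (1 + delta / 2)     # note a**2 + b**2 == 1 exactly
x = rng.standard_normal(N)                   # any starting point works
y = a * x + b * rng.standard_normal(N)
# log acceptance ratio for target N(0, I_N) and Gaussian AR(1) proposal
log_q = lambda frm, to: -np.sum((to - a * frm) ** 2) / (2 * b ** 2)
Q = -0.5 * (y @ y - x @ x) + log_q(y, x) - log_q(x, y)
print(abs(Q))  # zero up to floating-point round-off
```

Indeed, log π(x) + log q(x, y) = −(|x|² + |y|² − 2a x·y)/(2b²) is symmetric in (x, y) precisely because a² + b² = 1, so detailed balance holds before any accept/reject step.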

Heuristic derivation of the weak limit of S^{k,N}
Let Y be any function of the random variables ξ^{k,N} and U^{k,N} (introduced in Sect. 2.2), for example the chain x^{k,N} itself. Here and throughout the paper we use E^{x_0}[Y] to denote the expected value of Y with respect to the law of the ξ^{k,N}'s and U^{k,N}'s, with the initial state x_0 of the chain given deterministically; in other words, E^{x_0}[Y] denotes expectation with respect to all the sources of randomness present in Y. We use the notation E_k[Y] for the conditional expectation of Y given x^{k,N}, E_k[Y] := E^{x_0}[Y | x^{k,N}] (we should really write E^N_k in place of E_k, but to improve readability we omit the further index N). Let us now decompose the chain S^{k,N} into its drift and martingale parts. In this subsection we give the heuristics underlying the proof, given in subsequent sections, that the approximate drift b^{k,N} = b^{k,N}(x^{k,N}) converges to b(S^{k,N}), where b is the drift of (1.11), while the approximate diffusion D^{k,N} tends to zero. This formally gives the result of Theorem 4.1. Let us formally argue such a convergence result. By (4.6) and (2.12), we obtain (4.18); therefore, again by (4.6), the second equality being a consequence of the definition of γ^{k,N} (with a reasoning completely analogous to the one in [14, last proof of Appendix A]; see also (4.24)). Using (4.3) (with δ = ℓ/√N), the fact that r^N is negligible and the approximation (4.13), the above gives an approximation which is made rigorous in Lemma 7.5. As for the diffusion coefficient, it is easy to check (see the proof of Lemma 7.2) that the approximate diffusion tends to zero; one can then formally deduce that (the interpolant of) S^{k,N} converges to the ODE limit (1.11).

Heuristic analysis of the limit of the chain x^{k,N}
The drift–martingale decomposition of the chain x^{k,N} is defined analogously, with Θ^{k,N} denoting the approximate drift and L^{k,N} the approximate diffusion. In what follows we use the notation Θ(x, S) for the drift of Eq. (1.10),
with F(x) defined in Lemma 2.1. Again, we want to formally argue that the approximate drift Θ^{k,N}(x^{k,N}) tends to Θ(x^{k,N}, S^{k,N}) and the approximate diffusion L^{k,N} tends to the diffusion coefficient of Eq. (1.10).

Approximate drift
As a preliminary consideration, observe the estimate of [14, equation (5.14)]; this fact will be used throughout the paper, often without mention. Coming to the chain x^{k,N}, a direct calculation based on (2.8) and (2.12) gives (4.26). The addend in (4.26) is asymptotically small (see Lemma 6.5, and notice that this addend would vanish if Q^N and ξ^{k,N} were uncorrelated); hence, using the heuristic approximations (4.13) and (4.14), we arrive at (4.27), whose right-hand side is precisely the limiting drift Θ(x^{k,N}, S^{k,N}).

Approximate diffusion
We now look at the approximate diffusion of the chain x^{k,N}. By definition, (4.28) holds. By (4.27), the second addend in (4.28) is asymptotically small. The remaining quantity is carefully studied in Lemma 6.6; intuitively, however, the heuristic approximation (4.14) (and the asymptotic independence of Q^N and ξ of which (4.14) is a manifestation) suffices to formally derive the limiting diffusion coefficient, i.e. the diffusion coefficient of (1.10).

Continuous mapping argument
In this section we outline the argument underlying the proofs of our main results. In particular, the proofs of Theorems 4.1 and 4.2 hinge on the continuous mapping arguments that we illustrate in Sects. 5.1 and 5.2, respectively. The details of the proofs are deferred to the next three sections: Sect. 6 contains some preliminary results that we employ in both proofs, Sect. 7 contains the proof of Theorem 4.1 and Sect. 8 that of Theorem 4.2.

Continuous mapping argument for (3.3)
Iterating the drift–martingale decomposition of the chain S^{k,N}, the expression for S^{(N)}(t) can be rewritten as (5.2), having set ŵ^N(t) := e^N(t) + w^N(t). (5.3)

Equation (5.2) shows that
where J_2 is the Itô map defined in the statement of Theorem 3.3. By the continuity of the map J_2, if we show that ŵ^N converges in probability in C([0, T]; R) to zero, then S^{(N)}(t) converges in probability to the solution of the ODE (1.11). We prove the convergence of ŵ^N to zero in Sect. 7. In view of (5.3), we show the convergence in probability of ŵ^N to zero by proving that both e^N (Lemma 7.1) and w^N (Lemma 7.2) converge to zero in L²(Ω; C([0, T]; R)). Because {S^{0,N}}_{N∈N} is a deterministic sequence that converges to S_0, we then have that (S^{0,N}, ŵ^N) converges in probability to (S_0, 0).

Continuous mapping argument for (3.2)
We now consider the chain {x^{k,N}}_{k∈N} ⊆ H^s, defined in (2.14), and proceed in analogy with the chain {S^{k,N}}_{k∈N}. We start by recalling the definition of the continuous interpolant x^{(N)}, Eq. (1.9), and the notation introduced at the beginning of Sect. 4.3. An argument analogous to the one used to derive (5.2) shows that a similar representation holds for any t ∈ [t_k, t_{k+1}). Because {x^{0,N}}_{N∈N} is a deterministic sequence that converges to x_0, the above three steps (and Slutsky's Theorem) imply that (x^{0,N}, η^N) converges weakly to (x_0, η).

Preliminary estimates and analysis of the acceptance probability
This section gathers several technical results. In Lemma 6.1 we study the size of the jumps of the chain. Lemma 6.2 contains uniform bounds on the moments of the chains {x^{k,N}}_{k∈N} and {S^{k,N}}_{k∈N}, which are much needed in Sects. 7 and 8. In Sect. 6.1 we detail the analysis of the acceptance probability; this allows us to quantify the correlations between γ^{k,N} and the noise ξ^{k,N} in Sect. 6.2. Throughout the paper, when referring to the function Q^N defined in (4.3), we use interchangeably the notation Q^N(x^{k,N}, y^{k,N}) and Q^N(x^{k,N}, ξ^{k,N}) (as we have already remarked, given x^{k,N}, the proposal y^{k,N} is only a function of ξ^{k,N}).

Proof of Lemma 6.1 By definition of the proposal y^{k,N}, Eq. (2.8), and using (2.25) and (2.27), we obtain (6.1). Equation (6.2) follows similarly: since Σ_{j=1}^N (ξ^{k,N}_j)² has chi-squared law, applying Stirling's formula for the Gamma function and then using (2.26), the desired bound follows. Finally, recalling the definition of the chain, Eq. (2.12), the bounds (6.3) and (6.4) are clearly a consequence of (6.1) and (6.2), respectively, since either x^{k+1,N} = y^{k,N} (if the proposed move is accepted) or x^{k+1,N} = x^{k,N} (if the move is rejected).

Proof of Lemma 6.2 The proof of this lemma can be found in Appendix C.
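Lemma 6.1 controls the size of a single proposed jump. In an illustrative unpreconditioned Gaussian setting (plain MALA proposal y = (1 − h/2)x + √h ξ for the target N(0, I_N), h = ℓ/√N; the function below is ours, not the paper's proposal (2.8)), the mean squared jump per coordinate is readily checked to be of order h, hence vanishing as N → ∞:

```python
import numpy as np

def mean_sq_jump_per_coord(N, ell=1.0, S0=0.5, n_samples=100, seed=0):
    """E|y - x|^2 / N for the proposal y = (1 - h/2) x + sqrt(h) xi,
    h = ell / sqrt(N), started from a point with |x|^2/N = S0."""
    rng = np.random.default_rng(seed)
    h = ell / np.sqrt(N)
    x = np.full(N, np.sqrt(S0))
    vals = [np.sum(((1 - h / 2) * x + np.sqrt(h) * rng.standard_normal(N) - x) ** 2) / N
            for _ in range(n_samples)]
    return float(np.mean(vals)), h

for N in (100, 10_000, 250_000):
    msj, h = mean_sq_jump_per_coord(N)
    print(f"N={N:>7}: mean |y-x|^2/N = {msj:.5f}   (h = {h:.5f})")
```

Indeed E|y − x|²/N = h + h²S(0)/4 in this toy model, so the printed values track h closely.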

Acceptance probability
The main result of this section is Proposition 6.1, which we obtain as a consequence of Lemma 6.3 (below) and Lemma 6.2. Proposition 6.1 formalizes the heuristic approximation (4.14).

Lemma 6.3 (Acceptance probability) Let Assumption 2.1 hold and recall the definitions (4.2) and (1.12). Then the following bound holds. Before proving Lemma 6.3, we state Proposition 6.1.

Proposition 6.1 If Assumption 2.1 holds then
Proof This is a corollary of Lemmas 6.3 and 6.2.
Proof of Lemma 6.3 The function z ↦ 1 ∧ e^z on R is globally Lipschitz with Lipschitz constant 1. Therefore, by (1.12) and (4.2), the result is a consequence of (6.15) below.
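The Lipschitz property invoked above is elementary: for z ≤ 0 the map z ↦ 1 ∧ e^z has derivative e^z ≤ 1, and for z > 0 it is constant. A direct numerical check on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda z: np.minimum(1.0, np.exp(z))     # z -> 1 ∧ e^z
a = rng.uniform(-10, 10, size=100_000)
b = rng.uniform(-10, 10, size=100_000)
ratio = np.abs(f(a) - f(b)) / np.abs(a - b)  # difference quotients
print(ratio.max())  # at most 1, up to round-off
```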
To analyse the acceptance probability it is convenient to decompose Q^N as in (6.9)–(6.11).

Lemma 6.4 Let Assumption 2.1 hold. Then, with the notation introduced above, the bounds (6.12)–(6.14) hold.

Proof of Lemma 6.4 We consecutively prove the three bounds in the statement.
• Proof of (6.12). Using (2.8), we rewrite I^N_1 and expand, obtaining (6.16), where the difference (r^N_Ψ − r^N) is defined in (4.5). For the reader's convenience we rearrange (4.5) below. We now bound all of the resulting terms, starting from (6.19). From (6.19), (6.20), (2.26) and the observations just made, the first bound follows. By (6.17), where in the last equality we have used the fact that {ξ^{k,N}_i : i = 1, ..., N} are independent, zero-mean, unit-variance normal random variables (independent of x^{k,N}) together with (4.6), we control the corresponding term; an analogous estimate applies to r^N_x. Lastly, since Σ_{j=1}^N ξ_j² has chi-squared law, E_k|r̃^N|² ≲ Var(N^{-1} Σ_{j=1}^N ξ_j²) ≲ N^{-1}, by (6.5). Combining all of the above, we obtain the desired bound.

Correlations between acceptance probability and noise ξ k,N
Recall the definition of γ^{k,N}, Eq. (2.13), and let ε^{k,N} be defined by (6.24). The study of the properties of ε^{k,N} is the object of the next two lemmata, which play a central role in the analysis: Lemma 6.5 (together with Lemma 6.2) establishes the decay of correlations between the acceptance probability and the noise ξ^{k,N}; Lemma 6.6 formalizes the heuristic arguments presented in Sect. 4.3.2.

Lemma 6.5 If Assumption 2.1 holds, then
Therefore, (6.26) holds.

Lemma 6.6 Let Assumption 2.1 hold. Then, with the notation introduced so far, the corresponding bound holds. The proofs of the above lemmata can be found in Appendix B. Notice that if ξ^{k,N} and γ^{k,N} (equivalently, ξ^{k,N} and Q^N) were uncorrelated, the statements of Lemmas 6.5 and 6.6 would be trivially true.

Analysis of the drift
In view of what follows, it is convenient to introduce the piecewise constant interpolant of the chain {S^{k,N}}_{k∈N}, defined in (7.1).

Proof of Lemma 7.1 From (7.1), for any t_k ≤ t < t_{k+1} we can decompose e^N(t) accordingly. The result is then a consequence of Lemmas 7.3 and 7.4 below, which we first state and then consecutively prove.
Using Lemma 7.5 below, we obtain the required estimate; taking expectations on both sides and applying Lemma 6.2 completes the proof.
Proof of Lemma 7.5 From (4.19), (4.2), (1.12) and (1.14), and since |α^N(x^{k,N}, y^{k,N})| ≤ 1 while Ỹ^N_k is a function of x^{k,N} only, we arrive at the estimate (7.4). From the definition of I^N_1, Eq. (6.9), we bound the first addend of (7.4); as for the second addend, Lemma 6.3 gives the required estimate. Combining the two bounds with (7.4) gives the desired result.
Proof of Lemma 7.4 By Jensen's inequality and the fact that b is globally Lipschitz, from (4.18) and (4.6), and combining the resulting bounds with (6.12), taking expectations and applying Lemma 6.2 concludes the proof.

Analysis of noise
Proof of Lemma 7.2 Notice that we can write w^N as the linear interpolation of the array {M^{k,N}}. It follows from the definition of D^{k,N} in (4.17) and Lemma 6.2 that {M^{k,N}}_{k≥1} is a discrete-time P^{x_0}-martingale with respect to the filtration generated by {x^{k,N}}_{k≥1}. Doob's L^p inequality then implies a maximal bound in which the equality follows from the independence of the increments of {M^{k,N}}_{k≥1}. From the definition of D^{k,N}, Eq. (4.17), we then obtain an estimate whose last inequality is a consequence of (7.6) and Lemma 6.2. The result follows immediately.

Proof of Theorem 4.2
The idea behind the proof is the same as in Sect. 7. First we introduce the piecewise constant interpolant of the chain {x^{k,N}}_{k∈N}.

Analysis of drift
Therefore, we can decompose d^N(t) accordingly.

Proof of Lemma 8.6 Recalling (4.26) and (6.24), we obtain a decomposition in which the function F has been defined in Lemma 2.1. The term on the RHS of (8.3) has been studied in Lemma 6.5. To estimate the addend in (8.4) we use (2.25), the boundedness of α and Lemma 6.3; a straightforward calculation, together with the definitions of Ψ^N and ∇Ψ^N, Eqs. (1.5) and (2.23) respectively, and with (2.24) used in the last inequality, then gives the required bound. The statement is now a consequence of Lemma 6.2.
Proof of Lemma 8.4 Following steps analogous to those taken in the proof of Lemma 7.3, the proof is a direct consequence of Lemma 8.6, after observing that the summation Σ_{j=N+1}^∞ (λ_j j^s)^4 is the tail of a convergent series, hence it tends to zero as N → ∞.
Proof of Lemma 8.5 By the definition of Θ, Eq. (4.23), applying (2.20) and (2.25) and using the fact that h is globally Lipschitz and bounded, we obtain a first estimate. Thus, from the definitions (1.16), (7.1), (1.9) and (8.1), if t_k ≤ t < t_{k+1}, applying (6.3) and (7.6) one concludes. The remainder of the proof is analogous to the proof of Lemma 7.4.
Proof of Lemma 8.2 For an arbitrary but fixed ε > 0, we need to argue that the relevant probability vanishes. Using (2.21) and the fact that ‖x^{(N)}(t)‖_s ≤ ‖x^{k,N}‖_s + ‖x^{k+1,N}‖_s for any t ∈ [t_k, t_{k+1}) (a simple consequence of (1.9)), together with Markov's inequality and Lemma 6.2, given any δ > 0 it is straightforward to find a constant M such that P(‖u^N‖ > M) ≤ δ for every N ∈ N. Given that δ was arbitrary, the result then follows from the fact that S^{(N)} converges in probability to S (Theorem 4.1).

Analysis of noise
The proof of Lemma 8.3 is based on [14, Lemma 8.9]. For the reader's convenience, we restate [14, Lemma 8.9] below as Lemma 8.7; in order to state it, let us introduce the following notation and definitions. (i) We need to show that (8.5) holds in probability. By (4.28), if we prove (8.6) and (8.7), then (8.5) follows. We start by proving (8.6): the last inequality in the relevant estimate follows from (2.25) and (6.25); this, together with (6.7), proves (8.6). We now come to (8.7). The first two addends tend to zero in L¹ as N tends to infinity, due to (2.25), (2.27) and Lemma 6.2. As for the third addend, we decompose it further, obtaining the term (8.8), which involves Trace_{H^s}(C_s), α(S^{k,N}) and h′(S(u)). Convergence to zero in L¹ of the first term follows from Lemmas 6.2 and 6.6. As for the term in (8.8), we use an elementary identity to split it further, obtaining (8.9) and (8.10). Convergence (in L¹) of (8.9) to zero follows from the same calculations leading to (7.6), the global Lipschitz property of h′, and Lemma 6.2. The addend in (8.10) tends to zero in probability since S^{(N)} tends to S in probability in C([0, T]; R) (Theorem 4.1), and the third addend is clearly small. The limit (8.7) then follows. (ii) Condition (ii) of Lemma 8.7 can be shown to hold with similar calculations, so we do not give the details. (iii) Using (6.3), the last bound follows from a calculation completely analogous to the one in [14, Section 8.2]; we omit the details.
As for (2.26), the bound follows using (2.17).

B Appendix: Proofs of Lemmas 6.5 and 6.6

To prove Lemmas 6.5 and 6.6 we decompose Q^N(x^{k,N}, ξ^{k,N}) into the sum of a term Q^N_j that depends on ξ^{k,N}_j (the j-th component of ξ^{k,N}) and a term Q^N_{j,⊥} that is independent of ξ_j. We recall that I^N_2 and I^N_3 have been defined in Sect. 6; the decomposition then follows using (6.8).

Proof of Lemma 6.5 (6.26) is a consequence of the definition (6.24) and the estimate (6.25); thus, all we have to do is establish the latter. Recalling that {φ̂_j}_{j∈N} := {j^{-s} φ_j}_{j∈N} is an orthonormal basis for H^s, we proceed as in the proof of [17, Lemma 4.7], where Q^N_j has been defined in (B.1). The second inequality in the resulting estimate follows from the boundedness of the sequence {λ_j}, (6.13) and (6.14). Summing over j and applying (2.24) we obtain (6.25).
Proof of Lemma 6.6 By definition of ε^{k,N}, and because γ^{k,N} = [γ^{k,N}]² (as γ^{k,N} can only take the values 0 and 1), we can rewrite the quantity of interest. Using this, the Lipschitzianity of the function s ↦ 1 ∧ e^s, (B.2) and the independence of Q^N_{j,⊥} and ξ^{k,N}_j, we arrive at the two addends (B.4) and (B.5), which we now proceed to bound, starting with the latter. For (B.5), the last inequality follows from (2.25), (2.16), the boundedness of the sequence {λ_j}_{j∈N} and Young's inequality (more precisely, the so-called Young's inequality "with ε"), as in (B.7). This concludes the analysis of the term (B.5). As for the term (B.4), exploiting the fact that s ↦ 1 ∧ e^s is globally Lipschitz, using Lemma 6.4 and manipulations of the same type as in (B.7), it follows that |1 ∧ e^{Q^N_{j,⊥}} − α(S^{k,N})| ≲ (1 + S^{k,N} + ‖x^{k,N}‖²_s)/√N.
C Appendix: Uniform bounds on the moments of S^{k,N} and x^{k,N}

Proof of Lemma 6.2 To prove both bounds, we use a strategy analogous to the one used in [18, Proof of Lemma 9]. Let {A_k : k ∈ N} be any sequence of real numbers and suppose that there exists a constant C ≥ 0 (independent of k) such that (C.1) holds. We start by showing that in this case A_k ≤ e^{CT}(A_0 + CT), uniformly over k = 0, ..., [T√N]. Indeed, iterating (C.1) and using the fact that 1 + z ≤ e^z for every z ∈ R, the claim follows for all k = 0, ..., [T√N]. With this preliminary observation, we can now prove (6.6)–(6.7).
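The preliminary observation above is a discrete Grönwall-type bound. Its content can be checked numerically; the recursion below is our rendering of the assumed form of (C.1), namely A_{k+1} ≤ (1 + C/√N)A_k + C/√N, which is the form suggested by the stated conclusion.

```python
import numpy as np

def discrete_gronwall_max_ratio(A0, C, T, N):
    """Iterate A_{k+1} = (1 + C/sqrt(N)) A_k + C/sqrt(N) (the worst case
    allowed by the assumed form of (C.1)) and return the largest ratio
    A_k / [e^{CT} (A_0 + CT)] over k = 0, ..., [T sqrt(N)]."""
    c = C / np.sqrt(N)
    bound = np.exp(C * T) * (A0 + C * T)
    A, worst = A0, 0.0
    for _ in range(int(T * np.sqrt(N)) + 1):
        worst = max(worst, A / bound)
        A = (1 + c) * A + c
    return worst

# the claimed bound should dominate the iterates, i.e. the ratio stays <= 1
for (A0, C, T, N) in [(0.0, 1.0, 1.0, 100), (2.0, 0.5, 3.0, 10_000), (1.0, 2.0, 2.0, 25)]:
    print(A0, C, T, N, "->", round(discrete_gronwall_max_ratio(A0, C, T, N), 4))
```

The bound is crude but dimension-free in the right way: the N^{-1/2} size of the per-step increment exactly compensates the [T√N] number of steps, which is what makes the moment bounds uniform over the diffusive time horizon.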
(i) Proof of (6.6). To prove (6.6) we only need to show that (C.1) holds (for some constant C > 0 independent of N and k) for the sequence A_k = E^{x_0}|S^{k,N}|^q. By the definition of S^{k,N}, it is enough to argue that each of the terms on the right-hand side of the resulting expansion is bounded by (C/√N)(1 + E|S^{k,N}|^q). By the Cauchy–Schwarz inequality for the scalar product ⟨·,·⟩_{C^N}, and using Young's inequality, we obtain for the generic term of the expansion a bound of order N^{-(m+l/2)/2}. Now observe that (m + l/2)/2 ≥ 1/2 except when (n, m, l) = (q, 0, 0) or (n, m, l) = (q − 1, 0, 1). Therefore we have shown the desired bound for all the terms in the expansion (C.2), except the one with (n, m, l) = (q − 1, 0, 1). To study the latter term, we recall that γ^{k,N} ∈ {0, 1} and use the definition of the chain, Eqs. (2.8) and (2.12). Combining (2.26) with the Cauchy–Schwarz inequality, and recalling that ⟨x^{k,N}, (C^N)^{1/2} ξ^{k,N}⟩_{C^N}, conditioned on x^{k,N}, is a linear combination of zero-mean Gaussian random variables, we can put the above together, take expectations and conclude; (6.6) follows. (ii) Proof of (6.7). This is very similar to the proof of (6.6), so we only sketch it. Just as before, it is enough to establish the analogous bound, which holds for all (n, m, l) except (n, m, l) = (q − 1, 0, 1). As before, to study the latter case we observe that the penultimate inequality follows from the Cauchy–Schwarz inequality, (2.25) and the fact that γ^{k,N} ∈ {0, 1}, while the last inequality follows from Lemma 6.5. This concludes the proof.
Remark C.1 In [17] the authors derived the diffusion limit for the chain under weaker assumptions on the potential Ψ than those we use in this paper: essentially, they assume that Ψ is quadratically bounded, while we assume that it is linearly bounded. If Ψ were quadratically bounded, the proof of Lemma 6.5 would become considerably more involved. We observe explicitly that the statement of Lemma 6.5 is of paramount importance in establishing the uniform bound on the moments of the chain x^{k,N} contained in Lemma 6.2. In [17] obtaining such bounds is not an issue, since the authors study the chain in its stationary regime; in other words, in [17] the law of x^{k,N} is independent of k, and thus the uniform bounds on the moments of x^{k,N} and S^{k,N} hold automatically for target measures of the form considered there (see also the first bullet point of Remark 4.1).