1 Introduction

A popular family of methods for Bayesian parameterization in data analytics is derived from Markov chain Monte Carlo (MCMC) methods, including Hamiltonian (or hybrid) Monte Carlo (HMC) (Duane et al. 1987; Neal 2011; Monnahan et al. 2016) and the Metropolis adjusted Langevin algorithm (MALA) (Rossky et al. 1978; Bou-Rabee and Vanden-Eijnden 2010; Roberts and Tweedie 1996). These methods involve proposals that are based on approximating a continuous-time (stochastic) dynamics that exactly preserves the target (posterior) density \(\pi \), followed by an accept/reject step to correct for approximation errors.

Efficient parameterization of the stochastic differential equations used in these procedures has the potential to greatly accelerate their convergence, particularly when the target density is poorly scaled, i.e. when the Hessian matrix of the logarithm of the density has a large condition number (an example is given in “Appendix 1”). In precise analogy with well-established strategies in optimization (see e.g. Sun and Yuan 2006), the solution to conditioning problems in the sampling context is to find a well-chosen change of variables (preconditioning) for the system, such that the natural scales of the transformed system are roughly commensurate.

In this article, we discuss an approach to dynamic preconditioning based on simultaneously evolving an ensemble of parallel MCMC simulations, each of which is referred to as a “walker” or “particle”. As we will show, the walkers provide information that can greatly improve the efficiency of MCMC methods. There is a long history of using multiple parallel simulations to improve MCMC calculations (see e.g. (Gilks et al. 1994; ter Braak 2006; Goodman and Weare 2010; Andrés Christen and Fox 2010; Jasra et al. 2007; Cappé et al. 2004; Iba 2001; Hairer and Weare 2014; Hammersley and Morton 1954; Liu 2002; Rosenbluth and Rosenbluth 1955)). Many of these methods rely on occasional duplication or removal of walkers and reweighting of samples to speed sampling of densities with multiple modes or to compute tail averages. The schemes proposed in this article are more similar to methods introduced in (Gilks et al. 1994; ter Braak 2006; Andrés Christen and Fox 2010; Goodman and Weare 2010) that address conditioning issues using walker proposal moves informed by the positions of other walkers in the ensemble. These methods are not designed to directly address multimodality and do not involve any reweighting of samples. Our approach differs in that proposal moves are derived from time discretization of an SDE whose solutions exactly preserve \(\pi \) (or more precisely the joint density of an ensemble of independent random variables drawn from \(\pi \)). This results in ensemble MCMC schemes that converge rapidly on poorly conditioned distributions even in relatively high-dimensional sample spaces and when the details of the conditioning problems depend on position in sample space.

Our starting point is the discrete approximation of a system of SDEs for a state vector \(x\in \mathcal {D}\subset \mathbb {R}^D\),

$$\begin{aligned} \dot{x} = (J(x)+S(x))\nabla \log \pi (x) + \mathrm {div}( J(x) + S(x) ) + \sqrt{ 2S(x)}\, \eta (t) \end{aligned}$$
(1)

where J(x) and S(x) are skew-symmetric and symmetric positive semi-definite \(D\times D\) matrices, respectively, with \(\eta (t)\) representing a vector of independent Gaussian white noise components. In our sampling schemes, each walker generates a discrete-time approximation of (1) with its own particular choice of J, which encodes a localized and regularized sample covariance matrix computed across the ensemble of walkers and thus incorporates information about the target density \(\pi \) into the evolution of each walker.

Many existing sampling methods can be characterized as time discretizations of (1) (Ma et al. 2015). The matrix S is sometimes referred to as a mass matrix (though we reserve that term for a different matrix) and is often chosen to be diagonal. More general modifications of S (with \(J=0\)) to improve convergence have been considered in the Monte Carlo literature, dating at least to (Bennett 1975). This idea has been the focus of renewed attention in statistics, and several recent approaches concerning this or related ideas have been proposed (Martin et al. 2012; Girolami and Calderhead 2011a). Though modification of S appears to be much more common in practice, several authors have considered the effect that the choice of J and S has on the ergodic properties of the solution to (1) from a more theoretical perspective (see e.g. (Rey-Bellet and Spiliopoulos 2015; Duncan et al. 2016; Hwang et al. 2005, 1993)). In this paper, we are concerned with motivating and presenting a particular choice of S and J based on the ensemble framework mentioned above, yielding practical and efficient sampling schemes. We demonstrate that the choice of J and S has important ramifications for the stability of the discretization scheme as well as for the overall sampling efficiency. This interplay will be explored in future work.

2 Preconditioning strategies for sampling

As in any MCMC scheme, the goal is to estimate the average \(\mathrm {E}[f] = \int f(x) \pi (x)\mathrm {d}x\) by a trajectory average of the form

$$\begin{aligned} \overline{f}_N = \frac{1}{N}\sum _{n=0}^{N-1} f(x^{(n)}), \end{aligned}$$

for large N. In many cases, we can expect the error in an MCMC scheme to satisfy a central limit theorem: \( \sqrt{N}\left( \overline{f}_N - \mathrm {E}[f] \right) \xrightarrow {dist} N(0, \tau \sigma ^2), \) where \(\sigma ^2\) is the variance of f under \(\pi \) (and is independent of the particular MCMC scheme), the \(\tau \) the integrated autocorrelation time (IAT) which is often used to quantify the efficiency of an MCMC approach (see “Appendix 1”).

To emphasize an analogy with optimization, for the moment assume that \(J=0\). The steepest descent algorithm of optimization corresponds to an Euler–Maruyama discretization of the so-called overdamped Langevin (or Brownian) dynamics (Milstein and Tretyakov 2004; Pavliotis 2014),

$$\begin{aligned} x^{(n+1)} = x^{(n)} + {\delta t}\, \nabla \log (\pi (x^{(n)})) + \sqrt{2 {\delta t}}\, \mathrm {R}^{(n)} \end{aligned}$$
(2)

where \(\mathrm {R}\sim N(0,I)\). Discretization introduces an \(O({\delta t})\) error in the sampled invariant distribution so a Metropolis–Hastings accept/reject step may be incorporated in order to recover the correct statistics (see the MALA algorithm (Rossky et al. 1978)) when time discretization error dominates sampling error. Reducing \({\delta t}\) gives a more accurate approximation of the evolution of the dynamics and boosts the acceptance rate.
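To make the connection concrete, a minimal Python sketch of the Euler–Maruyama step (2), together with the Metropolis–Hastings correction that turns it into MALA, might read as follows; the interface and function names are illustrative, not part of any package discussed here.

```python
import numpy as np

def ula_step(x, grad_log_pi, dt, rng):
    """One Euler-Maruyama step of the overdamped dynamics, as in (2)."""
    return x + dt * grad_log_pi(x) + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)

def mala_step(x, log_pi, grad_log_pi, dt, rng):
    """Propose with (2), then Metropolize to remove the O(dt) sampling bias."""
    y = ula_step(x, grad_log_pi, dt, rng)

    def log_q(b, a):
        # log density (up to a constant) of proposing b starting from a:
        # the Euler proposal is Gaussian with mean a + dt*grad and covariance 2*dt*I
        mu = a + dt * grad_log_pi(a)
        return -np.sum((b - mu) ** 2) / (4.0 * dt)

    log_alpha = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
    return y if np.log(rng.uniform()) < log_alpha else x
```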

When \(\pi \) is Gaussian with covariance \({\varSigma }\), one can easily show that the cost to achieve a fixed accuracy depends on the condition number \(\kappa = \lambda _\mathrm{max}/\lambda _\mathrm{min}\), where \(\lambda _\mathrm{max}\) and \(\lambda _\mathrm{min}\) are the largest and smallest eigenvalues of \({\varSigma }\). Indeed, one finds that the worst-case IAT \(\tau \) for the scheme in (2) over observables of the form \(v^\text { T} x\) is \(\tau = \kappa -1\) (see “Appendix 1”). In this formula, the eigenvalue \(\lambda _\mathrm{min}\) arises from the discretization stability constraint on the stepsize parameter \({\delta t}\), and \(\lambda _\mathrm{max}\) appears because the direction of the corresponding eigenvector is the slowest to relax for the continuous-time process. The presence of \(\lambda _\mathrm{min}\) in this formula indicates that analysis of the continuous-time scheme (1) (i.e. neglect of the discretization stability constraint) can be misleading when considering the effects of poor conditioning on sampling efficiency. Since the central limit theorem suggests that the error after N steps of the scheme is roughly proportional to \( \sqrt{\tau /N}\), the cost to achieve a fixed accuracy is again roughly proportional to \(\kappa \).

Continuing to use \(J=0\), taking \(S(x)=-(\nabla ^2 \log (\pi (x)) )^{-1}\) in (1) and discretizing with timestep \({\delta t}>0\), we obtain a stochastic analogue of Newton’s method:

$$\begin{aligned} x^{(n+1)} = x^{(n)} + {\delta t}\,S(x^{(n)}) \nabla \log (\pi (x^{(n)})) + {\delta t}\,\mathrm {div}\left( S(x^{(n)})\right) + \sqrt{2 {\delta t}\,S (x^{(n)})}\, \mathrm {R}^{(n)}. \end{aligned}$$
(3)

Schemes of a similar form, though neglecting the \(\mathrm {div}(S)\) term (and therefore requiring Metropolization), have been explored recently in (Martin et al. 2012). Metropolization may also be used to correct the \(O({\delta t})\) sampling bias introduced by the discretization. It can be shown that the scheme is affine invariant in the sense that when it is applied to sampling \(\pi _{A,v}(x) \propto \pi (Ax+v)\), for an invertible matrix A and vector v, it generates a sequence of samples \(y^{(n)}\) such that \(x^{(n)} = Ay^{(n)} +v\) has exactly the same distribution as the sequence of samples generated by the method when applied to \(\pi \) (see Goodman and Weare 2010 for a detailed discussion of the role of affine invariance in the design of MCMC methods for poorly conditioned problems). We therefore expect that when this method can be applied (e.g. when the Hessian is positive definite), it should be effective on poorly scaled problems. This affine invariance property is shared by the deterministic Newton’s method (obtained from (3) by dropping the noise and matrix divergence terms) and is responsible for its good performance when applied to optimizing poorly scaled functions (e.g. when the condition number of the Hessian is large). We stress that the key to the usefulness of either the deterministic or stochastic Newton’s method is that one does not need to make an explicit choice of the matrix A or the vector v. Since the performance is independent of the choices of A and v, we may as well assume they are chosen to improve the conditioning of the problem.

Due to the presence of the divergence term in the continuous dynamics, discretization will require evaluation of first-, second- and third-order derivatives of \(\log (\pi (x))\), making the scheme prohibitively expensive for many models. To avoid this difficulty, one can estimate the divergence term using an extra evaluation of the Hessian (see “Appendix 6”), or omit the divergence term and rely on a Metropolization step to ensure correct sampling. Regardless of how this term is handled, the system (3), unlike (2), involves multiplicative noise (the magnitude of the noise process depends upon the state of the system), which is known to introduce complexity (and reduce accuracy) in numerical discretization (Milstein and Tretyakov 2004).

More fundamentally, complex sampling problems will exhibit regions of substantial probability where the Hessian fails to be positive definite. A simple (and often more robust) alternative is \(S = {\varSigma }\), where \({\varSigma }\) is the covariance matrix of \(\pi \) (even when \(\pi \) is not Gaussian), which is positive definite. It can be shown that the iteration in (3) with this choice is again affine invariant. The resulting scheme, which can be regarded as a simple quasi-Newton type approach, is closely related to adaptive MCMC approaches (Roberts and Rosenthal 2007; Haario et al. 2001). On the other hand, because this choice of S does not depend on position, the scheme can be expected to perform poorly on problems for which the conditioning is dramatically different in different regions of space (e.g. the Hessian has high condition number and its eigenvectors are strongly position dependent), see Fig. 1. These observations suggest a choice of S corresponding to a notion of local covariance.

Fig. 1

We plot three examples of posterior distributions that might be described as poorly scaled. Distribution (a) has a scaling that can be removed through a linear change of variables, whereas the useful scaling information in distributions (b) and (c) depends on the location in space. Proposals can benefit from taking into account local scaling behaviour over global covariance information (the condition number of the covariance matrix in both (b) and (c) is unity)

While a notion of local covariance will be central to the schemes we eventually introduce, we choose to incorporate that information not through S in (1), but through the skew-symmetric matrix J in that equation. In the remainder of this section, we discuss how the choices of S described so far, and the corresponding properties of (3), have analogues in choices of J within a family of so-called underdamped Langevin schemes that we introduce next (Pavliotis 2014; Leimkuhler and Matthews 2015).

A popular way to obtain an MCMC scheme with decreased IAT relative to the overdamped scheme in (2) is to introduce “inertia”. We extend the space by writing our state \(x=(q,p)^\text { T} \in \mathcal {D}\times \mathbb {R}^D\subset \mathbb {R}^{2D}\), with the target distribution

$$\begin{aligned} \hat{\pi }(x) = \hat{\pi }(q,p) = \pi (q) \varphi (p),\quad \int \hat{\pi }(q,p) \mathrm {d}p = \pi (q). \end{aligned}$$
(4)

The distribution of interest \(\pi (q)\) is recovered from \(\hat{\pi }(q,p)\) as the marginal distribution of the position vector q. For the distribution \(\varphi (p)\) we will follow common practice and use \(\varphi (p) \propto \exp (-\Vert p\Vert ^2/2)\). With this extension of the space, we recover the standard underdamped form of Langevin dynamics using

$$\begin{aligned} J = \left[ \begin{array}{cc} 0 &{} -I_D \\ I_D &{} 0 \end{array}\right] , \qquad S = \left[ \begin{array}{cc} 0 &{} 0 \\ 0 &{} \gamma I_D \end{array}\right] \end{aligned}$$
(5)

in equation (1), where \(I_D\) is the \(D\times D\) identity matrix and \(\gamma \) is a positive constant (Milstein and Tretyakov 2004). Recent work (Dalalyan 2016; Durmus and Moulines 2016), especially in connection with molecular dynamics (Leimkuhler et al. 2015), has examined efficient ways to discretize Langevin dynamics while minimizing the error in sampling \(\pi (q)\).

To incorporate information such as the Hessian matrix or the covariance matrix (or local covariance matrices) in the underdamped Langevin scheme, we focus on choices of J and S as follows:

$$\begin{aligned} J(x) = \left[ \begin{array}{cc} 0 &{} -B(q) \\ B(q)^\text { T} &{} 0 \end{array}\right] , \qquad S = \left[ \begin{array}{cc} 0 &{} 0 \\ 0 &{} \gamma I_D \end{array}\right] , \end{aligned}$$

where \(B(q)B^{T}{(q)}\) is a symmetric positive definite matrix, resulting in the system

$$\begin{aligned} \dot{q}&= B(q)\, p, \\ \dot{p}&= B(q)^\text { T} \nabla \log (\pi (q)) + \mathrm {div}( B(q)^\text { T} ) - \gamma p + \sqrt{2\gamma }\, \eta (t). \end{aligned}$$
(6)

Discretization of the stochastic system may be derived by mimicking the BAOAB scheme (Leimkuhler et al. 2015). Given a stepsize \({\delta t}>0\), define \(\alpha = \exp (-\gamma {\delta t})\) and approximate the step from \(t_n\) to \(t_{n+1}=t_n + {\delta t}\) by the formulas

$$\begin{aligned} p^{(n+1/2)} = p^{(n)} + \tfrac{1}{2}{\delta t}\left[ F(q^{(n)}) + \mathrm {div}\left( B(q^{(n)})^\text { T}\right) \right] \end{aligned}$$
(7a)

$$\begin{aligned} q^{(n+1/2)} = q^{(n)} + \tfrac{1}{2}{\delta t}\, B(q^{(n+1/2)})\, p^{(n+1/2)} \end{aligned}$$
(7b)

$$\begin{aligned} \hat{p}^{(n+1/2)} = \alpha \, p^{(n+1/2)} + \sqrt{1-\alpha ^2}\, \mathrm {R}^{(n)} \end{aligned}$$
(7c)

$$\begin{aligned} q^{(n+1)} = q^{(n+1/2)} + \tfrac{1}{2}{\delta t}\, B(q^{(n+1/2)})\, \hat{p}^{(n+1/2)} \end{aligned}$$
(7d)

$$\begin{aligned} p^{(n+1)} = \hat{p}^{(n+1/2)} + \tfrac{1}{2}{\delta t}\left[ F(q^{(n+1)}) + \mathrm {div}\left( B(q^{(n+1)})^\text { T}\right) \right] \end{aligned}$$
(7e)

where \(\mathrm {R}\sim N(0,I_D)\) and \(F(q) = B(q)^\text { T}\nabla \log \pi (q)\), with an implicit equation in (7b). The choice of matrix \(B B^\text { T}\) introduced in the next section will be a sum of the identity and a small (relative to the dimension \(D\)) number of rank 1 matrices, alleviating storage demands and reducing the cost of all calculations involving B to linear in \(D\). As described in “Appendix 2”, schemes of the form in (7) can also be used to generate proposals in a Metropolis–Hastings framework to strictly enforce a condition that, like detailed balance, guarantees that \(\pi \) is exactly preserved.
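For concreteness, here is a minimal Python sketch of a single step of the form (7), under the simplifying assumption that B is a constant matrix, so that the implicit equation in (7b) becomes explicit and \(\mathrm {div}(B^\text { T})=0\); the names are illustrative rather than part of our released code.

```python
import numpy as np

def baoab_step(q, p, grad_log_pi, B, gamma, dt, rng):
    """One BAOAB-like step of (7), sketched for a *constant* matrix B,
    so the half drifts are explicit and the divergence terms vanish."""
    alpha = np.exp(-gamma * dt)
    F = lambda q: B.T @ grad_log_pi(q)    # F(q) = B^T grad log pi(q)
    p = p + 0.5 * dt * F(q)               # (7a) half kick
    q = q + 0.5 * dt * B @ p              # (7b) half drift
    p = alpha * p + np.sqrt(1.0 - alpha**2) * rng.standard_normal(p.shape)  # (7c)
    q = q + 0.5 * dt * B @ p              # (7d) half drift
    p = p + 0.5 * dt * F(q)               # (7e) half kick
    return q, p
```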

Suppose that, when applied to sampling the density \(\pi _{A,v}\), an underdamped Langevin scheme of the form in (7) generates a sequence \((q^{(n)}, p^{(n)})\). The scheme will be referred to as affine invariant if the transformed sequence \((A q^{(n)}+v, p^{(n)})\) has the same distribution as the sequence generated by the method when applied to sample \(\pi \). As for (3), one can demonstrate that the choices \(B(q)B^\text { T}(q) = - (\nabla ^2 \log (\pi (q)))^{-1} \) and \(B(q)B^\text { T}(q) = {\varSigma }\) yield affine invariant sampling schemes (see “Appendix 5” for details). Recall that the choice \(S(x)=-(\nabla ^2 \log (\pi (x)) )^{-1}\) in (3) also gave an affine invariant scheme, but there the S matrix multiplies the noise (making the noise multiplicative).

Before proceeding to the important issue of selecting a practically useful choice of B, we observe the following important properties of our formulation: (i) the stochastic dynamical system (6) exactly preserves the target distribution (see Ma et al. 2015) and thus, if discretization error is well controlled, Metropolis correction is not necessarily needed for the computation, and (ii) the formulation, with appropriate choice of B, is affine invariant, even under discretization (see “Appendix 5”), a property which ensures the stability of the method under change of coordinates. By contrast, we emphasize that schemes that modify S (instead of J) in (5) or that are based on a q-dependent normal distribution \(\varphi \) in (4) (e.g. within HMC as in (Girolami and Calderhead 2011a)), cannot be made affine invariant in the same sense, though they can be made to satisfy an alternative notion of affine invariance (see “Appendix 5”).

With the general stochastic quasi-Newton form in (7) as a template, one may consider many possible choices of B. Just as in optimization, in MCMC the question is not whether one should precondition, but rather how one can precondition in an affordable and effective way. Unfortunately, practical and effective quasi-Newton approaches for optimization do not have direct analogues in the sampling context, leaving a substantial gap between un-preconditioned methods and often impractical preconditioning approaches. In the next section, we suggest an alternative strategy to fill this gap: using multiple copies of a simulation to incorporate local scaling information in the B matrix in (7).

3 Ensemble quasi-Newton (EQN) schemes

We next describe an efficient MCMC approach in which information from an ensemble of walkers provides an estimate of a modified local covariance matrix. We consider a system of L walkers (copies of the system evolving under the same dynamics) with state \(x_i=(q_i,p_i)^\text { T}\), where subscripts now indicate the walker index. Each walker has position \(q_i\) and momentum \(p_i\) for \(i=1,\ldots ,L\), and we define the vectors \(Q=(q_1, q_2, \ldots , q_L)^\text { T}\in \mathcal {D}^L\) and \(P=(p_1,p_2,\ldots ,p_L)^\text { T} \in \mathbb {R}^{DL}\). We seek to sample the product measure \(\bar{\pi }\) whose marginals give copies of the distribution of interest \(\pi \):

$$\begin{aligned} \bar{\pi }(Q,P) = \prod _{i=1}^L \hat{\pi }(q_i,p_i), \qquad \int \bar{\pi }(Q,P) \mathrm {d}P = \prod _{i=1}^L \pi (q_i). \end{aligned}$$

A simple strategy for sampling \(\bar{\pi }\) is to evolve each \(x_i\) independently using an equation such as (2), or (1) with the choices in (5). Such a method scales perfectly in parallel when initial conditions are drawn from the target distribution, but it makes no use of the observed local geometry or of inter-walker information. Alternatively, we may use the dynamics (6) to introduce walker information through the B(q) preconditioning matrix, scaling the dynamics based upon information from the other walkers. This preconditioning enters the dynamics but not the invariant distribution, which remains \(\bar{\pi }\). A popular alternative preconditioning strategy is to modify the mass matrix, i.e. the covariance of the Gaussian distribution \(\varphi \) in (4) (see e.g. Girolami and Calderhead 2011a or “Appendix 5”). In our context of ensemble-based schemes, this strategy would introduce substantial (and costly) communication between walkers at each evolution step.

Using L walkers, the global state \(x=(Q,P)\) consists of \(2DL\) total variables and B(Q) is a \(DL \times DL\) matrix. We will use \(B(Q)=\mathrm {diag}(B_1(Q),B_2(Q),\ldots ,B_L(Q))\) with each \(B_i(Q)\in \mathbb {R}^{D\times D}\) so that the position and momentum \((q_i,p_i)\) of walker i evolve according to (7) with B(q) replaced by \(B_i(Q)\). Note that the divergence and gradient terms in the equation for each walker are taken with respect to the \(q_i\) variable.

Within this quasi-Newton framework, there are many potential choices for the \(B_i\) matrix, with \(B_i=I_D\) reducing to the simulation of L independent copies of underdamped Langevin dynamics. Before exploring the possibilities, we remark that, in order to exploit parallelism, we will divide our L walkers into several groups of equal size in an approach similar to the emcee package (Foreman-Mackey et al. 2013). Walkers in the same group g(i) as walker i will not appear in \(B_i\) so that the walkers in any single group can be advanced in parallel independently. The fact that \(B_i\) is independent of walkers in the same group as walker i is vital when we introduce the Metropolis step to exactly preserve the target distribution (see “Appendix 2”).

We set \( Q_{[i]} = \{q_j\,|\,g(j)\ne g(i)\} \) and let K be the common size of these sets. For example, if we have 16 cores available we may wish to use ten groups of 16 walkers (so \(L=160\) and \(K=144\)). If walker j is designated as belonging to group 1, it evolves under the dynamics given in equation (7) but the set \(Q_{[j]}\) only includes walkers in groups \(2,\ldots ,10\). We may then iterate over the groups of walkers sequentially, moving all the walkers in a particular group in parallel with the others.
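The bookkeeping behind \(Q_{[i]}\) is simple; the following illustrative snippet (variable names hypothetical) reproduces the example above:

```python
import numpy as np

L, G = 160, 10                           # 160 walkers in 10 groups of 16
group = np.repeat(np.arange(G), L // G)  # g(i): group label of walker i

def complement(i):
    """Indices of the walkers outside walker i's group, i.e. Q_[i]."""
    return np.where(group != group[i])[0]

K = complement(0).size                   # K = L - L/G = 144, as above
```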

One choice for the preconditioning matrix (not yet the one we employ) is to use the sample covariance of the ensemble

$$\begin{aligned} B_i(Q) = \sqrt{\mathrm {cov}( Q_{[i]} )}, \end{aligned}$$
(8)

where the square root of a matrix is taken in the Cholesky sense. Note that \(\mathrm {div}( B_i(Q)^\text { T} )\equiv 0\), simplifying the Metropolization of the scheme. In order for \(B_i(Q)\) to be positive definite, we need at least \(D\) linearly independent walker positions, which at minimum requires that \(L>D\).

With the choice of \(B_i\) in (8), the ensemble scheme applied to the density

$$\begin{aligned} \bar{\pi }_{A,v}(Q,P) = \prod _{i=1}^L \hat{\pi }_{A,v}(q_i,p_i)=\prod _{i=1}^L \hat{\pi }(Aq_i+v,p_i), \end{aligned}$$
(9)

for some invertible matrix A and vector v, generates a sequence of vectors \((q_1^{(n)},\dots , q_L^{(n)}, p_1^{(n)},\dots ,p_L^{(n)})\) with the property that the transformed sequence \((A q_1^{(n)} +v, \dots , Aq_L^{(n)}+v, p_1^{(n)},\dots , p_L^{(n)})\) has exactly the same distribution as the sequence generated by the ensemble scheme applied to \(\bar{\pi }\) (see “Appendix 5”). Just as choosing B as the square root of global covariance of \(\pi \) in (6) yields an affine invariant scheme, choosing the \(B_i\) as the square root of the ensemble covariance yields an affine invariant ensemble scheme. This affine invariance property suggests that ensemble schemes with \(B_i\) chosen as in (8) should perform well when the covariance of \(\pi \) has a large condition number. A related choice in the context of an overdamped formulation appears in Greengard (2015) and is shown to be affine invariant. An ensemble version of the HMC scheme using a mass matrix inspired by the BFGS optimization scheme appears in Zhang and Sutton (2011) though the relationship between that mass matrix and an approximation of the Hessian of \(\log (\pi )\) or its inverse seems unclear because the method does not evaluate the derivative of \(\log (\pi )\) at nearby points.

Using (8) in our ensemble schemes is problematic for several reasons. For high-dimensional problems, the requirement that \(L>D\) may render the memory demands of the methods prohibitive. This problem can be easily remedied by only approximating and rescaling in the space spanned by the eigenvectors corresponding to the largest eigenvalues of the ensemble covariance matrix. While such a scheme can be implemented in a reasonably efficient manner, we find that simply blending the sample covariance matrix with the identity via the choice

$$\begin{aligned} B_i(Q) = \sqrt{I_D+ \eta \, \mathrm {cov}(Q_{[i]})}, \end{aligned}$$
(10)

for some fixed parameter \(\eta \ge 0\) is just as effective and much simpler. There are several other ways to combine the identity and the sample covariance matrix (e.g. a convex combination), but our choice in (10) means that we do not need to additionally scale the stepsize with \(\eta \), since for modest \(\eta \) the slowest motions of the system are not dramatically altered. The combination with the identity allows \(L\le D\) but destroys affine invariance. On the other hand, as demonstrated in Sect. 4, the method is still capable of dramatically alleviating scaling issues.

Having resolved the rank deficiency issue by moving to the choice of \(B_i\) in (10), one difficulty remains. As described in the previous section, for many problems we might expect that the global covariance of \(\pi \) is reasonably well scaled but that the sampling problem is still poorly scaled (the Hessian of \(-\log \pi \) has large condition number in highly probable regions of the sample space). To address problems of this type, we define a localized covariance matrix that better approximates the Hessian at a point \(q_i\) while retaining full rank. We weight samples in the covariance matrix based on their distance (scaled by the global covariance) to a walker’s current position, i.e. we use

$$\begin{aligned} B_i(Q) = \sqrt{ I_D+ \eta \, \mathrm {wcov}(Q_{[i]},\omega _\lambda (Q_{[i]},q_i) ) }, \end{aligned}$$
(11)

for parameters \(\eta ,\,\lambda >0\), where now \(\mathrm {wcov}(q,w)\) is a weighted covariance matrix of \(K<L\) samples \(q \in \mathbb {R}^{K \times D}\) with potentially unnormalized weights \(w \in \mathbb {R}_+^K\):

$$\begin{aligned} \left( \mathrm {wcov}(q,w)\right) _{ij} = \sum _{k=1}^K \frac{w_k}{W} ( q_{k,i} - \bar{q}_{i} ) ( q_{k,j} - \bar{q}_j ), \qquad \bar{q}_i = \sum _{k=1}^K \frac{w_k}{W} q_{k,i} \end{aligned}$$

with \(W=\sum _k w_k\) and

$$\begin{aligned} \left( \omega _\lambda (Q,q)\right) _j = \exp \left( -\frac{\lambda }{2} \Vert Q_j - q\Vert ^2 \right) . \end{aligned}$$

Note that using \(Q_{[i]}\) and not Q in (11) is essential for preserving the validity of the scheme. Choosing \(\lambda =0\) reduces (11) to (10), whereas a large value of \(\lambda \) gives more refined estimation of the local scaling properties of the system. The divergence term in (7) can be computed explicitly by taking partial derivatives of \(B_i(q)\), making use of the formula for the derivative of the square root of a matrix: \(\partial _i M(x) = M \Phi (M^{-1} (\partial _i(MM^T) ) M^{-T})\), where \(\Phi (M)=\text {lower}(M) + \text {diag}(M)/2\). Note that the matrices \(B_i B_i^\text { T}\) for \(B_i\) in (11) are sums of the identity and K rank-one matrices, so that all manipulations involving \(B_i\) can be accomplished at a cost linear in the dimension \(D\). In “Appendix 2”, we detail a Metropolis–Hastings step that can be implemented (if needed) to correct any introduced bias. Because the underlying dynamics preserves \(\bar{\pi }\) exactly and the discretization bias is small for small \({\delta t}\), one can also use the scheme without any Metropolis–Hastings step, improving the prospects for it to scale to very high dimension. Omission of the Metropolis–Hastings step for Langevin type methods is common practice in molecular dynamics MCMC simulations (see Leimkuhler and Matthews 2015) and has been considered in the context of computational statistics in (Dalalyan 2016; Durmus and Moulines 2016; Welling and Teh 2011).
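As an illustration, the weighted covariance and the matrix \(B_i\) of (11) admit a direct implementation; the sketch below (function names hypothetical) takes the matrix square root in the Cholesky sense and uses the plain Euclidean distance appearing in the displayed formula for \(\omega _\lambda \). Setting lam = 0 recovers the blended choice (10).

```python
import numpy as np

def omega(Qc, q, lam):
    """Localization weights omega_lambda: Qc is the K x D array Q_[i]."""
    return np.exp(-0.5 * lam * np.sum((Qc - q) ** 2, axis=1))

def wcov(Qc, w):
    """Weighted sample covariance wcov(q, w) with unnormalized weights w."""
    w = w / w.sum()
    qbar = w @ Qc                  # weighted mean
    X = Qc - qbar
    return (X * w[:, None]).T @ X  # sum_k w_k (q_k - qbar)(q_k - qbar)^T

def B_matrix(Qc, q, eta, lam):
    """B_i(Q) = sqrt(I + eta * wcov(...)), square root in the Cholesky sense."""
    D = Qc.shape[1]
    C = np.eye(D) + eta * wcov(Qc, omega(Qc, q, lam))
    return np.linalg.cholesky(C)
```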

We write out explicit pseudocode for the scheme in Algorithm 1. We consider dividing L walkers into G groups, where walker w is in group number g(w). We also choose the number of steps to take between parallel communication, \(T\le N\), and initialize the momentum vector for each walker w so \(p_w\sim N(0,I_D)\).

Algorithm 1 Pseudocode for the ensemble quasi-Newton (EQN) sampling scheme

Typically it is most efficient to choose the size of each group to be a multiple of the number of available cores, in order to make the parfor loop efficient. The step function uses one new evaluation of the force \(\nabla \log (\pi )\) each time it is called, as well as a new evaluation of the B matrix and its derivative. We can minimize parallel communication by choosing T large, so that new walker data are broadcast infrequently.
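Putting the pieces together, a schematic serial version of the main loop of Algorithm 1 might read as follows; it reuses the hypothetical helpers baoab_step and B_matrix sketched above, freezes each \(B_i\) over the step, omits the divergence and Metropolis corrections discussed earlier, and takes \(T=1\).

```python
import numpy as np

def eqn_sample(Q, P, grad_log_pi, gamma, dt, eta, lam, group, n_steps, rng):
    """Skeleton of the EQN sampler: walkers advance group by group; the
    loop over walkers within a group is the parallelizable parfor."""
    samples = []
    for n in range(n_steps):
        for g in np.unique(group):
            members = np.where(group == g)[0]
            others = np.where(group != g)[0]   # walkers defining Q_[i]
            for i in members:                  # parallelizable over the group
                B = B_matrix(Q[others], Q[i], eta, lam)
                Q[i], P[i] = baoab_step(Q[i], P[i], grad_log_pi,
                                        B, gamma, dt, rng)
        samples.append(Q.copy())               # broadcast/record (T = 1 here)
    return np.array(samples)
```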

4 Numerical tests

We consider two numerical experiments to demonstrate the potential improvements that this method offers. A Python package with example implementations is available at Matthews (2016).

4.1 Gaussian mixture model

We use the model presented in Chopin et al. (2012), which involves fitting a mixture of three univariate Gaussian distributions to a dataset y. The state vector comprises the means, precisions and weights of the three Gaussian distributions, denoted \(\mu _i\), \(\lambda _i\) and \(z_i\), respectively. Because the weights sum to unity, eight variables describe the mixture model. We also include a hyperparameter \(\beta \) describing the rate parameter in the prior distribution on the precisions, giving \(D=9\) for the state overall. A full description of the problem is available in “Appendix 3”.

Fig. 2

We plot a maximum likelihood state (left), with the three component densities coloured red, green and blue and their sum in black, along with a histogram of the original stamp data y. The six modes due to label switching can be seen in the plot of the log posterior (right) in \(\mu _1\) and \(\mu _2\). (Color figure online)

We consider the Hidalgo stamps benchmark dataset, studied in (Izenman and Sommer 1988), as the data y, with 485 datapoints. This example is well suited to the local covariance approach we present above, due to the invariance of the likelihood under a permutation of components (the label-switching problem). Thus, the system admits a set of \(3!=6\) equivalent modes, see Fig. 2, each with a local scaling matrix having the same eigenvalues but permuted eigenvectors.

Though strictly speaking the problem is multimodal, the high barriers between modes make hopping between the basins extremely unlikely (we did not observe any switching in any simulations). Thus, this problem effectively tests the exploration rate within one well, with the symmetry between the modes guaranteeing the same challenges in each basin. The walkers may be initialized in the neighbourhoods of different local modes, so that a “global” preconditioning strategy would be sub-optimal: the best preconditioning matrix for the current position of a walker depends on which mode is closest to the walker. Instead, we use the covariance information from proximal walkers as in (11) to determine the appropriate scaling.

We test the EQN scheme against the standard HMC scheme and a Metropolized version of Langevin dynamics. We use \(L=64\) walkers for each scheme and compare the computed integrated autocorrelation times for ensemble means of slowly varying quantities, shown in Table 1. The autocorrelation times are computed using the ACOR package (Goodman 2009).
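Integrated autocorrelation times of the kind reported in Table 1 can be estimated with a self-consistent windowing rule in the spirit of ACOR; the following standalone numpy sketch illustrates the idea (conventions differ slightly between packages).

```python
import numpy as np

def iat(x, c=5.0):
    """Estimate the integrated autocorrelation time of a scalar chain x
    using a window M chosen self-consistently so that M >= c * tau."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # normalized autocorrelation function, acf[0] = 1
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.var(x) * n)
    tau = 1.0
    for M in range(1, n):
        tau = 1.0 + 2.0 * np.sum(acf[1:M + 1])
        if M >= c * tau:
            break
    return tau
```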

Table 1 Computed autocorrelation times for slow variables, with the variable with the slowest motion marked in bold for each method

We consider all three methods as equivalent in cost, as they require the same number of evaluations of \(\nabla \log (\pi )\) per step and scale similarly with the size of the data vector y. Comparing the slowest motions of the system, the EQN scheme is about 100 times more efficient than Langevin dynamics and about 350 times more efficient than HMC. We found that removing the divergence term in the EQN scheme had no significant impact on the results.

4.2 Log Gaussian Cox model

To illustrate the method in a high-dimensional setting, we compare results for inference in the Log Gaussian Cox point process as in (Christensen et al. 2005). We aim to infer the latent variable field X from given observation data Y.

Fig. 3

The synthetic observed intensity Y (left) and the true Gaussian field X (right)

We make use of the RMHMC Matlab code template in our experiments (Girolami and Calderhead 2011b). In the model, we discretize the unit square into a \(32\times 32\) grid, with the observed intensity in each cell denoted \(Y_{i,j}\) and the Gaussian field \(X_{i,j}\). We use two hyperparameters \(\sigma ^2\) and \(\beta \) to govern the priors, making the dimensionality of the problem \(D= 32^2+2=1026\). Full details of the model are provided in “Appendix 4”.

As evaluating the derivative of the likelihood with respect to the latent x variables is significantly cheaper (tests showed that computing the hyperparameter derivatives is about one hundred times slower), we employ a partial resampling strategy: we first sample the latent variables using multiple steps and then perform one iteration for the hyperparameters.
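Schematically, the partial resampling alternation is the following nested loop; the kernels and the inner/outer step counts are hypothetical tuning choices, not fixed by the model.

```python
def partial_resampling(X, hypers, update_latent, update_hypers,
                       n_outer, n_inner):
    """update_latent / update_hypers are user-supplied MCMC kernels;
    the latent update is far cheaper, so it is applied n_inner times
    for every single (expensive) hyperparameter update."""
    for _ in range(n_outer):
        for _ in range(n_inner):
            X = update_latent(X, hypers)
        hypers = update_hypers(X, hypers)
    return X, hypers
```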

We generate synthetic test data Y, plotted in Fig. 3, and compare the HMC and Langevin dynamics schemes to EQN (using 160 walkers) and the RMHMC scheme (Girolami and Calderhead 2011a). We additionally compare the results using the Langevin dynamics and EQN schemes without Metropolization, as the dynamics themselves sample \(\pi \) and the Metropolis step only serves to remove discretization error (which is dominated by the sampling error in this example). RMHMC uses Hessian information to obtain scaling data for the distribution. This significantly increases its cost, but improves the rate at which the sampler decorrelates. For this model, the RMHMC scheme requires approximately 2.2 s per step, whereas the other schemes require approximately 0.35 s per step.

In Table 2, we give the integrated autocorrelation times for ensemble averages of the hyperparameters \(\beta \) and \(\sigma ^2\), along with the autocorrelation time for the slowest component of the x variables. The efficiency is also shown, calculated as the wall time required per step divided by the autocorrelation time of the slowest hyperparameter (and normalized with respect to the HMC result). The slowest hyperparameter is compared instead of the slowest component of x because evolving the x dynamics requires less computation; hence it is trivial to reduce the autocorrelation time of x without significantly impacting the wall time.

Table 2 Maximum autocorrelation times for each variable using each scheme

In the results, the EQN scheme significantly outperforms the other methods, with the slowest motion of the system (the \(\beta \) hyperparameter) decorrelating more rapidly than under the HMC or Langevin schemes for approximately the same cost. The RMHMC scheme requires significant extra computation, making it much less efficient than the standard HMC scheme in this example.

5 Conclusion

We have presented a sampling algorithm that uses information from an ensemble of walkers to make more efficient moves through space, obtained by discretizing a continuous, ergodic quasi-Newton dynamics that samples the target distribution \(\pi (x)\). The information from the other walkers can be introduced in several ways, and we give two examples using either local or global covariance information. The two forms of the \(B_i\) preconditioning matrix are tested on benchmark cases, where we see significant improvement compared to standard schemes.

The EQN scheme is cheap to implement, requiring no extra evaluations of \(\nabla \log \pi (x)\) compared to schemes like MALA, and needing no higher derivatives or memory terms. The scheme is also easily parallelizable, with communication between walkers required only infrequently. The dynamics (6) is novel in its approach to introducing the scaling information, and we build on previous work using walkers run in parallel to provide a cheap alternative to Hessian data.

The full capabilities of the EQN method, in the context of complex data science challenges, remain to be explored. It is likely that more sophisticated choices of \(B_i\) are merited for particular types of applications. The propagation of an ensemble of walkers also suggests natural extensions of the method to sensitivity analysis and to estimation of the sampling error in the MCMC scheme. Also left to be explored is the estimation of the convergence rate as a function of the number of walkers, which may be possible for simplified model problems.