Ensemble preconditioning for Markov chain Monte Carlo simulation

We describe parallel Markov chain Monte Carlo methods that propagate a collective ensemble of paths, with local covariance information calculated from neighboring replicas. The use of collective dynamics eliminates multiplicative noise and stabilizes the dynamics thus providing a practical approach to difficult anisotropic sampling problems in high dimensions. Numerical experiments with model problems demonstrate that dramatic potential speedups, compared to various alternative schemes, are attainable.


Introduction
A popular family of methods for Bayesian parameterization in data analytics comprises Markov chain Monte Carlo (MCMC) methods, including Hamiltonian (or hybrid) Monte Carlo (HMC) [7] and the Metropolis adjusted Langevin algorithm (MALA) [27]. These methods involve proposals that are based on approximating a continuous time (stochastic) dynamics that exactly preserves the target (posterior) density, followed by an accept/reject step to correct for approximation errors. In some cases the Metropolis-Hastings correction may be omitted, provided the consequent approximation bias is well controlled. Efficient parameterization of the stochastic differential equations used in these procedures has the potential to greatly accelerate their convergence, particularly when the target density is poorly scaled, i.e., when the Hessian matrix of the logarithm of the density has a large condition number. In precise analogy with well established strategies in optimization, the solution to conditioning problems in the sampling context is to find a well chosen change of variables (preconditioning) of the system, such that the natural scales of the transformed system are roughly commensurate.
In this article we discuss an approach to dynamic preconditioning based on simultaneously evolving an ensemble of parallel MCMC simulations, each of which is referred to as a "walker." As we will show, the walkers provide information that can greatly improve the efficiency of MCMC methods. Our work is motivated by the method in [15] and earlier similar schemes that use multiple parallel simulations to improve proposal densities in MCMC simulations (see e.g. [10,3,5]). The authors of [15] highlight the notion of affine invariance which means, loosely, that the performance of the method when applied to sample the probability distribution with density π is identical to its performance when applied to sample π_{A,v}(x) ∝ π(Ax + v), regardless of the choice of invertible matrix A and vector v. Affine invariant methods are advantageous when sampling a severely anisotropic density, or when a suitable parameterization length scale for the problem is unknown. In this article, we however intentionally deviate from affine invariance to overcome practical limitations of earlier schemes. The methods in this paper also differ from those in [15] in that, like the methods listed in the opening paragraph, they require the first derivative of π, giving high accuracy even in the absence of a Metropolis-Hastings step, and making the methods scalable to high dimension.
Our starting point is the discrete approximation of a system of stochastic differential equations (SDEs), ẋ = [J(x) + S(x)]∇ log π(x) + div(J(x) + S(x)) + √(2S(x)) η(t), (1) where J(x) and S(x) are skew-symmetric and symmetric positive semi-definite N × N matrices, respectively, with η(t) representing a vector of independent Gaussian white noise components. In our sampling schemes, each walker generates a discrete time approximation of (1) with its own particular choice of J which corresponds to a notion of the localized and regularized sample covariance matrix across the ensemble of walkers and incorporates information about the target density π into the evolution of each walker. Many optimization methods can be characterized as time discretizations of (1), with J = 0 and choices of S that encapsulate scaling information based upon higher derivatives of the model or a history of previously computed points. Modification of S to improve convergence also has been considered in the Monte Carlo literature, dating at least to [1]. This idea has been the focus of renewed attention in statistics and several new schemes of this or closely related types have been proposed [22,12]. From a more theoretical point of view, several authors have recently considered the effects of the choice of J and S on the ergodic properties of the solution of (1) (see for example [25,8,19,18]). In this paper we are concerned with motivating and presenting a particular choice of S and J that yields practical and efficient sampling schemes and we will leave a more detailed analysis to later work. Nonetheless, our development reveals the central role of discretization stability properties in determining the efficacy of a particular choice of J or S.

Preconditioning strategies for sampling
As in any MCMC scheme, the goal is to estimate the average E[f] = ∫ f(x)π(x) dx by a trajectory average of the form f̄_N = (1/N) Σ_{n=1}^{N} f(x^(n)) for large N. In many cases we can expect the error in an MCMC scheme to satisfy a central limit theorem: √N (f̄_N − E[f]) ⇒ N(0, τσ²), where σ² is the variance of f under π (and is independent of the particular MCMC scheme), the quantity τ being the integrated auto-correlation time (IAT), which is often used to quantify the efficiency of an MCMC approach (see Appendix A).
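The IAT in this error estimate can be computed directly from a chain of observable values. The following is a minimal estimator sketch; the truncation rule (stop at the first non-positive autocorrelation) is a simplistic assumption for illustration, and the experiments below instead use the dedicated ACOR package.

```python
import numpy as np

def integrated_autocorrelation_time(f_samples, max_lag=None):
    """Estimate tau = 1 + 2*sum_k rho_k from a 1-D chain of observable values."""
    f = np.asarray(f_samples, dtype=float)
    n = len(f)
    f = f - f.mean()
    var = np.dot(f, f) / n
    if max_lag is None:
        max_lag = n // 10
    tau = 1.0
    for k in range(1, max_lag):
        # Empirical lag-k autocorrelation of the chain.
        rho = np.dot(f[:-k], f[k:]) / ((n - k) * var)
        if rho <= 0.0:
            break  # crude truncation once correlations are lost in noise
        tau += 2.0 * rho
    return tau
```

For independent samples the estimate is close to 1; a strongly correlated chain yields a proportionally larger τ, inflating the error bar by √τ.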
To emphasize an analogy with optimization, for the moment assume that J = 0. The steepest descent algorithm of optimization corresponds to the so-called overdamped Langevin (or Brownian) dynamics, discretized as x^(n+1) = x^(n) + δt ∇ log π(x^(n)) + √(2δt) R^(n), (2) where here we have used an Euler-Maruyama discretization [24] and R^(n) ∼ N(0, I).
Discretization introduces an O(δt) error in the sampled invariant distribution, so a Metropolis-Hastings accept/reject test may be incorporated in order to recover the correct statistics (see the MALA algorithm [27]) when time discretization error dominates sampling error. Reducing δt gives a more accurate approximation of the evolution of the dynamics, and boosts the acceptance rate. When π is Gaussian with covariance Σ, one can easily show that the cost to achieve a fixed accuracy depends on the condition number κ = λ_max/λ_min, where λ_max and λ_min are the largest and smallest eigenvalues of Σ. Indeed, one finds that the worst case IAT for the scheme in (2) over observables of the form v^T x is τ = κ − 1 (see Appendix A). In this formula, the eigenvalue λ_min arises due to the discretization stability constraint on the step-size parameter δt and λ_max appears because the direction of the corresponding eigenvector is slowest to relax for the continuous time process. The presence of λ_min in this formula indicates that analysis of the continuous time scheme (1) (i.e. neglect of the discretization stability constraint) can be misleading when considering the effects of poor conditioning on sampling efficiency. Since the central limit theorem suggests that the squared error after N steps of the scheme is roughly proportional to τ/N, the cost to achieve a fixed accuracy is again roughly proportional to κ. Continuing to use J = 0 and taking, for example, S(x) = −(∇² log π(x))^{−1} in (1) and discretizing, we obtain a stochastic analogue of Newton's method, x^(n+1) = x^(n) + δt [S(x^(n)) ∇ log π(x^(n)) + div S(x^(n))] + √(2δt) S(x^(n))^{1/2} R^(n). (3) Schemes of similar form, though neglecting the div S term (and therefore requiring Metropolization), have been explored recently in [22]. Metropolization may also be used to correct the O(δt) sampling bias introduced by the discretization.
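The conditioning issue can be illustrated by running the scheme in (2) on a two-dimensional Gaussian with condition number 100; the specific step size and target below are illustrative choices, with δt set safely inside the stability threshold δt < 2/λ_max(Σ^{−1}).

```python
import numpy as np

def overdamped_step(x, grad_log_pi, dt, rng):
    """One Euler-Maruyama step of the overdamped Langevin scheme (2)."""
    return x + dt * grad_log_pi(x) + np.sqrt(2.0 * dt) * rng.standard_normal(x.shape)

# Illustrative anisotropic Gaussian target: log pi(x) = -x^T Sigma^{-1} x / 2.
Sigma_inv = np.diag([1.0, 100.0])            # condition number kappa = 100
grad_log_pi = lambda x: -Sigma_inv @ x

rng = np.random.default_rng(1)
dt = 0.002                                   # stability requires dt < 2/lambda_max = 0.02
x = np.zeros(2)
samples = np.empty((200000, 2))
for n in range(len(samples)):
    x = overdamped_step(x, grad_log_pi, dt, rng)
    samples[n] = x
# The stiff direction forces the small step size; the soft direction then
# decorrelates only over roughly 2/(dt*lambda_min) = 1000 steps.
```

Both coordinate variances converge to the target values, but the soft direction supplies far fewer effective samples per step, which is the cost penalty quantified by τ = κ − 1.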
It can be shown that the scheme is affine invariant in the sense that when it is applied to sampling π A,v it generates a sequence of samples y (n) so that x (n) = Ay (n) + v has exactly the same distribution as the sequence of samples generated by the method when applied to π. We therefore expect that when this method can be applied (e.g. when the Hessian is positive definite), it should be effective on poorly scaled problems. This affine invariance property is shared by the deterministic Newton's method (obtained from (3) by dropping the noise term) and is responsible for its good performance when applied to optimize poorly scaled functions (e.g. when the condition number of the Hessian is large). We stress that the key to the usefulness of either the deterministic or stochastic Newton's method is that one does not need to make an explicit choice of the matrix A or the vector v. As the performance is independent of the choice of A and v, we can assume that A or v are chosen to remove poor scaling within the problem.
Due to the presence of the divergence term in the continuous dynamics, discretization will require evaluation of first, second and third order derivatives of log(π(x)), making it prohibitively expensive for many models. To avoid this difficulty, one can estimate the divergence term using an extra evaluation of the Hessian (see Appendix F), or omit the divergence term and rely on a Metropolization step to ensure correct sampling. Regardless of how this term is handled, the system (3), unlike (2), is based on multiplicative noise which is known to introduce complexity (and reduce accuracy) in numerical discretization [24].
More fundamentally, complex sampling problems will exhibit regions of substantial probability where the Hessian fails to be positive definite. A choice that is both less costly and more robust is S = Σ, where we assume that Σ is the covariance matrix of π (even when π is not Gaussian) and is positive definite. It can be shown that the iteration in (3) is again affine invariant. The resulting scheme, which can be regarded as a simple quasi-Newton type approach, is closely related to adaptive MCMC approaches [26,17]. On the other hand, because this choice of S does not depend on position, the scheme can be expected to perform poorly on problems for which the scaling behavior is dramatically different in different regions of space (e.g. the Hessian has high condition number and its eigenvectors are strongly position dependent), see Figure 1. This observation, along with our comments on the method corresponding to S(x) = −(∇² log π(x))^{−1}, suggests a choice of S corresponding to a notion of local covariance. While a notion of local covariance will be central to the schemes we eventually introduce, we choose to incorporate that information not through S in (1), but through the skew-symmetric matrix J in that equation. In the remainder of this section we discuss how the choices of S described so far, and the corresponding properties of (3), have analogues in choices of J and a family of so-called underdamped Langevin schemes that we next introduce.
A popular way to obtain an MCMC scheme with decreased IAT relative to the overdamped scheme in (2) is to introduce "inertia." We extend the space by writing our state x = (q, p)^T ∈ D × R^N ⊂ R^{2N}, with the target distribution π̃ written as the product π̃(q, p) ∝ π(q) ϕ(p), (4) with the distribution of interest π(q) recovered as the marginal distribution of the position vector q. For the distribution ϕ(p) we will follow common practice and use ϕ(p) ∝ exp(−‖p‖²/2). With this extension of the space, we can obtain the standard underdamped form of Langevin dynamics, dq = p dt, dp = ∇ log π(q) dt − γp dt + √(2γ) dW, (5) for the choice J = ((0, −I_N), (I_N, 0)) and S = diag(0, γI_N) in equation (1), where I_N is the N × N identity matrix and γ is a positive constant [24]. Previous work, especially in connection with molecular dynamics [21], has examined efficient ways to discretize the Langevin dynamics system while minimizing the error in sampling π(q).
To incorporate information such as the Hessian matrix or the covariance matrix (or local covariance matrices) in the underdamped Langevin scheme, we focus on choices of J and S of the form J(q) = ((0, −B(q)), (B(q)^T, 0)) and S = diag(0, γI_N), for which (1) becomes dq = B(q) p dt, dp = [B(q)^T ∇ log π(q) + div(B(q)^T) − γp] dt + √(2γ) dW. (6) Discretization of the stochastic system may be derived by mimicking the BAOAB scheme [21]. Given a stepsize δt > 0, define α = exp(−γδt) and approximate the step from t_n to t_{n+1} = t_n + δt by the formulas

p^(n+1/4) = p^(n) + (δt/2) F(q^(n)),
q^(n+1/2) = q^(n) + (δt/2) B(q^(n+1/2)) p^(n+1/4),
p^(n+2/4) = p^(n+1/4) + (δt/2) div(B(q^(n+1/2))^T),
p̂^(n) = α p^(n+2/4) + √(1 − α²) R^(n),
p^(n+3/4) = p̂^(n) + (δt/2) div(B(q^(n+1/2))^T),
q^(n+1) = q^(n+1/2) + (δt/2) B(q^(n+1/2)) p^(n+3/4),
p^(n+1) = p^(n+3/4) + (δt/2) F(q^(n+1)), (7)

where F(q) = B(q)^T ∇ log π(q), R^(n) ∼ N(0, I), and the second line is solved implicitly for q^(n+1/2). The choice of matrix BB^T introduced in the next section will be a sum of the identity and a small (relative to the dimension N) number of rank-1 matrices, alleviating storage demands and reducing the cost of all calculations involving B to linear in N. As described in Appendix B, schemes of the form in (7) can also be used to generate proposals in a Metropolis-Hastings framework to strictly enforce a condition that, like detailed balance, guarantees that π is exactly preserved. Suppose that, when applied to sampling the density π_{A,v}, an underdamped Langevin scheme of the form in (7) generates a sequence (q^(n), p^(n)). The scheme will be referred to as affine invariant if the transformed sequence (Aq^(n) + v, p^(n)) has the same distribution as the sequence generated by the method when applied to sample π. As for (3), one can demonstrate that the choices B(q)B^T(q) = −(∇² log π(q))^{−1} and B(q)B^T(q) = Σ yield affine invariant sampling schemes (see Appendix E for details).
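A single step of this discretization can be sketched as below for the simplified case of a constant matrix B, so that div(B^T) = 0 and the position half-steps become explicit (for position-dependent B the position update must be solved implicitly, as discussed in Appendix B). This is an illustrative sketch, not the packaged implementation [23].

```python
import numpy as np

def baoab_step(q, p, grad_log_pi, B, dt, gamma, rng):
    """One BAOAB-like step of (7) for a *constant* preconditioner B,
    so that the divergence terms vanish and no implicit solve is needed."""
    F = lambda q: B.T @ grad_log_pi(q)
    alpha = np.exp(-gamma * dt)
    p = p + 0.5 * dt * F(q)                  # B: half kick
    q = q + 0.5 * dt * B @ p                 # A: half drift
    p = alpha * p + np.sqrt(1.0 - alpha**2) * rng.standard_normal(p.shape)  # O
    q = q + 0.5 * dt * B @ p                 # A: half drift
    p = p + 0.5 * dt * F(q)                  # B: half kick
    return q, p
```

With B chosen so that BB^T matches the target covariance of a Gaussian, the effective dynamics become well conditioned, which is the preconditioning effect exploited in the ensemble schemes below.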
Before proceeding to the important issue of selecting a practically useful choice of B, we observe the following important properties of our formulation: (i) the stochastic dynamical system (6) exactly preserves the target distribution and thus, if discretization error is well controlled, Metropolis correction is not likely to be needed, and (ii) the formulation, with appropriate choice of B, is affine invariant, even under discretization (see Appendix E), a property which ensures the stability of the method under change of coordinates. By contrast, we emphasize that schemes that modify S (instead of J) in (5) or that are based on a q-dependent normal distribution ϕ in (4) (e.g. within HMC as in [12]), cannot be made affine invariant in the same sense (Appendix E).
With the general stochastic quasi-Newton form in (7) as a template, one may consider many possible choices of B. Just as in optimization, in MCMC the question is not whether one should precondition, but rather how can one precondition in an affordable and effective way. Unfortunately, practical and effective quasi-Newton approaches for optimization do not have direct analogues in the sampling context leaving a substantial gap between un-preconditioned methods and often impractical preconditioning approaches. In the next section we suggest an alternative strategy to fill this gap: using multiple copies of a simulation to incorporate local scaling information in the B matrix in (7).

Ensemble quasi-Newton (EQN) schemes
We next describe an efficient approach to the sampling problem in which information from an ensemble of walkers provides an estimate of the local covariance matrix. We consider a system of L walkers (independent copies evolving under the same dynamics) with state x_i = (q_i, p_i)^T, where subscripts now indicate the walker index. Each walker has position q_i and momentum p_i for i = 1, . . . , L, and we write Q = (q_1, q_2, . . . , q_L)^T ∈ D^L and P = (p_1, p_2, . . . , p_L)^T ∈ R^{NL}. We seek to sample the product measure π̄ whose marginals give copies of the distribution of interest π: π̄(Q, P) ∝ ∏_{i=1}^{L} π(q_i) ϕ(p_i). A simple sampling strategy is for each walker to sample π̄ by evolving each x_i independently using an equation such as (2) or (5). Such a method scales perfectly in parallel when initial conditions are drawn from the target distribution, but no use is made of the local observed geometry or inter-walker information.
Alternatively we may use the dynamics (6) to introduce walker information through the B(q) matrix in order to precondition based on information from the other walkers. This preconditioning enters into the dynamics but not the distribution π̄, which remains a product of independent copies. Inserting the preconditioning matrix into the distribution itself can have computational drawbacks associated with the communication of a large amount of information between steps.
Using L walkers, we have the global state x = (Q, P) with 2NL total variables, with B(Q) an NL × NL matrix. We will use B(Q) = diag(B_1(Q), B_2(Q), . . . , B_L(Q)) with each B_i(Q) ∈ R^{N×N}, so that the position and momentum (q_i, p_i) of walker i evolve according to (7) with B(q) replaced by B_i(Q). Note that the divergence and gradient terms in the equation for each walker are taken with respect to the q_i variable.
With this quasi-Newton framework there are many potential choices for the B i matrix, with B i = I N reducing us to running L independent copies of underdamped Langevin dynamics. Before exploring the possibilities we remark that, in order to exploit parallelism, we will divide our L walkers into several groups of equal size. Walkers in the same group g(i) as walker i will not appear in B i so that the walkers in any single group can be advanced in parallel independently. We set Q [i] = {q j | g(j) = g(i)} and let K be the common size of these sets. For example, if we have 16 cores available we may wish to use ten groups of 16 walkers (so L = 160 and K = 144). If walker j is designated as belonging to group 1, it evolves under the dynamics given in equation (7) but the set Q [j] only includes walkers in groups 2, · · · , 10. We may then iterate over the groups of walkers sequentially, moving all the walkers in a particular group in parallel with the others. This compartmentalized parallelism is inspired by a similar approach in the emcee package [9] and provides high parallel efficiency.
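The group bookkeeping described above can be sketched as follows; the round-robin assignment of walkers to groups is an arbitrary illustrative choice.

```python
def group_partition(L, n_groups):
    """Assign walker indices 0..L-1 to n_groups groups, round-robin."""
    return [list(range(g, L, n_groups)) for g in range(n_groups)]

def complement_indices(groups, g):
    """Indices Q_[i] used to precondition walkers in group g:
    every walker outside group g."""
    return [j for h, grp in enumerate(groups) if h != g for j in grp]

groups = group_partition(160, 10)                   # ten groups of 16 walkers
assert all(len(grp) == 16 for grp in groups)
assert len(complement_indices(groups, 0)) == 144    # K = 144, as in the text
```

All walkers within one group can then be advanced in parallel, since their preconditioners depend only on the (frozen) positions of the other groups.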
One choice for the preconditioning matrix (not yet the one we employ) is to use the sample covariance of the ensemble, B_i(Q) B_i(Q)^T = cov(Q_[i]) = (1/(K − 1)) Σ_{j : g(j) ≠ g(i)} (q_j − q̄_[i])(q_j − q̄_[i])^T, where q̄_[i] is the mean over Q_[i]. (8) Note that div(B_i(Q)^T) ≡ 0, since B_i does not depend on the coordinates of walker i, simplifying the Metropolization of the scheme. In order for B_i(Q) to be positive definite, we need at least N linearly independent walker positions, which at minimum requires that L > N.
With the choice of B_i in (8), the ensemble scheme applied to the density π̄_{A,v}(Q) ∝ ∏_{i=1}^{L} π(Aq_i + v), for some invertible matrix A and vector v, generates a sequence of vectors (q_i^(n)) such that the transformed sequence (Aq_i^(n) + v) has the same distribution as the samples generated by the method applied to π̄; that is, the scheme is affine invariant, and we therefore expect that the scheme with (8) should perform well when the covariance of π has a large condition number. Note that a related choice in the context of an overdamped formulation and without the measure-correcting div(S) term (therefore reliant on an accept/reject step) appears in [16] and is shown to be affine invariant. An ensemble version of the HMC scheme using a mass matrix that seems to approximate the expectation of the Hessian matrix of log π appears in [28], but we have been unable to verify that the method as described preserves the target density (either exactly or up to a time-discretization error).
Using (8) in our ensemble schemes is problematic for several reasons. For high dimensional problems, the requirement that L > N may render the memory demands of the methods prohibitive. This problem can be easily remedied by only approximating and rescaling in the space spanned by the eigenvectors corresponding to the largest eigenvalues of the ensemble covariance matrix. While such a scheme can be implemented in a reasonably efficient manner, we find that simply blending the sample covariance matrix with the identity via the choice B_i(Q) B_i(Q)^T = I_N + µ cov(Q_[i]), (10) for some fixed parameter µ ≥ 0, is just as effective and much simpler. This choice allows L ≤ N but destroys affine invariance. On the other hand, as demonstrated in the numerical experiments below, the method is still capable of dramatically alleviating scaling issues.
Having resolved the rank deficiency issue by moving to the choice of B_i in (10), one difficulty remains. As described in the previous section, for many problems we might expect that the global covariance of π is reasonably well scaled but that the sampling problem is still poorly scaled (the Hessian of − log π has large condition number). To address problems of this type, we define a localized covariance matrix that better approximates the Hessian at a point q_i while retaining full rank. We weight samples in the covariance matrix based on their distance (scaled by the global covariance) to a walker's current position, i.e. we use B_i(Q) B_i(Q)^T = I_N + µ wcov(Q_[i], w_i), with weights w_{i,j} ∝ exp(−(λ/2) ‖q_j − q_i‖²_{cov(Q_[i])^{−1}}), (11) for parameters µ, λ > 0, where wcov(x, w) is a weighted covariance matrix of K < L samples x ∈ R^{K×N} with potentially unnormalized weights w ∈ R^K, and ‖y‖²_X = y^T X y. Choosing λ = 0 reduces to (10), whereas a large value of λ gives a more refined estimation of the local scaling properties of the system. The divergence term in (7) remains computable, and the matrices in (11) are sums of the identity and K rank-one matrices, so that all manipulations involving B_i can be accomplished in cost linear in the dimension N. In Appendix B we detail a Metropolis-Hastings test that can be implemented (if needed) to correct any introduced bias. Because the continuous dynamics underlying our ensemble scheme preserve π̄ exactly, the discretization bias is well controlled for small δt, and one can also use the scheme absent any Metropolis-Hastings test, improving the prospects for it to scale to very high dimension.
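The construction in (11) can be sketched as follows; the Gaussian form of the weight kernel and the small diagonal jitter added before inverting the global covariance are illustrative assumptions.

```python
import numpy as np

def local_preconditioner(qi, Q_others, mu, lam):
    """Sketch of (11): S_i = I + mu * wcov(Q_[i], w), with weights decaying
    in the (globally scaled) distance from the current walker position qi.
    The exact kernel and jitter are assumptions for illustration."""
    K, N = Q_others.shape
    C = np.cov(Q_others.T) + 1e-10 * np.eye(N)        # global scaling matrix
    Cinv = np.linalg.inv(C)
    d2 = np.array([(q - qi) @ Cinv @ (q - qi) for q in Q_others])
    w = np.exp(-0.5 * lam * d2)                       # unnormalized weights
    w = w / w.sum()
    qbar = w @ Q_others                               # weighted mean
    S = np.eye(N)
    for wk, q in zip(w, Q_others):                    # identity + rank-one terms
        S += mu * wk * np.outer(q - qbar, q - qbar)
    return S                                          # = B_i B_i^T, SPD by construction
```

With λ = 0 the weights are uniform and the blended global form (10) is recovered; all eigenvalues of the result are at least 1, so the preconditioner never degenerates.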

Numerical tests
We consider two numerical experiments to demonstrate the potential improvements that this method offers. A Python package with example implementations of the code is available at [23].

Gaussian mixture model
We use the model presented in [4], focusing on fitting the distribution of a data vector y to a univariate mixture model given as the sum of three Gaussian distributions. The state vector is described by the means, precisions and weights of the three Gaussian distributions, denoted µ_k, λ_k and z_k respectively. As the weights sum to unity, this gives us N = 8 variables describing the mixture model. We also include a hyperparameter β that describes the rate parameter in the prior distribution on the precisions, giving a nine-dimensional state. A full description of the problem is available in Appendix C. We consider the Hidalgo stamps benchmark dataset, studied in [20], as the data y, with 485 datapoints. This example is well-suited to the local covariance approach we present above, due to the invariance of the likelihood under a permutation of components (the label-switching problem). Thus the system admits sets of 3! = 6 equivalent modes, see Figure 2, each with a local scaling matrix that has the same eigenvalues with permuted eigenvectors.
As the walkers may initialize in the neighborhood of different local modes, using a 'global' preconditioning strategy would be sub-optimal as the best preconditioning matrix for the current position of a walker depends on which mode is closest to the walker. Instead, we use the covariance information from proximal walkers as in (11) to determine the optimal scaling.
We test the EQN scheme against the standard HMC scheme and a Metropolized version of Langevin dynamics. We used L = 64 walkers for each scheme and compare the computed integrated autocorrelation times for an ensemble mean of quantities that vary slowly, shown in Table 1. The autocorrelation times are computed using the ACOR package [13]. We consider all three methods as equivalent in cost, as they require the same number of evaluations of ∇ log(π) per step, and scale similarly with the size of the data vector y. The EQN scheme is about 100 times more efficient than Langevin dynamics, and 350 times more efficient than HMC. We found that removing the divergence term in the EQN scheme had no significant impact on the results.

Log Gaussian Cox model
To illustrate the method in a high-dimensional setting we compare results for inference in the Log Gaussian Cox point process as in [6]. We aim to infer the latent variable field from observation data. We make use of the RMHMC code template [11]. In the model, we discretize the unit square into a 32 × 32 grid, with the observed intensity in each cell denoted Y_{i,j} and Gaussian field X_{i,j}. We use two hyperparameters σ² and β to govern the priors, making the dimensionality of the problem N = 32² + 2 = 1026. Full details of the model are provided in Appendix D.
As the evaluation of the derivative of the likelihood is significantly cheaper with respect to the latent variables (tests showed computing the hyperparameters' derivatives to be about one hundred times slower), we employ a Gibbs scheme to first sample the latent variables using multiple steps, then perform one iteration for the hyperparameter distribution.
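The alternation described above can be sketched generically; the callables and the 50:1 step ratio used here are placeholders for the actual latent-variable and hyperparameter updates.

```python
def gibbs_sweep(x_latent, x_hyper, latent_step, hyper_step, n_latent=50):
    """One sweep of the Gibbs schedule: many cheap latent-variable updates
    followed by a single (expensive) hyperparameter update."""
    for _ in range(n_latent):
        x_latent = latent_step(x_latent, x_hyper)
    x_hyper = hyper_step(x_latent, x_hyper)
    return x_latent, x_hyper
```

This spends most of the gradient evaluations where they are cheap, while still mixing the slow hyperparameters.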
We generate synthetic test data Y , plotted in Figure 3, and compare the HMC and Langevin dynamics schemes to EQN (using 160 walkers) and the RMHMC scheme [12]. We additionally compare the results using the Langevin dynamics and EQN scheme without Metropolization, as the dynamics themselves sample π, and the Metropolis test only serves to remove discretization error (which is dominated by the sampling error in this example). RMHMC uses Hessian information to obtain scaling data for the distribution. This gives it a significant increase in cost, but improves the rate at which the sampler decorrelates. For the model, the RMHMC scheme requires about 2.2s per step whereas the other schemes require about 0.35s per step.
The EQN scheme significantly outperforms the other methods, with the slowest motion of the system (the β hyperparameter) decorrelating more rapidly than in the HMC or Langevin schemes for approximately the same cost. The RMHMC scheme requires significant extra computation, making it much less efficient than the standard HMC scheme in this example.

Conclusion
We have presented a sampling algorithm that utilizes information from an ensemble of walkers to make more efficient moves through space, by discretizing a continuous ergodic quasi-Newton dynamics sampling the target distribution π(x). The information from the other walkers can be introduced in several ways, and we give two examples using either local or global covariance information.
The two forms of the B i preconditioning matrix are then tested on benchmark test cases, where we see significant improvement compared to standard schemes. The EQN scheme is cheap to implement, requiring no extra evaluations of ∇ log π(x) compared to schemes like MALA, and needing no higher derivative or memory terms. The scheme is also easily parallelizable, with communication between walkers being required infrequently. The dynamics (6) are novel in their approach to the introduction of the scaling information and we build on previous work using walkers running in parallel to provide a cheap alternative to Hessian data.
The full capabilities of the EQN method, in the context of complex data science challenges, remain to be explored. It is likely that more sophisticated choices of B i are merited for particular types of applications. The propagation of an ensemble of walkers also suggests natural extensions of the method to sensitivity analysis and to estimation of the sampling error in the MCMC scheme. Also left to be explored is the estimation of the convergence rate as a function of the number of walkers, which may be possible for simplified model problems.

A Integrated autocorrelation time
Consider the overdamped scheme (2) applied to a Gaussian target π(x) ∝ exp(−x^T M x/2) with symmetric positive definite precision matrix M, so that the update is linear: x^(n+1) = (I − δt M) x^(n) + √(2δt) R^(n). We would expect that using a larger value of δt would lead to a more rapid decorrelation between samples. We can quantify this by computing the covariance between samples, cov(x^(n), x^(0)) = (I − δt M)^n cov(x^(0), x^(0)), which follows from the definition of the update scheme. Similarly, for a test function f(x) = v^T x with v an eigenvector of M with eigenvalue λ, the lag-k autocorrelation is ρ_k = (1 − δt λ)^k, with integrated autocorrelation time τ = 1 + 2 Σ_{k≥1} ρ_k. Assuming the linear stability condition |1 − δt λ| < 1 is satisfied, we can rewrite this expression as τ = (2 − δt λ)/(δt λ) = 2/(δt λ) − 1. Plugging in v = v_min, the eigenvector corresponding to the minimum eigenvalue of M (with eigenvalue λ_min), we find that for the particular observable f*(x) = v_min^T x, τ = 2/(δt λ_min) − 1, and given the constraint δt < 2/λ_max this gives τ > λ_max/λ_min − 1 = κ − 1, so that even choosing the largest timestep possible, the rate of convergence will be slow whenever M has a wide range of eigenvalues.
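For the linear test problem, the scheme applied in one dimension gives an AR(1) chain with coefficient a = 1 − δtλ, whose IAT (1 + a)/(1 − a) equals 2/(δtλ) − 1; this identity is just a summed geometric series and can be checked numerically:

```python
def iat_ar1(a, terms=100000):
    """IAT of an AR(1) chain with coefficient a, by truncated geometric sum:
    tau = 1 + 2 * sum_{k>=1} a^k."""
    return 1.0 + 2.0 * sum(a**k for k in range(1, terms))

dt, lam = 0.01, 5.0
a = 1.0 - dt * lam
assert abs(iat_ar1(a) - (2.0 / (dt * lam) - 1.0)) < 1e-6
```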

B Metropolization of discretized scheme
In order to improve stability of the scheme, or to correct for numerical bias, we may seek to impose a Metropolis condition on the discretization of the dynamics (6). The discretization we use is given in (7), which we rewrite here for clarity:

p^(n+1/4) = p^(n) + (δt/2) F(q^(n)), (13a)
q^(n+1/2) = q^(n) + (δt/2) B(q^(n+1/2)) p^(n+1/4), (13b)
p^(n+2/4) = p^(n+1/4) + (δt/2) div(B(q^(n+1/2))^T), (13c)
p̂^(n) = α p^(n+2/4) + √(1 − α²) R^(n), (13d)
p^(n+3/4) = p̂^(n) + (δt/2) div(B(q^(n+1/2))^T), (13e)
q^(n+1) = q^(n+1/2) + (δt/2) B(q^(n+1/2)) p^(n+3/4), (13f)
p^(n+1) = p^(n+3/4) + (δt/2) F(q^(n+1)), (13g)

with α = exp(−γδt), R^(n) ∼ N(0, I) and F(q) = B(q)^T ∇ log π(q). Note that the step in (13b) must be solved implicitly, likely requiring many evaluations of the matrix B. However, as this requires no communication between walkers and no evaluations of ∇ log π(q), we consider this a "cheap" operation. The ratio of transition probabilities necessary for the acceptance rule is

U = [π̃(q^(n+1), p^(n+1)) T((q^(n+1), −p^(n+1)) → (q^(n), −p^(n)))] / [π̃(q^(n), p^(n)) T((q^(n), p^(n)) → (q^(n+1), p^(n+1)))],

where T(x → y) denotes the transition density of a single step of (13); its evaluation requires the Jacobian determinants of the position maps (13b) and (13f), with derivatives taken with respect to q.
In an efficient implementation, the calculation of the derivatives of B(q) (needed for the transition probabilities and the divergence term) is only required once per step, between (13b) and (13c), while the evaluation of the F(q) term is also required once per step (after initialization), between lines (13f) and (13g). A single step can then be Metropolized using:
• Compute the proposed move (q*, p*) = Upd(q^(n), p^(n)) along with the acceptance ratio U.
• Draw u uniformly from [0, 1].
• Then if u < U we accept the move and set (q^(n+1), p^(n+1)) ← (q*, p*); otherwise we reject the move and flip the sign of the momentum, so set (q^(n+1), p^(n+1)) ← (q^(n), −p^(n)).
where the Upd function corresponds to a step of the discretization in (13). Similarly we can perform multiple steps and then accept/reject the trajectory by multiplying the associated transition probabilities. An example implementation in Python is available at [23]. For Metropolization of overdamped schemes such as (3), see [2].
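The accept/reject logic with momentum flip can be written as a generic wrapper; here the proposal map and the ratio U of (13) are abstracted into callables, and the names are illustrative.

```python
import numpy as np

def metropolis_step(q, p, step_fn, ratio_fn, rng):
    """Generic Metropolis wrapper with momentum flip on rejection.
    `step_fn` proposes (q*, p*); `ratio_fn` returns the acceptance ratio U."""
    q_star, p_star = step_fn(q, p)
    U = ratio_fn(q, p, q_star, p_star)
    if rng.uniform() < U:
        return q_star, p_star, True
    return q, -p, False          # reject: keep position, reverse momentum
```

On rejection the walker keeps its position and reverses its momentum, matching the rule above; a whole trajectory can be treated the same way by multiplying the per-step ratios.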

C Details of the Gaussian mixture experiment
In this section we provide the details of the Gaussian mixture experiment for fitting the Hidalgo stamp data y to the density ρ(y | θ) = ∏_i Σ_{k=1}^{3} z_k √(λ_k/(2π)) exp(−λ_k (y_i − µ_k)²/2), where µ_k and λ_k are the center and precision of the component Gaussians, respectively, with weights z_k > 0 such that Σ_k z_k = 1. Let θ be the parameter/hyperparameter vector for this model. The target distribution for θ is π(θ) ∝ p(θ)ρ(y | θ).
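The log-likelihood of this three-component mixture can be sketched directly (a minimal illustration, without the numerical safeguards a production implementation would use):

```python
import numpy as np

def mixture_loglik(y, mu, lam, z):
    """Log-likelihood of data y under the three-component Gaussian mixture:
    component k has mean mu[k], precision lam[k] and weight z[k]."""
    y = np.asarray(y, dtype=float)[:, None]               # shape (n_data, 1)
    comp = z * np.sqrt(lam / (2.0 * np.pi)) * np.exp(-0.5 * lam * (y - mu) ** 2)
    return np.log(comp.sum(axis=1)).sum()
```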
We use the prior distribution p(θ) such that µ_k ∼ N(m, κ^{−1}) and λ_k ∼ Gamma(α, β) for k ∈ {1, 2, 3}, with the weights (z_1, z_2, z_3) uniform on the simplex, hyperparameter β ∼ Gamma(g, h), and constants m = mean(y), r = range(y), κ = 4/r², α = 2, g = 0.2, h = 100g/(αr²). We compare the standard HMC scheme with a Metropolized Langevin dynamics scheme and the EQN scheme presented in the paper. For each of the schemes, we tune the stepsize until the acceptance rate is on average about 75-80%. The HMC and Langevin schemes are run by taking 50 steps per single iteration, and using a Metropolis test on the obtained trajectory, while the EQN scheme takes 5 steps per iteration. The Langevin and EQN schemes used a friction of γ = 0.01.
All schemes used 64 walkers, which amounts to 64 independent runs for the HMC and Langevin schemes. The EQN run used the localized form of the covariance matrix, with µ = 100 and λ = 12 in (11), though with the weighting kernel only using the Euclidean distance in the (µ_1, µ_2, µ_3) subspace, rather than the entirety of θ. We observed that increasing µ further reduced the acceptance probability significantly.

D Details of the log Gaussian Cox experiment
We run a larger experiment with 1024 + 2 total parameters to estimate. In the model, we discretize a unit square into a 32 × 32 grid, with the observed intensity in each cell denoted Y_{i,j} and Gaussian field X_{i,j}.
The intensities are assumed to be conditionally independent and Poisson distributed with means mΛ(i, j) = m exp(X_{i,j}) for latent intensity process Λ(i, j) and m = 1/32². X = {X_{i,j}} is a Gaussian process: writing x = vec(X), we have mean E(x) = µ1 and covariance matrix Σ with entries Σ_{(i,j),(i′,j′)} = σ² exp(−δ((i, j), (i′, j′))/(32β)), where δ denotes the Euclidean distance between grid cells, with parameters µ, σ² and β.
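Assuming an exponential covariance kernel in the Euclidean inter-cell distance (an assumption; only the grid size and parameter names come from the text), the covariance matrix of the latent field can be assembled as:

```python
import numpy as np

def lgcp_covariance(n, sigma2, beta):
    """Covariance of the latent Gaussian field on an n x n grid, assuming an
    exponential kernel in the Euclidean inter-cell distance."""
    ij = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
    d = np.linalg.norm(ij[:, None, :] - ij[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / (n * beta))
```

For the 32 × 32 grid this produces the 1024 × 1024 covariance of the latent variables; the result is symmetric with σ² on the diagonal by construction.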
Our goal is to sample the latent variables X along with the hyperparameters β and σ², whose priors we assume are exponentially distributed.
We generate synthetic data Y from a field X, created using β = 1/33, σ 2 = 1.91 and µ = log(126) − σ 2 /2 . We aim to infer X from our synthetic Y , along with the hyperparameters used for the model, using the HMC, RMHMC, Langevin dynamics and ensemble quasi-Newton methods. We make use of the RMHMC template code for the problem, available at http://www.ucl.ac.uk/statistics/research/rmhmc.
In order to run multiple highly-resolved simulations, we use a 32 × 32 grid rather than the 64 × 64 grid used in the original version of the problem. This reduces the dimension of the latent variables from 4096 to 1024, which we still consider large enough for a significant test of the samplers' abilities, but this reduction allows us to run for longer and recover more accurate statistical information. However, the change in the model requires us to alter some parameters used in the RMHMC method, for example the timestep, in order to recover good efficiency. The timestep is increased until we reach 75% acceptance (though we do not claim our choice is optimal).
We implement all of the schemes in MATLAB, with each walker running on a single core of identical architecture. For all methods, one iteration uses 50 latent variable steps for each hyperparameter step. The ergodic property of the Langevin-dynamics based schemes allows us to run without Metropolization, if we are willing to accept some discretization bias that is not removed by further sampling. We argue that in most practical cases, with a sensibly chosen discretization parameter, the sampling error dominates the discretization bias.
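The alternation between latent-variable and hyperparameter updates can be sketched as follows (schematic only; `step_latents` and `step_hypers` are hypothetical placeholders for the actual sampler updates):

```python
def gibbs_iteration(x, theta, step_latents, step_hypers, n_latent=50):
    """One outer iteration: 50 latent-variable steps per hyperparameter step.

    step_latents(x, theta) -> x and step_hypers(x, theta) -> theta stand in
    for the actual integrator updates of each variable block.
    """
    for _ in range(n_latent):
        x = step_latents(x, theta)
    theta = step_hypers(x, theta)
    return x, theta
```

Updating the blocks at different rates reflects their very different dimensions (1024 latent variables versus 2 hyperparameters).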
For the ensemble quasi-Newton sampler, we run 160 walkers with the global covariance formulation for $B_i$ (effectively λ = 0). We use a Gibbs sampling approach for the latent variables and hyperparameters, with a separate $B_i$ for each partition of the variables. For the hyperparameters we used µ = 1, and for the latent variables µ = 7. We use five groups of 32 walkers, with the walkers within each group running in parallel for ten thousand steps before switching to the next group. The cost of this communication is negligible as it occurs so infrequently.
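A global ensemble covariance preconditioner of this kind can be sketched as follows (an illustration, not our production MATLAB code; the small identity shift and the choice of symmetric square root are assumptions of this sketch):

```python
import numpy as np

def global_preconditioner(Q, i, shift=1e-6):
    """Symmetric square-root preconditioner B_i for walker i.

    Q is an (L, N) array of walker positions. B_i is built from the sample
    covariance of all walkers *except* walker i, so B_i does not depend on
    q_i; a small identity shift keeps the matrix positive definite.
    Returns B_i with B_i @ B_i.T equal to the shifted covariance.
    """
    others = np.delete(Q, i, axis=0)
    C = np.cov(others, rowvar=False) + shift * np.eye(Q.shape[1])
    w, V = np.linalg.eigh(C)              # C is symmetric positive definite
    return V @ np.diag(np.sqrt(w)) @ V.T  # symmetric square root of C
```

Because each $B_i$ is recomputed only when the groups synchronize, the eigendecomposition cost is amortized over many steps.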
satisfying the invariance property, as the covariance is automatically symmetric positive definite. Alternatively, one may choose $B$ to be the inverse square root of the Hessian matrix of $\log \pi_\psi$, which shares this property (subject to some conditions on the Hessian).
We have shown that the discretization using this choice of $B$ is affine invariant only up to affine transformations in the $q$ variables. However, we may consider dynamics that are invariant under a transformation
$$\tilde\psi(q,p) = (Aq + v,\; A^{-T}p)$$
which acts in both components, where as before $A$ is any invertible matrix and $v$ is a vector. The dynamics we consider sample the distribution $\tilde\pi(q,p)$ ergodically, where
$$\tilde\pi(q,p) \propto \pi(q)\,\varphi(q,p), \qquad \varphi(q,p) = \exp\left(-\frac{1}{2} p^T M_\pi^{-1}(q)\,p - \frac{1}{2}\log|M_\pi(q)|\right),$$
so that the marginal distribution of $\tilde\pi(q,p)$ in the momenta is $\pi(q)$. The mass matrix $M_\pi(q)$ is a symmetric positive definite matrix that may depend on the position $q$. For the choice of (1) we obtain the standard underdamped Langevin dynamics, and applying it to $\tilde\pi(q,p)$ with $\gamma = 1$ gives
$$dq = M_\pi^{-1}(q)\,p\,dt, \qquad dp = \nabla\log\pi(q)\,dt + \nabla\log\varphi(q,p)\,dt - p\,dt + \sqrt{2M_\pi(q)}\,dW_t,$$
where the gradient $\nabla$ is taken with respect to the position $q$. Now applying this dynamics instead to the transformed distribution $\tilde\pi_\psi(q,p) \propto \pi_\psi(q)\,\varphi_\psi(q,p) = \pi(Aq+v)\,\varphi(Aq+v,\,A^{-T}p)$, we obtain
$$dq = M_{\pi_\psi}^{-1}(q)\,p\,dt,$$
$$dp = \nabla\log\pi_\psi(q)\,dt - \frac{1}{2}\nabla\log|M_{\pi_\psi}(q)|\,dt - \frac{1}{2}\nabla\left(p^T M_{\pi_\psi}^{-1}(q)\,p\right)dt - p\,dt + \sqrt{2M_{\pi_\psi}(q)}\,dW_t,$$
where $\nabla$ denotes the gradient with respect to the position coordinates. Changing variables $r = A^{-T}p$ and writing $y = Aq + v$, we have $\pi_\psi(q) \propto \pi(y)$, and if we assume $M$ is chosen such that
$$M_{\pi_\psi}^{-1}(q) = A^{-1}\,C(Aq+v)\,A^{-T} \qquad (20)$$
for some symmetric positive definite matrix $C(\cdot)$, then the dynamics become
$$dy = C(y)\,r\,dt, \qquad dr = \nabla\log\pi(y)\,dt + \frac{1}{2}\nabla\log|C(y)|\,dt - \frac{1}{2}\nabla\left(r^T C(y)\,r\right)dt - r\,dt + \sqrt{2C^{-1}(y)}\,dW_t,$$
eliminating the $A$ scaling in the dynamics and yielding affine invariance with respect to the transformations $\tilde\psi$.
Using the inverse covariance matrix as the mass, i.e. $M_{\pi_\psi}^{-1}(q) = \mathrm{cov}_{\pi_\psi}(q)$, is one such choice for the $M$ matrix: similar to (17), we have
$$M_{\pi_\psi}^{-1}(q) = \mathrm{cov}_{\pi_\psi}(q) = \mathrm{cov}_\pi(\psi^{-1}(x)) = A^{-1}\,\mathrm{cov}_\pi(x)\,A^{-T},$$
satisfying (20) with $C = \mathrm{cov}_\pi$. The inverse Hessian is another such matrix with this property.
This result applies directly to the RMHMC scheme [12], which uses a position-dependent mass matrix, though it periodically redraws the momentum and sets γ = 0. If the inverse Hessian is used (assuming it remains symmetric positive definite), or another matrix such that (20) holds, then the resulting dynamics will be affine invariant under the transformations $\tilde\psi(q,p) = (Aq+v,\,A^{-T}p)$.
Computationally there is no obvious benefit to including a $p$-scaling term in the affine transformation, as sampling $\pi(q)$ is the ultimate goal and the momentum is included only to increase efficiency. Moreover, the use of (18) may become practically inefficient in the case of ensemble sampling, as the joint distribution becomes
$$\bar\pi(Q,P) = \prod_{i=1}^{L} \pi(q_i)\exp\left(-\frac{1}{2}\,p_i^T M_i^{-1}(Q)\,p_i - \frac{1}{2}\log|M_i(Q)|\right).$$
In this target joint distribution the walker positions are no longer independent. This complicates Metropolization of the scheme, which now requires calculations involving all walkers whenever any one walker's position changes. Additionally, in each walker's dynamics (18), evaluating the $\nabla_{q_i}\bar\pi(Q,P)$ term requires computing derivatives of each of the $L$ matrices $M_i$, adding significant computational overhead per step.

F Noisy estimation of the divergence term
The divergence of a positive definite matrix field appears in several of the schemes considered above; however, this derivative is sometimes prohibitively expensive, or infeasible to compute analytically. A Metropolization condition can be enforced to recover correct sampling without this term, but we may instead approximate the divergence of a matrix $M(x)$ using a random update. Formally, for a small constant $\epsilon > 0$ and a random vector $R \in \mathbb{R}^N$ with $E[R] = 0$ and $E[RR^T] = I$, we have
$$E_R\left[\epsilon^{-1}\big(M(x + \epsilon R) - M(x)\big)\,R\right] = \nabla\cdot M(x) + O(\epsilon),$$
where $(\nabla\cdot M(x))_i = \sum_j \partial_{x_j} M_{i,j}(x)$. Evaluating the bracketed expression for a single draw of $R$ gives a cheap "noisy" approximation of the divergence term that will introduce some small bias into the system (depending on the spectrum of $M(x)$).
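A minimal sketch of this estimator (function names are ours), averaged over many draws of $R$ to expose its mean:

```python
import numpy as np

def noisy_divergence(M, x, eps=1e-4, rng=None):
    """Single-sample randomized estimate of the divergence of a matrix field.

    M: callable returning an (N, N) matrix at a point x in R^N.
    Returns (M(x + eps*R) - M(x)) @ R / eps with R ~ N(0, I); the
    expectation of this quantity is the row-wise divergence
    (sum_j dM_ij/dx_j) up to an O(eps) discretization error.
    """
    rng = np.random.default_rng() if rng is None else rng
    R = rng.standard_normal(x.shape)
    return (M(x + eps * R) - M(x)) @ R / eps

if __name__ == "__main__":
    # Example: M(x) = diag(x_1^2 + 1, x_2^2 + 1) has divergence (2 x_1, 2 x_2).
    M = lambda x: np.diag(x**2 + 1.0)
    x = np.array([0.5, -1.0])
    rng = np.random.default_rng(0)
    est = np.mean([noisy_divergence(M, x, rng=rng) for _ in range(20000)],
                  axis=0)
    print(est)  # close to [1.0, -2.0]
```

In a sampler one would use a single draw per step, accepting the extra variance in exchange for avoiding any derivative of $M$.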