Geometry-informed irreversible perturbations for accelerated convergence of Langevin dynamics

We introduce a novel geometry-informed irreversible perturbation that accelerates convergence of the Langevin algorithm for Bayesian computation. It is well documented that there exist perturbations to the Langevin dynamics that preserve its invariant measure while accelerating its convergence. Irreversible perturbations and reversible perturbations (such as Riemannian manifold Langevin dynamics (RMLD)) have separately been shown to improve the performance of Langevin samplers. We consider these two perturbations simultaneously by presenting a novel form of irreversible perturbation for RMLD that is informed by the underlying geometry. Through numerical examples, we show that this new irreversible perturbation can improve estimation performance over irreversible perturbations that do not take the geometry into account. Moreover we demonstrate that irreversible perturbations generally can be implemented in conjunction with the stochastic gradient version of the Langevin algorithm. Lastly, while continuous-time irreversible perturbations cannot impair the performance of a Langevin estimator, the situation can sometimes be more complicated when discretization is considered. To this end, we describe a discrete-time example in which irreversibility increases both the bias and variance of the resulting estimator.


Introduction
Bayesian inference often requires estimating expectations with respect to non-Gaussian distributions. To solve this problem, particularly in high dimensions, one frequently resorts to sampling methods. A commonly used class of sampling methods is based on the Langevin dynamics (LD), which uses the gradient of the log-target density to specify a stochastic differential equation (SDE) whose invariant distribution is the target (e.g., posterior) distribution of interest.
Long term averages over a single trajectory of the SDE can be then used to estimate expectations of interest by appealing to the ergodicity of the stochastic process. Other LD-based approaches that reduce the mean squared error (MSE) of such estimators include the Metropolis-adjusted Langevin algorithm (MALA) (Roberts and Tweedie 1996;Girolami and Calderhead 2011), the stochastic gradient Langevin dynamics (SGLD) (Welling and Teh 2011), and their variants.
It is also known that certain perturbations of the LD can accelerate convergence of the dynamics to the stationary distribution. In particular, suitable reversible and irreversible perturbations of diffusion processes can increase the spectral gap of the generator, as well as increase the large deviations rate function and decrease the asymptotic variance of the resulting estimators (Rey-Bellet and Spiliopoulos 2015, 2016). One widely celebrated choice of reversible perturbation is the Riemannian manifold Langevin dynamics (Girolami and Calderhead 2011), in which one defines a Riemannian metric that alters the way distances and gradients are computed. The use of irreversible perturbations to accelerate convergence has also been well studied in a variety of contexts and general settings (Hwang et al. 2005; Rey-Bellet and Spiliopoulos 2015, 2016); see also (Franke et al. 2010; Hwang et al. 1993; Bierkens 2016; Diaconis et al. 2010) and, for linear systems, Lelievre et al. (2013). The authors of Ma et al. (2015) find general conditions on the drift and diffusion coefficients of an SDE so that a specified measure is the SDE's invariant measure, without, however, exploring how different choices of these coefficients affect sampling quality. Existing literature shows that augmenting the drift of the LD with a vector field that is orthogonal to the gradient of the log-target density leaves the invariant measure unchanged while potentially increasing the spectral gap. A convenient choice is simply to add the vector field induced by a skew-symmetric matrix applied to the gradient of the log-posterior.
At the same time, traditional sampling methods for Bayesian inference are often intractable for extremely large datasets. While Langevin dynamics-based sampling methods only require access to the unnormalized posterior density, they need many evaluations of this unnormalized density and its gradient. When the dataset is extremely large, each evaluation of the density may be computationally intractable, as it requires the evaluation of the likelihood over the entire dataset. In the past decade the stochastic gradient Langevin dynamics (SGLD) has been introduced and analyzed (Welling and Teh 2011;Teh et al. 2016) to address the problem posed by large datasets. Rather than evaluating the likelihood over the entire dataset, SGLD subsamples a portion of the data (either with or without replacement) and uses the likelihood evaluated at the sampled data to estimate the true likelihood. The resulting chain can then be used to estimate ergodic averages.
In this paper we present a state-dependent irreversible perturbation of Riemannian manifold Langevin dynamics that is informed by the geometry of the manifold. This departs from existing literature, as the vector field of the resulting perturbation is not orthogonal to the original drift term. This geometry-informed irreversible perturbation accelerates convergence and, if desired, can be used in combination with the SGLD algorithm to exploit the computational savings of a stochastic gradient.
We demonstrate this approach on a variety of examples: a simple anisotropic Gaussian target, a posterior on the mean and variance parameters of a normal distribution, Bayesian logistic regression, and Bayesian independent component analysis (ICA). Generally, we observe that the geometry-informed irreversible perturbation improves the convergence rate of LD compared to a standard irreversible perturbation. The improvement tends to be more pronounced as the target distribution deviates from Gaussianity. Our numerical studies also show that introducing irreversibility can reduce the MSE of the resulting long-term average estimator, mainly by reducing variance. In many cases this reduction is significant, e.g., one to two orders of magnitude.
One must, however, also take the effects of discretization into account. In the continuous-time setting, it is known theoretically that irreversible perturbations can at worst leave the spectral gap unchanged. In borderline cases, though, i.e., where the continuous-time theoretical improvement is nearly zero, stiffness can, after discretization, actually cause the resulting estimator to perform worse than if no irreversibility were applied at all. Indeed, we describe in Appendix A an illustrative Gaussian example in which the standard Langevin algorithm outperforms the algorithm with the standard irreversible perturbation; that is, an example in which additional irreversibility increases both the bias and variance of the long-term average estimator (see Remark 3.2 and Remark A.1 for a theoretical explanation). Along similar lines, the idea of applying irreversible perturbations to SGLD has recently been studied in the context of nonconvex stochastic optimization (Hu et al. 2020). Those authors also note that while irreversibility increases the rate of convergence, it increases the discretization error and amplifies the variance of the gradient compared to a non-perturbed system with the same step size; see also (Brosse et al. 2018) for a related discussion of the relation of SGLD to SGD and their convergence properties. This reflects the increased stiffness of irreversible SGLD relative to standard SGLD.
The rest of the paper is organized as follows. In Section 2 we review reversible and irreversible perturbations of the overdamped Langevin dynamics that may improve the efficiency of sampling from equilibrium. Then, in Section 2.3, we present our new geometry-informed irreversible perturbation. In Section 3 we present simulation studies that demonstrate the good performance of this geometric perturbation relative to a variety of other standard reversible and irreversible choices. In several of these examples, we also demonstrate the use of stochastic gradients. Section 4 summarizes our results and outlines directions for future work. Appendix A details the simple Gaussian example showing that in "borderline" cases, i.e., when continuous-time analysis does not predict improvements from irreversible perturbations, the stiffness created by an irreversible perturbation can, after discretization, lead to poorer performance than in the unperturbed case.

Improving the performance of Langevin samplers
We begin by recalling some relevant background on Langevin samplers, Riemannian manifold Langevin dynamics, perturbations of Langevin dynamics, and the stochastic gradient Langevin dynamics algorithm. Let f(θ) be a test function on state space E ⊂ R^d and let π(θ) be some unnormalized target density on E. In our experiments, π(θ) arises as a posterior density of the form π(θ) ∝ L(θ; X)π_0(θ), where L(θ; X) is the likelihood model, X are the data, and π_0(θ) is the prior density. Define {θ(t)} as a Langevin process that has invariant density π(θ):

dθ(t) = β∇ log π(θ(t)) dt + √(2β) dW(t),   (1)

where β > 0 denotes the temperature, W(t) is a standard Brownian motion in R^d, and the initial condition may be arbitrary. By ergodicity, we may compute expectations with respect to the posterior via the long-term average of f(θ) over a single trajectory:

E_π[f(θ)] = lim_{T→∞} (1/T) ∫_0^T f(θ(t)) dt.   (2)

For practical computations, we must approximate (2) by discretizing the Langevin dynamics and choosing a large but finite T. Applying the Euler–Maruyama method to (1) with step size h yields the recurrence

θ_{k+1} = θ_k + hβ∇ log π(θ_k) + √(2βh) ξ_k,   (3)

where the ξ_k are independent standard normal random variables. The total number of steps is K = T/h. The resulting estimator for (2) is

f̂_K = (1/K) Σ_{k=0}^{K−1} f(θ_k).   (4)

This estimator is the unadjusted Langevin algorithm (ULA), which has found renewed interest in the context of high-dimensional machine learning problems (Durmus and Moulines 2019). Discretization and truncation, however, introduce bias into the estimator. Moreover, there are noted examples in which the continuous-time process and its discretization do not have the same invariant distribution for any fixed, nonzero discretization step h; see Ganguly and Sundar (2021) for a related discussion.
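As an illustration, the ULA long-term average estimator can be sketched in a few lines. This is our own minimal sketch, not the paper's code: the function names and the standard Gaussian test target are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the unadjusted Langevin algorithm (ULA): Euler-Maruyama steps
# of d(theta) = beta * grad log pi dt + sqrt(2*beta) dW, then a long-term average.
def ula_estimate(grad_log_pi, f, theta0, h, K, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    total = 0.0
    for _ in range(K):
        xi = rng.standard_normal(theta.shape)
        theta = theta + h * beta * grad_log_pi(theta) + np.sqrt(2 * beta * h) * xi
        total += f(theta)
    return total / K  # the estimator (1/K) * sum_k f(theta_k)

# Standard Gaussian target: grad log pi(theta) = -theta, and E_pi[theta^2] = 1.
est = ula_estimate(lambda t: -t, lambda t: float(t[0] ** 2),
                   np.zeros(1), h=0.02, K=100_000)
```

The estimate carries both discretization bias (from h > 0) and Monte Carlo variance (from finite K), the two error sources discussed above.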
Certain Markov chain Monte Carlo (MCMC) methods such as MALA circumvent these issues by using the dynamics to propose new points, but accepting or rejecting them according to a rule that ensures the resulting discrete-time Markov chain has the target distribution as its invariant distribution (Roberts and Tweedie 1996; Girolami and Calderhead 2011). Many different SDEs can have the same invariant distribution, and there has therefore been much study of how the standard Langevin dynamics for a target distribution can be altered to increase its rate of convergence; see, e.g., Hwang et al. (2005) and Rey-Bellet and Spiliopoulos (2016). The standard Langevin dynamics is a reversible Markov process, meaning that it satisfies detailed balance. The work of Rey-Bellet and Spiliopoulos (2016) studies, in general terms, how reversible and irreversible perturbations of reversible processes increase the spectral gap, increase the large deviations rate function, and decrease the asymptotic variance. How to choose such perturbations to most efficiently accelerate convergence, however, has yet to be thoroughly studied in settings beyond linear diffusion processes (Lelievre et al. 2013). Moreover, with the exception of a few examples, see for instance (Duncan et al. 2017; Jianfeng and Spiliopoulos 2018), these perturbations have mainly been studied in the continuous-time setting.

Reversible perturbations and Riemannian manifold Langevin dynamics
We only review relevant aspects of reversible perturbations and RMLD in this section. For a detailed review of RMLD and its related Monte Carlo methods, we refer the reader to Girolami and Calderhead (2011), Livingstone and Girolami (2014), Xifara et al. (2014). Let B(θ) be a d × d symmetric positive definite matrix. A reversible perturbation of LD (1) is an SDE with multiplicative noise:

dθ(t) = [βB(θ(t))∇ log π(θ(t)) + β∇·B(θ(t))] dt + √(2βB(θ(t))) dW(t).   (5)

Here, the divergence of the matrix B is taken row-wise, (∇·B(θ))_i = Σ_j ∂B_ij(θ)/∂θ_j. This is equivalent to Langevin dynamics defined on a Riemannian manifold with metric G(θ) = B(θ)^{−1} (Xifara et al. 2014). A straightforward calculation shows that (5), with B(θ) any symmetric positive definite matrix, admits the same invariant distribution π. Any improvement in the rate of convergence depends on the choice of the underlying metric. The work of Girolami and Calderhead (2011) argues that choosing the metric to be the expected Fisher information matrix plus the negative Hessian of the log-prior improves the performance of the resulting manifold MALA method. Meanwhile, Rey-Bellet and Spiliopoulos (2016) show that under certain regularity conditions, if B(θ) is chosen such that B(θ) − I is positive definite, then the resulting estimator is expected to have improved performance in terms of the asymptotic variance, the spectral gap, and the large deviations rate function.

Irreversible perturbations
Consider the following Langevin dynamics:

dθ(t) = [β∇ log π(θ(t)) + γ(θ(t))] dt + √(2β) dW(t).   (6)

When γ(θ) ≡ 0, the process is reversible and has π(θ) as its invariant distribution. If γ ≠ 0, then the resulting process will, in general, be time-irreversible unless γ(θ) can be written as a multiple of ∇ log π(θ); see for example (Pavliotis 2014). An irreversible perturbation can nevertheless preserve the invariant distribution of the system. By considering the Fokker–Planck equation, one can show that if γ(θ) is chosen such that ∇ · (γπ) = 0, then π is still the invariant distribution. A frequently used choice in the literature is γ(θ) = J∇ log π(θ), where J is a constant skew-symmetric matrix, i.e., J = −J^T. The computational advantage of this choice is clear, since only one additional matrix–vector multiplication is needed to implement it. The optimal irreversible perturbation of a linear system (i.e., the one yielding fastest convergence) was completely analyzed in Lelievre et al. (2013).
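The measure-preservation property can be checked empirically. The following sketch simulates the perturbed dynamics (6) with γ = J∇ log π for a 2-d Gaussian target and compares the empirical second moment against the target covariance; the precision matrix, step size, and J below are our own illustrative choices, not values from the paper.

```python
import numpy as np

# Empirical check that the irreversible drift gamma = J * grad log pi, with J
# skew-symmetric, leaves the Gaussian invariant measure unchanged.
P = np.array([[1.0, 0.3], [0.3, 2.0]])   # target precision: pi = N(0, P^{-1})
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # constant skew-symmetric matrix
beta, h, K = 0.5, 0.02, 150_000
rng = np.random.default_rng(0)

theta = np.zeros(2)
second_moment = np.zeros((2, 2))
for _ in range(K):
    grad = -P @ theta                     # grad log pi(theta)
    drift = beta * grad + J @ grad        # reversible part plus irreversible part
    theta = theta + h * drift + np.sqrt(2 * beta * h) * rng.standard_normal(2)
    second_moment += np.outer(theta, theta)
second_moment /= K

cov_target = np.linalg.inv(P)  # the empirical second moment should approach this
```

Up to discretization bias and Monte Carlo error, the empirical second moment matches P^{−1}, even though the perturbed dynamics no longer satisfies detailed balance.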
The advantages of using irreversible perturbations are widely noted. The main result of Hwang et al. (2005) is that, under certain conditions, the spectral gap, i.e., the difference between the two leading eigenvalues of the generator of the Markov semigroup, increases when γ ≠ 0. In Rey-Bellet and Spiliopoulos (2015, 2016), the large deviations rate function is introduced as a measure of performance in the context of sampling from equilibrium; upon connecting it to the asymptotic variance of the long-term average estimator, it is proven that adding an appropriate perturbation γ not only increases the large deviations rate function but also decreases the asymptotic variance of the estimator. The use of irreversible proposals in MALA was studied in Ottobre et al. (2019).

Irreversible perturbations for RMLD
In this section, we introduce our novel geometry-informed irreversible perturbation to Langevin dynamics. Suppose that we are given a diffusion process as in (5), and we wish to choose an irreversible perturbation that leaves the invariant distribution fixed. Indeed, our previous choice of irreversible perturbation remains valid for this system; that is, adding γ(θ) = J∇ log π(θ), for a constant skew-symmetric matrix J, to the drift term of (5) preserves the invariant density. This choice yields the following SDE:

dθ(t) = [(βB(θ(t)) + J)∇ log π(θ(t)) + β∇·B(θ(t))] dt + √(2βB(θ(t))) dW(t).   (7)

We refer to this system as Riemannian manifold Langevin dynamics with an additive irreversible perturbation (RMIrr). This choice, however, does not exploit the features that the reversible perturbation may provide when constructing an irreversible perturbation. The reversible perturbation is defined by a state-dependent positive definite matrix (a metric, in the terminology of Riemannian geometry), whereas the skew-symmetric matrix J in the irreversible perturbation is fixed. The skew-symmetric matrix need not be constant, however, as an irreversible perturbation γ(θ) only needs to satisfy ∇ · (γ(θ)π(θ)) = 0. In fact, if γ(θ) = C(θ)∇ log π(θ) + ∇ · C(θ) for C(θ) = −C(θ)^T, then this irreversible perturbation also leaves the invariant density intact. Noting that C_ii(θ) = 0 and that C_ij = −C_ji, observe that

∇ · (γπ) = Σ_{i,j} ∂_i(C_ij ∂_j π) + Σ_{i,j} ∂_i((∂_j C_ij) π) = Σ_{i,j} C_ij ∂_i ∂_j π + Σ_{i,j} (∂_i C_ij)(∂_j π) + Σ_{i,j} (∂_i ∂_j C_ij) π + Σ_{i,j} (∂_j C_ij)(∂_i π) = 0,

since the first and third sums vanish by skew-symmetry, and the second and fourth sums cancel each other upon swapping indices. We seek an irreversible perturbation that takes the reversible perturbation into account, allowing C(θ) to be non-constant, and investigate whether it improves the performance of the long-term average estimator. Note that in the literature, the condition ∇ · (γπ) = 0 is typically replaced by the sufficient conditions ∇ · γ(θ) = 0 and γ(θ) · ∇π(θ) = 0 (Rey-Bellet and Spiliopoulos 2016). One can check, however, that when C is not constant these conditions need not hold, yet γ(θ) is still a valid irreversible perturbation.
A simple choice of C(θ) that incorporates B(θ) is

C(θ) = (1/2)(J B(θ) + B(θ) J),   (8)

where J is a constant skew-symmetric matrix; since B(θ) is symmetric and J is skew-symmetric, C(θ) is skew-symmetric. The 1/2 factor is introduced so that if B(θ) = I, i.e., if there is no reversible perturbation, then this perturbation reverts to the standard irreversible perturbation (Irr). We arrive at the following system:

dθ(t) = [(βB(θ(t)) + C(θ(t)))∇ log π(θ(t)) + ∇ · (βB(θ(t)) + C(θ(t)))] dt + √(2βB(θ(t))) dW(t).   (9)
We call this choice of perturbation the geometry-informed irreversible perturbation (GiIrr). Indeed, while there are infinitely many valid choices for C(θ ), we will investigate the choice in (8) in the numerical examples. Since we will have already explicitly constructed B(θ ) and J for the other systems, the additional computational cost of computing their product will be marginal. Furthermore, as mentioned earlier, this choice reduces to Irr when B(θ ) = I.
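The construction of the state-dependent skew-symmetric matrix can be sketched directly; the helper name and the diagonal B below are our own illustrative choices, and the defining property being exercised is that C = (1/2)(JB + BJ) is skew-symmetric and reduces to J when B = I.

```python
import numpy as np

# Sketch of the geometry-informed skew-symmetric matrix C = 0.5 * (J B + B J).
def giirr_C(B, J):
    # B symmetric positive definite, J skew-symmetric => C is skew-symmetric.
    return 0.5 * (J @ B + B @ J)

B = np.diag([2.0, 0.5])                  # an illustrative constant "metric inverse"
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # constant skew-symmetric matrix
C = giirr_C(B, J)
C_identity = giirr_C(np.eye(2), J)       # with B = I this reduces to J itself
```

For a state-dependent B(θ), the same expression is evaluated at each state, and the extra drift term ∇·C(θ) must also be supplied, analytically or by differentiation.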
One may wonder when GiIrr results in improved performance over standard irreversible perturbations such as that in (7). Based on our numerical results and intuition, we argue that GiIrr performs better when the underlying reversible perturbation already improves sampling. Namely, if RMLD is known to improve sampling on a given problem, then employing GiIrr is expected to improve sampling even further. As mentioned earlier, the choice of GiIrr made in this paper is not unique, and further investigation of its theoretical properties is left to future work; see also the discussion in Section 4. The goal of this paper is to present this new class of irreversible perturbations and to investigate it numerically in a number of representative computational studies.

Stochastic gradient Langevin dynamics
In certain Bayesian inference problems, the data are conditionally independent of each other given the parameter value. The likelihood model can then be factorized and the posterior density written as

π(θ|X) ∝ π_0(θ) Π_{i=1}^N π(X_i|θ),

where π(X_i|θ) is the likelihood function for data point X_i. When the dataset is extremely large, i.e., when N ≫ 1, ULA becomes exceedingly expensive, as it requires repeatedly evaluating the likelihood over the entire dataset at each step of the trajectory. To mitigate this cost, the stochastic gradient Langevin dynamics was introduced; it reduces the cost of evaluating the posterior density by evaluating the likelihood over only a subset of the data at each step, and the true likelihood is estimated from the likelihood evaluated at the subsampled data (Welling and Teh 2011). Specifically, the gradient is estimated by the stochastic gradient

∇ log π(θ|X) ≈ ∇ log π_0(θ) + (N/n) Σ_{i∈τ} ∇ log π(X_i|θ),

where τ is a random subset of {1, . . . , N} of size n drawn with or without replacement. Depending on the choice of n, this approach cuts computational costs dramatically, at the price of some additional variance incurred by the random subsampling of the data. The original version of this algorithm used a variable step size, approaching zero as the number of steps K becomes large. SGLD with a variable, shrinking step size was proven to be consistent: the invariant distribution of the discretized system matches that of the continuous system (Teh et al. 2016). A decreasing step size, however, counteracts the cost savings provided by the stochastic gradient, and therefore a fixed-step-size version was presented in Vollmer et al. (2016), where theoretical characterizations of the asymptotic and finite-time bias and variance are also developed.
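The rescaled minibatch gradient above is unbiased for the full-data gradient, which can be checked on a toy model. This is our own sketch; the Gaussian likelihood with flat prior and all names are illustrative assumptions.

```python
import numpy as np

# Sketch of the SGLD stochastic-gradient estimate of grad log pi(theta | X):
# prior gradient plus an (N/n)-rescaled minibatch sum of likelihood gradients.
def stochastic_grad_log_post(theta, X, grad_log_prior, grad_log_lik, n, rng):
    N = len(X)
    idx = rng.choice(N, size=n, replace=False)  # subsample without replacement
    return grad_log_prior(theta) + (N / n) * sum(grad_log_lik(theta, X[i]) for i in idx)

# Toy check: likelihood N(x | theta, 1) with a flat prior, so the full-data
# gradient at theta = 0 is sum_i x_i; the minibatch estimate is unbiased for it.
rng = np.random.default_rng(1)
X = rng.standard_normal(1000)
full_grad = X.sum() - 1000 * 0.0
draws = [stochastic_grad_log_post(0.0, X, lambda t: 0.0, lambda t, x: x - t,
                                  n=100, rng=rng)
         for _ in range(2000)]
mean_draw = float(np.mean(draws))  # close to full_grad on average
```

Averaged over many draws, the minibatch gradient recovers the full gradient; the subsampling only adds variance, which is the extra noise SGLD must tolerate.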
In most of our numerical results, we use the stochastic gradient version of the Langevin algorithm with a fixed step size to demonstrate that SGLD can be used together with irreversible perturbations.

Numerical examples
In the following examples, we always apply the stochastic gradient version of each Langevin system unless otherwise stated. We fix β = 1/2 for all examples. The efficacy of the GiIrr perturbation does not change whether or not the stochastic gradient is used. We illustrate this explicitly in Section 3.4, where we report the results of all perturbations both with and without the stochastic gradient, for comparison.

Evaluating sample quality
Here we discuss the measures we use to evaluate sample quality for each Langevin sampler. For each example, we estimate the bias, variance, mean-squared error, and asymptotic variance of estimators of the expectations of two observables,

φ_1(θ) = Σ_{i=1}^d [θ]_i and φ_2(θ) = Σ_{i=1}^d [θ]_i^2,

i.e., of long-term averages of the form φ̂_K = (1/K) Σ_{k=0}^{K−1} φ(θ_k). The expected value of the estimator is approximated by averaging over M independent chains,

E[φ̂_K] ≈ (1/M) Σ_{i=1}^M φ̂_K^{(i)}, with φ̂_K^{(i)} = (1/K) Σ_{k=0}^{K−1} φ([θ_k]_i),

where [θ_k]_i is the state of the ith chain at iteration k. Here, we estimate the reference value E_π[φ(θ)] by applying the unadjusted Langevin algorithm with a very long simulated trajectory and a small discretization step. We use M = 1000 independent chains for the examples in Sections 3.2 and 3.3, and M = 100 independent chains for the examples in Sections 3.4 and 3.5. The variance of each estimator is defined and estimated as

Var[φ̂_K] ≈ (1/M) Σ_{i=1}^M ( φ̂_K^{(i)} − (1/M) Σ_{j=1}^M φ̂_K^{(j)} )^2.

The mean-squared error (MSE) of each estimator φ̂_K follows analogously.
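The across-chain bookkeeping amounts to a few lines; this is our own sketch of the decomposition MSE = bias² + variance, with illustrative names.

```python
import numpy as np

# Sketch: squared bias, variance, and MSE of a long-term-average estimator,
# computed from M independent per-chain estimates and a reference E_pi[phi].
def bias_var_mse(chain_estimates, reference):
    est = np.asarray(chain_estimates, dtype=float)
    mean = est.mean()
    sq_bias = (mean - reference) ** 2     # squared bias of the estimator
    var = np.mean((est - mean) ** 2)      # empirical variance across chains
    return sq_bias, var, sq_bias + var    # MSE = bias^2 + variance

# Three toy chain estimates whose mean equals the reference value exactly.
sq_bias, var, mse = bias_var_mse([1.0, 2.0, 3.0], reference=2.0)
```

In the experiments below, the per-chain estimates are the running averages φ̂_K^{(i)}, so these quantities can be plotted as functions of K.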
We also evaluate the asymptotic variance of the estimator of each observable, defined as

σ_φ^2 = lim_{T→∞} T · Var[ (1/T) ∫_0^T φ(θ(t)) dt ].

To compute these asymptotic variances, we use the batch means method of Asmussen and Glynn (2007). After the burn-in period, we evaluate the observable over each chain. Each observable chain is then split into twenty batches, and their means are evaluated. The asymptotic variance is estimated by computing the empirical variance of those batch means and multiplying by the length of each batch. In addition to measuring the performance of estimators of specific observables, for each sampler we also evaluate overall sample quality by computing the recently proposed kernelized Stein discrepancy (KSD) (Gorham and Mackey 2017). The KSD is a computable quantity that approximates certain integral probability metrics (IPMs) for a class of functions defined through the action of the Stein operator on a reproducing kernel Hilbert space. Let π̂_K = (1/K) Σ_{k=0}^{K−1} δ_{θ_k} be the empirical approximation to π based on samples {θ_k}_{k=0}^{K−1} produced by some Langevin algorithm. The IPM associated with a function space H is

d_H(π̂_K, π) = sup_{h∈H} | E[h(Z)] − E[h(X)] |,

where the functions h ∈ H map R^d to R, and Z ∼ π̂_K and X ∼ π are random variables. When H is large enough, d_H(π̂_K, π) → 0 only if π̂_K → π in distribution; see Gorham and Mackey (2017) and the references therein. This result implies that a better empirical approximation (i.e., better-quality samples) corresponds to a lower IPM value. Computing the IPM directly is intractable, however, as it requires knowing expectations with respect to the target distribution π exactly (which is, after all, our original goal in sampling). By judiciously choosing the function space H, we can instead estimate the IPM by computing the KSD, using only the samples that comprise the empirical distribution π̂_K.
As the expectation with respect to the target distribution π is intractable to compute, one seeks an H such that E_π[h(X)] = 0 for every h ∈ H. To this end, define the Stein operator A acting on functions g : R^d → R^d by

(Ag)(x) = g(x) · ∇ log π(x) + ∇ · g(x),

where ∇· denotes the divergence operator. The space G is defined by specifying a reproducing kernel Hilbert space (RKHS) R_r of functions from R^d to R, with a user-defined kernel r(x, y) that is twice continuously differentiable (Gorham and Mackey 2017; Izzatullah et al. 2020). Each function g ∈ G is made up of components g_j ∈ R_r, for j = 1, . . . , d, such that the vector (‖g_1‖_{R_r}, . . . , ‖g_d‖_{R_r}) has unit norm in ℓ^2, where ‖·‖_{R_r} is the norm of the RKHS (Gorham and Mackey 2017). Hence, if we set h = Ag for g ∈ G, then E_π[h(X)] = 0; Proposition 1 in Gorham and Mackey (2017) demonstrates that such a G is an appropriate domain for A, so that one indeed has E_π[Ag(X)] = 0 for all g ∈ G. The space H is then defined as H = AG, i.e., the space of functions obtained by applying the Stein operator to functions in G. With this H, we define the corresponding KSD of a measure μ as

KSD(μ) = sup_{g∈G} | E_μ[(Ag)(Z)] |.

Proposition 2 in Gorham and Mackey (2017) shows that the KSD admits a closed form. Indeed, for any j = 1, . . . , d, letting b_j(x) = ∂_{x_j} log π(x), define the Stein kernel

k_j^0(x, y) = b_j(x) b_j(y) r(x, y) + b_j(x) ∂_{y_j} r(x, y) + b_j(y) ∂_{x_j} r(x, y) + ∂_{x_j} ∂_{y_j} r(x, y).

Then, by Proposition 2 of Gorham and Mackey (2017),

KSD(μ) = ( Σ_{j=1}^d E_{X,X′∼μ}[ k_j^0(X, X′) ] )^{1/2}.

Gorham and Mackey (2017) also emphasize the importance of choosing the kernel r(x, y) carefully, so that convergence of the KSD to zero (for a sequence of empirical distributions) implies convergence in distribution to the target measure. As suggested by their results, we use the inverse multiquadric kernel r(x, y) = (c^2 + ‖x − y‖_2^2)^β with β = −1/2 and c = 1 (this exponent β is distinct from the temperature parameter). This choice has also been used in Izzatullah et al. (2020).
In our examples, μ = π̂_K is a discrete distribution, so the KSD can be computed by evaluating the Stein kernels over all pairs of sample points (Gorham and Mackey 2017). Namely,

KSD(π̂_K) = ( Σ_{j=1}^d (1/K^2) Σ_{k=0}^{K−1} Σ_{k′=0}^{K−1} k_j^0(θ_k, θ_{k′}) )^{1/2}.

We use the KSD to evaluate the quality of every Langevin sampler, effectively measuring performance over a large class of observables, to complement our bias/variance/MSE computations for particular choices of observable. For each example below and for each Langevin sampler, we compute the KSD for 25 independent chains. The mean KSD across chains for each sampler is then plotted as a function of the number of steps of the chain. We comment on the quality of each sampler given these KSD plots. We refer the reader to Gorham et al. (2019), Gorham and Mackey (2015, 2017), Izzatullah et al. (2020) for further details on these choices and for further literature review.
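The pairwise computation can be sketched directly for the inverse multiquadric kernel. This is our own implementation sketch: the derivative formulas follow by differentiating r(x, y) = (c² + ‖x − y‖²)^β, and the function names and test targets are illustrative.

```python
import numpy as np

# Sketch of the closed-form KSD of a discrete sample under the IMQ kernel
# r(x, y) = (c^2 + ||x - y||^2)^beta, with score(x) = grad log pi(x).
def imq_ksd(samples, score, c=1.0, beta=-0.5):
    X = np.asarray(samples, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    K, d = X.shape
    B = np.array([np.atleast_1d(score(x)) for x in X])  # scores b(theta_k), (K, d)
    diff = X[:, None, :] - X[None, :, :]                # x - y over all pairs
    u = c ** 2 + np.sum(diff ** 2, axis=-1)
    r = u ** beta                                       # kernel matrix
    dr_dx = 2 * beta * diff * u[..., None] ** (beta - 1)  # dr/dx_j; dr/dy_j = -dr/dx_j
    total = 0.0
    for j in range(d):
        k0 = (B[:, None, j] * B[None, :, j] * r
              - B[:, None, j] * dr_dx[:, :, j]
              + B[None, :, j] * dr_dx[:, :, j]
              - 2 * beta * u ** (beta - 1)
              - 4 * beta * (beta - 1) * diff[:, :, j] ** 2 * u ** (beta - 2))
        total += k0.sum()
    return np.sqrt(max(total, 0.0)) / K

# Samples from the target N(0, 1) should score a lower KSD than shifted samples.
rng = np.random.default_rng(0)
good = rng.standard_normal(200)
ksd_good = imq_ksd(good, lambda x: -x)
ksd_bad = imq_ksd(good + 2.0, lambda x: -x)
```

The O(K²) pairwise cost is why the KSD is evaluated on shorter chains than the bias/variance/MSE diagnostics.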

Linear Gaussian example
Suppose we have data {X_i}_{i=1}^N ⊂ R^d generated from a multivariate normal distribution with mean θ ∈ R^d and known precision matrix Λ_X ∈ R^{d×d}. From the data, we infer the value of θ. Endow θ with a normal prior with mean zero and precision Λ_θ ∈ R^{d×d}. The posterior distribution is then Gaussian with mean and precision

μ_p = Λ_p^{−1} Λ_X Σ_{i=1}^N X_i and Λ_p = N Λ_X + Λ_θ,

respectively. The Euler–Maruyama discretization with constant step size h applied to the corresponding Langevin dynamics is

θ_{k+1} = Ā θ_k + D̄ + √(2βh) ξ_k, with Ā = I − hβΛ_p and D̄ = hβ Λ_X Σ_{i=1}^N X_i.

Using stochastic gradients yields the same recurrence, except with

D̄_k = hβ (N/n) Λ_X Σ_{i=1}^n X_{τ_i^k},

where n ≤ N and τ_i^k ∈ {1, . . . , N} is randomly sampled (with or without replacement) (Welling and Teh 2011). Expectations with respect to the posterior are approximated by a long-term average of the observable over the course of a trajectory. It has been shown that, despite subsampling the data at each step of the dynamics, this estimator has performance comparable to the estimator produced by the regular Langevin dynamics with the full likelihood, or by MALA (Welling and Teh 2011; Vollmer et al. 2016). Now consider the case in which the dynamics are perturbed by an irreversible term that preserves the invariant distribution. We demonstrate that this leads to a lower MSE than standard SGLD or Langevin dynamics. In this case, we replace Ā and D̄_k with A and D_k, which are

A = I − h(βI + J)Λ_p and D_k = h (N/n)(βI + J) Λ_X Σ_{i=1}^n X_{τ_i^k},

and J is a skew-symmetric matrix.
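The perturbed SGLD recurrence for this conjugate model can be sketched end to end. The precision matrices, data sizes, and J below are our own illustrative choices, not the paper's experimental values; the check is that the long-term average recovers the exact posterior mean.

```python
import numpy as np

# Sketch of the irreversibly perturbed SGLD recurrence for a Gaussian posterior:
# theta_{k+1} = theta_k + h*(beta*I + J) @ grad_k + sqrt(2*beta*h)*xi_k,
# with grad_k a minibatch gradient. All numerical values are illustrative.
rng = np.random.default_rng(2)
d, N, n, h, beta = 2, 10, 2, 0.005, 0.5
Lam_X = 0.25 * np.eye(d)                  # data precision
Lam_theta = 0.1 * np.eye(d)               # prior precision
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric perturbation
X = rng.multivariate_normal(np.ones(d), np.linalg.inv(Lam_X), size=N)

Lam_p = N * Lam_X + Lam_theta                          # posterior precision
mu_p = np.linalg.solve(Lam_p, Lam_X @ X.sum(axis=0))   # exact posterior mean

theta = np.zeros(d)
running = np.zeros(d)
K = 100_000
for _ in range(K):
    idx = rng.choice(N, size=n, replace=False)
    # Stochastic gradient of the log posterior at theta:
    grad = -Lam_p @ theta + (N / n) * (Lam_X @ X[idx].sum(axis=0))
    theta = (theta + h * (beta * np.eye(d) + J) @ grad
             + np.sqrt(2 * beta * h) * rng.standard_normal(d))
    running += theta
posterior_mean_estimate = running / K   # should approach mu_p
```

Despite both the minibatch noise and the irreversible drift, the long-term average converges to the true posterior mean, up to discretization bias and Monte Carlo error.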
For the numerical experiments, we choose d = 3 and N = 10, with mini-batches of size n = 2. The initial condition is the zero vector. We set Λ_X = 0.25 I; Λ_θ is a precision matrix with eigenvalues 0.2, 0.01, 0.05 and randomly generated eigenvectors; and h = 0.005. These matrices were chosen so that the resulting reversible perturbation has eigenvalues greater than one. To construct the perturbations, we choose B = Λ_p^{−1} and take J to be a constant skew-symmetric matrix scaled by a parameter δ ∈ R. We consider the five different SDE systems presented in Table 1 and investigate how the MSE, bias, and variance differ in each case. For this example, since a constant metric is used, the geometry-informed irreversible perturbation simply produces a different constant skew-symmetric matrix than the other irreversible perturbations.
Each system is simulated for K = 10^5 steps with step size h = 5 × 10^{−3}. In Figures 1 and 2, we plot the MSE of the running average for each case for the two observables. In Table 2, we report the asymptotic variance of the estimator for each system. We see that irreversible perturbations clearly improve the performance of the estimators, although the improvement provided by the geometry-informed irreversible perturbation over RMIrr seems marginal when estimating second moments.
When the reversible perturbation is chosen such that the drift matrix is exactly the identity (for example, when the matrix is chosen to be the covariance matrix of the posterior), additional irreversibility cannot widen the spectral gap of the system. This fact can be deduced from the results of Lelievre et al. (2013). The improved performance of the geometry-informed irreversible perturbation here is mostly due to the fact that the norm of the corresponding skew-symmetric matrix is greater than that of the simple irreversible perturbation. Even though one can rescale the skew-symmetric matrix in the other two cases to achieve performance similar to geometry-informed irreversibility, GiIrr accomplishes this in a more systematic way.
In Figure 3 we plot the kernelized Stein discrepancy (KSD) for the linear Gaussian example. We see that irreversible perturbations typically have smaller KSD than reversible perturbations, and that in all cases the theoretical slope of K^{−1/2} (see Liu et al. (2016)) is achieved. In Figure 3 and in all subsequent KSD-related figures, K ranges only up to 10^4, rather than up to 10^5 or higher (as in the bias/variance/MSE plots, e.g., Figures 1 and 2 and analogous figures in subsequent examples). The reason is that the KSD is expensive to compute, and we find that evaluating it up to K = 10^4 is sufficient to draw conclusions.

Remark 3.1 Note that when considering discretization, and in borderline cases (i.e., when one does not expect much or any improvement in continuous time), irreversibility may actually harm the performance of the estimator, as it introduces additional stiffness into the system without yielding faster convergence to the invariant density. A detailed exploration of this effect is presented in Appendix A, in which we compute the bias and variance of the long-term average estimator for a simple linear Gaussian problem where the posterior precision is a scalar multiple of the identity matrix. As further discussed in Remark A.1, in that case the irreversible perturbation is not expected to improve sampling from equilibrium; hence the stiffness induced upon discretization has a more pronounced impact on the practical performance of the irreversible perturbation.
In the current numerical study, the posterior precision is diagonal but not a scalar multiple of the identity matrix. The eigenvalues of the resulting drift matrix are therefore distinct, and by the theory in Lelievre et al. (2013), irreversible perturbations are able to widen the spectral gap and improve performance. This contrasts with the example studied in Appendix A, where the drift matrix is taken to be proportional to the identity matrix and, as explained in Remark A.1, the spectral gap therefore cannot be widened. In that case, irreversibility increases the stiffness of the system, which in turn increases the bias and variance of the resulting estimator.

Parameters of a normal distribution
This example is identical to that used in Girolami and Calderhead (2011, Section 5) to demonstrate the performance of RMLD. Given a dataset of R-valued data X = {X_i}_{i=1}^N ∼ N(μ, σ²), we infer the parameters μ, σ; that is, the state is θ = [μ, σ]ᵀ. The prior on (μ, σ) is chosen to be flat (and, therefore, improper). Up to an additive constant, the log-posterior is L(μ, σ) = −N log σ − (1/(2σ²)) Σ_{i=1}^N (X_i − μ)², with gradient ∇L(μ, σ) = [ (1/σ²) Σ_{i=1}^N (X_i − μ), −N/σ + (1/σ³) Σ_{i=1}^N (X_i − μ)² ]ᵀ. In Girolami and Calderhead (2011), the authors propose using the geometry of the manifold defined by the parameter space of the posterior distribution to accelerate the resulting Metropolis-adjusted Langevin algorithm. They suggest using the expected Fisher information matrix to define the Riemannian metric, which in the context of reversible diffusions (Rey-Bellet and Spiliopoulos 2016) is equivalent to choosing B(μ, σ) to be the inverse of the sum of the expected Fisher information matrix and the negative Hessian of the log-prior. Straightforward computations yield B(μ, σ) = (σ²/N) diag(1, 1/2). The geometry-informed irreversible perturbation is then built from this B together with a skew-symmetric matrix J, and the relevant drift and correction terms follow by direct computation. In the experiments, we have N = 30, h = 10^{-3}, δ = 2, and simulate M = 1000 independent trajectories of each system up to T = 1000, for a total of K = 10^6 steps. The initial condition is chosen to be μ = 5 and σ = 20, which is consistent with the choice in Girolami and Calderhead (2011). Stochastic gradients are computed by subsampling the data at a rate of n = 6 per gradient computation. Each trajectory is allotted a burn-in time of T_b = 10. The dataset is generated by drawing samples from a normal distribution with μ_true = 0 and σ_true = 10. The observables we study are φ_1(μ, σ) = μ + σ and φ_2(μ, σ) = μ² + σ². We plot the MSE, squared bias, and variance of the resulting estimators for each observable in Figures 5 and 6. Moreover, in Table 3 we report the asymptotic variance of the estimators for each of the five systems.
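To make the construction concrete, here is a minimal sketch of one Euler-Maruyama step of the geometry-informed irreversible dynamics for this example. It assumes the GiIrr drift takes the form (B + JB)∇L + ∇·B + ∇·(JB) with the Fisher-information metric B = diag(σ²/N, σ²/(2N)) and J = δ[[0, 1], [−1, 0]]; the exact scaling conventions in the paper may differ, so treat this as an illustration rather than the authors' code.

```python
import numpy as np

def giirr_step(theta, X, h, delta, rng):
    """One Euler-Maruyama step of geometry-informed irreversible Langevin
    dynamics for (mu, sigma), assuming the expected-Fisher-information
    metric B = diag(sigma^2/N, sigma^2/(2N)) and J = delta*[[0,1],[-1,0]].
    The correction terms div(B) and div(JB) are computed by hand for
    this diagonal metric.  Illustrative sketch only."""
    mu, sigma = theta
    N = len(X)
    grad = np.array([np.sum(X - mu) / sigma**2,
                     -N / sigma + np.sum((X - mu)**2) / sigma**3])
    B = np.diag([sigma**2 / N, sigma**2 / (2 * N)])
    J = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])
    divB = np.array([0.0, sigma / N])            # (div B)_i = sum_j d_j B_ij
    divJB = np.array([delta * sigma / N, 0.0])   # divergence of J B
    drift = (B + J @ B) @ grad + divB + divJB
    noise = np.sqrt(2 * h) * np.sqrt(np.diag(B)) * rng.standard_normal(2)
    return theta + h * drift + noise
```

The reversible-only (RM) step is recovered by setting delta = 0, which makes side-by-side comparisons of the five systems straightforward.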
We plot the kernelized Stein discrepancy in Figure 7. Notice that the irreversibly perturbed systems reach the K^{-1/2} convergence rate (see Liu et al. (2016)) faster than the reversibly perturbed system. The main takeaway is that an irreversible perturbation adapted to the existing reversible perturbation performs much better than an irreversible perturbation applied without regard to the underlying geometry. Notice also that the reversible perturbation considered here still improves the performance of the long term average estimator, despite the fact that B − I is not positive definite on the state space. Indeed, while positive definiteness of B − I is a sufficient condition for improved performance, it is not a necessary one (Rey-Bellet and Spiliopoulos 2016). The reduced asymptotic variance observed here arises because the reversible perturbation B has eigenvalues larger than one where the bulk of the posterior distribution lies. Figure 4 shows single and mean trajectories during the burn-in period for each of the five systems. The plot shows that the geometry-informed irreversible perturbation finds the bulk of the distribution sooner than the other systems, without incurring additional errors due to stiffness.
To show that the GiIrr perturbation is not intimately tied to the stochastic gradient, we also report the results for each system when the gradients are computed exactly in Table 4. We see that there is little meaningful difference in the results compared to when stochastic gradients are used.
In Figure 7 we plot the KSD for the parameters-of-a-normal-distribution example. We see that GiIrr yields lower KSD than all other perturbations.

Bayesian logistic regression
Next we consider Bayesian logistic regression. Given data {(x_i, t_i)}_{i=1}^N, where x_i ∈ R^d and t_i ∈ {0, 1}, we seek a logistic function, parameterized by weights w ∈ R^d, that best fits the data. The weights are obtained in a Bayesian fashion: we endow them with a prior and seek to characterize their posterior distribution via sampling. Define ϕ(y) to be the logistic function ϕ(y) = (1 + exp(−y))^{-1}. The log-likelihood function is L(w) = Σ_{i=1}^N [ t_i log ϕ(x_iᵀw) + (1 − t_i) log(1 − ϕ(x_iᵀw)) ]. The prior for the weights is normally distributed with mean zero and covariance α^{-1} I. The gradient of the log-posterior is ∇ log π(w) = Σ_{i=1}^N (t_i − ϕ(x_iᵀw)) x_i − α w. This term is used in the drift part of the Langevin dynamics that fully computes the gradient of the log-likelihood at every step. If the data are subsampled as in SGLD, we instead compute (N/n) Σ_{i∈S} (t_i − ϕ(x_iᵀw)) x_i − α w over a random subset S of size n. We use the german dataset described in Gershman et al. (2012) for the numerical experiments. In this problem, there are 20 weight parameters to be learned. The training dataset is of size N = 400, and we subsample at a rate of n = 10 per likelihood computation. The time step is h = 10^{-4}, and we take K = 4 × 10^5 steps. The initial condition is the zero vector. We generate the skew-symmetric matrix by constructing a lower triangular matrix with entries randomly drawn from {1, −1} and then subtracting its transpose; the diagonal is then set to zero and the matrix is scaled to have norm one. As for the Riemannian manifold Langevin dynamics, in Girolami and Calderhead (2011) the authors use the expected Fisher information matrix plus the negative Hessian of the log-prior as the underlying metric, which in this case is equal to G(w) = X Λ(w) Xᵀ + α I, where Λ(w) is a diagonal matrix with entries Λ_ii(w) = (1 − ϕ(x_iᵀw)) ϕ(x_iᵀw) and x_i is the i-th column of X. The resulting reversible perturbation uses the inverse of G(w). This perturbation, however, does not lead to accelerated convergence to the invariant measure, since the eigenvalues of G are large.
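The skew-symmetric matrix construction described above can be sketched directly; the only detail the text leaves open is which matrix norm is used for the scaling, so the spectral norm below is an assumption.

```python
import numpy as np

def random_skew_symmetric(d, rng):
    """Random skew-symmetric matrix built as described in the text:
    a lower-triangular matrix with entries drawn from {+1, -1}, minus its
    transpose, zero diagonal, scaled to unit norm.  The 2-norm (largest
    singular value) is assumed here."""
    L = np.tril(rng.choice([-1.0, 1.0], size=(d, d)))
    J = L - L.T                 # skew-symmetric by construction
    np.fill_diagonal(J, 0.0)    # diagonal set to zero, as in the text
    return J / np.linalg.norm(J, 2)
```

The same J is then held fixed for the whole run, so the irreversible drift δJ∇log π(w) adds only a matrix-vector product per step.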
This implies that the eigenvalues of G^{-1} are less than one, so G^{-1}(w) − I is not positive definite, a condition that would guarantee accelerated convergence (Rey-Bellet and Spiliopoulos 2015). To alleviate this issue, we consider the reversible perturbation B(w) = I + G^{-1}(w). This guarantees that B(w) is positive definite for all w, but the drawback is that computing the square root of B(w) requires explicitly computing, or at least approximating, the inverse of G(w) repeatedly during the simulation (and not just computing the action of the inverse). This additional computational cost is incurred for all examples that use a geometry-informed perturbation, whether reversible or irreversible. We show the results of this state-dependent perturbation in Figures 8 and 9 and report the asymptotic variance in Table 5. The geometry-informed irreversible perturbation provides improvement over all other perturbations: the asymptotic variance is reduced by half relative to RM, with little additional computational effort. Most of the computational cost of applying GiIrr is due to the evaluation of the reversible perturbation; therefore, if one is already applying the reversible perturbation to the Langevin dynamics, the marginal cost of applying the GiIrr perturbation is negligible. In Figure 10 we plot the KSD for the Bayesian logistic regression example. We see that GiIrr has slightly lower KSD than the other perturbations, but the differences are small. The theoretical slope of K^{-1/2} is still realized.
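A minimal sketch of the perturbation B(w) = I + G^{-1}(w) and its square root follows; the eigendecomposition route for the square root is our own choice (any SPD square root would do), and the dense inverse mirrors the cost discussion above.

```python
import numpy as np

def reversible_perturbation(Xd, w, alpha):
    """B(w) = I + G(w)^{-1} for Bayesian logistic regression, where
    G(w) = X Lambda(w) X^T + alpha I is the expected Fisher information
    plus the prior precision (columns of Xd are the data points x_i).
    Returns B and a symmetric square root of B.  Illustrative sketch."""
    p = 1.0 / (1.0 + np.exp(-Xd.T @ w))   # phi(x_i^T w) for each i
    lam = p * (1.0 - p)                   # diagonal of Lambda(w)
    G = (Xd * lam) @ Xd.T + alpha * np.eye(len(w))
    B = np.eye(len(w)) + np.linalg.inv(G)
    # symmetric square root via eigendecomposition (B is SPD by construction)
    vals, vecs = np.linalg.eigh(B)
    sqrtB = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return B, sqrtB
```

Since G is SPD, every eigenvalue of B exceeds one, so B − I is positive definite everywhere, which is exactly the property the plain inverse-metric perturbation lacked.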

Independent component analysis
Our last example considers the problem of blind signal separation addressed in Welling and Teh (2011) and Amari et al. (1996). This problem yields a posterior that is strongly non-Gaussian and multimodal, and we show that GiIrr has substantially better sampling performance than standard reversible and irreversible perturbations. Suppose there are m separate unknown independent signals s_i(t), i = 1, …, m, that are mixed by a mixing matrix M ∈ R^{d×d}, and that we can observe the mixed signals X(t) = M s(t) at N instances in time. The goal of independent component analysis is to infer a de-mixing matrix W such that the m signals are recovered up to a nonzero constant and a permutation. As such, this problem is generally ill-posed, but it is well suited to a Bayesian treatment. The ICA literature suggests that, for real-world data, it is best to assume a likelihood model with large kurtosis. Following Welling and Teh (2011) and Amari et al. (1996), let p(y_i) = (1/4) sech²(y_i/2). The prior on the weights W_ij is Gaussian with zero mean and precision λ. The posterior is then p(W | X) ∝ |det W|^N Π_{n=1}^N Π_{i=1}^m p((W x_n)_i) · Π_{i,j} exp(−λ W_ij²/2), and the gradient of the log-posterior with respect to the matrix W is ∇_W log p(W | X) = N W^{−T} − Σ_{n=1}^N tanh(½ W x_n) x_nᵀ − λ W. It is suggested in Amari et al. (1996) that the natural gradient should be used instead of this gradient, to account for the information geometry of the problem. Specifically, Teh et al. (2016) and Amari et al. (1996) post-multiply the gradient by WᵀW and arrive at the so-called natural gradient of the system. In the context of RMLD, this is equivalent to perturbing the system with a reversible perturbation with B̃(W) = WᵀW ⊗ I_d pre-multiplied in front of the vectorized gradient.
That is, the natural gradient is vec^{-1}( B̃(W) vec(∇_W log p(W | X)) ). This choice of reversible perturbation, however, may not be sufficient for accelerating the convergence of the Langevin dynamics, as B̃(W) − I_{d²} is not positive definite throughout the state space (Rey-Bellet and Spiliopoulos 2016). Instead, we choose a reversible perturbation B(W) that is positive definite everywhere. We construct the GiIrr term as follows. To take advantage of the matrix structure of the reversible perturbation, we choose the skew-symmetric matrix so that it acts within the computation of the natural gradient: we take J proportional to a matrix C₀ with the same sign pattern as (22), scaled so that J has matrix norm equal to 1. The geometry-informed irreversible perturbation then follows. To simulate the RM and GiIrr systems, correction terms (such as ∇ · B(θ)) need to be computed. These are derived using the symbolic algebra toolbox in MATLAB; since the perturbations are vectors of polynomials, the toolbox can easily derive and efficiently evaluate them. For the numerical experiments, we synthetically generate m = 3 signals, one Laplace distributed and two distributed according to the squared hyperbolic secant distribution. The posterior distribution is d = 9 dimensional, there are N = 400 data points in total, and the gradient is approximated by subsampling n = 40 data points per estimate. The initial condition is a diagonal matrix whose entries are randomly chosen from {+1, −1}. Since the posterior is nine-dimensional and highly multimodal, it is difficult to evaluate its marginal densities directly, i.e., without sampling. Instead, we establish a baseline reference density by simulating the standard Langevin dynamics, with exact computation of the likelihood over all the data, up to T = 10000 with h = 10^{-4}. One- and two-dimensional marginals of this baseline posterior distribution are plotted in Figure 11; the two-dimensional marginals highlight the challenges of sampling from this posterior.
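The equivalence between post-multiplying the gradient matrix by WᵀW and applying B̃(W) = WᵀW ⊗ I_d to the vectorized gradient is just the column-major Kronecker identity (A ⊗ I) vec(G) = vec(G Aᵀ), which the following sketch verifies; it also shows why the d² × d² matrix never needs to be formed in practice.

```python
import numpy as np

# Verify (A ⊗ I) vec(G) = vec(G A) for symmetric A = W^T W, using
# column-major (Fortran-order) vectorization.  This is the identity that
# lets the natural gradient be computed as a d x d matrix product instead
# of a d^2 x d^2 matrix-vector product.
rng = np.random.default_rng(0)
d = 3
W = rng.standard_normal((d, d))
G = rng.standard_normal((d, d))       # stand-in for the gradient matrix

A = W.T @ W                           # symmetric, so A^T = A
lhs = np.kron(A, np.eye(d)) @ G.flatten(order='F')
rhs = (G @ A).flatten(order='F')
assert np.allclose(lhs, rhs)
```

The same identity underlies the GiIrr construction: a skew-symmetric factor inserted inside the matrix product acts as a skew-symmetric perturbation on the vectorized system.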
In Figure 12, we show trace plots of the W_11 variable for each system. By visual inspection, we see that mixing is best for the geometry-informed irreversibly perturbed system. One can intuitively expect that, with better mixing, geometry-informed irreversibility should yield better estimation performance than the other systems; we assess this quantitatively below.
As in the previous example, we simulate the five systems and compute the asymptotic variances of two observables for each system. Each system is simulated independently 100 times up to time T = 2000 with h = 2 × 10^{-5}; the smaller step size accounts for the additional stiffness that irreversible perturbations introduce. Since the true mean of the posterior distribution is unknown, and since standard sampling methods fail to sample the posterior well enough to yield a reasonable estimate of the mean, we only plot the variances of the selected observables with respect to K in Figure 13. To compute the asymptotic variance, we allot a burn-in time of T_b = 20. The observables we estimate include a second moment (φ_2) and an observable involving cross-moments (φ_3). The asymptotic variance numbers confirm that the faster mixing observed for the geometry-adapted irreversible perturbation does lead to a better sampling method; the values are reported in Table 6. The results for the asymptotic variance and the variance of φ_2 are somewhat noisy, which is why GiIrr may appear to perform similarly to the other sampling methods. When estimating the posterior mean and an observable (φ_3) that includes cross-moments, the geometry-informed irreversible perturbation outperforms standard irreversibility applied to the reversible perturbation.
In Figure 14 we plot the convergence of the KSD for the ICA example. GiIrr yields lower KSD than the other perturbations in this example, and the theoretical slope of K^{-1/2} is again realized. Figure 12 shows trace plots of the W_11 marginal. Figure 13 shows the variance of the running average estimators: for the second moment, there is less difference among the samplers, as the distribution is quite symmetric and one can estimate the second moment properly even if a sampler is stuck in a single mode, whereas GiIrr estimates the observable with cross-moments better than the other samplers.

Conclusions
By introducing an irreversible perturbation that incorporates any given underlying reversible perturbation, which can also be interpreted as defining a Riemannian metric, we have shown through numerical examples that geometry-informed irreversible perturbations outperform those that are not informed as such.
In the examples, we found that GiIrr seems to perform best when the target distribution is highly non-Gaussian. Most of our numerical examples used stochastic gradients to cut down on computational effort in sampling each trajectory. This demonstrates that SGLD can be used in conjunction with irreversibility for practical computations.
We also provided some analysis on how irreversibility interacts with discretization of the SDE systems. Irreversibility introduces additional stiffness into the system, which may lead to additional bias or variance in the estimator. For practical purposes, one can simply choose a small enough step size so that the asymptotic bias and variance are sufficiently small. At the same time, we note an example (see Appendix A) where the introduction of the irreversible term, once discretized, leads to no improvement in the long term average estimator.
Future work could study the use of novel integrators that circumvent stiffness. For example, Jianfeng and Spiliopoulos (2018) use a multiscale integrator, but it is not readily adapted to the data-driven setting of Bayesian inference. Another direction for future work is to characterize theoretically the performance of the geometry-informed irreversible perturbation and to compare it with that of other perturbations. A starting point for such an analysis could be the general results of Rey-Bellet and Spiliopoulos (2016), in particular the large deviations Theorem 1 together with Propositions 2-4 therein. Preliminary investigation showed that this is a promising avenue, but non-trivial work and a finer analysis are needed to demonstrate the effects of this class of irreversible perturbations. We leave this for future work, as such an analysis is outside the scope of this paper. Our goal here has been to introduce the perturbation and showcase its potential through simulation examples.

Availability of data and material Not applicable.
Code availability Code can be made available upon request.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A The effects of discretization
In this section, we study the effects of discretization in the setting of an irreversibly perturbed Langevin system. Results in full generality are, as yet, elusive; we therefore consider only a Gaussian example, which still provides insight into how irreversibility interacts with discretization to affect the asymptotic and finite sample bias and variance of the long term average estimator. While we do not present the results when a stochastic gradient is used, the results are similar and can easily be extended from what we present here. Recall that the discretized dynamics takes the form θ_{k+1} = (I − hA)θ_k + hD + √h ξ_k, where the ξ_k are i.i.d. standard Gaussian vectors. For this analysis, all precision matrices are 2×2 scalar matrices; that is, we assume Σ_θ^{-1} = σ_θ^{-2} I and Σ_X^{-1} = σ_X^{-2} I. This is distinct from the example in Section 3.2, where the precision matrices are diagonal but not scalar. Let b = 1/(2σ_X²) and S_X = Σ_{i=1}^N X_i, so that D = b(I + J) S_X; here A = a(I + J) for a scalar a > 0, with J skew-symmetric and scaled by δ. We summarize our findings here. For fixed discretization size h and scalar precision matrices as defined above, and introducing the irreversible perturbation scaled by δ, we find the following:
• The asymptotic bias for linear observables is zero, that is, E[θ_∞] = μ_p ;
• The asymptotic variance for linear observables increases.
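The second bullet has a short closed-form justification. Assuming the noise scaling √h ξ_k with standard Gaussian ξ_k (consistent with the sums appearing later in this appendix), the stationary covariance of the chain solves C = (I − hA) C (I − hA)ᵀ + hI, and the scalar ansatz C = cI gives the formula sketched below; names and scalings here are our reconstruction.

```python
import numpy as np

def stationary_variance(a, h, delta):
    """Per-coordinate stationary variance of the chain
    theta_{k+1} = (I - hA) theta_k + sqrt(h) xi_k, with A = a (I + J)
    and J = delta * [[0, 1], [-1, 0]].  Solving C = (I-hA) C (I-hA)^T + h I
    with C = c I gives c = h / (1 - s), where
    s = 1 - 2ah + h^2 a^2 (1 + delta^2); the exact value is 1/(2a)."""
    s = 1 - 2 * a * h + h**2 * a**2 * (1 + delta**2)
    return h / (1 - s)
```

Since s grows with δ, the stationary variance h/(1 − s) = 1/(2a − h a²(1 + δ²)) strictly increases with δ at fixed h, even though it converges to the correct value 1/(2a) as h → 0.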
We focus on the finite time results and omit the asymptotic results, since the former are of more practical interest; the computations for both are similar.

Finite time analysis: bias for linear observables.
We study how the magnitude of the irreversibility, characterized by δ, impacts the mean-squared error MSE = E‖θ̄_K − μ_p‖², which we approach via its bias-variance decomposition. First, we compute the expected value of the sample average θ̄_K = (1/K) Σ_{k=1}^K θ_k. For simplicity, we assume that the initial condition is always θ_0 = 0. For any k, we have E[θ_k] = (I − (I − hA)^k) A^{-1} D, which yields E[θ̄_K] = A^{-1} D − (1/K) Σ_{k=1}^K (I − hA)^k A^{-1} D. Since μ_p = A^{-1} D, the bias is E[θ̄_K] − μ_p = −(1/K) Σ_{k=1}^K (I − hA)^k μ_p. The norm of the bias can in fact be computed. Note that AAᵀ = (1 + δ²) a² I, and we have ‖bias‖² = (1/K²) μ_pᵀ [Σ_{k=1}^K (I − hA)^k]ᵀ [Σ_{k=1}^K (I − hA)^k] μ_p. The inner matrix can be computed: since all the matrices involved are simultaneously diagonalizable, we only need to consider their eigenvalues. Note that I + J is a normal matrix, so we may write its eigenvalue decomposition as I + J = PQP*, where * denotes the conjugate transpose and Q = diag(1 + iδ, 1 − iδ). Using the fact that (I + J)ᵀ(I + J) = (1 + δ²) I, and writing 1 − a(1 + iδ)h = r e^{iθ}, where r² = (1 − ah)² + δ² a² h² and tan θ = δah/(1 − ah), the geometric sums can be evaluated explicitly. We know that r < 1, since otherwise the numerical scheme would be unstable. It is then easy to see that for large but finite K, the bias decays as O(1/(K h √(1 + δ²))), so the introduction of irreversibility decreases the constant in front of this expression and therefore slightly improves the convergence of the bias.
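The 1/√(1 + δ²) improvement in the bias constant can be verified numerically from the mean recursion alone, since the noise is mean-zero. The sketch below (our own check, not from the paper) iterates E[θ_k] = (I − hA)E[θ_{k−1}] + hD and compares the bias of the running average for δ = 0 and δ > 0.

```python
import numpy as np

def bias_of_average(a, h, delta, K, mu_p):
    """Exact bias of the running-average estimator for the chain
    theta_k = (I - hA) theta_{k-1} + h D + noise, started at theta_0 = 0,
    with A = a (I + J), J = delta*[[0,1],[-1,0]] and mu_p = A^{-1} D.
    The noise is mean-zero, so the bias follows from the deterministic
    mean recursion m_k = (I - hA) m_{k-1} + h D."""
    J = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = a * (np.eye(2) + J)
    M = np.eye(2) - h * A
    D = A @ mu_p
    m = np.zeros(2)        # E[theta_k]
    acc = np.zeros(2)      # running sum of the E[theta_k]
    for _ in range(K):
        m = M @ m + h * D
        acc += m
    return acc / K - mu_p
```

For K large enough that (I − hA)^K is negligible, the bias norms for δ = 0 and δ = 2 differ by a factor of √5, matching the O(1/(Kh√(1 + δ²))) rate.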

Finite time analysis: variance for linear observables.
For simplicity, we assume θ_0 = 0. We compute Tr Var(θ̄_K), beginning with the decomposition Tr Var(θ̄_K) = E[θ̄_Kᵀ θ̄_K] − E[θ̄_K]ᵀ E[θ̄_K], and compute the two terms separately. It is difficult to discern a relationship between δ and Tr Var(θ̄_K) even with exact formulas, so we appeal to plots of the expressions to see that the variance decreases with irreversibility. We computed E[θ̄_K] in the previous section.
The second-moment term is more delicate. To compute E[θ_k θ_kᵀ], it is convenient to work with the covariance recursion, which yields a recurrence in the scalar s = 1 − 2ah + h²a²(1 + δ²): recalling that A + Aᵀ = 2aI and AAᵀ = a²(1 + δ²)I, the relevant geometric sum equals ((1 − s^k)/(1 − s)) h I. For the cross terms E[θ_kᵀ θ_l] with k ≠ l, we expand θ_l in terms of θ_k and the intervening noise, take expectations, and sum the resulting expressions from k = 0 to K − 1. The resulting summations are difficult to simplify in closed form, so we evaluate them directly. For simplicity, we assume that μ_p = E[S_X] = [0, 0]ᵀ and that σ_X and σ_θ are chosen such that a = 1; in this scenario the bias is zero and only the variance contributes to the MSE. In Figure 15 we plot the variance for varying choices of δ. In these plots, h = 0.001, K = 2 × 10^5, and δ varies between zero and ten. We can clearly see that strengthening the irreversible perturbation reduces the variance of the long term average estimator.
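Rather than plotting the closed-form sums, one can also propagate the relevant covariances exactly; the sketch below (our own check, under the same √h standard-Gaussian noise assumption as above) tracks Var(θ_k), Cov(S_k, θ_k), and Var(S_k) for the running sum S_k and confirms that Tr Var(θ̄_K) decreases with δ for this linear example.

```python
import numpy as np

def trace_var_running_average(a, h, delta, K):
    """Exact Tr Var(theta-bar_K) for the chain
    theta_{k+1} = (I - hA) theta_k + sqrt(h) xi_k,  A = a (I + J),
    computed by propagating Var(theta_k), Cov(S_k, theta_k), and Var(S_k)
    for the running sum S_k = theta_1 + ... + theta_K (theta_0 = 0)."""
    J = delta * np.array([[0.0, 1.0], [-1.0, 0.0]])
    M = np.eye(2) - h * a * (np.eye(2) + J)
    C = np.zeros((2, 2))    # Var(theta_k)
    Cs = np.zeros((2, 2))   # Cov(S_k, theta_k)
    V = np.zeros((2, 2))    # Var(S_k)
    for _ in range(K):
        C = M @ C @ M.T + h * np.eye(2)   # one-step covariance update
        Cst = Cs @ M.T                    # Cov(S_{k-1}, theta_k)
        V = V + Cst + Cst.T + C
        Cs = Cst + C
    return np.trace(V) / K**2
```

This O(K) recursion reproduces the trend in Figure 15 without any Monte Carlo noise, which is convenient when scanning over δ.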
Finite sample analysis for the quadratic observable φ(θ) = θᵀθ. The previous finite sample results for the linear observable φ(θ) = θ_1 + θ_2 suggest that both the bias and the variance of the long term average estimator go down as the irreversible term grows. In this section, we show that this is in fact a special case: when the observable is not linear, the bias and variance may increase. We analyze the bias and variance of the long term average estimator φ̄_K of the observable φ(θ) = θᵀθ. As before, we assume that μ_p = [0, 0]ᵀ, E[S_X] = 0, and σ_X and σ_θ are chosen such that a = 1. We compute the squared bias and Var φ̄_K and examine how they vary with δ. From previous computations, the bias can be expressed in terms of s = 1 − 2ah + a²h²(1 + δ²). Given this, the only term left to compute is the variance, which requires fourth moments. Expanding θ_kᵀθ_k = θ_{k−1}ᵀ(I − hA)ᵀ(I − hA)θ_{k−1} + h ξ_{k−1}ᵀξ_{k−1} + 2√h θ_{k−1}ᵀ(I − hA)ᵀ ξ_{k−1}, taking expectations, and simplifying, we arrive at a recurrence for β_k = E[(θ_kᵀθ_k)²], with ζ_k = 8sh E[θ_kᵀθ_k] and κ = 8h², which we solve. Next we compute the summation of the cross terms. Since (I − hA)ᵀ(I − hA) = sI, we may write θ_lᵀθ_l = s θ_{l−1}ᵀθ_{l−1} + h ξ_{l−1}ᵀξ_{l−1} + 2√h ξ_{l−1}ᵀ(I − hA)θ_{l−1} = s^{l−k} θ_kᵀθ_k + Σ_{n=0}^{l−k−1} [ h s^n ξ_{l−n−1}ᵀξ_{l−n−1} + 2√h s^n ξ_{l−n−1}ᵀ(I − hA)θ_{l−n−1} ].
This yields the needed cross-term expectations. To summarize, the squared bias is (E φ̄_K − 1)² and the variance is E[φ̄_K²] − E[φ̄_K]². These expressions do not simplify easily, so we plot them and study their trends. In Figure 16, we plot the squared bias and variance for fixed h and K and varying δ. In these plots, h = 0.001, K = 2 × 10^5, and δ varies between zero and ten. Notice that for these choices both the squared bias and the variance increase as δ grows, showing that irreversibility provides no benefit here and in fact harms the performance of the standard estimator.
Remark A.1 When discretization is considered, the sampling properties in this simple example, in which the drift is proportional to the state, cannot be improved by irreversibility. Let us explain this phenomenon from a theoretical point of view; throughout, the spectral gap is identified with the real part of the leading nonzero eigenvalue of the drift (which is negative), so that decreasing it corresponds to faster convergence. In Rey-Bellet and Spiliopoulos (2016), the authors show that adding an irreversible perturbation to the generator of the diffusion process may decrease the spectral gap, and will never increase it. They further prove that, in the continuous case, decreasing the spectral gap decreases the asymptotic variance. This improvement is not strict, however: irreversibility is only guaranteed not to increase the spectral gap. Meanwhile, in Lelievre et al. (2013), the authors consider irreversibility in the context of linear systems and rigorously study optimal irreversible perturbations that accelerate convergence to the invariant distribution. Their results show that when the drift matrix is proportional to the identity matrix, the spectral gap cannot change. Proposition 4 in Lelievre et al. (2013) shows that the leading nonzero eigenvalue of the irreversibly perturbed drift matrix is bounded above by the leading nonzero eigenvalue of the original drift matrix and below by the trace of the original drift matrix divided by the dimension of the state space; the lower bound is the optimal spectral gap. For a drift matrix that is a multiple of the identity, the upper and lower bounds coincide, which implies that the spectral gap can never decrease from its original value in the continuous case. After factoring in discretization, the irreversible perturbation increases the stiffness of the system, which contributes to increased bias and variance in the resulting estimator.