Choice of Damping Coefficient in Langevin Dynamics

This article considers the application of Langevin dynamics to sampling and investigates how to choose the damping parameter in Langevin dynamics so as to maximize the thoroughness of sampling. It also considers the computation of measures of sampling thoroughness.

1. Introduction. Langevin dynamics is a popular tool for molecular simulation. It requires the choice of a damping coefficient, which is the reciprocal of a diffusion coefficient. (More generally this might be a diffusion tensor.) The special case of a constant scalar diffusion coefficient is the topic of this article. The motivation for this study is a suspicion that proposed novel MCMC propagators based on Langevin dynamics (in particular, stochastic gradient methods for machine learning [4,9]) might be obtaining their advantage at the expense of reduced sampling efficiency, as, say, measured by effective sample size.
For simulations intended to model the dynamics, the appropriate choice of γ is based on physics. Generally, the dissipation and fluctuation terms are there to account for omitted degrees of freedom. In their common usage as thermostats, they model the effect of forces due to atoms just outside the set of explicitly represented atoms. These are essentially boundary effects, which disappear in the thermodynamic limit N_atoms → ∞, where N_atoms is the number of explicitly represented atoms. Since the ratio of the number of boundary atoms to interior atoms is of order N_atoms^(−1/3), it might be expected that γ is chosen to be proportional to N_atoms^(−1/3). There is a second possible role for the addition of fluctuation-dissipation terms in a dynamics simulation: with a small damping coefficient, these terms can also play a role in stabilizing a numerical integrator [21], which might be justified if the added terms are small enough to have an effect no greater than that of the discretization error.
The bulk of molecular simulations, however, are "simply" for the purpose of drawing random samples from a prescribed distribution and this is the application under consideration here. The appropriate choice of γ optimizes the efficiency of sampling. A measure of this is the effective sample size N/τ where N is the number of samples and τ is the integrated autocorrelation time. The latter is, however, defined in terms of an observable. An observable is an expectation of a specified function of the configuration, which for lack of a better term, is referred to here as a preobservable. As an added complication, the accuracy of an estimate of an integrated autocorrelation time (IAcT) depends on sampling thoroughness [13,Sec. 3], so a conservative approach is indicated. Ref. [13,Sec. 3.1] advocates the use of the maximum possible IAcT and shows how it might be a surrogate for sampling thoroughness. The maximum possible IAcT is about the same (except for a factor of 2) as the decorrelation time of Ref. [30], defined to be "the minimum time that must elapse between configurations for them to become fully decorrelated (i.e., with respect to any quantity)". Therefore, for sampling, it is suggested that γ be chosen to achieve a high level of sampling thoroughness, as measured by the maximum possible IAcT. An initial study of this question is reported in Ref. [38,Sec. 5], and the purpose of the present article is to clarify and extend these results.
To begin with, we analyse an underdamped Langevin equation with a quadratic potential energy function. (See Eq. (12) below.) The main purpose of analyzing this model problem is, of course, to obtain insight and heuristics that can be applied to general potential energy functions. Choosing the optimal γ requires a substitute for the lowest frequency. For the model problem, this can be obtained from the covariance matrix of the position coordinates, which is not difficult to compute for a general potential. And for estimating τ_{q,max}, the analysis suggests using the set of all quadratic polynomials, which can be achieved using the algorithm of Ref. [13,Sec. 3.5].
For molecular simulation, the suggestion is that one might choose linear combinations of functions of the form |r_j − r_i|² and (r_j − r_i)·(r_k − r_i), where each r_i is an atomic position or center of mass of a group of atoms. Such functions share with the potential energy function the property of being invariant under a rigid body movement.
1.1. Results and discussion. Section 5 analyzes integrated autocorrelation times for the standard model problem of a quadratic potential energy function. An expression is derived for the IAcT for any preobservable; this is applied in Sec. 5.2 to check the accuracy of a method for estimating the IAcT. In Sec. 5, we also determine the maximum IAcT, denoted by τ_{q,max}, over all preobservables defined on configurations, as well as the damping coefficient γ* that minimizes τ_{q,max}. It is shown that polynomials of degree ≤ 2 produce the largest value of τ_{q,max}, and that choosing γ equal to the lowest frequency, which is half of the optimal value of γ for that frequency, minimizes τ_{q,max}. These results extend those of Ref. [38,Sec. 5], which obtains a (less relevant) result for preobservables defined on phase space rather than configuration space.
Sections 6 and 7 test the heuristics derived from the quadratic potential energy on some simple potential energy functions giving rise to multimodal distributions. Results suggest that the heuristics for choosing the maximizing preobservable and the optimal γ are effective.
One of the test problems was constructed by Ref. [23] to demonstrate the superiority of BAOAB over other Langevin integrators. Experiments for this problem in Sec. 6 are consistent with this claim of superiority.
In defining "quasi-reliability" and the notion of thorough sampling, Ref. [13] makes an unmotivated leap from maximizing over preobservables that are indicator functions to maximizing over arbitrary preobservables. The test problem of Sec. 7 provides a cursory look at this question, though the matter may warrant further study.
Obtaining reliable estimates of the IAcT without generating huge sets of samples very much hinders this investigation. To this end, Sec. 4.1 explores an intriguing way of calculating an estimate for the phase space τ_max which avoids the difficult calculation of IAcTs. For the model problem, it gives more accurate results for τ_max than estimating IAcTs, due to the difficulty of finding a set of functions that play the same role as quadratic polynomials when maximizing IAcTs. The literature offers interesting suggestions that might help in the development of better schemes for estimating IAcTs, and it may be fruitful to recast some of these ideas using the formalisms employed in this article. In particular, Ref. [30] offers a novel approach based on determining whether using every τth sample creates a set of independent samples. Additionally, there are several conditions on covariances [16,Theorem 3.1] that can be checked or enforced.
1.2. Related work. While the major part of the literature on Markov chain Monte Carlo (MCMC) methods with stochastic differential equations focuses on the overdamped Langevin equation (e.g. [35,3] and the references given there), there have been significant advances, both from an algorithmic and a theoretical point of view, in understanding the underdamped Langevin dynamics [34]. For example, in Refs. [39,7] Langevin dynamics has been studied from the perspective of thermostatting and enhancement of specific vibrational modes or correlations, and in Refs. [8,17,25] Langevin dynamics has been used to tackle problems in machine learning and stochastic optimisation. From a theoretical point of view, the Langevin equation is more difficult to analyse than its overdamped counterpart, since the noise term is degenerate and the associated propagator is non-symmetric; recent work on optimising the friction coefficient for sampling is due to [11,36,4], and theoretical analyses using both probabilistic and functional analytical methods have been conducted in [10,5,12]; see also [27] and the references therein.
Relevant in this regard are Refs. [20,26,33], in which non-reversible perturbations of the overdamped Langevin equation are proposed, with the aim of increasing the spectral gap of the propagator or reducing the asymptotic variance of the sampler. Related results on decorrelation times for the overdamped Langevin using properties of the dominant spectrum of the infinitesimal generator of the associated Markov process have been proved in [22,Sec. 4].
A key point of this article is that quantities like spectral gaps or asymptotic variances are not easily accessible numerically, therefore computing goal-oriented autocorrelation times (i.e. for specific observables that are of interest) that can be computed from simulation data is a sensible approach. With that being said, it would be a serious omission not to mention the work of Ref. [30], which proposes the use of indicator functions for subsets of configuration space in order to estimate asymptotic variance and effective sample size from autocorrelation times using trajectory data.
Finally, we should also mention that many stochastic optimisation methods that are nowadays popular in the machine learning community, like ADAM or RMSProp, adaptively control the damping coefficient, though in an ad-hoc way, so as to improve the convergence to a local minimum. They share many features with adaptive versions of Langevin thermostats that are used in molecular dynamics [24], and therefore it comes as no surprise that the Langevin model is the basis for the stochastic modified equation approach that can be used to analyse state-of-the-art momentum-based stochastic optimisation algorithms like ADAM [1,28].
2. Preliminaries. The computational task is to sample from a probability density ρ_q(q) proportional to exp(−βV(q)), where V(q) is a potential energy function and β is inverse temperature. In principle, these samples are used to compute an observable E[u(Q)], where Q is a random variable from the prescribed distribution and u(q) is a preobservable (possibly an indicator function). The standard estimate is

(1/N) ∑_{n=1}^{N} u(Q_n),

where the samples Q_n are from a Markov chain for which ρ_q(q) (or a close approximation thereof) is the stationary density. Assume the chain has been equilibrated, meaning that Q_0 is drawn from a distribution with density ρ_q(q). An efficient and popular way to generate such a Markov chain is based on Langevin dynamics, whose equations are

dQ_t = M^{−1} P_t dt,
dP_t = F(Q_t) dt − γ P_t dt + (2γβ^{−1})^{1/2} M_h dW_t,

where F(q) = −∇V(q), M is a matrix chosen to compress the range of vibrational frequencies, M_h M_h^T = M, and W_t is a vector of independent standard Wiener processes. The invariant phase space probability density ρ(q, p) is given by

ρ(q, p) = (1/Z) exp(−β(½ p^T M^{−1} p + V(q))),

where Z > 0 is a normalisation constant that guarantees that ρ integrates to 1. We call ρ_q(q) its marginal density for q. We suppose ρ > 0. It is common practice in molecular dynamics to use a numerical integrator, which introduces a modest bias that depends on the step size ∆t. As an illustration, consider the BAOAB integrator [23]. Each step of the integrator consists of the following substeps:

P_{n+1/4} = P_n + ½∆t F(Q_n),
Q_{n+1/2} = Q_n + ½∆t M^{−1} P_{n+1/4},
P_{n+3/4} = exp(−γ∆t) P_{n+1/4} + R_{n+1/2},
Q_{n+1} = Q_{n+1/2} + ½∆t M^{−1} P_{n+3/4},
P_{n+1} = P_{n+3/4} + ½∆t F(Q_{n+1}),

where R_{n+1/2} is a vector of independent Gaussian random variables with mean 0 and covariance matrix (1 − exp(−2γ∆t))β^{−1} M.
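The BAOAB substeps above can be sketched in code. The following is a minimal one-dimensional illustration (our sketch, not the authors' implementation; the scalar mass m plays the role of M, so M_h = √m, and `force`, `beta`, `gamma`, `dt` are the quantities defined in the text):

```python
import numpy as np

def baoab_step(q, p, force, beta, gamma, dt, rng, m=1.0):
    """One BAOAB step for scalar position q and momentum p (mass m)."""
    p = p + 0.5 * dt * force(q)        # B: half kick
    q = q + 0.5 * dt * p / m           # A: half drift
    c = np.exp(-gamma * dt)            # O: exact Ornstein-Uhlenbeck update
    p = c * p + np.sqrt((1.0 - c * c) * m / beta) * rng.standard_normal()
    q = q + 0.5 * dt * p / m           # A: half drift
    p = p + 0.5 * dt * force(q)        # B: half kick
    return q, p
```

For a harmonic force F(q) = −q at β = 1, long trajectories reproduce the stationary variance Var[Q] = 1 up to a small step-size bias.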
In the following, we use the shorthand Z = (Q, P) to denote a phase space vector. It is known [16,Sec. 2] that the variance of the estimate satisfies

Var[(1/N) ∑_{n=1}^{N} u(Z_n)] ≈ (τ/N) Var[u(Z)],

which is exact relative to 1/N in the limit N → ∞. Here τ is the integrated autocorrelation time (IAcT)

(3) τ = 1 + 2 ∑_{k=1}^{∞} C(k)/C(0),

and C(k) is the autocovariance at lag k defined by

C(k) = E[(u(Z_0) − E[u(Z_0)]) (u(Z_k) − E[u(Z_0)])].

Here and in what follows the expectation E[·] is understood over all realisations of the (discretized) Langevin dynamics, with initial conditions Z_0 drawn from the equilibrium probability density function ρ.
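For illustration, Eq. (3) can be estimated naively by truncating the sum at a maximum lag. This simple estimator (our sketch, not the method used in the article) suffers from exactly the noise problems discussed next:

```python
import numpy as np

def iact_truncated(x, kmax):
    """Estimate tau = 1 + 2*sum_{k=1}^{kmax} C(k)/C(0) from a scalar series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    c0 = np.dot(x, x) / n
    tau = 1.0
    for k in range(1, kmax + 1):
        tau += 2.0 * np.dot(x[:-k], x[k:]) / (n * c0)  # biased C(k) estimate
    return tau
```

For an AR(1) process with coefficient a, the exact value is (1 + a)/(1 − a), which such an estimator recovers for kmax well beyond the correlation length.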

Estimating integrated autocorrelation time.
Estimates of the IAcT based on estimating covariances C(k) suffer from inaccuracy in estimates of C(k) due to a decreasing number of samples as k increases. To get reliable estimates, it is necessary to underweight or omit estimates of C(k) for larger values of k. Many ways to do this have been proposed. Most attractive are those [16,Sec. 3.3] that take advantage of the fact that the time series is a Markov chain.
One that is used in this study is a short computer program called acor [18] that implements a method described in Ref. [31]. It recursively reduces the series to one half its length by summing successive pairs of terms until the estimate of τ based on the reduced series is deemed reliable. The definition of "reliable" depends on heuristically chosen parameters. A greater number of reductions, called reducs in this paper, employs greater numbers of covariances, but at the risk of introducing more noise.

Helpful formalisms for analyzing MCMC convergence. It is helpful to introduce the linear operator T defined by

(T u)(z) = ∫ ρ(z′|z) u(z′) dz′,

where ρ(z′|z) is the transition probability density for the Markov chain. Then one can express an expectation of the form E[v(Z_0)u(Z_1)], arising from a covariance, as

E[v(Z_0)u(Z_1)] = ⟨v, T u⟩,

where the inner product ⟨·, ·⟩ is defined by

(5) ⟨v, u⟩ = ∫ v(z) u(z) ρ(z) dz.

The adjoint operator T† is what Ref. [37] calls the forward transfer operator, because it propagates relative probability densities forward in time. On the other hand, Ref. [29] calls T† the backward operator and calls T itself the forward operator. To avoid confusion, we use the term transfer operator for T. The earlier work [13,38] is in terms of the operator T†. To get an expression for E[v(Z_0)u(Z_k)], one needs the iterated transition probability density function ρ_k(z′|z), defined recursively by ρ_1(z′|z) = ρ(z′|z) and

ρ_{k+1}(z′|z) = ∫ ρ(z′|z″) ρ_k(z″|z) dz″.

By induction on k, one obtains E[v(Z_0)u(Z_k)] = ⟨v, T^k u⟩.
Properties of the transfer operator and IAcT. It is useful to establish some properties of T and the IAcT that will be used throughout the article. In particular, we shall provide a formula for τ (u) in terms of the transfer operator that will be the starting point for systematic improvements and that will later on allow us to estimate τ by solving a generalised eigenvalue problem.
Clearly, T 1 = 1, and 1 is an eigenvalue of T . Here, where the context requires a function, the symbol 1 denotes the constant function that is identically 1. Where the context requires an operator, it denotes the identity operator. To remove the eigenspace corresponding to the eigenvalue λ = 1 from T , define the orthogonal projection operator Eu = 1, u 1 and consider instead the operator It is assumed that the eigenvalues λ of T 0 satisfy |λ| < 1, in other words, we assume that the underlying Markov chain is ergodic. Stationarity of the target density ρ(z) w.r.t. ρ(z|z ) implies that T † 1 = 1 and that T † T 1 = 1. Therefore, T † T is a stochastic kernel. This implies that the spectral radius of T † T is 1, and, since it is a symmetric operator, one has that The IAcT, given by Eq. (3), requires autocovariances, which one can express in terms of T 0 as follows: 0 u , which follows because E and 1 − E are symmetric. Substituting Equation (7) into Equation (3) gives It can be readily seen that τ is indeed nonnegative.
being not a constant.
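To make these operator formulas concrete, consider a two-state Markov chain (our toy example, not from the text): T acts as a 2 × 2 row-stochastic matrix, the inner product (5) is a sum weighted by the stationary distribution, and the autocovariance sum reproduces the closed-form IAcT (1 + λ)/(1 − λ), where λ is the second eigenvalue of the transition matrix:

```python
import numpy as np

a = 0.75                                   # probability of staying in a state
P = np.array([[a, 1 - a], [1 - a, a]])     # row-stochastic transition matrix
pi = np.array([0.5, 0.5])                  # stationary distribution

u = np.array([1.0, -1.0])                  # a preobservable on the two states
ubar = u - np.dot(pi, u)                   # (1 - E)u, with Eu = <1, u> 1

def C(k):
    # C(k) = <(1 - E)u, T^k (1 - E)u>, inner product weighted by pi
    return np.dot(pi * ubar, np.linalg.matrix_power(P, k) @ ubar)

tau = 1.0 + 2.0 * sum(C(k) for k in range(1, 200)) / C(0)

lam = 2 * a - 1                            # second eigenvalue of P
tau_exact = (1 + lam) / (1 - lam)          # closed form for this chain
```

Here `tau` and `tau_exact` agree to many digits, since `ubar` is an eigenfunction of P with eigenvalue λ and the truncation error decays like λ^k.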
3. Sampling Thoroughness and Efficiency. Less than "thorough" sampling can degrade estimates of an IAcT. Ref. [13,Sec. 1] proposes a notion of "quasi-reliability" to mean the absence of evidence in existing samples that would suggest a lack of sampling thoroughness. A notion of sampling thoroughness begins by considering subsets A of configuration space. The probability that Q ∈ A can be expressed as the expectation E[1_A], where 1_A is the indicator function for A. A criterion for thoroughness might be that

(9) |Ê[1_A] − Pr(Q ∈ A)| ≤ tol for all A,

where Ê[1_A] denotes the sample mean of 1_A. This is not overly stringent, since it does not require that there are any samples in sets A of probability ≤ tol. The next step in the development of this notion is to replace the requirement |Ê[1_A] − Pr(Q ∈ A)| ≤ tol by something more forgiving of the random error in Ê[1_A]. For example, we could require instead that

2 (Var[Ê[1_A]])^{1/2} ≤ tol,

which would satisfy Eq. (9) with 95% confidence, supposing an approximate normal distribution for the estimate. (If we are not willing to accept the Gaussian assumption, Chebychev's inequality tells us that we reach the 95% confidence level if we replace the right-hand side by 0.05 tol.) Now let τ_A be the integrated autocorrelation time for 1_A. Because

Var[Ê[1_A]] ≈ (τ_A/N) Var[1_A(Q)] and Var[1_A(Q)] ≤ 1/4,

it is enough to have (1/(4N))τ_A ≤ (1/4)tol² for all sets of configurations A to ensure thorough sampling (assuming again Gaussianity). The definition of good coverage might then be expressed in terms of the maximum τ(1_A) over all A. Note that the sample variance may not be a good criterion if all the candidate sets A have small probability Pr(Q ∈ A), in which case it is rather advisable to consider the relative error [6]. Remark 1. Generally, if there are symmetries present in both the distribution and the preobservables of interest, this may reduce the amount of sampling needed. Such symmetries can be expressed as bijections ψ_q for which u(ψ_q(q)) = u(q) and ρ_q(ψ_q(q)) = ρ_q(q).
Examples include translational and rotational invariance, as well as interchangeability of atoms and groups of atoms. Let Ψ_q denote the set of all such symmetries. The definition of good coverage then need only include sets A which are invariant under all symmetries ψ_q ∈ Ψ_q. The extension from indicator functions 1_A to general functions leads to considering W_q = {u(q) | u(ψ_q(q)) = u(q) for all ψ_q ∈ Ψ_q} and defining

τ_{q,max} = sup_{u ∈ W_q, Var[u(Q)] > 0} τ(u).

Another consideration that might dramatically reduce the set of relevant preobservables is the attractiveness of using collective variables ζ = ξ(q) to characterize structure and dynamics of molecular systems. This suggests considering only functions defined on collective variable space, hence, functions of the form ū(ξ(q)).
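Under the Gaussian assumption above, the check for a single candidate set A can be sketched as follows (a hypothetical helper of our own; `indicator` and `tol` play the roles of 1_A and tol in the text, and the IAcT of the indicator is estimated by naive truncation):

```python
import numpy as np

def thoroughly_sampled(samples, indicator, tol, kmax=100):
    """Check 2*sqrt((tau_A/N)*Var[1_A]) <= tol for one candidate set A."""
    x = np.array([float(indicator(q)) for q in samples])
    n = len(x)
    xc = x - x.mean()
    c0 = np.dot(xc, xc) / n
    if c0 == 0.0:                 # indicator never varies: inconclusive
        return False
    tau = 1.0
    for k in range(1, kmax + 1):  # naive truncated IAcT of the indicator
        tau += 2.0 * np.dot(xc[:-k], xc[k:]) / (n * c0)
    halfwidth = 2.0 * np.sqrt(max(tau, 0.0) * c0 / n)   # ~95% confidence
    return bool(halfwidth <= tol)
```

A full coverage check would apply this to a family of symmetry-invariant sets A and take the worst case.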

4. Computing the Maximum IAcT. The difficulty of getting reliable estimates for τ(u) in order to compute the maximum IAcT makes it interesting to consider alternative formulations.

4.1. A transfer operator based formulation.
Although there is little interest in sampling functions of auxiliary variables like momenta, it may be useful to consider phase space sampling efficiency. Specifically, a maximum over phase space is an upper bound, and it might be easier to estimate. Putting aside exploitation of symmetries, the suggestion is to use τ_max = sup_{Var[u(Z)]>0} τ(u). One has, with a change of variables, an equivalent maximization over preobservables v; the last step in the derivation follows because 1 − T_0 is nonsingular. Needed for an estimate of τ_2(v) is ⟨T v, T v⟩. To evaluate ⟨T v, T v⟩, proceed as follows. Let Z′_{n+1} be a second, independent realization of Z_{n+1} from Z_n; in particular, repeat the step, but with an independent stochastic process having the same distribution. Then

⟨T v, T v⟩ = E[v(Z_{n+1}) v(Z′_{n+1})].

This approach has been tested on the model problem of Sec. 5, a Gaussian process, and found to be significantly better than the use of acor. Unfortunately, this observation is not generalisable: for example, for a double-well potential, it is difficult to find preobservables v(z) giving a computable estimate of τ_max which comes close to an estimate from using acor with u(z) = z_1.
Another drawback is that the estimates, though computationally inexpensive, require accessing intermediate values in the calculation of a time step, which are not normally an output option of an MD program. Therefore we discuss alternatives in the next two subsections.

4.2. A generalised eigenvalue problem. Let u(z) be a row vector of arbitrary basis functions u_i(z), i = 1, 2, ..., i_max, that span a closed subspace of the Hilbert space associated with the inner product ⟨·, ·⟩ defined by (5), and consider the linear combination u(z) = u(z)^T x. One has

τ(u) = (x^T D x)/(x^T C_0 x),

where C_0 is the covariance matrix of the basis functions and D additionally accumulates the symmetrized lagged covariances. If the span of the basis is sufficiently extensive to include preobservables having the greatest IAcTs (e.g. polynomials, radial basis functions, spherical harmonics, etc.), the calculation of τ_max reduces to that of maximizing x^T D x/(x^T C_0 x) over all x, which is equivalent to solving the symmetric generalized eigenvalue problem

(11) D x = τ C_0 x.

It should be noted that the maximum over all linear combinations of the elements of u(z) can be arbitrarily greater than the use of any of the basis functions individually. Moreover, in practice, the coefficients in (11) will be random in that they have to be estimated from simulation data, which warrants special numerical techniques. These techniques, including classical variance reduction methods, Markov State Models or specialised basis functions, are not the main focus of this article and we therefore refer to the articles [19,32] and the references given there. Remark 3. Appendix B records different notions of reversibility of the transfer operator that entail specific restrictions on the admissible basis functions that guarantee that the covariance matrices, and thus C_0, remain symmetric. In the experiments reported here, the original algorithm sometimes does one reduction fewer than the new algorithm.
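This computation can be sketched from trajectory data (our construction: C_0 and D are estimated by truncated sums of empirical covariance matrices of the basis time series, and the generalized eigenproblem is reduced to a standard symmetric one via a Cholesky factorization of C_0):

```python
import numpy as np

def tau_max_estimate(U, kmax):
    """Worst-case IAcT over span{u_i} from basis time series.

    U : (N, m) array; column i holds u_i(Z_n) along the trajectory.
    Returns (tau_max, x) for the generalized problem D x = tau C_0 x.
    """
    U = U - U.mean(axis=0)
    n = U.shape[0]
    C0 = U.T @ U / n
    D = C0.copy()
    for k in range(1, kmax + 1):
        Ck = U[:-k].T @ U[k:] / n
        D += Ck + Ck.T                 # only the symmetric part enters tau
    L = np.linalg.cholesky(C0)         # C_0 = L L^T, assumed nonsingular
    Linv = np.linalg.inv(L)
    w, V = np.linalg.eigh(Linv @ D @ Linv.T)
    return w[-1], Linv.T @ V[:, -1]    # largest tau and maximizing coefficients
```

For an AR(1) series with coefficient a and basis {x, x²}, the maximizing combination is essentially x itself, with τ = (1 + a)/(1 − a).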

4.3. The use of acor. It is not obvious how to use an IAcT estimator to construct the off-diagonal matrix elements.
Remark 4. Theoretically, the matrix D + D T is positive definite. If it is not, that suggests that the value of reducs is not sufficiently conservative, in which case reducs needs to be reduced. A negative eigenvalue might also arise if the Markov chain does not converge due to a stepsize ∆t that is too large. This can be confirmed by seeing whether the negative eigenvalue persists for a larger number of samples.

5. Analytical Result for the Model Problem. The question of the optimal choice for the damping coefficient is addressed in Ref. [38,Sec. 5] for the standard model problem F(q) = −Kq, where K is symmetric positive definite, for which the Langevin equation is

(12) dQ_t = M^{−1} P_t dt, dP_t = −K Q_t dt − γ P_t dt + (2γβ^{−1})^{1/2} M_h dW_t.

Changing variables Q′ = M_h^T Q and P′ = M_h^{−1} P and dropping the primes gives

dQ_t = P_t dt, dP_t = −M_h^{−1} K M_h^{−T} Q_t dt − γ P_t dt + (2γβ^{−1})^{1/2} dW_t.

With an orthogonal change of variables, this decouples into scalar equations, each of which has the form of a one-dimensional Langevin equation with frequency ω, where ω² is an eigenvalue of M_h^{−1} K M_h^{−T}, or, equivalently, an eigenvalue of M^{−1} K. Changing to dimensionless variables t′ = ωt, γ′ = γ/ω, Q′ = (βm)^{1/2} ω Q, P′ = (β/m)^{1/2} P, and dropping the primes gives

(13) dQ_t = P_t dt, dP_t = −Q_t dt − γ P_t dt + (2γ)^{1/2} dW_t.
For an MCMC propagator, assume exact integration with step size ∆t. From Ref. [38,Sec. 5.1], one has T = (e^{∆tL})† = exp(∆tL†), where L denotes the generator of the dynamics (13). The Hilbert space defined by the inner product from Eq. (5) has, in this case, a decomposition into linear subspaces P_k = span{He_m(q) He_n(p) | m + n = k} (denoted by P_k in Ref. [38,Sec. 5.3]), where He_n denotes the Hermite polynomial of degree n. Let u_k be the row vector of the k + 1 products He_m(q) He_n(p) with m + n = k, which spans P_k. With a change of notation from Ref. [38,Sec. 5.3], L u_k^T = u_k^T A_k, with A_k a (k+1) × (k+1) matrix. One can show, using arguments similar to those in [38,Sec. 5.3], that P_k is closed under application of L†. Therefore, L† u_k^T = u_k^T B_k for some (k+1) × (k+1) matrix B_k. Forming the inner product of u_k with each side of this equation determines B_k, since the Hermite polynomials u_k are mutually orthogonal. Also, E u_k^T = 0^T. A formula for τ(u) is possible if u(q) can be expanded in Hermite polynomials as u = ∑_{k=1}^{∞} c_k He_k. Then, from Eq. (15), D He_k ∈ P_k, not to mention He_k ∈ P_k. Using these facts and the mutual orthogonality of the subspaces P_k, it can be shown that τ(u) in Eq. (16) is a weighted average of the values τ(He_k), with weights proportional to c_k² ⟨He_k, He_k⟩. From this it follows that max_u τ(u) = max_k τ(He_k).
Empirically, max_k τ(He_k) = max{τ(He_1), τ(He_2)} =: τ_max. Restoring the original variables, one finds that the leading term of τ_{q,max} increases as ω decreases, so τ_{q,max} depends on the lowest frequency ω_1, and τ_{q,max} is minimized at γ = ω_1, which is half of the critical value γ = 2ω_1. Contrast this with the result [38,Sec. 5] for the phase space maximum IAcT, which is minimized for γ = (√6/2) ω_1.

Remark 5.
The result is consistent with related results from [4,12] that consider optimal damping coefficients maximising the speed of convergence measured in relative entropy. Specifically, call η_t = N(µ_t, Σ_t) the law of the solution to (13), with initial conditions (Q_0, P_0) = (q, p); see Appendix A for details. Then, using [2,Thm. 4.9], the relative entropy KL(η_t, ρ) decays essentially like exp(−2αt), up to a multiplicative constant M ∈ (1, ∞), where α denotes the spectral abscissa of the matrix A in Appendix A, i.e. the negative real part of the eigenvalue that is closest to the imaginary axis. Here

KL(f, g) = ∫ f(z) log(f(z)/g(z)) dz

denotes the relative entropy (or: Kullback-Leibler divergence) between two phase space probability densities f and g, assuming that f is absolutely continuous with respect to g. (Otherwise we set KL(f, g) = ∞.) It is a straightforward calculation to show that the maximum value for α (that gives the fastest decay of KL(η_t, ρ)) is attained at γ = 2, which is in agreement with the IAcT analysis. For analogous statements on the multidimensional case, we refer to [4]. We should mention that there may be cases in which the optimal damping coefficient leads to a stiff Langevin equation, depending on the eigenvalue spectrum of the Hessian of the potential energy function. As a consequence, optimizing the damping coefficient may reduce the maximum stable step size ∆t that can be used in numerical simulations.
5.1. Application to more general distributions. Note that for the model problem, the matrix K can be extracted from the covariance matrix: Cov[Q] = β^{−1} K^{−1}. Therefore, as a surrogate for the lowest frequency ω_1, and as a recommended value for γ, consider using

γ* = (λ_min(M^{−1} K))^{1/2} = (β λ_max(Cov[Q] M))^{−1/2}.
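In code, the recipe reads as follows (our sketch; `Q_samples` is an N × d array of sampled positions and `M` the mass matrix):

```python
import numpy as np

def gamma_star(Q_samples, M, beta):
    """gamma* = (beta * lambda_max(Cov[Q] M))**(-1/2), a surrogate for omega_1."""
    C = np.atleast_2d(np.cov(Q_samples.T))     # sample covariance of Q
    # Cov[Q] M is similar to a symmetric matrix, so its spectrum is real
    lam_max = np.max(np.linalg.eigvals(C @ M).real)
    return 1.0 / np.sqrt(beta * lam_max)
```

For a Gaussian target with K = diag(1, 4), M = I and β = 1, this recovers γ* = ω_1 = 1.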

5.2. Sanity check. As a test of the accuracy of acor and the analytical expression (16), the IAcT is calculated by acor for a time series generated by the exact analytical propagator (given in Appendix A) for the reduced model problem given by Eq. (12). For the preobservable, we choose u(q) = He_3(q), where He_2(q) = q² − 1 and He_3(q) = q³ − 3q are the Hermite polynomials of degree 2 and 3; as damping coefficient, we choose γ = 2, which is the critical value; the time increment is ∆t = 0.5, which is about 1/12th of a period. In this and the other results reported here, equilibrated initial values are obtained by running for 50 000 burn-in steps. As the dependence of the estimate on N is of interest here, we run M = 10³ independent realisations for each value of N, from which we can estimate the relative error of the estimated IAcT, which we expect to decay as N^{−1/2}. Fig. 2 shows the relative error in the estimated IAcT τ(u) for N = 2^13, 2^14, ..., 2^22. The least-squares fit of the log relative error as a function of log N has slope −0.4908. Thus we observe a nearly perfect N^{−1/2} decay of the relative error, in accordance with the theoretical prediction.

6. A simple example. The procedure to determine the optimal damping coefficient in the previous section is based on linear Langevin systems. Even though the considerations of Section 5 do not readily generalize to nonlinear systems, it is plausible to use the harmonic approximation as a proxy for more general systems, since large IAcT values are often due to noise-induced metastability, in which case local harmonic approximations inside metastable regions are suitable. For estimating the maximum IAcT, the model problem therefore suggests the use of linear, quadratic and cubic functions of the coordinates, where the latter are suitable to capture the possible non-harmonicity of the potential energy wells in the metastable regime.
The first test problem, which is from Ref. [23], possesses an asymmetric multimodal distribution. It uses U(q) = (1/4)q⁴ + sin(1 + 5q) and β = 1, and it generates samples using BAOAB with a step size ∆t = 0.2, which is representative of step sizes used in Ref. [23]. Fig. 3 plots with dotted lines the unnormalized probability density function.
6.1. Choice of basis. A first step is to find a preobservable that produces a large IAcT. It would be typical of actual practice to try to select a good value for γ. To this end, choose γ = γ* = 1.276. To obtain this value, do a run of sample size N = 2·10⁶ using γ = 1, as in one of the tests in Ref. [23].
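The protocol just described can be sketched end to end (our illustration, not the authors' code; the BAOAB update is inlined for the scalar case, and the returned value approximates γ* = (β Var[Q])^{−1/2}, the scalar form of the recipe in Sec. 5.1):

```python
import numpy as np

def force(q):                       # F = -U'(q) for U(q) = q**4/4 + sin(1 + 5q)
    return -q**3 - 5.0 * np.cos(1.0 + 5.0 * q)

def estimate_gamma_star(nsteps, gamma=1.0, dt=0.2, beta=1.0, seed=0):
    """Run BAOAB with a provisional gamma, then return (beta*Var[Q])**-0.5."""
    rng = np.random.default_rng(seed)
    c = np.exp(-gamma * dt)
    s = np.sqrt((1.0 - c * c) / beta)
    q = p = 0.0
    acc = acc2 = 0.0
    for _ in range(nsteps):
        p += 0.5 * dt * force(q)                 # B
        q += 0.5 * dt * p                        # A
        p = c * p + s * rng.standard_normal()    # O
        q += 0.5 * dt * p                        # A
        p += 0.5 * dt * force(q)                 # B
        acc += q
        acc2 += q * q
    var = acc2 / nsteps - (acc / nsteps) ** 2
    return 1.0 / np.sqrt(beta * var)
```

With a run of a few hundred thousand steps this yields a value in the neighbourhood of the γ* = 1.276 reported in the text.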
With a sample size N = 10⁷, the maximum IAcT is calculated for polynomials of increasing degree using the approach described in Secs. 4.2-4.3. Odd degrees produce somewhat greater maxima than even degrees. For cubic, quintic, and septic polynomials, τ_max has values 59.9, 63.9, and 65.8, respectively. As a check that the sample size is adequate, the calculations are redone with half the sample size. Fig. 3 shows how the maximizing polynomial evolves as its degree increases from 3 to 5 to 7.
6.2. Optimal choice of damping coefficient. The preceding results indicate that septic polynomials are a reasonable set of functions for estimating τ q,max . For 25 values of γ, ranging from 0.2 to 5, the value of τ q,max was thus estimated, each run consisting of N = 10 7 samples.
With respect to this example, Ref. [23,Sec. 5] states, "We were concerned that the improved accuracy seen in the high γ regime might come at the price of a slower convergence to equilibrium". The foregoing results indicate that the value γ = 1 used in one of the tests is near the apparent optimal value γ = 1.8. Hence, the superior accuracy of BAOAB over other methods observed in the low γ regime does not come at the price of slower convergence.
where d is a parameter that measures the distance of the three local minima from the origin. The Langevin system is integrated using BAOAB with a step size ∆t = 0.5, as for the model problem (which is what V(x, y) becomes if d = 0). Shown in Fig. 5 are the first 8·10⁴ points of a trajectory with d = 4.8. Several preobservables are compared, among them:
…, for which τ_max = 18492;
4. 1_A alone, for which τ = 12087;
5. 1_B alone, for which τ = 5056;
6. 1_C alone, for which τ = 4521.
As a consequence of these results, the following section uses quadratic polynomials to estimate τ_{q,max}. Fig. 6 is a plot of τ_{q,max} vs. the ratio γ/γ*. To limit the computing time, we set the parameter to d = 4.4 rather than 4.8 as in Sec. 7.1; for d = 4.4, we have γ* = 0.285, obtained using the same protocol as in Sec. 7.1.

7.2. Optimal choice of damping coefficient.
We consider 0.05 ≤ γ ≤ 2.2 in increments of 0.01 from 0.05 to 0.2, and in increments of 0.1 from 0.2 to 2.2. Each data point is based on a run of N = 2 · 10 7 time steps. Even though the variance of the estimator is not negligible for our choice of simulation parameters, it is clearly visible that the minimum of τ q,max is attained at γ ≈ γ * .

8. Conclusions.
We have discussed the question of how to choose the damping coefficient in (underdamped) Langevin dynamics so as to achieve efficient sampling of the stationary probability distribution or of expectations of certain observables with respect to this distribution. Here, efficient sampling is understood as minimizing the maximum possible (worst-case) integrated autocorrelation time (IAcT). We propose a numerical method that is based on the concept of phase space preobservables that span a function space over which the worst-case IAcT is computed using trajectory data; the optimal damping coefficient can then be chosen on the basis of this information.
Based on heuristics derived from a linear Langevin equation, we derive rules of thumb for choosing good preobservables for more complicated dynamics. The results for the linear model problem are in agreement with recent theoretical results on Ornstein-Uhlenbeck processes with degenerate noise, and they are shown to be a good starting point for a systematic analysis of nonlinear Langevin samplers.

Appendix A. The exact propagator for the model problem. For the scalar model problem with frequency ω, write the equation as dZ_t = A Z_t dt + B dW_t, with drift matrix A having rows (0, 1) and (−ω², −γ) and noise vector B = (0, (2γβ^{−1})^{1/2})^T, so that Z_t = e^{At} Z_0 + R_t. The stochastic process R_t is Gaussian with mean zero and covariance matrix

Cov[R_t] = ∫₀ᵗ e^{As} B Bᵀ e^{Aᵀs} ds.

To evaluate these expressions, use A = X Λ X^{−1}, where the eigenvalues are −γ_± with γ_± = ½(γ ± δ) and δ = (γ² − 4ω²)^{1/2}.

Appendix B. Different notions of reversibility.
We briefly mention earlier work and discuss different reversibility concepts for transfer operators.
B.1. Quasi-reversibility. Ref. [13,Sec. 3.4] introduces a notion of quasi-reversibility. A transfer operator T is quasi-reversible if

T† = R T R,

where R is an operator such that R² = 1. This somewhat generalizes the (suitably modified) definitions in Refs. [13,38]. The principal example of such an operator is Ru = u ∘ R, where R is a bijection such that R ∘ R = id and u ∘ R = u for u ∈ W, e.g., momentum flipping.
The value of the notion of quasi-reversibility is that it enables the construction of basis functions that lead to a matrix of covariances that possesses a type of symmetric structure [38,Sec. 3.1]. This property is possessed by "adjusted" schemes that employ an acceptance test, and by the limiting case ∆t → 0 of unadjusted methods like BAOAB.

B.2. Modified detailed balance.
A quite different generalization of reversibility, termed "modified detailed balance", is proposed in Ref. [14] as a tool for making it easier to prove stationarity. In terms of the transfer operator, showing stationarity means showing that F 1 = 1, where 1 is the constant function 1.
Ref. [14,Eq. (15)] defines modified detailed balance in terms of transition probabilities. The definition is equivalent to F = R^{−1} F† R^{−1}, under the assumption that R preserves the stationary distribution. This readily generalizes to

(20) F = R_2 F† R_1,

where R_1 and R_2 are arbitrary except for the assumption that each of them preserves the stationary distribution. Stationarity follows from Eq. (20) because F† 1 = 1 for any adjoint transfer operator and R_1 1 = R_2 1 = 1 by assumption. Reference [14] has errors, which are corrected in Ref. [15].