Numerical stochastic perturbation theory (NSPT) [1,2,3] is a powerful tool that allows many interesting calculations in QCD and other quantum field theories to be performed to high order in the interactions. For technical reasons, the computations proceed in the framework of lattice field theory, but results for renormalized quantities in the continuum theory can then be obtained through an extrapolation to vanishing lattice spacing. NSPT can be highly automated and the application of the method in finite volume and to correlation functions of complicated composite fields gives rise to hardly any additional difficulties.

Reliable extrapolations to the continuum limit require accurate data at several lattice spacings in the scaling region. NSPT calculations can therefore rapidly become large-scale projects, where computational efficiency is all-important. Traditionally, NSPT is based on the Langevin equation, but the success of the HMC algorithm [4] in lattice QCD suggests that the inclusion of a molecular-dynamics update step in the underlying stochastic process might be beneficial. Smaller autocorrelation times and an improved scaling behaviour towards the continuum limit could perhaps be achieved in this way. Moreover, through the use of highly efficient symplectic integration schemes, the systematic errors deriving from the discretization of the simulation time may conceivably be reduced.

NSPT based on the SMD (stochastic molecular dynamics, or generalized HMC) algorithm [5,6,7] was recently examined in Ref. [8] and found to perform well. Here we establish the convergence of the algorithm to a unique stationary state and study its efficiency in the case of the gradient-flow coupling in the SU(3) gauge theory. Various technical problems are addressed along the way, among them the modifications required to ensure that the stochastic process does not run away in the gauge directions.

Stochastic molecular dynamics

In order to bring out the basic structure of the SMD-variant of NSPT most clearly, a generic system described by a set \(q=(q_1,\ldots ,q_n)\) of real coordinates and an action S(q) is considered in this and the following two sections.


The action S(q) is assumed to be differentiable and to have an expansion in powers of a coupling g of the form

$$\begin{aligned} S(q)=\sum _{r=0}^{\infty } g^rS_r(q), \end{aligned}$$

where \(S_r(q)\) is a polynomial in q of degree \(d_r\ge 2\). Moreover, it is taken for granted that the leading-order term

$$\begin{aligned} S_0(q)=\frac{1}{2}(q,\Delta q)=\frac{1}{2}\sum _{k,l=1}^nq_k\Delta _{kl}q_l \end{aligned}$$

is a strictly positive quadratic form in the coordinates.

The observables \(\mathcal{O}(q)\) of interest are assumed to be similarly expandable and their expectation values

$$\begin{aligned} \langle \mathcal{O}\rangle ={1\over \mathcal{Z}_S}\int \mathrm{d}q_1\ldots \mathrm{d}q_n\,\mathcal{O}(q) \mathrm{e}^{-S(q)} \end{aligned}$$

then have a well-defined perturbation expansion with coefficients given by Feynman diagrams as usual.

SMD algorithm

The SMD algorithm operates in the phase space of the theory and thus updates both the coordinates q and their canonical momenta \(p=(p_1,\ldots ,p_n)\). An SMD update cycle consists of a momentum rotation followed by a molecular-dynamics evolution and, optionally, an acceptance–rejection step.

The momenta are rotated in a random direction according to

$$\begin{aligned} p\rightarrow c_1p+c_2\upsilon , \end{aligned}$$

where the momentum \(\upsilon \) is randomly chosen from a Gaussian distribution with mean zero and unit variance. The coefficients

$$\begin{aligned} c_1=\mathrm{e}^{-\gamma \epsilon }, \quad c_2=(1-c_1^2)^{1/2}, \end{aligned}$$

depend on the simulation time step \(\epsilon >0\) and a parameter \(\gamma >0\) that controls the rotation angle.

In the second step, the molecular-dynamics equations

$$\begin{aligned} \partial _tp=-\nabla S(q), \qquad \partial _tq=p, \end{aligned}$$

are integrated from the current simulation time t to \(t+\epsilon \) using a reversible symplectic integration scheme (see Sect. 2.3). The algorithm (momentum rotation followed by the molecular-dynamics evolution) simulates the canonical distribution

$$\begin{aligned} {1\over \mathcal{Z}_H}\mathrm{e}^{-H(p,q)},\qquad H(p,q)=\frac{1}{2}(p,p)+S(q), \end{aligned}$$

provided the integration errors are negligible or an acceptance–rejection step is included in the update cycle which corrects for these [9]. Stochastic estimates of the expectation values (2.3) of the observables of interest are then obtained by averaging their values over a range of simulation time.
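As a concrete illustration, a complete SMD update cycle for a one-dimensional free theory (momentum rotation plus one leapfrog step, without the acceptance–rejection step) may be sketched as follows. This is a minimal sketch; all variable names and parameter values are illustrative choices, not taken from the text.

```python
import numpy as np

# Complete SMD cycle for S(q) = Delta * q**2 / 2 (one mode, leapfrog,
# no acceptance-rejection step); all parameter values are illustrative.
rng = np.random.default_rng(0)
eps, gamma, Delta = 0.2, 0.3, 1.0
c1 = np.exp(-gamma * eps); c2 = np.sqrt(1.0 - c1**2)

p, q, q2sum = 0.0, 0.0, 0.0
nsteps = 200000
for _ in range(nsteps):
    p = c1 * p + c2 * rng.standard_normal()   # momentum rotation
    p -= 0.5 * eps * Delta * q                # leapfrog step I_{p,eps/2}
    q += eps * p                              # I_{q,eps}
    p -= 0.5 * eps * Delta * q                # I_{p,eps/2}
    q2sum += q * q

# Without the acceptance-rejection step, the time average of q**2 converges
# to the variance of an eps-modified Gaussian distribution rather than 1/Delta.
Dhat = Delta * (1.0 - 0.25 * eps**2 * Delta)
print(q2sum / nsteps, 1.0 / Dhat)
```

The measured variance agrees with the modified, \(\epsilon \)-dependent quadratic form discussed in Sect. 4, which quantifies the integration errors incurred by omitting the acceptance–rejection step.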

Integration schemes

The molecular-dynamics equations may be integrated by applying a sequence of the elementary steps

$$\begin{aligned}&I_{p,h}:\;p\rightarrow p-h\nabla S(q),\end{aligned}$$
$$\begin{aligned}&I_{q,h}:\;q\rightarrow q+hp, \end{aligned}$$

to the current momenta and coordinates, with step sizes h proportional to \(\epsilon \). A well-known example is the leapfrog integrator \(I_{p,\epsilon /2}I_{q,\epsilon }I_{p,\epsilon /2}\), and several highly efficient schemes are described in Ref. [10].

Integrators \(I_{\epsilon }\) of this kind are symplectic and they can be (and are here) required to be reversible, i.e. to be such that

$$\begin{aligned} I_{\epsilon }PI_{\epsilon }=P, \end{aligned}$$

where P stands for the momentum reflection \(p\rightarrow -p\).
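The elementary steps and the reversibility property can be sketched in a few lines of Python; the quartic gradient below is an illustrative choice, not taken from the text:

```python
import numpy as np

def leapfrog(p, q, grad_S, eps):
    """One step of the leapfrog integrator I_{p,eps/2} I_{q,eps} I_{p,eps/2}."""
    p = p - 0.5 * eps * grad_S(q)    # I_{p,eps/2}
    q = q + eps * p                  # I_{q,eps}
    p = p - 0.5 * eps * grad_S(q)    # I_{p,eps/2}
    return p, q

# Reversibility I_eps P I_eps = P: integrating forward, reflecting the
# momentum and integrating again returns the reflected initial state.
grad_S = lambda q: q + q**3          # gradient of S(q) = q**2/2 + q**4/4
p0, q0 = 0.7, -1.3
p1, q1 = leapfrog(p0, q0, grad_S, 0.1)
p2, q2 = leapfrog(-p1, q1, grad_S, 0.1)
print(p2, q2)                        # equals (-p0, q0) up to roundoff
```

The check holds exactly (up to rounding) for any differentiable action, since each elementary step is separately reversible.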

Stochastic perturbation theory

Stochastic perturbation theory [11, 12] is usually derived from the Langevin equation by expanding the stochastic variables and the driving forces in powers of the coupling. In this section, another (although probably closely related) form of stochastic perturbation theory is discussed, which is obtained by expanding the SMD algorithm in the same way.

SMD algorithm at weak coupling

Since the acceptance–rejection step is not smooth in the coupling, its effects would be difficult to take into account in perturbation theory. In the following, the acceptance–rejection step is therefore omitted, without further notice, and one is thus left with an algorithm that simulates the system only up to integration errors.

The histories p(t), q(t) of the momenta and coordinates generated by the SMD algorithm depend on the coupling g through the force term in the integration step (2.8). In particular, they are smooth functions of the coupling and may consequently be expanded in the asymptotic series

$$\begin{aligned} p(t)=\sum _{r=0}^{\infty }g^r\hat{p}_r(t), \qquad q(t)=\sum _{r=0}^{\infty }g^r\hat{q}_r(t), \end{aligned}$$

where the leading-order histories \(\hat{p}_0(t),\hat{q}_0(t)\) coincide with the ones generated by the algorithm in the free theory with action (2.2).

In terms of the coefficients \(\hat{p}_r,\hat{q}_r\), the momentum rotation (2.4) becomes

$$\begin{aligned} \hat{p}_r\rightarrow c_1\hat{p}_r+\delta _{r0}c_2\upsilon \end{aligned}$$

and the molecular-dynamics integration steps (2.8), (2.9) assume the form

$$\begin{aligned}&I_{\hat{p}_r,h}:\;\hat{p}_r\rightarrow \hat{p}_r-h\hat{F}_r(\hat{q}_0,\ldots ,\hat{q}_r),\end{aligned}$$
$$\begin{aligned}&I_{\hat{q}_r,h}:\;\hat{q}_r\rightarrow \hat{q}_r+h\hat{p}_r. \end{aligned}$$

The forces \(\hat{F}_r\) in Eq. (3.3) are given by

$$\begin{aligned} \nabla S(q)=\sum _{r=0}^{\infty } g^r\hat{F}_r(\hat{q}_0,\ldots ,\hat{q}_r) \end{aligned}$$

and it is understood that all momenta and all coordinates are updated alternately so that the variables on the right of Eqs. (3.3), (3.4) are always the current ones. For any given initial data, these rules completely determine the histories \(\hat{p}_r(t),\hat{q}_r(t)\).
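The hierarchical structure of the forces can be made concrete with truncated power-series arithmetic. The sketch below (helper names are ours) computes the force coefficients for the illustrative quartic interaction \(S_1(q)=q^4/4\), for which \(\hat{F}_0=\Delta \hat{q}_0\), \(\hat{F}_1=\Delta \hat{q}_1+\hat{q}_0^3\), \(\hat{F}_2=\Delta \hat{q}_2+3\hat{q}_0^2\hat{q}_1\), and so on:

```python
import numpy as np

def poly_mul(a, b, nmax):
    """Truncated product of two power series in g (lists of coefficient arrays)."""
    c = [np.zeros_like(a[0]) for _ in range(nmax)]
    for r in range(nmax):
        for s in range(nmax - r):
            c[r + s] = c[r + s] + a[r] * b[s]
    return c

def force(qhat, Delta, nmax):
    """Force coefficients F_r for grad S(q) = Delta q + g q**3.

    The structure F_r = Delta * q_r + (terms with q_0 .. q_{r-1}) is generic;
    the quartic interaction g * q**4 / 4 is an illustrative choice.
    """
    q3 = poly_mul(poly_mul(qhat, qhat, nmax), qhat, nmax)
    F = [Delta * qhat[r] for r in range(nmax)]
    for r in range(1, nmax):
        F[r] = F[r] + q3[r - 1]          # interaction enters at order r >= 1
    return F

Delta = 2.0
qhat = [np.array([1.0]), np.array([0.5]), np.array([-0.25])]
F = force(qhat, Delta, 3)                # F = [2.0, 2.0, 1.0]
```

Note that \(\hat{F}_r\) depends linearly on \(\hat{q}_r\) and polynomially on the lower-order coefficients, exactly the structure exploited in Sect. 4.3.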

Perturbation expansion of observables

Similarly to the gradient of the action, any observable

$$\begin{aligned} \mathcal{O}(q)=\sum _{r=0}^{\infty } g^r\hat{\mathcal{O}}_r(\hat{q}_0,\ldots ,\hat{q}_r) \end{aligned}$$

may be expanded in powers of the coupling. The coefficients \(k_r(\mathcal{O})\) in the perturbation expansion

$$\begin{aligned} \langle \mathcal{O}\rangle =\sum _{r=0}^{\infty }k_r(\mathcal{O})g^r \end{aligned}$$

of its expectation value (2.3) then coincide with the averages of \(\hat{\mathcal{O}}_r(\hat{q}_0(t),\ldots ,\hat{q}_r(t))\) over the simulation time t up to statistical (and integration) errors.

Convergence to a stationary state

Stochastic processes can run away or may fail to converge to a stationary distribution for other reasons. In the case of the stochastic perturbation theory described in Sect. 3, the asymptotic stationarity of the underlying process can be rigorously shown if the simulation step size \(\epsilon \) is sufficiently small. The range of step sizes, where convergence is guaranteed, depends on the chosen integration scheme for the molecular-dynamics equations and the matrix \(\Delta \) in the leading-order part (2.2) of the action.

Molecular-dynamics evolution in the free theory

If the coupling g is turned off, the molecular-dynamics equations become linear and their (approximate) integration from time t to \(t+\epsilon \) amounts to a linear transformation

$$\begin{aligned} \left( { \begin{array}{l} p\\ q \end{array}} \right) \rightarrow M \left( { \begin{array}{l} p\\ q \end{array}} \right) \end{aligned}$$

of the current momenta and coordinates. The \(2n\times 2n\) matrix M in this equation has a block structure,

$$\begin{aligned} M = \left( { \begin{array}{ll} {{M_{pp}}}&{}\quad {{M_{pq}}}\\ {{M_{qp}}}&{}\quad {{M_{qq}}} \end{array}} \right) , \end{aligned}$$

with \(n\times n\) blocks that are polynomials in \(\epsilon \) and the matrix \(\Delta \) with some numerical coefficients. In particular, they are commuting real symmetric matrices. In the case of the leapfrog integrator,

$$\begin{aligned}&M_{pp}=M_{qq}=1-\frac{1}{2}\epsilon ^2\Delta , \quad M_{pq}=-\epsilon \Delta \left( 1-\frac{1}{4}\epsilon ^2\Delta \right) , \nonumber \\&M_{qp}=\epsilon , \end{aligned}$$

and explicit expressions for the blocks of other popular integrators can be obtained as well (see Appendix A).

The symplecticity and reversibility of the chosen integration scheme imply

$$\begin{aligned}&M_{pp}=M_{qq}, \qquad M_{pp}M_{qq}-M_{pq}M_{qp}=1. \end{aligned}$$

It follows from these relations that the Hamilton function

$$\begin{aligned}&\hat{H}(p,q)=\frac{1}{2}(p,p)+\frac{1}{2}(q,\hat{\Delta }q),\end{aligned}$$
$$\begin{aligned}&\hat{\Delta }=-M_{pq}(M_{qp})^{-1}=\Delta +\mathrm{O}(\epsilon ), \end{aligned}$$

is exactly conserved by the integrator. In the following, \(\epsilon \) and \(\Delta \) are assumed to be such that \(M_{qp}\) is non-singular and \(\hat{\Delta }\) positive definite. Both conditions are met in the case of the leapfrog integrator if \(\epsilon ^2\left\| \Delta \right\| <4\), where \(\left\| \Delta \right\| \) is the largest eigenvalue of \(\Delta \). The probability distribution

$$\begin{aligned} \hat{P}(p,q)\propto \mathrm{e}^{-\hat{H}(p,q)} \end{aligned}$$

is then well-behaved and preserved by the SMD algorithm, since it is preserved both by the momentum rotation and the molecular-dynamics evolution.
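For a single mode (scalar \(\Delta \)) these statements are easily checked numerically. The sketch below, with illustrative parameter values, verifies the relation between the blocks and the exact conservation of \(\hat{H}\) under repeated application of the free molecular-dynamics map:

```python
import numpy as np

# Leapfrog blocks for a single mode (scalar Delta); illustrative values
eps, Delta = 0.2, 3.0                    # eps**2 * Delta < 4
Mpp = Mqq = 1.0 - 0.5 * eps**2 * Delta
Mpq = -eps * Delta * (1.0 - 0.25 * eps**2 * Delta)
Mqp = eps

# symplecticity and reversibility imply M_pp * M_qq - M_pq * M_qp = 1
assert abs(Mpp * Mqq - Mpq * Mqp - 1.0) < 1e-12

# the Hamilton function with Delta^ = -M_pq / M_qp is exactly conserved
Dhat = -Mpq / Mqp                        # = Delta * (1 - eps**2 * Delta / 4)
p, q = 0.3, 1.1
H0 = 0.5 * p**2 + 0.5 * Dhat * q**2
for _ in range(1000):
    p, q = Mpp * p + Mpq * q, Mqp * p + Mqq * q
H1 = 0.5 * p**2 + 0.5 * Dhat * q**2
assert abs(H1 - H0) < 1e-9
```

Both conditions of the text (non-singular \(M_{qp}\), positive definite \(\hat{\Delta }\)) are satisfied here, since \(\epsilon ^2\Delta =0.12<4\).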

Convergence of the leading-order process

For any initial distribution of the momenta and coordinates, the SMD algorithm produces a sequence of distributions, which converges to the stationary distribution (4.8) in the free theory. One can show this by working out the action of an SMD update cycle on a given distribution, but the convergence of the algorithm may be established more easily starting from the identity

$$\begin{aligned} \left( { \begin{array}{l} {p(t)}\\ {q(t)} \end{array}} \right) = {\tilde{M}^{t/\epsilon }}\left( { \begin{array}{l} {p(0)}\\ {q(0)} \end{array}} \right) + {c_2}\sum \limits _{u = 0}^{t - \epsilon } {{{\tilde{M}}^{(t - u)/\epsilon - 1}}} M\left( { \begin{array}{l} {\upsilon (u)}\\ 0 \end{array}} \right) , \end{aligned}$$

where \(\upsilon (u)\) is the momentum chosen randomly in the momentum rotation (2.4) at simulation time \(u=0,\epsilon ,2\epsilon ,\ldots \) and

$$\begin{aligned} {\tilde{M}} = M\left( { \begin{array}{ll} {{c_1}}&{}\quad 0\\ 0&{}\quad 1\\ \end{array}} \right) . \end{aligned}$$

The long-time behaviour of the momenta and coordinates thus depends on the properties of the matrix \(\tilde{M}\).

The blocks \(M_{pp},M_{pq},M_{qp}\) and \(\hat{\Delta }\) are commuting real symmetric matrices. Since

$$\begin{aligned} M_{pp}^2=1-M_{qp}\hat{\Delta }M_{qp} \end{aligned}$$

and since \(\hat{\Delta }\) is assumed to be positive definite, the eigenvalues of \(M_{pp}\) have magnitude strictly less than 1. The eigenvalues of \(\tilde{M}\) are then

$$\begin{aligned} \lambda _{\pm }=\frac{1}{2} \left\{ (1+c_1)\mu \pm \sqrt{(1+c_1)^2\mu ^2-4c_1} \right\} , \end{aligned}$$

where \(\mu \) runs through the eigenvalues of \(M_{pp}\). In particular, \(|\lambda _{\pm }|<1\) and \(\tilde{M}\) is thus a contraction matrix.
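The contraction property \(|\lambda _{\pm }|<1\) for all \(|\mu |<1\) is readily confirmed numerically; the parameter values below are illustrative:

```python
import numpy as np

def smd_eigenvalues(mu, c1):
    """Eigenvalues lambda_pm of M~ for an eigenvalue mu of M_pp."""
    disc = np.sqrt(complex((1 + c1)**2 * mu**2 - 4 * c1))
    return (0.5 * ((1 + c1) * mu + disc), 0.5 * ((1 + c1) * mu - disc))

c1 = np.exp(-0.3 * 0.2)      # gamma = 0.3, eps = 0.2 (illustrative)
contract = all(abs(lam) < 1
               for mu in np.linspace(-0.99, 0.99, 199)
               for lam in smd_eigenvalues(mu, c1))
print(contract)              # True: M~ contracts for every |mu| < 1
```

The product \(\lambda _{+}\lambda _{-}=c_1\) holds identically, so in the complex case \(|\lambda _{\pm }|=\sqrt{c_1}<1\), while in the real case both roots lie strictly between \(-1\) and \(1\).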

The first term on the right of Eq. (4.9) consequently dies away exponentially with increasing simulation time t. Since the random momenta are normally distributed, the momenta and coordinates at large t are then normally distributed as well, with mean zero and variances equal to their two-point autocorrelation functions. In the large-time limit, the qq autocorrelation function, for example, is given by

$$\begin{aligned}&\langle q(t)q(s)\rangle _{\upsilon } \mathrel {\mathop =_{t\ge s}} \left\{ \tilde{M}^{(t-s)/\epsilon }K \right\} _{qq},\end{aligned}$$
$$\begin{aligned}&K=c_2^2 \sum _{u=0}^{\infty } \tilde{M}^{u/\epsilon }MP_{+} \left\{ \tilde{M}^{u/\epsilon }M \right\} ^T, \qquad {P_ + } = \left( { \begin{array}{ll} 1&{}\quad 0\\ 0&{}\quad 0 \end{array}} \right) , \end{aligned}$$

and the other two-point functions by the pp, pq and qp blocks of the matrix on the right of Eq. (4.13). The kernel K satisfies

$$\begin{aligned} {\tilde{M}}K{\tilde{M}}^{T}=K-{c_{2}}^{2}MP_{+}{M^{T}}, \end{aligned}$$

i.e. an inhomogeneous linear equation that has a unique solution, since \(\tilde{M}\) is a contraction matrix. A few lines of algebra then show that

$$\begin{aligned} K = \left( { \begin{array}{ll} 1&{}\quad 0\\ 0&{}\quad {{{\hat{\Delta }}^{ - 1}}} \end{array}} \right) \end{aligned}$$

solves the equation. In particular, the variances of the momenta and coordinates at any fixed simulation time coincide with the ones of the stationary distribution (4.8), which proves that the latter coincides with the distribution simulated by the SMD algorithm.
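A quick single-mode check that this K indeed solves the fixed-point equation (leapfrog blocks as before, illustrative parameter values):

```python
import numpy as np

# single-mode check of the fixed-point equation for K
eps, Delta, gamma = 0.2, 3.0, 0.5
c1 = np.exp(-gamma * eps); c2 = np.sqrt(1.0 - c1**2)

Mpp = Mqq = 1.0 - 0.5 * eps**2 * Delta            # leapfrog blocks
Mpq = -eps * Delta * (1.0 - 0.25 * eps**2 * Delta)
Mqp = eps
M = np.array([[Mpp, Mpq], [Mqp, Mqq]])
Mt = M @ np.diag([c1, 1.0])                       # M~ = M diag(c1, 1)
Pp = np.diag([1.0, 0.0])                          # noise enters via the momenta

Dhat = -Mpq / Mqp
K = np.diag([1.0, 1.0 / Dhat])

lhs = Mt @ K @ Mt.T
rhs = K - c2**2 * (M @ Pp @ M.T)
assert np.allclose(lhs, rhs)                      # K solves the equation
```

The check works because \(MKM^T=K\) follows from the relations among the blocks, so that the momentum rotation alone accounts for the inhomogeneous term.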

Convergence beyond the leading order

The assumed structure of the action S(q) implies that the force in the molecular-dynamics integration step (3.3) is of the form

$$\begin{aligned} \hat{F}_{r}(\hat{q}_0,\ldots ,\hat{q}_r)= \Delta \hat{q}_r+\hat{F}'_r(\hat{q}_0,\ldots ,\hat{q}_{r-1}), \end{aligned}$$

where the second term on the right is a polynomial in the coordinates up to order \(r-1\). If the history of the latter and the associated momenta is already known (including their values at the intermediate integration steps), the integration of the molecular-dynamics equations at order r thus amounts to solving an inhomogeneous linear recursion. A moment of thought then reveals that

$$\begin{aligned}&\left( { \begin{array}{l} {{{\hat{p}}_r}(t)}\\ {{{\hat{q}}_r}(t)} \end{array}} \right) = {\tilde{M}^{t/ \epsilon }}\left( { \begin{array}{l} {{{\hat{p}}_r}(0)}\\ {{{\hat{q}}_r}(0)} \end{array}} \right) \nonumber \\&\quad \ + \sum \limits _{u = \epsilon }^t {{{\tilde{M}}^{(t - u)/ \epsilon }}} {\hat{V}_r}({\hat{p}_0}(u), \ldots ,{\hat{q}_{r - 1}}(u)) \end{aligned}$$

for all \(r\ge 1\). The “vertices” \(\hat{V}_r(\hat{p}_0,\hat{q}_0;\ldots ;\hat{p}_{r-1},\hat{q}_{r-1})\) in this formula are polynomials in their arguments, whose exact form depends on both the integration scheme and the forces (4.17).

Recalling Eq. (4.9) and the fact that \(\tilde{M}\) is a contraction matrix, the convergence of the autocorrelation functions of the momenta and coordinates at large times t may now be shown recursively from order 0 to any finite order r. Equation (4.18) actually allows the highest-order variables in any correlation function to be expressed through lower-order ones up to an exponentially decaying contribution. Clearly, in the large-time limit, the autocorrelation functions do not depend on the initial distribution of the variables and are stationary, i.e. invariant under time translations.

The expectation values of the coefficients in the perturbation expansion (3.6) of the observables coincide with a sum of autocorrelation functions of the coordinates at equal times. Their convergence at large times is therefore guaranteed as well.


The discussion in this section shows that the SMD algorithm converges to all orders of perturbation theory if the matrix \(\Delta \) is strictly positive and if \(\epsilon ^2\left\| \Delta \right\| <\kappa \), where \(\kappa \) depends on the molecular-dynamics integrator. In the case of the leapfrog, the second-order OMF and the fourth-order OMF integrators, \(\kappa \) is equal to 4, 6.51 and 9.87, respectively (cf. Appendix A).

Stochastic perturbation theory in lattice QCD

With respect to the generic system considered so far, the situation in lattice QCD is complicated by the gauge symmetry and the quark fields. In this section, stochastic perturbation theory is first set up for the pure \(\mathrm{SU}(N)\) gauge theory. The modifications required for the damping of the gauge modes are then discussed and the section ends with a brief description of how the quarks can be included in the simulations.

Lattice fields

The lattice theory is set up on a \(T\times L^3\) lattice with periodic boundary conditions in the space directions and Schrödinger functional (SF) [13, 14] or open-SF [15] boundary conditions in the time direction. In both cases the link variables \(U(x,\mu )\) satisfy

$$\begin{aligned} U(x,k)=1,\quad k=1,2,3, \end{aligned}$$

at time \(x_0=T\) and additionally at \(x_0=0\) if SF boundary conditions are chosen. The notation used in the following coincides with the one previously employed in Ref. [15]. In particular, all dimensionful quantities are expressed in units of the lattice spacing.

The momenta \(\pi (x,\mu )\) of the link variables \(U(x,\mu )\) take values in the Lie algebra \(\mathfrak {su}(N)\) of the gauge group. They vanish at the boundaries of the lattice, where the link variables are frozen to unity. The scalar product of any two momentum fields,

$$\begin{aligned} (\pi ,\upsilon )= \sum _{x,\mu }w_{x,\mu }^{-1}\pi ^a(x,\mu )\upsilon ^a(x,\mu ), \end{aligned}$$

includes a conventional weight factor

$$\begin{aligned} {w_{x,\mu }} = \left\{ { \begin{array}{ll} 2&{}\quad {\mathrm{if}\,\mu > 0\,\,\mathrm{and}\,{x_0} = 0\,\mathrm{or}\,{x_0} = T}\\ 1&{}\quad {\mathrm{otherwise}} \end{array}} \right. \end{aligned}$$

which will reappear below in various expressions.

Infinitesimal gauge transformations are fields \(\omega (x)\) of \(\mathfrak {su}(N)\) elements defined on the sites x of the lattice. They vanish at \(x_0=T\) and must, furthermore, be constant at \(x_0=0\) if SF boundary conditions are imposed. In the latter case, the fields may be split in two parts according to

$$\begin{aligned} \omega (x)=\left( 1-x_0/T\right) \omega (0)+\hat{\omega }(x), \end{aligned}$$

where \(\hat{\omega }(x)\) vanishes at \(x_0=0,T\) and is otherwise unconstrained. A possible choice of scalar product is then

$$\begin{aligned} (\omega ,\nu )=TL^3\omega ^a(0)\nu ^a(0)+\sum _x\hat{\omega }^a(x)\hat{\nu }^a(x), \end{aligned}$$

while in the case of open-SF boundary conditions

$$\begin{aligned} (\omega ,\nu )=\sum _xw_x^{-1}\omega ^a(x)\nu ^a(x) \end{aligned}$$

with \(w_x=2\) at \(x_0=0,T\) and \(w_x=1\) elsewhere.

Independently of the boundary conditions imposed on the gauge field, the quark fields are required to vanish at both \(x_0=0\) and \(x_0=T\). No weight factor is included in the scalar product of these fields.

Basic stochastic process

In the absence of quark fields, the SMD algorithm proceeds along the lines of Sect. 2. For the action S(U) of the gauge field any of the frequently used ones may be taken. The Hamilton function from which the SMD algorithm derives is then given by

$$\begin{aligned} H(\pi ,U)=\frac{1}{2}(\pi ,\pi )+S(U). \end{aligned}$$

In particular, the random field in the momentum rotation

$$\begin{aligned} \pi (x,\mu )\rightarrow c_1\pi (x,\mu )+c_2\upsilon (x,\mu ) \end{aligned}$$

must be distributed with probability density proportional to \(\mathrm{e}^{-{1\over 2}(\upsilon ,\upsilon )}\).

If the weight factor \(w_{x,\mu }\) in the scalar product (5.2) is also included in the symplectic structure (i.e. the Poisson bracket), the molecular-dynamics equations assume the form

$$\begin{aligned}&\partial _t\pi (x,\mu )=-g_0w_{x,\mu }(\partial ^a_{x,\mu }S)(U)T^a,\nonumber \\&\partial _tU(x,\mu )=g_0\pi (x,\mu )U(x,\mu ). \end{aligned}$$

For later convenience, the expressions on the right of the equations have been scaled by the bare gauge coupling \(g_0\), an operation that could be undone by rescaling the simulation time. The integration steps (2.8) and (2.9) are then given by

$$\begin{aligned}&I_{\pi ,h}:\;\pi (x,\mu )\rightarrow \pi (x,\mu ) -hg_0w_{x,\mu }(\partial ^a_{x,\mu }S)(U)T^a,\nonumber \\ \end{aligned}$$
$$\begin{aligned}&I_{U,h}:\;U(x,\mu )\rightarrow \mathrm{e}^{hg_0\pi (x,\mu )}U(x,\mu ). \end{aligned}$$

Since it will be omitted in perturbation theory, the acceptance–rejection step needed to correct for the integration errors is not described here.

Perturbation expansion

In perturbation theory, the link variables are represented by a gauge potential \(A_{\mu }(x)\) through

$$\begin{aligned}&U(x,\mu )=\exp \{g_0A_{\mu }(x)\}= 1+\sum \limits _{r=0}^{\infty }g_0^{r+1}\hat{U}_r(x,\mu ), \nonumber \\&\hat{U}_0(x,\mu )=A_{\mu }(x). \end{aligned}$$

The gauge potential takes values in \(\mathfrak {su}(N)\) and satisfies the same boundary conditions as the momentum field

$$\begin{aligned} \pi (x,\mu )=\sum \limits _{r=0}^{\infty }g_0^r\hat{\pi }_r(x,\mu ). \end{aligned}$$

When the gauge and momentum fields are replaced by these expansions, the SMD algorithm leads to a hierarchy of stochastic equations as in the case of the generic system considered in Sect. 3.
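For a single link, the coefficients \(\hat{U}_r\) may be generated from the expanded gauge potential by truncated series arithmetic. The sketch below uses helper names of our choosing and an arbitrary anti-Hermitian traceless matrix for illustration:

```python
import numpy as np
from math import factorial

def series_mul(a, b):
    """Truncated product of two power series with matrix coefficients."""
    n = len(a)
    c = [np.zeros_like(a[0]) for _ in range(n)]
    for r in range(n):
        for s in range(n - r):
            c[r + s] = c[r + s] + a[r] @ b[s]
    return c

def u_coeffs(Ahat, rmax):
    """U_r in U = exp(g0 A) = 1 + sum_r g0^(r+1) U_r, A = sum_r g0^r Ahat[r]."""
    N = Ahat[0].shape[0]
    n = rmax + 2                          # keep orders g0^0 .. g0^(rmax+1)
    zero = np.zeros((N, N), dtype=complex)
    gA = ([zero] + [np.asarray(A, dtype=complex) for A in Ahat])[:n]
    gA = gA + [zero] * (n - len(gA))      # series coefficients of g0 * A
    expo = [np.eye(N, dtype=complex)] + [zero] * (n - 1)
    power = [np.eye(N, dtype=complex)] + [zero] * (n - 1)
    for k in range(1, n):
        power = series_mul(power, gA)     # (g0 A)^k as a truncated series
        expo = [expo[r] + power[r] / factorial(k) for r in range(n)]
    return expo[1:]                       # U_r = coefficient of g0^(r+1)

A0 = np.array([[0.3j, 0.2], [-0.2, -0.3j]])   # su(2)-like: anti-Hermitian, traceless
A1 = np.array([[0.0, 0.1j], [0.1j, 0.0]])
U = u_coeffs([A0, A1], 1)                 # U[0] = A0, U[1] = A1 + A0 @ A0 / 2
```

At the lowest orders one recovers \(\hat{U}_0=\hat{A}_0\) and \(\hat{U}_1=\hat{A}_1+\frac{1}{2}\hat{A}_0^2\), as follows directly from the exponential map.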

At lowest order in the coupling, the momentum is rotated according to Eq. (5.8) (with \(\hat{\pi }_0\) in place of \(\pi \)) and the integration steps (5.10), (5.11) become

$$\begin{aligned}&I_{\hat{\pi }_0,h}:\;\hat{\pi }_0(x,\mu )\rightarrow \hat{\pi }_0(x,\mu )-h(\Delta \hat{U}_0)(x,\mu ),\end{aligned}$$
$$\begin{aligned}&I_{\hat{U}_0,h}:\;\hat{U}_0(x,\mu )\rightarrow \hat{U}_0(x,\mu )+h\hat{\pi }_0(x,\mu ), \end{aligned}$$

where \(\Delta \) is the symmetric linear operator in the leading-order expression

$$\begin{aligned} S_0(U)=\frac{1}{2}(A,\Delta A) \end{aligned}$$

for the gauge action.

Damping of the gauge modes

The space \(\mathcal{H}_1\) of gauge potentials may be split into the subspace \(\mathcal{H}_1^L\) of gauge modes and its orthogonal complement \(\mathcal{H}_1^T\). There is a one-to-one correspondence between the infinitesimal gauge transformations, introduced in Sect. 5.1, and the gauge modes through the forward difference operator

$$\begin{aligned} (d\nu )(x,\mu )=\partial _{\mu }\nu (x), \end{aligned}$$

which maps any field \(\nu \) in the space \(\mathcal{H}_0\) of infinitesimal gauge transformations to a field \(d\nu \in \mathcal{H}_1^L\). The adjoint operator \(d^{*}\) goes in the opposite direction and is defined by the requirement that

$$\begin{aligned} (d\nu ,A)=(\nu ,d^{*}\!A) \end{aligned}$$

for all \(\nu \in \mathcal{H}_0\) and \(A\in \mathcal{H}_1\). In particular, the subspace \(\mathcal{H}_1^T\) coincides with the space of gauge potentials A satisfying \(d^{*}\!A=0\) (\(d^{*}\) is given explicitly in Appendix B).
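In one dimension, and with the weight factors set to unity for simplicity, the pair \(d,d^{*}\) and the defining adjoint relation can be sketched as follows (the helper names are ours):

```python
import numpy as np

T = 8
# nu lives on the sites x0 = 0..T (vanishing at x0 = T);
# A lives on the links x0 = 0..T-1; weight factors are set to unity here.
def d(nu):
    return nu[1:] - nu[:-1]                       # forward difference

def d_star(A):
    out = np.zeros(T + 1)
    out[:-1] -= A                                 # -A(x)
    out[1:] += A                                  # +A(x-1)
    return out                                    # backward difference (adjoint)

rng = np.random.default_rng(7)
nu = rng.standard_normal(T + 1); nu[T] = 0.0
A = rng.standard_normal(T)
assert np.isclose(np.dot(d(nu), A), np.dot(nu, d_star(A)))   # (d nu, A) = (nu, d* A)
```

With the weight factors of Eqs. (5.2), (5.6) included, \(d^{*}\) picks up the corresponding factors; the construction is otherwise identical.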

Since the gauge modes are annihilated by \(\Delta \), the \(\mathcal{H}_1^L\)-component of the leading-order gauge field \(\hat{U}_0\) performs a random walk and thus slowly runs away in the course of a simulation. Stability can be regained by applying a gauge transformation

$$\begin{aligned}&\pi (x,\mu )\rightarrow \Lambda (x)\pi (x,\mu )\Lambda (x)^{-1}, \nonumber \\&\Lambda (x)=\exp \{\epsilon g_0\omega (x)\},\end{aligned}$$
$$\begin{aligned}&U(x,\mu )\rightarrow \Lambda (x)U(x,\mu )\Lambda (x+\hat{\mu })^{-1}, \end{aligned}$$

to the fields after each SMD update cycle, where \(\omega \in \mathcal{H}_0\) is a new field that is evolved together with the other fields. There are two update steps for this field, the first,

$$\begin{aligned} \omega (x)\rightarrow c_1\omega (x), \end{aligned}$$

being applied together with the momentum rotation and the second,

$$\begin{aligned} \omega (x)\rightarrow & {} \omega (x)+\epsilon \lambda _0(d^{*} C)(x), \end{aligned}$$
$$\begin{aligned} C_{\mu }(x)= & {} {1\over 2g_0} \left\{ U(x,\mu )-U(x,\mu )^{-1} -{1\over N}\mathrm{tr}\left[ U(x,\mu )-U(x,\mu )^{-1} \right] \right\} , \end{aligned}$$

at the end of the molecular-dynamics evolution. The parameter \(\lambda _0>0\) controls the feedback from the current gauge field to the gauge-damping field \(\omega (x)\) and can, in principle, be set to any value. Clearly, the history of gauge-invariant observables is not affected by these modifications of the SMD algorithm, but as discussed below (in Sect. 5.5), they have the desired damping effect on the gauge modes.

Like the other fields, the gauge-damping field is expanded in a series

$$\begin{aligned} \omega (x)=\sum _{r=0}^{\infty }g_0^r\hat{\omega }_r(x) \end{aligned}$$

in perturbation theory. At leading order, the new update steps are then

$$\begin{aligned}&\hat{\omega }_0(x)\rightarrow c_1\hat{\omega }_0(x),\end{aligned}$$
$$\begin{aligned}&\hat{\omega }_0(x)\rightarrow \hat{\omega }_0(x)+\epsilon \lambda _0(d^{*}\hat{U}_0)(x),\end{aligned}$$
$$\begin{aligned}&\hat{U}_0(x,\mu )\rightarrow \hat{U}_0(x,\mu )-\epsilon (d\hat{\omega }_0)(x,\mu ), \end{aligned}$$

and the higher-order rules have the familiar hierarchical structure.

Long-time stationarity of the process

Although the stochastic process now includes variables without associated momenta, the discussion in Sect. 4 applies here too with only minor changes. In particular, the convergence of the process to a unique stationary state is guaranteed to all orders of the coupling, if the matrix \(\tilde{M}\) describing the evolution of the fields \((\hat{\pi }_0,\hat{U}_0,\hat{\omega }_0)\) at leading order is a contraction matrix.

Since the gauge modes are zero modes of \(\Delta \), the subspace of field vectors

$$\begin{aligned} (\hat{\pi }_0,\hat{U}_0,\hat{\omega }_0)= (d\hat{\nu }_0,d\hat{\sigma }_0,\hat{\omega }_0), \qquad \hat{\nu }_0,\hat{\sigma }_0\in \mathcal{H}_0, \end{aligned}$$

is invariant under the action of \(\tilde{M}\). When restricted to its orthogonal complement (which is invariant too), the matrix describes the evolution of the \(\mathcal{H}_1^T\)-components of \(\hat{\pi }_0,\hat{U}_0\). The chosen boundary conditions (SF or open-SF) imply the strict positivity of \(\Delta \) in this subspace and \(\tilde{M}\) is therefore a contraction matrix there if

$$\begin{aligned} \epsilon ^2\Vert \Delta \Vert <\kappa , \end{aligned}$$

where \(\kappa \) depends on the molecular-dynamics integrator (cf. Sect. 4.4).

In the subspace (5.28) of the gauge modes, on the other hand, the action of \(\tilde{M}\) amounts to applying the linear transformation

$$\begin{aligned} \left( \begin{array}{l} \hat{\nu }_0\\ \hat{\sigma }_0\\ \hat{\omega }_0 \end{array}\right) \rightarrow \left( \begin{array}{lll} c_1 &{}\quad 0 &{}\quad 0 \\ \epsilon c_1 &{}\quad 1-\epsilon ^2\lambda _0d^{*}{d} &{}\quad -\epsilon c_1 \\ 0 &{}\quad \epsilon \lambda _0d^{*}{d} &{}\quad c_1 \end{array}\right) \left( \begin{array}{l} \hat{\nu }_0\\ \hat{\sigma }_0\\ \hat{\omega }_0\\ \end{array}\right) \end{aligned}$$

to the fields. The associated eigenvalues are equal to \(c_1\) or to

$$\begin{aligned} \frac{1}{2}\left\{ b\pm \sqrt{b^2-4c_1} \right\} , \qquad b=1+c_1-\epsilon ^2\lambda _0\mu ^2, \end{aligned}$$

where \(\mu ^2\) is any eigenvalue of \(d^{*}{d}\). This operator has no zero modes and some simple estimates then show that \(\tilde{M}\) is a contraction matrix in the subspace of gauge modes, provided the bound

$$\begin{aligned} \epsilon ^2\lambda _0\Vert d^{*}{d}\Vert <2(1+c_1) \end{aligned}$$

holds.

Convergence of the stochastic process to equilibrium is thus guaranteed if the inequalities (5.29) and (5.32) are both satisfied. With the chosen boundary conditions,

$$\begin{aligned} \Vert \Delta \Vert \le 16k, \qquad \Vert d^{*}{d} \Vert \le 16, \end{aligned}$$

where k is equal to 1, 5/3 and 3.648 in the case of the Wilson [16], tree-level Symanzik-improved [17, 18] and Iwasaki [19] gauge actions, respectively. In practice, the constraints (5.29) and (5.32) therefore tend to be fairly mild, and the main concern is to ensure that the integration errors are sufficiently small at the chosen values of \(\epsilon \).
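The contraction property in the gauge-mode subspace can be probed numerically from the transformation matrix given above, with \(\mu ^2\) running over the allowed range \(\Vert d^{*}d\Vert \le 16\); the parameter values in the sketch below are illustrative:

```python
import numpy as np

eps, gamma, lam0 = 0.1, 0.3, 2.0
c1 = np.exp(-gamma * eps)
assert eps**2 * lam0 * 16.0 < 2 * (1 + c1)     # bound (5.32) with ||d*d|| <= 16

ok = True
for mu2 in np.linspace(0.1, 16.0, 160):        # eigenvalues mu^2 of d*d
    Mg = np.array([[c1,       0.0,                     0.0],
                   [eps * c1, 1 - eps**2 * lam0 * mu2, -eps * c1],
                   [0.0,      eps * lam0 * mu2,        c1]])
    ok = ok and np.all(np.abs(np.linalg.eigvals(Mg)) < 1)
print(ok)
```

The eigenvalues are \(c_1\) together with the pair quoted in Eq. (5.31), so all magnitudes stay below 1 whenever the bound (5.32) is respected.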

Inclusion of the quark fields

As in the case of the HMC algorithm [4], the effects of the quarks can be included in SMD simulations by adding a multiplet of pseudo-fermion fields to the theory with the appropriate action. The details are not important here and it suffices to know that the action is a sum of terms, one per pseudo-fermion field \(\phi \), of the form

$$\begin{aligned} S_\mathrm{pf}(U,\phi )=(R(U)\phi ,R(U)\phi ), \end{aligned}$$

with R(U) being some gauge-covariant linear operator (see Ref. [20], for example).

Only a few modifications of the SMD update cycle are required in the presence of the pseudo-fermion fields. Similarly to the momentum field, each pseudo-fermion field is rotated according to

$$\begin{aligned} \phi (x)\rightarrow c_1\phi (x)+c_2(R(U)^{-1}\eta )(x) \end{aligned}$$

at the beginning of the cycle, where \(\eta \) is a Gaussian random field with mean zero and unit variance. The molecular-dynamics evolution of the gauge and momentum field then proceeds at fixed pseudo-fermion fields, with the contribution of the pseudo-fermion action to the driving force properly taken into account. Clearly, the gauge transformation (5.19), (5.20) applied at the end of the update cycle must be extended to the pseudo-fermion fields.

When the algorithm is expanded in powers of the coupling \(g_0\), the renormalization of the quark masses should be taken into account so that the masses in the leading-order stochastic equations are the renormalized ones. The pseudo-fermion fields decouple from the other fields at lowest order and are simulated exactly by the random rotation (5.35). Convergence of the stochastic process to a unique stationary state is then again guaranteed to all orders, if the bounds (5.29) and (5.32) are satisfied.

Computation of the gradient-flow coupling

The gradient-flow coupling in finite volume with SF boundary conditions has recently been used in step scaling studies of three-flavour QCD [22, 23]. Such calculations serve to relate the low-energy properties of the theory to the high-energy regime, where contact with the standard QCD parameters and matrix elements can be made in perturbation theory [24] (see Ref. [25] for a review).

In the following, the perturbation expansion of the gradient-flow coupling is determined in the \(\mathrm{SU(3)}\) Yang–Mills theory up to two-loop order, using the SMD-variant of NSPT described in the previous section. To one-loop order, the expansion coefficient in the \(\overline{\mathrm{MS}}\) scheme of dimensional regularization has been known in infinite volume for some time [26], but extending this calculation to the next order [27] required a huge effort and the best currently available techniques. In finite volume with SF boundary conditions, these techniques do not apply and a similar analytical computation may be practically infeasible at present.

Definition of the coupling

The renormalized coupling considered in this paper belongs to a family of couplings based on the Yang–Mills gradient flow. Explicitly, it is given by [21]

$$\begin{aligned} \bar{g}^2=k\left\{ t^2\langle E(t,x)\rangle \right\} _{T=L,x_0=L/2,\sqrt{8t}=cL}, \end{aligned}$$

where \(E(t,x)\) denotes the Yang–Mills action density at flow time t and position x, c is a parameter of the scheme and the proportionality constant \(k\) is determined by the requirement that \(\bar{g}^2\) coincides with \(g_0^2\) to lowest order of perturbation theory. Most of the time c will be set to 0.3, which implies a localization range of the action density of about \(0.3\times L\).
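The kinematics of the scheme can be made explicit in a few lines. The sketch below merely evaluates the scheme condition \(\sqrt{8t}=cL\); the normalization constant \(k\) is known in closed form [21] and is not reproduced here:

```python
import math

def flow_time(c, L):
    """Flow time t fixed by the scheme condition sqrt(8*t) = c*L."""
    return (c * L) ** 2 / 8.0

# with c = 0.3 the smoothing range sqrt(8*t) of the action density
# is a fixed fraction, 0.3, of the linear box size L
L = 16
t = flow_time(0.3, L)
print(math.sqrt(8.0 * t) / L)   # ~ 0.3
```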

On the lattice, the gradient flow is implemented as in Ref. [15]. The Wilson action, with boundary terms included so as to ensure the absence of O(a) lattice effects in the flow equation, is thus used to generate the flow. For the action density \(E(t,x)\) in Eq. (6.1), either the Wilson action density or the square of the familiar symmetric “clover” expression for the gauge-field tensor is inserted. Furthermore, alternative couplings, where the full action density is replaced by its spatial or time-like part, are considered as well.

All in all this makes six different action densities and couplings, labeled w, ws, wt, c, cs, and ct, where the letters stand for Wilson, clover, space and time, respectively (\(E^\mathrm{ct}\), for example, denotes the clover expression for the time-like part of the action density).

Expansion in powers of \(\alpha _s\)

The gradient-flow coupling has an expansion

$$\begin{aligned} \bar{g}^2=4\pi \left\{ \alpha _s+k_1\alpha _s^2+k_2\alpha _s^3+\cdots \right\} \end{aligned}$$

in powers of the running coupling \(\alpha _s\) in the \(\overline{\mathrm{MS}}\) scheme of dimensional regularization at momentum scale q, with coefficients \(k_1,k_2,\ldots \) depending on q and L. If q is scaled with L like

$$\begin{aligned} q=(cL)^{-1}, \end{aligned}$$

the coefficients are of order one and independent of L in the continuum theory [28].

In the following \(k_1\) and \(k_2\) are computed by combining the expansion

$$\begin{aligned} t^2\langle E(t,x)\rangle =E_0g_0^2+E_1g_0^4+E_2g_0^6+\cdots \end{aligned}$$

obtained in NSPT with the relation

$$\begin{aligned} \alpha _s=\alpha _0+d_1\alpha _0^2+d_2\alpha _0^3+\cdots , \quad \alpha _0={g_0^2\over 4\pi }, \end{aligned}$$

between \(\alpha _s\) and the bare coupling. For the Wilson gauge action (which is the action used in NSPT), the coefficients \(d_1\) and \(d_2\) are accurately known [29]. The coefficients \(k_1,k_2\) determined in this way depend on the spacing of the simulated lattices so that an extrapolation to the continuum limit is then still required.
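The recombination of the two series amounts to elementary truncated-power-series algebra. A minimal sketch, assuming the normalization \(k=1/E_0\) so that \(\bar g^2/4\pi =\alpha _0+c_1\alpha _0^2+c_2\alpha _0^3\) with \(c_1=4\pi E_1/E_0\) and \(c_2=(4\pi )^2E_2/E_0\) (the numbers passed to the function are illustrative placeholders, not simulation results):

```python
import math

def gf_coefficients(E0, E1, E2, d1, d2):
    """Coefficients k1, k2 of gbar^2 = 4*pi*(a_s + k1*a_s^2 + k2*a_s^3 + ...).

    Inputs: E0, E1, E2 from t^2<E> = E0*g0^2 + E1*g0^4 + E2*g0^6 (NSPT)
    and d1, d2 from a_s = a0 + d1*a0^2 + d2*a0^3, where a0 = g0^2/(4*pi).
    """
    fourpi = 4.0 * math.pi
    # gbar^2/(4*pi) = a0 + c1*a0^2 + c2*a0^3 with the normalization k = 1/E0
    c1 = fourpi * E1 / E0
    c2 = fourpi ** 2 * E2 / E0
    # invert the series a_s(a0): a0 = a_s - d1*a_s^2 + (2*d1^2 - d2)*a_s^3 + ...
    # and substitute it into the a0 expansion, dropping terms beyond a_s^3
    k1 = c1 - d1
    k2 = c2 - 2.0 * c1 * d1 + 2.0 * d1 ** 2 - d2
    return k1, k2

# purely illustrative numbers (not simulation results)
print(gf_coefficients(0.01, 0.002, 0.0005, -0.2, 0.1))
```

Inverting the series (6.4) gives \(\alpha _0=\alpha _s-d_1\alpha _s^2+(2d_1^2-d_2)\alpha _s^3+\cdots \); inserting this into the \(\alpha _0\) expansion yields the closed expressions used above.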

Computation of the coefficients \(E_k\) in NSPT

In order to cancel the O(a) lattice effects in \(E_k\), the action of the theory must include boundary counterterms at \(x_0=0\) and \(x_0=T\) with a tuned coupling [13]Footnote 3

$$\begin{aligned} c_\mathrm{t}=1+c_\mathrm{t}^{(1)}g_0^2+c_\mathrm{t}^{(2)}g_0^4+\cdots . \end{aligned}$$

In the case of the Wilson action,

$$\begin{aligned} c_\mathrm{t}^{(1)}=-0.08900(5), \qquad c_\mathrm{t}^{(2)}=-0.0294(3), \end{aligned}$$

as was shown long ago [30,31,32]. The counterterms lead to further terms in the forces that drive the molecular-dynamics evolution, but do not require a modification of the general framework described in Sect. 5. Alternatively, the effects of the counterterms can be included in the calculations by treating \(c_\mathrm{t}-1\) as a second coupling.

Once a representative sample of gauge configurations (5.12) has been generated, the stochastic estimation of the gradient-flow coupling requires \(E(t,x)\) to be expanded in powers of \(g_0\) for each of these configurations. To this end, the gauge field \(V_t(x,\mu )\) at flow time t is represented by a gauge potential \(B_{\mu }(t,x)\) according to

$$\begin{aligned} V_t(x,\mu )=\exp \{g_0B_{\mu }(t,x)\} =1+\sum _{r=0}^{\infty }g_0^{r+1}\hat{V}_{t,r}(x,\mu ). \end{aligned}$$

The numerical integration of the flow equation, using a Runge–Kutta integrator, for example, then amounts to applying a sequence of integration steps to the expansion coefficients of the field as in the case of the integration of the molecular-dynamics equations. Gauge damping is however not required here, since the linearized flow is transversal and leaves the gauge modes unchanged.
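Order by order, the exponential map above reduces to truncated power-series arithmetic, \(V=1+\sum _k S^k/k!\) with \(S=g_0B\). A minimal sketch with scalar coefficients (in the actual computation the coefficients are Lie-algebra-valued matrices and the products below become matrix products; the series is indexed here by the powers \(g_0^r\), \(r\ge 1\)):

```python
def series_mul(a, b, order):
    """Product of two series sum_r g^r a[r-1], truncated at g^order."""
    out = [0.0] * order
    for i, ai in enumerate(a, start=1):
        for j, bj in enumerate(b, start=1):
            if i + j <= order:
                out[i + j - 1] += ai * bj
    return out

def series_exp(s, order):
    """Coefficients V_r of exp(S) = 1 + sum_r g^r V_r for S = sum_r g^r s[r-1]."""
    out = list(s)          # k = 1 term, S/1!
    term = list(s)
    for k in range(2, order + 1):
        term = [c / k for c in series_mul(term, s, order)]  # S^k/k!
        out = [x + y for x, y in zip(out, term)]
    return out

# scalar check against exp(g*x): the coefficients must be x^r / r!
x, order = 0.5, 6
coeffs = series_exp([x] + [0.0] * (order - 1), order)
print([round(c, 8) for c in coeffs])
```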

Table 1 Simulation parameters

Simulation parameters and tables of results

The parameters of the NSPT runs performed for the “measurement” of \(E_0,E_1,E_2\) are listed in Table 1. In all cases, the gauge-damping parameter \(\lambda _0\) was set to 2 and the fourth-order OMF integrator was employed for the molecular-dynamics equations (cf. Appendix A). Measurements were made after every \(\Delta t_\mathrm{ms}/\epsilon \) update cycles using a third-order Runge–Kutta integrator for the gradient flow [26], with step size varying from 0.002 at small flow times to 0.1 at large times. With this scheme, the gradient-flow integration errors are guaranteed to be completely negligible with respect to the statistical errors. The number \(N_\mathrm{ms}\) of measurements made is listed in the last column of Table 1 (the programs that were used in these simulations can be downloaded from

At the chosen values of the parameters, the bounds (5.29) and (5.32) are satisfied by a wide margin so that the convergence of the SMD algorithm is rigorously guaranteed. The results obtained on the simulated lattices for the expansion coefficients \(k_1\) and \(k_2\) and their statistical errors are listed in Appendix D, for \(c\in \{0.2,0.3,0.4\}\) and all choices w,c,ws,\(\ldots \) of the action density (cf. Sect. 6.1).

Statistical and systematic errors

The values of \(k_1\) and \(k_2\) obtained in NSPT depend on the scheme parameter c, the lattice size L (in lattice units), the simulation step size \(\epsilon \) and the SMD parameter \(\gamma \). No attempt is made here to determine all these dependencies in detail. Instead, some basic facts and empirical results are discussed that helped to control the errors in the simulations reported in this paper (see Refs. [8, 33] for related complementary studies of the \(\phi ^4\) theory).

Autocorrelations and statistical variances

Usually the variances of the observables are a property of the simulated field theory and hence independent of the simulation algorithm. In NSPT this is not so, partly because the algorithm is not exact, but mainly because the square of the order-r coefficient \(\hat{\mathcal{O}}_r\) of an observable \(\mathcal{O}\) does not in general coincide with the order-2r coefficient of another observable. The time average of \(\hat{\mathcal{O}}_r^2\) and thus the variance of \(\hat{\mathcal{O}}_r\) are then not necessarily determined by the theory alone. For illustration, the dependence on \(\gamma \) of the variances of the coefficients in

$$\begin{aligned} {t^2\over L^3}\sum _{\varvec{x}}E(t,x) =\hat{E}_0g_0^2+\hat{E}_1g_0^4+\hat{E}_2g_0^6+\ldots \end{aligned}$$

(where \(x_0=L/2\) and \(\sqrt{8t}=cL\) as before) is shown in Fig. 1. As will be discussed in Sect. 7.3, the integration errors are negligible here; the rapid growth of the variances of the one- and two-loop coefficients seen at small \(\gamma \) is therefore entirely an effect of the change of algorithm.

Fig. 1
figure 1

Variances of \(\hat{E}^\mathrm{c}_k\) at fixed \(L=16\), \(c_\mathrm{t}=1\), \(c=0.3\) and \(\epsilon =0.238\) versus \(\gamma \). The dotted lines connecting the data points are drawn to guide the eye. Beyond \(\gamma =3\) the variances are practically constant

In practice one would like to minimize the computational work required to obtain the calculated coefficients to a given statistical precision. The simulation algorithm should thus be tuned so as to minimize the products

$$\begin{aligned} \tau _\mathrm{int}(\hat{\mathcal{O}}_r)\times \mathrm{var}(\hat{\mathcal{O}}_r) \end{aligned}$$

of the integrated autocorrelation times and variances of the order-r coefficients \(\hat{\mathcal{O}}_r\) of the observables \(\mathcal{O}\) of interest. Empirical studies show that the two factors often work against each other, i.e. algorithms tuned for small autocorrelations tend to give large variances and vice versa.

The autocorrelation times of the coefficients \(\hat{E}_k^\mathrm{c}\), for example, grow monotonically with \(\gamma \) (see Table 2). At the chosen point in parameter space, the associated products (7.2) are then minimized at values of \(\gamma \) around 0.5, 1.0 and 2.0 for \(k=0\), 1 and 2. The example shows that there may be no uniformly best choice of \(\gamma \), but a range of values, where all coefficients of interest are obtained reasonably efficiently.
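The quantities entering the product (7.2) are estimated from the measurement time series in the standard way. A minimal sketch (the autocorrelation sum is truncated at the first non-positive value of the normalized autocorrelation function; in production code a Madras–Sokal-type window would be preferable):

```python
import random

def tau_int_and_var(series):
    """Integrated autocorrelation time and variance of a time series."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    gamma0 = sum(d * d for d in dev) / n              # variance estimate
    tau = 0.5
    for t in range(1, min(n // 2, 200)):              # hard cap on the window
        gamma_t = sum(dev[i] * dev[i + t] for i in range(n - t)) / (n - t)
        rho = gamma_t / gamma0
        if rho <= 0.0:                                # simple truncation window
            break
        tau += rho
    return tau, gamma0

# AR(1) test process with known tau_int = (1 + a)/(2*(1 - a))
random.seed(7)
a, x, data = 0.7, 0.0, []
for _ in range(100000):
    x = a * x + (1.0 - a * a) ** 0.5 * random.gauss(0.0, 1.0)
    data.append(x)
tau, var = tau_int_and_var(data)
print(tau, var, tau * var)    # the last number is the cost figure of merit (7.2)
```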

Table 2 Autocorrelation times of \(\hat{E}_k^\mathrm{c}\) and associated products (7.2)\(^{\dagger }\)

Critical slowing down

The behaviour of the autocorrelation times and variances near the continuum limit \(L\rightarrow \infty \) depends on the simulation algorithm and the observables considered. When NSPT is based on the Langevin equation, the autocorrelations of the coefficients of multiplicatively renormalizable quantities can be shown to diverge proportionally to \(L^2\) [34, 35], while their variances grow at most logarithmically [36].

At large \(\gamma \), the variant of NSPT studied here is expected to behave similarly, since the SMD algorithm then effectively integrates the Langevin equation. Choosing \(\gamma \) to depend on L in some particular way may, however, conceivably lead to an improved scaling behaviour. In the free theory, for example, the autocorrelation times grow proportionally to L rather than \(L^2\) if \(\gamma \) is scaled like 1/L [5,6,7]. Beyond the leading order, the situation is complicated by the algorithm-dependence of the variances and the effects of the parameter tuning are then not easy to predict.
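The free-theory scaling can be checked without any simulation: for a single mode of frequency \(\omega \sim 1/L\), one SMD cycle acts linearly on \((q,p)\) and the autocorrelation time follows from the spectral radius of the corresponding \(2\times 2\) matrix. A sketch under simplifying assumptions (a single leapfrog step per cycle, partial momentum refresh with \(c_1=e^{-\gamma \epsilon }\)):

```python
import cmath, math

def smd_tau(omega, gamma, eps):
    """Autocorrelation time (in simulation-time units) of a free mode under
    one SMD cycle: momentum refresh p -> c1*p, followed by a leapfrog step."""
    c1 = math.exp(-gamma * eps)
    h2 = (eps * omega) ** 2
    # leapfrog map for H = p^2/2 + omega^2 q^2/2
    m00, m01 = 1.0 - h2 / 2.0, eps
    m10, m11 = -eps * omega ** 2 * (1.0 - h2 / 4.0), 1.0 - h2 / 2.0
    # one full cycle: A = M_leapfrog @ diag(1, c1)
    tr = m00 + m11 * c1
    det = c1 * (m00 * m11 - m01 * m10)    # = c1, since leapfrog is symplectic
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    rho = max(abs((tr + disc) / 2.0), abs((tr - disc) / 2.0))
    return -eps / math.log(rho)

eps = 0.1
for L in (16, 32):
    omega = 1.0 / L                        # slowest mode in a box of size L
    print(L, smd_tau(omega, 2.0, eps),     # fixed gamma: tau grows like L^2
          smd_tau(omega, 1.0 / L, eps))    # gamma ~ 1/L: tau grows like L
```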

Considering the fact that the computational cost of the measurements of \(\hat{E}_k\) tends to be larger than that of the SMD update cycles, the parameters of the runs on the large lattices (those with \(L=24,\ldots ,40\) in Table 1) were chosen so that subsequent measurements are practically uncorrelated. At fixed \(\gamma =3\) the required increase with L of the measurement time separation \(\Delta t_\mathrm{ms}\) then turned out to be quite moderate. Moreover, from \(L=24\) to \(L=40\), the variances of \(\hat{E}_k\) grow only slowly (at \(c=0.3\) and for \(k=0\), 1 and 2 by about 2, 19 and 30 percent).

Integration errors

As already noted in Appendix A, the theory is very accurately simulated at leading order if the fourth-order OMF integrator is used for the molecular-dynamics equations. The expectation values of the coefficient \(\hat{E}_0\) calculated in the runs listed in Table 1 in fact all agree with the known analytic formula [15, 21] within a relative statistical precision of about \(2\times 10^{-4}\).

Beyond the leading order, the integration errors remain difficult to detect in empirical studies (see Fig. 2). The stability bounds (5.29), (5.32) are respected in all these runs and the coefficients \(\hat{H}_0,\ldots ,\hat{H}_4\) of the Hamilton function H are accurately conserved, with deficits decreasing like \(\epsilon ^5\) (as has to be the case in the asymptotic regime of a fourth-order integrator). It thus seems safe to conclude that the integration errors in the tests reported in Fig. 2 are smaller than the statistical errors.

Apart from adjustments to match the target statistics, the step sizes in the runs listed in Table 1 were, for \(L\ge 24\), scaled proportionally to 1 / L so that the integration errors fall off like \(1/L^4\) at large L and thus much more rapidly than the leading lattice effects. Since the statistical errors are approximately the same in all runs, the scaling of the step sizes may be a luxury, but was applied here as a safeguard measure against an enhancement of systematic errors through the continuum extrapolation.

Fig. 2
figure 2

Dependence of \(k^\mathrm{c}_1\) and \(k^\mathrm{c}_2\) at \(L=24\), \(c_\mathrm{t}=1\) and \(c=0.3\) on the simulation step size \(\epsilon \) (note the log scale on the abscissa). In the range of data shown, \(\epsilon \) increases from 0.1 to 0.238. Each data point is based on approximately \(10^4\) statistically independent measurements of the coefficients \(\hat{E}^\mathrm{c}_0\), \(\hat{E}^\mathrm{c}_1\) and \(\hat{E}^\mathrm{c}_2\)

Extrapolation to the continuum limit

With \(\mathrm{O}(a)\)-improvement in place, the L-dependence of the one-loop coefficient \(k_1^\mathrm{w}\) is asymptotically given by [37, 38]

$$\begin{aligned} k^\mathrm{w}_1\mathrel {\mathop =_{L\rightarrow \infty }} a_0+\{a_1+b_1\ln L\}L^{-2}+\mathrm{O}(L^{-3}), \end{aligned}$$

where the leading-order term, \(a_0\), coincides with the coefficient \(k_1\) in the continuum theory. The available data are well described by this asymptotic expression down to \(L=10\) if \(c\ge 0.3\) and down to \(L=12\) if \(c=0.2\) (see Fig. 3). Note that the data points in Fig. 3 are uncorrelated, so that random fluctuations by more than one standard deviation are not improbable in a sample of 10 points. Plots of the other one-loop coefficients \(k_1^\mathrm{c},k_1^\mathrm{ws},\ldots \) look much the same.

Fig. 3
figure 3

Extrapolation of the simulation results for \(k_1^\mathrm{w}\) at \(c=0.3\) to the continuum limit (open point). The full line is obtained by fitting the data in the range \(10\le L\le 40\) with the asymptotic expression (7.3). A linear fit in the range \(16\le L\le 40\) (dashed grey line) lies practically on top of the full line

Even though only a short extrapolation is required to reach the continuum limit, there is no way the extrapolation errors can be rigorously assessed. Following standard practice, the errors are estimated by varying the fit procedure within reasonable bounds. In particular, fits are discarded if the quality of the fit is low or if the fit parameters are poorly determined. Another indication of the size of the extrapolation errors is provided by the deviations of the results obtained using the “w” and the “c” data, respectively. The errors of the results quoted in Table 3 include both the statistical and an estimate of the extrapolation errors.

Fig. 4
figure 4

Extrapolation of the simulation results for \(k_2^\mathrm{w}\) at \(c=0.3\) to the continuum limit (open point). The full line is obtained by fitting the data in the range \(10\le L\le 40\) according to Eq. (7.4), with the two lowest modes of the design matrix projected away. In the range \(16\le L\le 40\) the data can also be fitted by a straight line (dashed grey line)

Table 3 Values of \(k_1\) and \(k_2\) in the continuum limit

With increasing loop order, the continuum extrapolations tend to become more difficult, partly because statistical precision is lost and partly because the asymptotic formulae describing the leading lattice effects have more and more terms. In particular, compared to the one-loop coefficients, the two-loop coefficients are obtained about 10 times less precisely and their asymptotic form

$$\begin{aligned} k^\mathrm{w}_2=a_0+\{a_1+b_1\ln L+c_1(\ln L)^2\}L^{-2}+\mathrm{O}(L^{-3}) \end{aligned}$$

includes an additional logarithm. Combinations of the logarithms in this expression can all too easily mimic a constant in the range of the available data, yet blow up closer to the continuum limit and thus strongly influence the result of the extrapolation.

In the present context, the goal is not to determine the values of the coefficients \(a_1\), \(b_1\) and \(c_1\), but to perform a short extrapolation of the data to the continuum limit. A sensible way to stabilize fits of the data by the asymptotic expression (7.4) is then to constrain the minimization of the likelihood function to the directions in parameter space orthogonal to its flattest directions (see Fig. 4). The results quoted in Table 3 were obtained in this way and by varying the fit procedure, as in the case of the one-loop coefficients, in order to assess the extrapolation errors.
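A minimal sketch of such a constrained fit on synthetic data (pure Python, eigen-decomposition by Jacobi rotations; the flattest directions are identified with the smallest-eigenvalue directions of the normal matrix \(X^TX\), so that for exact data the truncated solution is simply the projection of the true parameters onto the retained subspace; an actual analysis would in addition take the data covariance into account):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def jacobi_eig(S, rotations=100):
    """Eigen-decomposition of a small symmetric matrix by Jacobi rotations."""
    n = len(S)
    A = [row[:] for row in S]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(rotations):
        p, q, big = 0, 1, 0.0                 # largest off-diagonal element
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    p, q, big = i, j, abs(A[i][j])
        if big == 0.0:
            break
        tau = (A[q][q] - A[p][p]) / (2.0 * A[p][q])
        t = (1.0 if tau >= 0.0 else -1.0) / (abs(tau) + math.sqrt(1.0 + tau * tau))
        c = 1.0 / math.sqrt(1.0 + t * t)
        G = [[float(i == j) for j in range(n)] for i in range(n)]
        G[p][p] = G[q][q] = c
        G[p][q], G[q][p] = t * c, -t * c
        A = matmul(matmul(list(map(list, zip(*G))), A), G)
        V = matmul(V, G)
    return [A[i][i] for i in range(n)], V     # columns of V are eigenvectors

def constrained_fit(Ls, ys, n_drop=2):
    """Fit a0 + (a1 + b1*lnL + c1*lnL^2)/L^2, projecting away the n_drop
    flattest directions (smallest eigenvalues) of the normal matrix."""
    X = [[1.0, L ** -2, math.log(L) * L ** -2, math.log(L) ** 2 * L ** -2]
         for L in Ls]
    S = [[sum(row[i] * row[j] for row in X) for j in range(4)] for i in range(4)]
    b = [sum(row[i] * y for row, y in zip(X, ys)) for i in range(4)]
    evals, V = jacobi_eig(S)
    keep = sorted(range(4), key=lambda i: evals[i], reverse=True)[:4 - n_drop]
    x = [0.0] * 4
    for i in keep:
        vi = [V[r][i] for r in range(4)]
        proj = sum(vi[r] * b[r] for r in range(4)) / evals[i]
        x = [xr + proj * vr for xr, vr in zip(x, vi)]
    return x                                  # x[0] estimates the continuum value

# synthetic data generated from the model itself, with a0 = 1.0
Ls = list(range(10, 42, 2))
ys = [1.0 + (0.5 + 0.3 * math.log(L) + 0.1 * math.log(L) ** 2) / L ** 2 for L in Ls]
print(constrained_fit(Ls, ys)[0])   # close to 1.0
```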

Miscellaneous remarks

Lattice effects. Since the smoothing range of the gradient flow decreases with c, it is no surprise that the coefficients \(k_1,k_2\) calculated in NSPT are found to be increasingly sensitive to lattice effects when c is lowered. The continuum extrapolation of the data then becomes more and more difficult and eventually requires larger lattices to be simulated.

Infinite-volume limit. The gradient-flow coupling in infinite volume runs with the flow time t and may be expanded in powers of \(\alpha _s\) at scale \(q=(8t)^{-1/2}\), as in Eq. (6.2), the one- and two-loop coefficients in the continuum limit being \(k_1=1.0978(1)\) [26] and \(k_2=-0.9822(1)\) [27]. In the finite-volume scheme considered in this paper, and after passing to the continuum limit, the infinite-volume limit is reached by sending c to zero. The results listed in Table 3 cannot be reliably extrapolated to \(c=0\), but the values of \(k_1,\ldots ,k^\mathrm{t}_2\) at \(c=0.2\) are actually already quite close to the infinite-volume values quoted above.

Three-loop \(\beta \) -function. The L-dependence of the gradient-flow coupling \(\alpha =\bar{g}^2/4\pi \) is governed by the renormalization group equation

$$\begin{aligned} L{\partial \alpha \over \partial L}= \beta _0\alpha ^2+\beta _1\alpha ^3+\beta _2\alpha ^4+\ldots , \end{aligned}$$

where \(\beta _0=11(2\pi )^{-1}\) and \(\beta _1=51(2\pi )^{-2}\) are the universal one- and two-loop coefficients of the Callan–Symanzik \(\beta \)-function. Using the results quoted in Table 3, the three-loop coefficient may be calculated in a few lines (see Table 4). The coefficient thus has the opposite sign and is significantly larger in magnitude than in the \(\overline{\mathrm{MS}}\) scheme, particularly so at \(c=0.4\) and if the spatial part of the Yang–Mills action density is inserted in the definition (6.1) of the coupling.
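The few lines in question implement the standard change-of-scheme relation: for \(\bar{\alpha }=\alpha +k_1\alpha ^2+k_2\alpha ^3+\cdots \), the first two coefficients of the \(\beta \)-function are unchanged, while \(\bar{\beta }_2=\beta _2^{\overline{\mathrm{MS}}}+\beta _0(k_2-k_1^2)-\beta _1k_1\). A sketch with placeholder inputs (the actual values of \(k_1\), \(k_2\) from Table 3 and of \(\beta _2^{\overline{\mathrm{MS}}}\) are not reproduced here):

```python
import math

beta0 = 11.0 / (2.0 * math.pi)         # universal one-loop coefficient
beta1 = 51.0 / (2.0 * math.pi) ** 2    # universal two-loop coefficient

def beta2_gf(beta2_ms, k1, k2):
    """Three-loop coefficient of the beta-function in the gradient-flow
    scheme, given the MS-bar coefficient and the matching coefficients."""
    return beta2_ms + beta0 * (k2 - k1 * k1) - beta1 * k1

# placeholder inputs, purely to illustrate the call
k1, k2, beta2_ms = 1.1, -0.98, 0.5
print(beta2_gf(beta2_ms, k1, k2) / beta0)   # the ratio rho_2 of Table 4
```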

Conclusions

In stochastic perturbation theory the fields are represented by a series of coefficient fields that solve the underlying stochastic equation order by order in the interactions. Beyond the leading order, the probability distributions of the coefficient fields are however not a priori known and actually depend on the chosen stochastic process. The variances of the observables of interest must consequently be expected to vary with the parameters of the simulation algorithm, an effect that tends to considerably complicate the situation compared to simulations based on importance sampling.

Table 4 Ratio \(\rho _2=\beta _2/\beta _0\) of coefficients of the \(\beta \)-function

The SMD algorithm provides a technically attractive basis for NSPT. Compared to the traditional version of NSPT [1,2,3], where the starting point is the Langevin equation, a significantly improved efficiency is achieved, particularly so near the continuum limit. Moreover, the available highly accurate integrators for the molecular-dynamics equations allow the integration errors to be easily kept under control. For the reasons mentioned above, some tuning of the friction parameter \(\gamma \) is however required and must take into account the variances of the observables of interest.

The inclusion of the quark fields in the SMD algorithm along the lines of Sect. 5 is straightforward and is not expected to slow down the simulations by a large factor [3]. In general, the cost of NSPT computations very much depends on the observables considered, the order in perturbation theory and the desired precision of the results.

The statistical errors of the expansion coefficients \(k_l\) of the gradient-flow coupling, for example, appear to grow rapidly with the loop order l. In practice some loss of precision from one order to the next is tolerable, since the coefficients get multiplied by \(\alpha _s^{l+1}\) in the perturbation series (6.2). An extension of the computations to three-loop order would however only make sense if \(k_1\) and \(k_2\) are recalculated with errors about an order of magnitude smaller than the ones quoted in Table 3. Furthermore, the relation between \(\alpha _s\) and the bare coupling would need to be worked out to three loops.