Inclusive rates from smeared spectral densities in the two-dimensional O(3) non-linear σ-model

This work employs the spectral reconstruction approach of ref. [1] to determine an inclusive rate in the 1 + 1 dimensional O(3) non-linear σ-model, analogous to the QCD part of e+e− → hadrons. The Euclidean two-point correlation function of the conserved current j is computed using Monte Carlo lattice field theory simulations for a variety of spacetime volumes and lattice spacings. The spectral density of this correlator is related to the inclusive rate for j → X in which all final states produced by the external current are summed. The ill-posed inverse problem of determining the spectral density from the correlation function is made tractable through the determination of smeared spectral densities, in which the desired density is convolved with a set of known smearing kernels of finite width ϵ. The smooth energy dependence of the underlying spectral density enables a controlled ϵ → 0 extrapolation in the inelastic region, yielding the real-time inclusive rate without reference to individual finite-volume energies or matrix elements. Systematic uncertainties due to cutoff effects and residual finite-volume effects are estimated and taken into account in the final error budget. After taking the continuum limit, the results are consistent with the known analytic rate to within the combined statistical and systematic errors. Although energies at which up to 20-particle states can contribute are probed, the overall precision is sufficient to discern the four-particle contribution to the spectral density.


Introduction
Markov chain Monte Carlo simulations of lattice QCD continue to provide a quantitative window into the strong nuclear force. However, the Euclidean metric signature required for Monte Carlo sampling of the QCD path integral complicates the study of real-time scattering processes. While the spatial volume dependence of energies and matrix elements has successfully been used as a probe of few-particle scattering and decay amplitudes [2,3], this approach is inapplicable at energies above arbitrary multi-particle thresholds and is restricted to the discrete set of center-of-mass energies that appear in a given finite-volume geometry. This last disadvantage hampers the continuum limit of amplitudes at fixed energy, which can only be achieved by maintaining a constant physical volume as the lattice spacing is decreased. Using the finite-volume formalism to determine inclusive rates is also difficult, as these can only be calculated by extracting all contributing amplitudes separately and summing the individual rates.

JHEP07(2022)034
An alternative approach, following the seminal work of ref. [6], is to use Euclidean correlation functions computed in lattice simulations to determine spectral densities, which are definitionally independent of the metric signature. This alternative is not subject to the same limitations as finite-volume methods and has been formally developed for inclusive rates mediated by an external current [7][8][9] as well as arbitrary scattering and transition amplitudes [10,11]. A major obstacle of this program is the solution of an ill-posed inverse problem to determine continuous spectral densities from correlation functions sampled on a finite set of time separations with statistical errors. In refs. [13,14] Backus and Gilbert deal with this by instead computing spectral densities smeared with a kernel that is known a posteriori. An important advancement by ref. [1] (see also refs. [15,16]) enables the smearing kernel to be specified a priori. A review and comparison of alternative spectral reconstruction techniques is beyond the scope of this work; see refs. [17,18].
Although these methods are not new, their application to compute inclusive rates and scattering amplitudes from actual Monte Carlo simulation data, with a detailed analysis of both statistical and systematic errors, has not yet been performed. It is therefore worthwhile to pursue such a test in a controlled setting. This work employs the spectral reconstruction strategy of ref. [1] to determine an inclusive rate in the two-dimensional O(3) non-linear σ-model. As in ref. [19], comparison is made with exact continuum results. However, our work overcomes the limitations mentioned above: the inclusive rate is computed above inelastic thresholds and the continuum limit taken at fixed center-of-mass energy without the need for a constant physical volume. Large physical simulation volumes are however required to control finite-volume effects in the smeared spectral density at fixed smearing width. After taking the continuum limit and accounting for systematic errors due to finite-volume effects, the desired spectral density is obtained by extrapolating the smearing width to zero.
For this first application we treat a process akin to e⁺e⁻ → hadrons, the QCD component of which is given by the spectral density

ρ^{µν}(k) = (1/2π) ∫ d⁴x e^{−ik·x} ⟨Ω| ĵ_em^µ(x) ĵ_em^ν(0) |Ω⟩ = (g^{µν} k² − k^µ k^ν) ρ(k²),   (1.1)

where the integral is performed in Minkowski space, ĵ_em^µ is the quark-level electromagnetic current, and the relation to the physical process is given by R(s) = 12π² ρ(s). This celebrated 'R-ratio' has a number of phenomenological applications including the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. It can be obtained via spectral reconstruction of the Euclidean correlator in the time-momentum representation

C(t) = −(1/3) Σ_{k=1,2,3} ∫ d³x ⟨Ω| ĵ_em^k(t, x) ĵ_em^k(0) |Ω⟩,

computed in lattice QCD simulations.³ Everything discussed in this work is readily transferable to the determination of ρ(s) from C(t) in lattice QCD. While computations of ρ(s) are naturally compared with experimental determinations of R(s), numerical results for the spectral density computed here are compared with the corresponding analytical predictions in figure 9, which is our main result. The remainder of this work is organized as follows. Section 2 defines the spectral density of interest in the O(3) model, outlines the reconstruction approach, and suggests a strategy for extrapolating the smearing width to zero. Section 3 examines the influence of the finite torus on which the simulations are performed, while section 4 defines the lattice regularization and discusses the continuum limit. Section 5 presents numerical results and section 6 concludes.

General framework in continuous infinite volume
This section introduces the two-dimensional O(3) non-linear σ-model and the Euclidean correlator C(t) with spectral density ρ(ω), which are both defined via the conserved vector current. We additionally review the algorithm of ref. [1] for systematically determining a smeared version of ρ(ω) from numerical estimates of C(t) and detail an extrapolation procedure for taking the smearing width to zero. In this section the Euclidean spacetime is assumed to be continuous and infinite in both directions. Peculiarities due to the finite simulation volume are discussed in section 3 and the lattice discretization in section 4.
The continuum Euclidean action of the two-dimensional O(3) model is defined as

S[σ] = (1/2g²) ∫ d²x ∂_µσ(x) · ∂_µσ(x),   (2.1)

where the 3-component real field σ^a(x) has unit length, σ(x) · σ(x) = 1. The O(3) model is asymptotically free,⁴ has a dynamically-generated mass gap m, and is integrable. It also possesses a global O(3) symmetry which rotates the field σ(x). The corresponding Noether current is given by

j_µ^c(x) = (1/g²) ε^{cab} σ^a(x) ∂_µσ^b(x),   (2.2)

where ε^{abc} is the Levi-Civita tensor and repeated indices are summed. This current transforms irreducibly under the I = 1 (fundamental) representation of O(3). The aim of this work is to reconstruct the spectral density ρ(ω) associated with j_µ^c using Euclidean correlation functions determined numerically from lattice Monte Carlo

³ As written, this expression holds for both the anti-hermitian definition of ĵ_em^z(0), conventionally used with the Euclidean metric, as well as the hermitian Minkowski current implicitly used in eq. (1.1).
⁴ The beta function in the MS scheme is given in ref. [20], where g_R denotes the renormalized coupling.

simulations. The spectral density is defined implicitly via the relation

2π ⟨Ω| ĵ_µ^c(0) δ²(P̂ − p) ĵ_ν^d(0) |Ω⟩ = δ^{cd} (δ_{µν} p² − p_µ p_ν) ρ(p²),   (2.3)

where P̂ = (Ĥ, P̂) is the 2-momentum operator and p is an external 2-momentum coordinate with components denoted p = (E, p). Covariance under both the O(3) internal symmetry and the Euclidean spacetime SO(2) symmetry, as well as current conservation, enable the factorization in eq. (2.3). By specializing µ = ν = 1 and p = (E, 0), and contracting the internal indices, one obtains the more direct definition

ρ(E) = (2π/3) Σ_c ⟨Ω| ĵ_1^c(0) δ²(P̂ − p) ĵ_1^c(0) |Ω⟩ |_{p=(E,0)}.   (2.4)

The utility of the two-dimensional O(3) model for the present study is that (due to integrability) ρ(E) can be computed analytically, enabling a comparison between our numerical reconstruction and the exact result. The spectral density decomposes into sectors defined by the number n of asymptotic particles propagating between the two currents,

ρ(E) = Σ_{n = 2, 4, 6, …} ρ^(n)(E).   (2.5)

The contribution ρ^(n)(E) has support for E > n m, where m is the mass gap. Even though integrability (together with a number of mild and generally accepted assumptions) fixes ρ^(n)(E) for every even value of n, explicit expressions have been worked out only for n = 2, 4, 6. At the energies considered here, the sum over the number of particles is rapidly convergent and the n = 6 contribution is at least a couple of orders of magnitude smaller than the n = 4 contribution. In the following we therefore refer to the spectral density summed over n = 2, 4, 6 as the exact spectral density. The n = 2 contribution can be written in closed form, while the n = 4, 6 contributions are expressed in terms of phase-space integrals that require numerical evaluation, as discussed in appendix A. This spectral density is related to the Euclidean-signature current-current correlator at zero spatial momentum,

C(t) = (1/3) Σ_c ∫ dx ⟨Ω| ĵ_1^c(t, x) ĵ_1^c(0) |Ω⟩,

via the Laplace transform

C(t) = ∫_0^∞ dω ρ(ω) e^{−ω|t|}.   (2.7)

The demonstration of a systematic method to invert this relation, given realistic numerical estimates of C(t) at a finite set of time slices with statistical errors, is the central focus of this work.
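The Laplace-transform relation between C(t) and ρ(ω) is easy to realize numerically. The following sketch uses an arbitrary toy density with a threshold at ω = 2m (an illustrative choice, not the exact O(3) expression) and builds C(t) by quadrature:

```python
import numpy as np

m = 1.0  # mass gap of the toy model

def rho_toy(omega):
    # illustrative density: vanishes below the two-particle threshold 2m,
    # rises smoothly above it; NOT the exact O(3) spectral density
    omega = np.asarray(omega, dtype=float)
    out = np.zeros_like(omega)
    above = omega > 2.0 * m
    out[above] = np.sqrt(omega[above] ** 2 - (2.0 * m) ** 2) / omega[above] ** 2
    return out

def correlator(t):
    # C(t) = int_0^inf domega rho(omega) exp(-omega*t), by the trapezoidal rule
    omega = np.linspace(0.0, 60.0, 60_001)
    return np.trapz(rho_toy(omega) * np.exp(-omega * t), omega)

C = np.array([correlator(t) for t in (0.5, 1.0, 2.0)])
```

The exponential damping e^{−ωt} is what makes the inversion ill-posed: structure in ρ(ω) at large ω is exponentially suppressed in C(t), so very different densities produce nearly identical correlators.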

A ubiquitous problem, faced across a breadth of scientific disciplines, is the numerically ill-conditioned nature of the inverse Laplace transform. A promising way forward is to recognize that ρ(ω) is not directly extractable from numerical data and that one should instead target the smeared spectral density

ρ_ϵ(E) = ∫_0^∞ dω δ_ϵ(E, ω) ρ(ω).   (2.8)

The challenge of recovering ρ_ϵ(E) from C(t) can then be made arbitrarily mild (or severe) by varying the specific functional form of δ_ϵ(E, ω) and the values of ϵ and E. Following ref. [1] we consider smearing kernels δ_ϵ(E, ω) that can be represented exactly as

δ_ϵ(E, ω) = Σ_{τ=1}^∞ g_τ^target b_τ(ω),   (2.9)

where τ is a dimensionless integer variable, a is an arbitrary scale with dimensions of inverse energy to be later identified with the lattice spacing, the b_τ(ω) are basis functions, and the g_τ^target ≡ g_τ^target(ϵ, E) are the components of the target smearing kernel δ_ϵ(E, ω) in this basis. If we identify b_τ(ω) = e^{−aωτ}, then the smeared spectral density is given by

ρ_ϵ(E) = Σ_{τ=1}^∞ g_τ^target C(aτ).   (2.10)

The choice b_τ(ω) = e^{−aωτ} is based on the non-essential assumption that spacetime is infinite. In practice, as explained in section 3, we use a basis that takes into account the periodicity of the finite temporal direction. An estimator for ρ_ϵ(E) is obtained by approximating δ_ϵ(E, ω) with an element of the space spanned by a finite number of basis functions. The coefficients g_τ corresponding to the approximation of δ_ϵ(E, ω) are determined by minimizing the functional

W^λ[g] = (1 − λ) A[g] + λ B[g],   (2.11)

where

A[g] = ∫_{E₀}^∞ dω [ δ_ϵ(E, ω) − Σ_{τ=1}^{τ_max} g_τ b_τ(ω) ]²,   B[g] = Σ_{τ,τ'=1}^{τ_max} g_τ Cov[C(aτ), C(aτ')] g_{τ'}.   (2.12)

The resulting stationarity conditions

∂W^λ[g]/∂g_τ = 0,   τ = 1, …, τ_max,   (2.13)

are a linear system of equations to be solved for the coefficients g_τ^λ ≡ g_τ^λ(ϵ, E, τ_max). These coefficients define the approximation δ_ϵ^λ(E, ω) of δ_ϵ(E, ω) and the associated estimator for ρ_ϵ(E) according to

ρ_ϵ^λ(E) = Σ_{τ=1}^{τ_max} g_τ^λ C(aτ).   (2.14)

The functional B[g^λ] is simply the statistical variance of ρ_ϵ^λ(E) and therefore vanishes in the ideal case of infinitely precise input data. On the other hand, A[g^λ] measures the distance between the target kernel δ_ϵ(E, ω) and its approximation δ_ϵ^λ(E, ω) in the range [E₀, ∞). It can only vanish in the limit τ_max → ∞, when g_τ^λ → g_τ^target.
The coefficients g_τ^λ that minimize W^λ[g] thus represent a particular balance between statistical and systematic errors, as dictated by the parameter λ. For small λ the estimator ρ_ϵ^λ(E) is close to ρ_ϵ(E) but has a large statistical uncertainty. Conversely, for large λ the estimator ρ_ϵ^λ(E) has a small statistical error but differs significantly from ρ_ϵ(E).
When evaluated at the minimum, the functional W^λ[g] is a function of λ only, W(λ) ≡ W^λ[g^λ]. The recipe suggested in ref. [1] to choose the optimal value of the trade-off parameter defines λ⋆ such that

A[g^{λ⋆}] = B[g^{λ⋆}].   (2.15)

A straightforward application of eq. (2.13) demonstrates that λ⋆ is the maximum of W(λ). This can be understood as the condition of 'optimal balance' between statistical and systematic errors. The numerical results presented here are obtained using this recipe. In the remainder of this work, unless explicitly stated, we do not distinguish the theoretical quantity ρ_ϵ(E) from its numerical estimator ρ_ϵ^λ(E). Estimates of the systematic error on ρ_ϵ(E) induced by the residual difference δ_ϵ^λ(E, ω) − δ_ϵ(E, ω) are discussed in section 5.
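The steps above can be sketched end-to-end in a few lines. The example below is a simplified illustration, not the analysis code: it works in the zero-noise limit λ → 0 (so B[g] plays no role and the minimization of A[g] reduces to a least-squares fit on an ω grid), uses a Gaussian target kernel, and checks the identity ρ_ϵ(E) = Σ_τ g_τ C(aτ) on a toy spectral density made of two δ-function peaks. All parameter values are illustrative.

```python
import numpy as np

a, tau_max = 0.1, 25           # 'lattice spacing' and number of basis functions
E, eps, E0 = 1.5, 0.4, 0.5     # target energy, smearing width, lower end E_0

def gauss(x, width):
    return np.exp(-x**2 / (2 * width**2)) / (np.sqrt(2 * np.pi) * width)

# discretized A[g]: fit the target kernel in span{ b_tau(omega) = exp(-a*omega*tau) }
omega = np.linspace(E0, 8.0, 751)
taus = np.arange(1, tau_max + 1)
B = np.exp(-a * np.outer(omega, taus))      # B[i, tau] = b_tau(omega_i)
target = gauss(E - omega, eps)              # Gaussian target kernel delta_eps(E, omega)
g, *_ = np.linalg.lstsq(B, target, rcond=None)

# toy spectral density: two delta peaks, so C(a*tau) is a sum of two exponentials
peaks = np.array([1.2, 1.8])
weights = np.array([1.0, 1.0])
C_tau = weights @ np.exp(-np.outer(peaks, a * taus))   # exact 'correlator data'
rho_eps_est = g @ C_tau                                # estimator, cf. eq. (2.14)
rho_eps_exact = weights @ gauss(E - peaks, eps)        # direct smearing of the peaks
```

With statistical noise present, the plain least-squares step is replaced by the λ-regularized system, and the quality of the kernel approximation degrades as the data become noisier.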
A defining feature of the approach in ref. [1] is that the smearing kernel δ_ϵ(E, ω) and the associated values of ϵ and E are inputs of the algorithm, in contrast to the original Backus-Gilbert method [13,14]. This work exploits this freedom by employing four functional forms for the smearing kernel, each of which is a function of x = E − ω:

δ_ϵ^g(x) = e^{−x²/2ϵ²} / (√(2π) ϵ),
δ_ϵ^c0(x) = (1/π) ϵ / (x² + ϵ²),
δ_ϵ^c1(x) = (2/π) ϵ³ / (x² + ϵ²)²,
δ_ϵ^c2(x) = (8/3π) ϵ⁵ / (x² + ϵ²)³.   (2.16)

The parameter E₀ can be adjusted by exploiting the fact that ρ(E) has support only for E > 2m, so that ρ_ϵ(E) in eq. (2.8) is insensitive to δ_ϵ(E, ω) for ω < 2m. The same holds for ρ_ϵ^λ(E), so that the functional form of δ_ϵ^λ(E, ω) can be left unconstrained for ω < 2m. Any E₀ ≤ 2m is therefore a viable choice in determining the coefficients g_τ^λ, so E₀ can be chosen to improve the numerical stability of the minimization procedure.

Table 1. Summary of the kernel-dependent coefficients w_k^x entering the small-ϵ expansion in eq. (2.18). Although not reflected in the general formulae, w_3^{c1} and w_5^{c2} are the lowest non-zero coefficients with odd k for their respective smearing kernels.
Here g and c stand for 'Gauss' and 'Cauchy' respectively, and the number following c gives the order of the Cauchy-like pole as shown. All kernels are normalized to unit area.
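These kernels can be written down and their unit normalization checked directly; the explicit forms below are one consistent realization of the naming scheme (a Gaussian plus Cauchy-like kernels with first-, second-, and third-order poles in x² + ϵ²), used here as a working assumption:

```python
import numpy as np

def delta_g(x, eps):    # 'g' : Gaussian
    return np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

def delta_c0(x, eps):   # 'c0': Cauchy-like, first-order pole
    return (eps / np.pi) / (x**2 + eps**2)

def delta_c1(x, eps):   # 'c1': second-order pole
    return (2.0 * eps**3 / np.pi) / (x**2 + eps**2) ** 2

def delta_c2(x, eps):   # 'c2': third-order pole
    return (8.0 * eps**5 / (3.0 * np.pi)) / (x**2 + eps**2) ** 3

# numerical check of the unit-area normalization
eps = 0.5
x = np.linspace(-2000 * eps, 2000 * eps, 4_000_001)
areas = [np.trapz(k(x, eps), x) for k in (delta_g, delta_c0, delta_c1, delta_c2)]
```

Note the power-law tails: c0 falls only as 1/x², while c1 and c2 fall as 1/x⁴ and 1/x⁶. This progressively faster tail decay is consistent with the pattern of odd coefficients in table 1, where the lowest odd term moves from O(ϵ) for c0 to O(ϵ³) and O(ϵ⁵) for c1 and c2.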
Given estimates of ρ_ϵ(E) over a range of ϵ for each of the resolution functions shown above, the final step in determining ρ(E) is to perform an ϵ → 0 extrapolation. To this end it is useful to understand the small-ϵ expansion at fixed E. We consider this expansion only at energies away from singularities, i.e. away from 2mℤ⁺. At such points the smeared spectral density satisfies

ρ_ϵ^x(E) = ρ(E) + Σ_{k=1}^∞ w_k^x a_k(E) ϵ^k,   (2.18)

where the superscript x labels a particular smearing kernel. As indicated in the rightmost expression, the O(ϵ^k) contribution to the expansion factorizes into a kernel-dependent coefficient w_k^x and a kernel-independent piece a_k(E) which depends on ρ(E) only. Table 1 summarizes the values of w_k^x for the four kernels used in this work. The ambiguity in separating w_k^x and a_k(E) is fixed by setting w_k^{c0} = 1 for all k. For this choice, all remaining w_k^x are rational numbers.
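The expansion suggests the practical recipe of fitting ρ_ϵ(E) at several values of ϵ and reading off the intercept. A toy sketch with synthetic data containing only even powers of ϵ (as for the Gaussian kernel):

```python
import numpy as np

rho_true = 1.3
eps = np.array([0.2, 0.3, 0.4, 0.5])
# synthetic smeared values with only even powers of eps (Gaussian-like kernel)
rho_eps = rho_true + 0.7 * eps**2 + 0.05 * eps**4

# fit a polynomial in eps^2 and take the eps -> 0 intercept
coeffs = np.polyfit(eps**2, rho_eps, deg=2)
rho_extrapolated = coeffs[-1]
```

In the combined analysis, the known ratios of the w_k^x coefficients additionally allow the ϵ → 0 limit to be constrained simultaneously across the four kernels.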

Finite-volume estimator
The theory is considered here on an L × T volume with periodic boundary conditions in both directions. Finite-L and finite-T effects on the spectral density are significantly different. On the one hand, finite-T (or thermal) effects are shown below to be exponentially suppressed and are therefore reliably small. On the other hand, the spectral density is dramatically different at finite and infinite L. At infinite L the spectral density ρ(E) defined in eq. (2.4) is a continuous function which is analytic for all E except for E = n m, where n is a positive even integer. At finite L, by contrast, the spectrum of the Hamiltonian is discrete and the spectral density is a sum of Dirac δ-functions. As L increases, the spectrum becomes denser and denser, so that the continuous L = ∞ spectral density is recovered as a weak (or distributional) limit. In no meaningful way can this limit be considered point-wise and the finite-L effects on ρ(E) treated as small corrections. Questions such as "Are the finite-L corrections to ρ(E) exponentially suppressed?" are simply ill-posed. By contrast, smeared spectral densities converge to their infinite-L value in a point-wise sense. An immediate consequence of this discussion is that the L → ∞ and ϵ → 0⁺ limits (where ϵ is the width of the smearing kernel) do not commute and make sense only in a precise order: the L → ∞ limit must be taken before the ϵ → 0⁺ limit. The importance of the ordered double limit is also stressed in ref. [8], where the combined L and ϵ dependence of related spectral functions is considered in a perturbative context.
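The distributional character of the finite-L density, and the point-wise convergence of its smeared counterpart, can be illustrated with a toy spectrum of discrete levels. Nothing below is O(3)-specific: the level distribution and weight function are artificial.

```python
import numpy as np

def gauss_kernel(x, eps):
    return np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

def f(w):
    # smooth toy 'infinite-L' density (artificial, not the O(3) result)
    return w**2 * np.exp(-w)

def smeared_finite_L(L, E, eps):
    # discrete levels with spacing ~ 2*pi/L, distorted to mimic interactions
    n = np.arange(1, int(10 * L))
    levels = (2 * np.pi / L) * (n + 0.3 * np.sin(n))
    weights = np.gradient(levels) * f(levels)   # delta-function weights
    return np.sum(weights * gauss_kernel(E - levels, eps))

def smeared_infinite_L(E, eps):
    w = np.linspace(0.0, 40.0, 400_001)
    return np.trapz(f(w) * gauss_kernel(E - w, eps), w)

exact = smeared_infinite_L(1.5, 0.25)
err = {L: abs(smeared_finite_L(L, 1.5, 0.25) - exact) for L in (20, 320)}
```

At fixed smearing width the finite-L result approaches the infinite-L integral point-wise as the level spacing shrinks, exactly the behavior exploited by the ordered double limit; the unsmeared sum of δ-functions never converges point-wise.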
We know that individual finite-L matrix elements at energies above the two-particle threshold approach their infinite-L limit with corrections which vanish as powers of 1/L. When smeared spectral densities are considered, one may hope that finite-L corrections vanish faster than any power of 1/L, at least if the smearing kernel is 'reasonable enough'. This issue is explored in appendix C, which considers a fictitious system where the infinite-L spectral density is given entirely by the two-particle contribution in eq. (2.6). The finite-L spectral density is then determined (up to exponential corrections) by the Lellouch-Lüscher formalism [3,21]. One sees explicitly that the smeared spectral density has finite-L contributions which are O(e^{−mL}) in the case of the Gaussian kernel, despite the power-law corrections to the individual energies and matrix elements. For the Cauchy kernels the effects are also exponentially suppressed as O(e^{−µL}), where µ ≤ m depends on the values of E, ϵ, and m as specified in eq. (C.25).
In the derivation of the exponentially-suppressed finite-L contributions, the analyticity of the smearing kernel plays a central role. Smooth but non-analytic kernels produce finite-L corrections that decay faster than any inverse power of L but generally not exponentially. We also observe that the finite-volume effects are oscillatory functions of L with an envelope decaying according to the predicted scaling. We do not claim that these findings are valid beyond the simple exercise considered here, but it is clear that the landscape of phenomena related to the L → ∞ limit of smeared spectral densities is rich.
After this lengthy introduction, we give some explicit formulae. At finite L and T, the zero-momentum current-current correlator has the Hamiltonian representation

C_{T,L}(t) = (1/3Z) Σ_c ∫_0^L dx Tr[ e^{−(T−t)Ĥ_L} ĵ_1^c(0, x) e^{−tĤ_L} ĵ_1^c(0) ],   (3.1)

where Z = Tr[e^{−TĤ_L}], the trace runs over the Hilbert space of the theory on a periodic spatial volume of extent L, and Ĥ_L is the corresponding Hamiltonian. At finite L, the spectrum of the Hamiltonian is discrete. By introducing an orthonormal basis of energy eigenstates |n⟩_L satisfying Ĥ_L |n⟩_L = E_n(L) |n⟩_L, one easily

derives the spectral representation of the correlator,

C_{T,L}(t) = ∫_{−∞}^{∞} dω ρ̃_{T,L}(ω) e^{−ωt},   (3.2)

with the definition

ρ̃_{T,L}(ω) = (1/3Z) Σ_c Σ_{n,n'} e^{−T E_n(L)} ∫_0^L dx ⟨n| ĵ_1^c(0, x) |n'⟩_L ⟨n'| ĵ_1^c(0) |n⟩_L δ(ω − E_{n'}(L) + E_n(L)).   (3.3)

Notice that ρ̃_{T,L}(ω) is non-vanishing also for negative values of ω; in fact it satisfies

ρ̃_{T,L}(−ω) = e^{−ωT} ρ̃_{T,L}(ω).   (3.4)

By separating terms with E_{n'} ≥ E_n and E_{n'} ≤ E_n, and taking care to avoid double counting contributions with E_{n'} = E_n, one can write

ρ̃_{T,L}(ω) = ρ_{T,L}(ω) + e^{−ωT} ρ_{T,L}(−ω),   (3.5)

where the spectral density ρ_{T,L}(ω) vanishes for ω < 0 and is a discrete sum of Dirac δ-functions,

ρ_{T,L}(ω) = Σ_{n,n' : E_{n'} ≥ E_n} w_{nn'}(T, L) δ(ω − E_{n'}(L) + E_n(L)),   (3.6)

with weights w_{nn'}(T, L) built from the Boltzmann factors and current matrix elements in eq. (3.3). By plugging this information in the correlator, we obtain the spectral representation in the form

C_{T,L}(t) = ∫_0^∞ dω ρ_{T,L}(ω) [ e^{−ωt} + e^{−ω(T−t)} ].   (3.7)

As described in section 2, for a target smearing kernel δ_ϵ(E − ω), one now seeks an approximation δ_ϵ^λ(E, ω) in the space generated by the function basis

b_{T,τ}(ω) = e^{−aωτ} + e^{−ω(T−aτ)},   (3.8)

so that

ρ_ϵ^λ(E) = Σ_{τ=1}^{τ_max} g_τ^λ C_{T,L}(aτ)   (3.9)

is an approximation for the smeared spectral density

ρ_ϵ^{T,L}(E) = ∫_0^∞ dω δ_ϵ(E − ω) ρ_{T,L}(ω).   (3.10)

A couple of comments on these formulae are in order.
(1) The estimator in eq. (3.9) formally depends on τ_max, but (as demonstrated in section 5) our results are rather insensitive to its particular value. (2) The coefficients g_τ^λ obtained by solving the minimization problem in eq. (2.13) depend on L and T via the covariance matrix appearing in B[g], and on T via our choice of the basis b_{T,τ}(ω). In the limit of infinite statistics (with B[g] = 0), the g_τ^λ depend on T but not on L.
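The key properties of the periodicity-aware basis referred to in comment (2) are easy to verify numerically; the form b_{T,τ}(ω) = e^{−aωτ} + e^{−ω(T−aτ)} used below is the standard choice for a periodic temporal direction:

```python
import numpy as np

def b_finite_T(omega, tau, a, T):
    # forward decay plus the image 'around' the periodic time direction
    return np.exp(-a * omega * tau) + np.exp(-omega * (T - a * tau))

a, T = 0.1, 6.4
omega, tau = 2.3, 12

b1 = b_finite_T(omega, tau, a, T)
b2 = b_finite_T(omega, T / a - tau, a, T)    # time reflection tau -> T/a - tau
b_inf = b_finite_T(omega, tau, a, T=1e4)     # effectively infinite T
```

The basis is symmetric under τ → T/a − τ, matching the periodicity of C_{T,L}(t), and reduces to the simple exponential e^{−aωτ} as T → ∞.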
At fixed L, finite-T effects in the spectral density are exponentially suppressed,

ρ_{T,L}(ω) = ρ_L(ω) + O(e^{−T m(L)}),   (3.11)

where m(L) is the energy gap in finite volume. Smearing the spectral density amounts to replacing the Dirac δ-function in eq. (3.6) with the smearing kernel δ_ϵ. At fixed L the smeared spectral density also has finite-temperature effects that are exponentially suppressed as O(e^{−T m(L)}). Even though far from obvious, it is reasonable to assume that finite-T effects are exponentially suppressed also at L = ∞, since the theory has a mass gap.

Lattice-discretized model
We calculate the Euclidean current-current correlator in eq. (3.7) by numerical simulation of the lattice-discretized model. The field σ(x) is now defined on the set of (L/a) × (T/a) equally-spaced points x = (aτ, x) on the two-torus, denoted by Λ. The standard discretization of the action,

S[σ] = (a²β/2) Σ_{x∈Λ} Σ_{µ=0,1} ∂_µσ(x) · ∂_µσ(x),   (4.1)

is employed here, where ∂_µ is the forward-difference operator defined with periodic boundary conditions in both directions. Expectation values of observables are given by the path integral

⟨O⟩ = (1/Z) ∫ Π_{x∈Λ} dσ(x) O[σ] e^{−S[σ]},   (4.2)

where dσ(x) denotes the O(3)-invariant integration measure over the unit sphere. The global O(3) symmetry is preserved by the lattice discretization, and its associated Noether current is

j_µ^c(x) = β ε^{cab} σ^a(x) ∂_µσ^b(x).   (4.3)

Exact O(3) Ward identities imply that this current requires no renormalization. The discretized version of the zero-momentum Euclidean current-current correlator is given by

C(t) = (a/3) Σ_x Σ_c ⟨ j_1^c(t, x) j_1^c(0) ⟩.   (4.4)

The strong dynamics of the O(3) model generate a mass gap m, which is often used to set the scale. However, in this work we determine all dimensionful quantities in units of the appropriate power of the alternative scale m⋆, defined as [22]

m⋆⁻¹ C(m⋆⁻¹) = 0.046615 .
(4.5)

Table 2. Details for the ensembles of field configurations generated for this work. The dimensionful scale m⋆ is defined in eq. (4.5). Ensembles A1-A4 enable the continuum limit at approximately fixed physical volume, while B1 and B2 are used to estimate finite-size effects. Each ensemble consists of N_rep independent identical replicas, each of which is thermalized for N_th cluster updates before N_bin measurements are taken by averaging over B subsequent updates. This results in a total number of measurement cluster updates N_c for each ensemble. Only two digits are given for m⋆L and m⋆T.

While in principle one can choose any number on the right-hand side, we use a value that gives m⋆ ≈ m. Using the correlator reconstructed from the 2-, 4- and 6-particle contributions to the spectral density, we find that the relative difference between m⋆ and m is of order 10⁻⁵. In practice the correlator C(t) is known only at values of t = aτ that are integer multiples of the lattice spacing a, and the equation for m⋆ is solved using a piecewise linear interpolation of log tC(t). The quantity m⋆ is determined with higher statistical precision than m and is also affected by smaller finite-volume effects. It furthermore does not require any additional correlation functions.
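The interpolation step can be sketched as follows, with a toy correlator C(t) = e^{−t} standing in for simulation data; the crossing of t C(t) with 0.046615 on the large-t branch is located from a piecewise linear interpolation of log tC(t):

```python
import numpy as np

TARGET = 0.046615
a = 0.1
t = a * np.arange(1, 101)        # time slices t = a * tau
C = np.exp(-t)                   # toy correlator with unit mass gap

y = np.log(t * C)                # interpolate log(t*C(t)) piecewise linearly
y_star = np.log(TARGET)

# locate the crossing on the descending (large-t) branch
i = np.where((y[:-1] >= y_star) & (y[1:] < y_star) & (t[:-1] > 1.0))[0][0]
frac = (y_star - y[i]) / (y[i + 1] - y[i])
t_star = t[i] + frac * a
m_star = 1.0 / t_star            # scale analogous to eq. (4.5)
```

Interpolating the logarithm is advantageous because log tC(t) is nearly linear between time slices when C(t) decays exponentially, so the interpolation error is tiny compared to statistical errors.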
The numerical simulations are performed with the single-cluster algorithm described in appendix D. Details concerning the generated ensembles of field configurations are summarized in table 2. The ensembles A1, A2, A3, A4 have different values of β and therefore different values of the lattice spacing, but similar m⋆L ≥ 29 and m⋆T ≥ 14. The ensembles B1 and B2 have been generated with the same lattice spacing as A4 but with doubled spatial and temporal extent, respectively. While the ensembles A1, A2, A3, and A4 are used to perform a continuum extrapolation, B1 and B2 enable estimates of the residual finite-L and finite-T effects, as explained in section 5.
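A minimal sketch of the discretized building blocks of this section follows; the normalizations and the use of numpy.roll for periodic shifts are illustrative choices for a single random configuration, not the production simulation code (which uses the single-cluster algorithm of appendix D):

```python
import numpy as np

rng = np.random.default_rng(7)
Lt, Lx, beta = 16, 8, 1.5

# random O(3) configuration: unit 3-vectors sigma[t, x, :]
sigma = rng.normal(size=(Lt, Lx, 3))
sigma /= np.linalg.norm(sigma, axis=-1, keepdims=True)

def action(s, beta):
    # nearest-neighbour form, equal to the forward-difference action
    # up to an irrelevant additive constant
    return -beta * sum(np.sum(s * np.roll(s, -1, axis=mu)) for mu in (0, 1))

def noether_current(s, beta, mu):
    # j_mu^c(x) ~ beta * eps^{cab} sigma^a(x) sigma^b(x + mu-hat), lattice units
    return beta * np.cross(s, np.roll(s, -1, axis=mu), axis=-1)

S = action(sigma, beta)
j1 = noether_current(sigma, beta, mu=1)   # spatial component entering C(t)
```

The current vanishes identically on a constant (fully ordered) configuration, as it must for a rotationally invariant field.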
The standard discretization of the two-dimensional O(3) σ-model employed here is known to approach the continuum limit rather slowly. Ref. [23] observed that lattice artifacts behave like O(a) over a large range of lattice spacings, in apparent contradiction with Symanzik's effective theory, which predicts an asymptotic O(a²) behavior up to logarithms. The puzzle was solved in refs. [20,24]: the asymptotic behavior is correctly described by Symanzik's effective theory, but the logarithmic corrections turn out to be large and must be included in the fitting formulae used to extrapolate to the continuum limit. On-shell quantities, such as the mass gap or energy levels in finite volume, have the asymptotic expansion

Q(a) ≃ Q(0) + C a² β³ + … ,

(the relevant one-loop anomalous dimension has been computed numerically in ref. [20]), while the other constants are non-perturbative. The dependence on β³, where β is the bare coupling in eq. (4.1), is generated by the dimension-four operator with largest one-loop anomalous dimension appearing in the Symanzik expansion of the action. The current two-point function, and consequently its spectral density, get extra contributions from dimension-three operators appearing in the Symanzik expansion of the current. These one-loop anomalous dimensions are unknown, and their calculation is well beyond the scope of this work. For the continuum extrapolation of the smeared spectral densities we use a fit function of the type

Q(a) = Q(0) + C a² β^r,   (4.7)

where the exponent r is fixed to 0, 3, or 6, and Q(0) and C are fit parameters. We take Q(0) at r = 3 as our continuum extrapolation and the spread generated by the three values of r as an estimate of the systematic error. One may argue that, since an a²β³ term exists in the Symanzik expansion, the correct asymptotic formula should have r ≥ 3. However, it is conceivable that the coefficients of the various logarithms conspire in such a way that an effective power r < 3 is generated in some intermediate regime. We therefore choose to include also r = 0 in our analysis.
As is evident in figure 1, the variation of r across 0, 3, and 6 has little effect on the continuum extrapolation of tC(t) with t = 0.5 m⋆⁻¹ using our three finest lattice spacings. These conclusions are supported by additional extrapolations at several values of t in the range t ∈ [0.5 m⋆⁻¹, 4 m⋆⁻¹]. This strategy is thus adopted in section 5 for continuum extrapolations of ρ_ϵ(E), where the fourth lattice spacing is included for stability in the presence of larger statistical errors on ρ_ϵ(E).
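This r-spread analysis can be sketched on synthetic data. The (a, β) values below are toy numbers, and the data are generated so that the r = 3 form is exact:

```python
import numpy as np

# toy lattice spacings and bare couplings for four ensembles (illustrative)
a_lat = np.array([0.100, 0.070, 0.050, 0.035])
beta = np.array([1.30, 1.40, 1.50, 1.60])

Q0_true, c_true = 2.0, 5.0
Q = Q0_true + c_true * a_lat**2 * beta**3     # synthetic data: exact r = 3 artifacts

def continuum_fit(r):
    # linear fit Q(a) = Q(0) + C * a^2 * beta^r, returning the a -> 0 intercept
    x = a_lat**2 * beta**r
    slope, intercept = np.polyfit(x, Q, 1)
    return intercept

fits = {r: continuum_fit(r) for r in (0, 3, 6)}
best = fits[3]                                       # central value
syst = max(abs(f1 - f2) for f1 in fits.values() for f2 in fits.values())
```

The intercept at r = 3 recovers the true continuum value exactly here, while the r = 0 and r = 6 fits yield slightly shifted intercepts whose spread plays the role of the systematic error.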

Numerical results
After detailing estimates of the systematic errors due to the reconstruction and finite volume, this section presents the numerical verification of the spectral reconstruction procedure using two different tests.

The procedure to ensure that the systematic errors from the (necessarily imperfect) reconstruction of the kernels are smaller than statistical errors is illustrated in figures 2 and 3. Estimation of finite-L and finite-T effects is illustrated in figure 4, and the continuum extrapolation in figures 5 and 7. Finally, the ϵ → 0 extrapolation is illustrated in figure 8.
The first test (discussed in section 5.1) compares ρ_ϵ(E) with exact results at fixed smearing width for each of the four smearing kernels in eq. (2.16). As anticipated, ρ_ϵ(E) is more difficult to determine with increasing E and decreasing ϵ. The second test, which is detailed in section 5.2, uses ρ_ϵ(E) at finite ϵ to extrapolate to the ϵ → 0 limit and obtain the unsmeared spectral density ρ(E). The extrapolation procedure (as applied here) requires smearing widths that are small compared to the scale at which the unsmeared spectral density varies, and is therefore less effective in the elastic region where ρ(E) increases rapidly with the onset of two-particle phase space. Since ρ(E) in the elastic region is accessible to the finite-volume formalism, the focus is instead on energies in the inelastic region E > 4m. Due to the absence of any sharp 'resonance peaks', ρ(E) varies increasingly slowly with increasing E, suggesting that the smearing width should be scaled ∝ (E − 2m), i.e. with the distance from the two-particle threshold, where the rapid variation occurs. Apart from this scaling of the smearing width, the analysis for these two tests proceeds identically.
All statistical errors on the results presented here are estimated using the bootstrap procedure with N_boot = 800 samples, which are generated independently on each ensemble from the N_rep × N_bin measurements listed in table 2. For all reconstructions the lower bound of the integration range, E₀ defined in eq. (2.12), is set using the scale m⋆ via aE₀ = 2am⋆ to (approximately) coincide with the two-particle threshold. This choice minimizes the range in energy over which the functional A[g] forces the reconstructed and target kernels to be similar. The values of am⋆ given in table 2 are also used to fix the dimensionful parameters ϵ and E at different lattice spacings. Properly including the statistical error on am⋆ requires the determination of the coefficients g_τ^λ on each bootstrap sample, which significantly increases the computational cost. After confirming that this has no observable effect on a selection of sample reconstructions, the statistical error on the values of am⋆ in table 2 is subsequently ignored.
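The bootstrap step can be sketched generically as follows; N_boot = 800 matches the text, while the mock measurements and everything else are illustrative:

```python
import numpy as np

rng = np.random.default_rng(123)
n_meas, n_boot = 500, 800          # N_boot = 800 as in the text

data = rng.normal(loc=1.0, scale=0.2, size=n_meas)   # mock measurements

# resample the measurements with replacement and recompute the estimator
boot_means = np.array([
    np.mean(rng.choice(data, size=n_meas, replace=True)) for _ in range(n_boot)
])

central = np.mean(data)
err_boot = np.std(boot_means, ddof=1)               # bootstrap error estimate
err_naive = np.std(data, ddof=1) / np.sqrt(n_meas)  # textbook error of the mean
```

For independent measurements the bootstrap error agrees with the naive error of the mean; its advantage in the actual analysis is that it propagates errors through the full non-linear reconstruction without analytic error formulae.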

Fixed smearing width
Before presenting the main results of this section in figure 6, several systematic errors must be estimated. Consider first systematic errors due to the reconstruction procedure.
As discussed in section 2, we follow here the recipe of ref. [1] and quote as the best estimate for ρ_ϵ(E) the result from the unconstrained reconstruction obtained at λ = λ⋆ defined in eq. (2.15). For all smearing kernels and all values of (E, ϵ) we have checked that the quoted results are in the statistics-limited regime by examining the stability of ρ_ϵ^λ(E) as a function of A[g^λ]/A[0] and by imposing different constraints on the reconstructed kernels. More precisely, due to the finite number of input values C(aτ), the reconstructed smearing kernel will never be exactly equal to the desired one. While this source of systematic error on ρ_ϵ(E) is difficult to quantify directly, it certainly vanishes in the A[g^λ] → 0 limit. The value of A[g^λ] is therefore a useful diagnostic for determining the onset of the statistics-limited regime: if λ is lowered such that A[g^λ] changes significantly and no significant change is observed in ρ_ϵ^λ(E), then this source of systematic error is likely smaller than the statistical error. This type of 'plateau' analysis is well-known to lattice field theory practitioners and is exemplified in figure 2. Another probe of the systematic error due to the reconstruction is the comparison of ρ_ϵ(E) determined from coefficients subject to various constraints on the minimization, implemented as explained in appendix B. Reconstructions subject to different constraints result in different reconstructed smearing kernels. Their consistency when compared at similar values of A[g^λ]/A[0] demonstrates that these differences are insignificant and suggests that deviations from the exact kernel are as well. Figure 2 shows results for the unconstrained reconstruction, the constraint that the reconstructed smearing kernel has a signed area equal to the desired one, defined in eq. (B.6), and the constraint that the reconstructed kernel exactly coincides with the desired one at the peak ω = E, defined in eq. (B.7).
For large λ, these different reconstructions differ significantly¹⁰ but are compatible within the statistical error at λ⋆, suggesting that λ⋆ is indeed in the statistics-dominated regime. This is particularly evident in figure 2 for the gaussian kernel δ_ϵ^g(E − ω) at E = 3m⋆ and ϵ = 0.5m⋆, the smallest value of the smearing parameter considered in our analysis at fixed smearing width. At E = 3m⋆ and this small value of ϵ it is possible to obtain an accurate reconstruction while keeping statistical errors reasonably small. However, at E = 4m⋆ and ϵ = 0.5m⋆ the statistical errors on ρ_ϵ^λ(E) increase rapidly. This suggests that, given the statistical precision of the input data, an accurate and precise spectral reconstruction with the gaussian smearing kernel at (E, ϵ) = (4.0m⋆, 0.5m⋆) is not possible. However, as employed in section 5.2, increasing ϵ from 0.5m⋆ to 1.0m⋆ at E = 4.0m⋆ again results in a well-controlled reconstruction.
Overall, the approach for choosing λ⋆ defined in eq. (2.15) to balance statistical and systematic errors is somewhat conservative. Figure 2 suggests the less restrictive alternative strategy of merely demanding consistency between the three different reconstructions. This could potentially result in smaller statistical errors, but requires further investigation beyond the scope of this work.
The input correlator time slices {C(aτ)} range over [τ_min, τ_max], with τ_min = 1 and τ_max = 160 used for the best estimate of ρ_ϵ(E) everywhere. They are furthermore correlated and suffer from an exponential degradation of the signal-to-noise ratio with increasing τ, with a rate phenomenologically similar to m. The sensitivity to the input correlator data is probed in figure 3, which demonstrates that, for a sample reconstruction, ρ_ϵ(E) is relatively insensitive to τ_max and to a 'thinning' of the input data in which the time slices are separated by σ_t = a(τ_{n+1} − τ_n). These observations can be plausibly explained by the signal-to-noise degradation and correlation of the input data. The role of B[g] is to penalize coefficients g^λ_τ which would result in a large statistical error. This penalty naturally disfavors input data at large τ, effectively 'turning off' these time slices and resulting in an insensitivity to τ_max. Similarly, the correlation between the input time slices may be responsible for the robustness to changing σ_t. The preceding discussion demonstrates that the reconstruction procedure on a particular ensemble of field configurations results in a statistics-dominated estimator for ρ_ϵ(E). No further systematic error due to the reconstruction is assigned.

¹⁰ The equal-area constraint is apparently somewhat weaker than demanding δ_λ(E, ω) = δ_ϵ(E − ω) at ω = E, as evidenced by the plots of δ_λ(E, ω) in the bottom row of figure 2.
We turn now to systematic errors associated with the finite extent of the lattice. Along the (approximate) line of constant physics defined by the ensembles A1-A4, these errors are crudely estimated by considering separately the differences between B1 and A4, and between B2 and A4. These differences are taken as estimators for the systematic errors due to the finite L and T and are added (in absolute value) independently for each ϵ, E, and smearing kernel. A selection of these deviations is shown in figure 4, illustrating that they are at most marginally significant. Nonetheless, this systematic error is subsequently taken into account and represents the largest of the systematic error estimates. Although eq. (3.11) ensures that finite-T effects are 'reliably' O(e^{−mT}), we conservatively account for both finite L and T with additional systematic errors as discussed above. An increase of statistics on the B1 and B2 ensembles would more accurately estimate these systematic errors and provide a more stringent test of the error estimates due to the reconstruction, but is left for future work. In addition to finite-size effects, systematic errors due to the lattice spacing must be estimated as outlined in section 4. The continuum limit is taken with four lattice spacings using the ensembles A1-A4, but (as detailed in section 4) the asymptotic a² behavior is affected by large logarithmic corrections, and the 'phenomenological' extrapolation form of eq. (4.7) is employed with r = 0, 3, and 6. The value at r = 3 is taken as the best estimate, and the largest deviation between any two values as an estimate of the systematic error. A selection of the extrapolations for the three different values of r is shown in figure 5. Evidently, the extrapolated value varies little across these three extrapolation forms.
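The r-variation can be sketched as follows. The fit form y(a) = y₀ + c a² log^r(1/a) is a hypothetical stand-in for eq. (4.7), and the lattice spacings and coefficients are invented for the example.

```python
import numpy as np

def fit_continuum(a, y, r):
    """Least-squares fit of y(a) = y0 + c * a**2 * log(1/a)**r;
    return the continuum value y0."""
    X = np.column_stack([np.ones_like(a), a**2 * np.log(1.0 / a)**r])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0]

a = np.array([0.04, 0.03, 0.02, 0.01])      # four toy lattice spacings (A1-A4)
y = 1.0 + 0.5 * a**2 * np.log(1.0 / a)**3   # synthetic data, generated with r = 3

estimates = {r: fit_continuum(a, y, r) for r in (0, 3, 6)}
best = estimates[3]                          # r = 3 taken as the best estimate
syst = max(abs(p - q) for p in estimates.values() for q in estimates.values())
```

The spread `syst` between any two extrapolation forms plays the role of the systematic error estimate described in the text.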
After estimating systematic errors due to the reconstruction, finite lattice size, and finite lattice spacing, the results for ρ (E) are at last confronted with the exact values. While no further systematic error is assigned to the reconstruction procedure, estimates of the remaining three sources of systematic error (finite L, finite T , continuum limit) are added naively. The total systematic error is then combined in quadrature with the statistical error on the continuum limit estimator. The finite-L and T errors are typically the dominant source of systematic error, and are similar in magnitude to the statistical error.
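The error combination just described amounts to the following few lines (the numerical values are placeholders):

```python
import math

def combine_errors(stat, syst_list):
    """Add the individual systematic estimates naively (in absolute value),
    then combine the total with the statistical error in quadrature."""
    syst = sum(abs(s) for s in syst_list)
    return syst, math.hypot(stat, syst)

# placeholder values for the statistical error and the
# (finite-L, finite-T, continuum-limit) systematic estimates
syst, total = combine_errors(0.010, [0.004, 0.003, 0.002])
```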
A selection of these continuum-extrapolated values is shown in figure 6. In the left two panels all four kernels are compared at a large fixed ϵ = 2m to accentuate the differences between them. However, all of these results are consistent with the exact values (including 2-, 4-, and 6-particle contributions) within the combined statistical and systematic errors. The right two panels show various ϵ for the Gaussian kernel and are similarly consistent with the exact numbers. The increasing difficulty of the reconstruction problem with increasing E and decreasing ϵ is apparent.

Extrapolation to zero smearing width
The continuum-extrapolated results for ρ_ϵ(E) displayed in figure 6 demonstrate that the spectral reconstruction procedure yields quantitatively accurate results within the quoted statistical and systematic errors. However, the difficulty in reconstructing ρ_ϵ(E) at large E and small ϵ naively suggests that the approach is of little use in the inelastic region, precisely where the finite-volume formalism is not yet developed. Fortunately, the increasing smoothness of ρ(E) with increasing E can be exploited by scaling ϵ ∝ (E − 2m) in order to probe larger energies. The scaled values of ρ_ϵ(E) are used to extrapolate ϵ → 0 and determine the unsmeared spectral density ρ(E) using the small-ϵ expansion discussed in section 2. The rapid variation of ρ(E) in the elastic region inhibits the application of this approach there, so we focus on E ≥ 4m.¹¹ A natural concern is the growth of cutoff effects with increasing E and ϵ, since ρ_ϵ(E) has increasingly significant contributions from more energetic states. Figure 7 shows the continuum extrapolations of ρ_ϵ(E) at the largest energy considered here, E = 40m, and at ϵ = 1.2(E − 2m), the largest value used in the ϵ → 0 extrapolation. Although the continuum limit is somewhat steeper than those in figure 5, there is still little variation across the different values of r. However, it appears that the statistical error on the estimator ρ_λ(E) decreases more rapidly with am in figure 7 than in figure 5. This phenomenon requires further study, but could be influenced by our choice of basis vectors, namely the inclusion of all τ ∈ [1, 160] at each lattice spacing. The edge of the Brillouin zone may also play a role, since 1/(am) ∼ E/m when E ∼ 20m for the A1 ensemble and E ∼ 90m for A4.

¹¹ The opening of subsequent 2n-particle thresholds introduces non-analyticities in ρ(E), but for the range of energies considered here ρ(E) is dominated by fewer-particle contributions and is therefore approximately smooth.
Although not displayed here, the continuum-extrapolated values for the energies and smearing widths used in this section are consistent with the exact results at a level similar to figure 6.

JHEP07(2022)034
The small-ϵ behavior of ρ_ϵ(E) depends on the smearing kernel. As discussed in section 2, for the four smearing kernels considered here, this dependence is encoded in the coefficients w^x_k, which are known analytically and given in table 1. The coefficients a_k(E) in eq. (2.18), however, depend only on ρ(E) and are therefore identical across the different smearing kernels. This means that an extrapolation ansatz which includes terms up to O(ϵ^n) contains n + 1 fit parameters, one for each of the a_k, which are constrained by data from all the smearing kernels. In this work we target a_0(E) = ρ(E) only: no additional information from the other a_k(E) is used to further constrain the desired spectral density.
The clear sources of systematic error in this approach are the extrapolation form and the extrapolation range [ϵ_min, ϵ_max]. Due to the difficulty in reconstructing at small ϵ, the statistical errors increase with decreasing ϵ, so that in practice varying ϵ_min has little effect on the extrapolated value. We therefore fix ϵ_min = 0.3(E − 2m) for all smearing kernels and extrapolations. The effect of varying ϵ_max must be monitored more closely, however. In order to fairly include data from different smearing kernels, ϵ_max for each kernel is rescaled according to how far that kernel 'leaks' down to the two-particle threshold, where ρ(E) varies rapidly. For the c0 kernel the large value α_c0 ≈ 1.84, together with the leading O(ϵ) behavior, effectively renders it useless in the extrapolations. For the Gaussian kernel α_g = 1 by definition, while α_c1 = 0.7 and α_c2 = 0.855 increase the fit range for these two kernels relative to the Gaussian. The extrapolations are therefore performed with the three kernels g, c1, and c2. An example of such an extrapolation is shown in the left panel of figure 8. The systematic error estimate due to the ϵ → 0 extrapolation proceeds by employing extrapolation forms up to and including O(ϵ^p) with p = 2, 3, and 4 and varying ϵ_max while keeping ϵ_max < 1.3(E − 2m). For each value of p, the largest ϵ_max resulting in a (correlated) χ²/d.o.f. < 1 is identified, with the p = 4 value taken as the best estimate for ρ(E). The spread of these three values is then added in quadrature as a systematic error. An example plot showing the consistency between different extrapolation forms and extrapolation ranges is shown in the right panel of figure 8. This procedure is performed for all energies for which E/m takes integer values from 4 to 40. Figure 9 shows our final results above the elastic threshold.
The left panel shows the comparison of the numerical determination of ρ_ϵ(E) using the Gaussian kernel with the exact results (including 2-, 4-, and 6-particle contributions) smeared in the same manner at different values of ϵ/(E − 2m). The right panel shows a summary of the ϵ → 0 extrapolated results, which are similarly consistent with the exact results. Interestingly, the data exhibit some sensitivity to ρ^(4)(E) for E ≳ 20m. Four-particle scattering amplitudes are currently beyond the reach of the finite-volume formalism.

Figure 9. Left: numerical results for ρ_ϵ(E) for the Gaussian kernel and the spectral density including up to six-particle contributions (solid lines) smeared with the same kernel at different values of ϵ/(E − 2m). Right: results for ρ(E) after the extrapolation ϵ → 0, together with the exact two-particle contribution (light dashed line), the sum of the two-, four-, and six-particle contributions (dark solid line), and the 2-loop perturbative result (dark dotted line). Statistical and systematic errors due to the finite volume, continuum limit, and ϵ → 0 extrapolation are combined in quadrature as described in the text.
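The simultaneous fit across kernels can be sketched as follows. The w^x_k values below are placeholders, not the analytic coefficients of table 1; the point is only that a_0(E) is shared across kernels while the w^x_k differ.

```python
import numpy as np

# placeholder small-eps coefficients w_k for three kernels (w_0 = 1 always)
W = {"g": [1.0, 0.0, 0.5], "c1": [1.0, 0.3, 0.2], "c2": [1.0, 0.1, 0.4]}

def row(kernel, eps):
    """Design-matrix row: rho_eps = sum_k a_k * w_k * eps**k up to O(eps^2)."""
    return np.array([W[kernel][k] * eps**k for k in range(3)])

a_true = np.array([2.0, -0.4, 0.1])              # synthetic a_k(E)
rows = [row(kern, eps) for kern in W for eps in (0.3, 0.6, 0.9, 1.2)]
vals = [r @ a_true for r in rows]

coef, *_ = np.linalg.lstsq(np.array(rows), np.array(vals), rcond=None)
rho_E = coef[0]                                  # eps -> 0 limit: a_0(E) = rho(E)
```

Because the a_k are common to all kernels, data from every kernel constrain the same intercept a_0(E), which is the quantity extracted in the text.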

Conclusions
The aim of the preceding sections is to verify the procedure of ref. [1] for numerically computing smeared spectral densities (with an a priori specified smearing kernel) from lattice field theory correlation functions. In this regard the two-dimensional O(3) model usefully provides exact results against which the estimates can be checked. These checks, presented in figures 6 and 9, are satisfied both for ρ_ϵ(E) at finite ϵ and for the results of ϵ → 0 extrapolations determining ρ(E) deep in the inelastic region, where finite-volume methods have not yet been developed. The highest energy considered here is E = 40m, at which ρ(E) is determined with a relative accuracy of 5% and differs significantly from the exact two-particle contribution ρ^(2)(E) given in eq. (2.6).
Apart from the 'usual' sources of systematic error due to the finite lattice spacing and finite-volume spacetime, we must also consider the imperfect reconstruction of the smearing kernel due to the finite number of input time slices and their associated statistical errors. All sources of systematic error have been estimated and included in figures 6 and 9 where the statistical and systematic errors are added in quadrature. Generally the errors due to the finite lattice extent are the largest source of systematic uncertainty, and are typically less than or comparable to the statistical errors.
The determination of ρ_ϵ(E) becomes increasingly difficult for smaller smearing widths at fixed energy E, and for increasing E at fixed ϵ. As is evident from the right two panels of figure 6, it is difficult to achieve precise results outside of the elastic region for ϵ ≲ m/2 with the Gaussian smearing kernel. It is better to exploit the smoothness of ρ(E) and scale ϵ ∝ (E − 2m), so that an equal proportion of the smearing kernel 'leaks' down to the two-particle threshold at each energy. This enables the determination of ρ(E) in figure 9, which is the main result of this work.

The analysis performed here is (in principle) directly applicable to the vector-vector correlator in QCD to compute the R-ratio, provided a few key points are addressed. First, the presence of narrow resonances in QCD seemingly invalidates our approach to the ϵ → 0 extrapolation by scaling ϵ ∝ (E − 2m). Second, the spatial extent achieved here of mL = 30 is currently beyond the state of the art for simulations at physical quark masses. In order to address the first problem it might be helpful to exploit the linearity of the spectral reconstruction problem by using a preconditioning technique: given a model for the infinite-volume spectral density, in which narrow resonances are approximately parametrized, one can construct the correlator corresponding to the model. The procedure discussed here can then be applied to the difference of the theory and model correlators in order to extract a much milder spectral function that, at the end, is added back to the model spectral density in order to obtain a (preconditioned but fully model-independent) estimator of the true spectral density. Moreover, results at finite smearing width can be directly compared with experimental data smeared with the same kernel, and may be used to probe the onset of the perturbative regime [25]. Concerning the second problem, it is important to note that the density of states is higher in three spatial dimensions than in one, so that smaller values of mL may be acceptable. Eventually, the master-field simulation paradigm [26,27] may enable larger volumes in the near future [28].
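Because the reconstruction is linear in the input correlator, the preconditioned estimator is model-independent by construction. A minimal sketch, with `reconstruct` a fixed linear map standing in for the full procedure and all numbers arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(5, 12))         # stand-in for the coefficient vectors g_tau

def reconstruct(corr):
    """Linear map from a correlator to (smeared) spectral-density estimates."""
    return G @ corr

C_theory = rng.normal(size=12)       # lattice correlator (toy numbers)
C_model = rng.normal(size=12)        # correlator of the resonance model
rho_model = reconstruct(C_model)     # model spectral density, smeared

# reconstruct the milder difference, then add the model back
rho_precond = reconstruct(C_theory - C_model) + rho_model
rho_direct = reconstruct(C_theory)
```

Linearity guarantees that the model drops out exactly, so any model error only affects the difficulty of the reconstruction, not its result.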
Apart from this, the difficulty of the reconstruction problem faced here may in fact be similar to QCD. Although the use of τ_max = 160 may seem daunting, figure 3 indicates that reducing the range and density of input correlator time slices likely has little effect. Although the input correlator data for this work are generated using the two-cluster algorithm, they still possess an exponentially degrading signal-to-noise ratio which decays at a rate empirically similar to m, as expected for the vector-vector correlator in QCD. Also, as is typical of scalar field theories, the statistical errors achieved on C(aτ) at time separations of order the inverse mass are roughly comparable with state-of-the-art determinations of the vector-vector correlator in QCD.
Finally, the validation of the spectral reconstruction procedure presented here is intended as a stepping stone to other applications. Among them is the determination of exclusive scattering amplitudes by exploiting an asymptotic formalism as in refs. [10,11]. There are several additional challenges there compared to this work, including the computation and spectral reconstruction of at least three-point temporal correlation functions with two associated time separations. Nonetheless, this may provide an alternative to the finite-volume formalism for determining few-body scattering amplitudes where applicable, and extend the reach of such computations by enabling (in principle) arbitrary center-of-mass energies.

Acknowledgments

This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 417533893/GRK2575 "Rethinking Quantum Field Theory". This work was also supported by CINECA, which granted computing resources on the Marconi supercomputer to the LQCD123 INFN theoretical initiative under the CINECA-INFN agreement.

A Analytic expressions for the spectral density
Thanks to the integrability of the O(3) model, the matrix elements of the Noether current j µ between the vacuum and a generic n-particle state (form factors) are completely determined by a known set of recursive functional equations, derived by Karowski and Weisz [29]. These functional equations are written in terms of the S-matrix of the system, which has been analytically derived by Zamolodchikov and Zamolodchikov [30] on the basis of the integrability of the model and some fairly weak assumptions. The n-particle contribution to the spectral density of the Noether current can be reconstructed from the form factors. This program has indeed been carried out by Balog and Niedermaier [31] for n = 2, 4, 6. Interestingly, the original motivation for this calculation was the calculation of the Euclidean two-point function of the O(3) model, and the comparison with lattice simulations and perturbative calculations [32].
For convenience we summarize here the most important formulae taken from ref. [31]. A different normalization for the spectral density with respect to Balog and Niedermaier (BN) is employed here. With this normalization, the large-µ behaviour of the spectral density is given by ρ(µ) = 1/(2π) + O(g²(µ)).
The form factors are usually written in terms of the rapidities θ_{i=1,...,n}. The (spatial) momentum and energy of the i-th particle are given by p_i = m sinh θ_i and e_i = m cosh θ_i. The invariant mass µ of a system of n particles is then given by µ² = (Σ_i e_i)² − (Σ_i p_i)² = m² Σ_{i,j} cosh(θ_i − θ_j), where trivial hyperbolic function identities are applied. Notably, the invariant mass depends only on the rapidity differences, so that it is convenient to define new variables u_{i=1,...,n−1} built from these differences. The kinematics of the n-particle system are then completely specified by the set of variables θ_{i=1,...,n}, or equivalently by θ_n and u_{i=1,...,n−1}.

Once the phase-space integral appearing in the definition of the n-particle contribution to the spectral density has been written in terms of the new variables, one can use the δ-function over the spatial momentum to eliminate the integral over θ_n. One is left with the integral over the variables u_{i=1,...,n−1} of a known function times the δ-function over the invariant mass. The form factor is parametrized in terms of functions G^(n)(u), where the r.h.s. can be written as a function of u by means of eq. (A.5); the G^(n)(u) are polynomials in u which are given explicitly in ref. [31] for n = 2, 4, 6. For the n = 4 case, the auxiliary variables of eq. (A.11) are introduced. Notice that for n = 2 there is no remaining integral, and one obtains a closed-form expression.

The remaining integral is calculated numerically for m = 1 (the m-dependence can be trivially reintroduced by dimensional analysis) and for a selection of values of µ by means of the GNU Scientific Library implementation of Press and Farrar's MISER Monte Carlo algorithm [33]. In the region µ ≤ 40m, the 6-particle contribution to the spectral density is always smaller than 0.2% of the sum of the 2- and 4-particle contributions.
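As a stand-in for the stratified MISER routine, a plain Monte Carlo estimate over a unit hypercube illustrates the type of phase-space integral involved; the integrand below is a toy with known value 3/2, not one of the form-factor integrands.

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_integral(f, dim, n_samples=200_000):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]**dim,
    returning (estimate, statistical error)."""
    x = rng.uniform(0.0, 1.0, size=(n_samples, dim))
    fx = f(x)
    return fx.mean(), fx.std(ddof=1) / np.sqrt(n_samples)

# toy integrand: the integral of (x + y + z) over [0,1]^3 equals 3/2
val, err = mc_integral(lambda x: x.sum(axis=1), dim=3)
```

MISER improves on this flat sampling by recursively concentrating samples where the integrand variance is largest, which matters for the peaked (n − 1)-dimensional integrands of the 4- and 6-particle contributions.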

B Spectral reconstruction implementation
Explicit expressions and numerical implementation details of the spectral reconstruction algorithm are provided here for completeness. The presentation follows ref. [1] with a slightly different notation. Additional information concerning the constraints used in this work is also provided. In matrix-vector notation, the functionals in eq. (2.12) can be expressed in terms of the vector g^T = (g_1, · · · , g_{τ_max}) collecting the coefficients g_τ, a vector f, and matrices A and B; here C(aτ) is the correlator at time τ in lattice units. With these definitions the vector g^λ that solves the minimum condition of eq. (2.13) can be written in closed form. In figure 2 the minimization problem is subject to two different constraints on the reconstructed kernel. The first is detailed in ref. [1] and forces the reconstructed and target kernels to have the same area, eq. (B.6). The second constraint requires the reconstructed and target kernels to coincide at ω = E, eq. (B.7).

Both constraints can be represented in vector notation as a linear condition R^T g = r, eq. (B.8), where the entries of the vector R and the constant r take one form for the constraint in eq. (B.6) and another for the constraint in eq. (B.7). The solution of the minimization problem subject to eq. (B.8) follows from a standard Lagrange-multiplier analysis. For the basis functions actually used in this work, explicit expressions for the entries of the matrix A are available. When τ_max is large this matrix is poorly conditioned. Consequently, when both λ and the errors on the input data are small, the coefficients g^λ_τ are large in magnitude and oscillate in sign at different values of τ. In these situations the signal in the reconstructed spectral density, given in vector notation by ρ_λ(E) = c^T · g^λ with c_τ = aC(aτ), eq. (B.14), results from large numerical cancellations. For this reason (as in ref. [1]) an implementation using extended-precision arithmetic is advocated. This is relatively straightforward for the kernels considered in this work using the libraries of refs. [34,35], because the integrals appearing in eqs. (B.2), (B.4), and (B.9) can be expressed in terms of standard special functions already implemented in these packages.¹³ In the case of more generic kernels, a numerical integration implemented in extended-precision arithmetic may be required. An implementation based on the double-exponential quadrature algorithm [36] is available upon request.
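The minimization with and without a linear constraint can be sketched as follows, assuming a quadratic functional of the form g^T(A + λB)g − 2f^T g; the Lagrange-multiplier solution below is standard linear algebra, not the paper's explicit expressions, and the matrices are random toys.

```python
import numpy as np

def solve_coeffs(A, B, f, lam, R=None, r=None):
    """Minimize g.(A + lam*B).g - 2 f.g, optionally subject to the
    linear constraint R.g = r via a Lagrange multiplier."""
    W = A + lam * B
    g = np.linalg.solve(W, f)              # unconstrained minimum
    if R is not None:
        WinvR = np.linalg.solve(W, R)
        g = g + WinvR * (r - R @ g) / (R @ WinvR)
    return g

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4.0 * np.eye(4)              # symmetric positive definite
B = np.eye(4)
f = rng.normal(size=4)
R = np.ones(4)                             # e.g. an equal-area-type condition

g_unc = solve_coeffs(A, B, f, lam=0.1)
g_con = solve_coeffs(A, B, f, lam=0.1, R=R, r=1.0)
```

The constrained solution satisfies R·g = r exactly while remaining the minimum of the quadratic functional on the constraint surface; in practice the poor conditioning of A described above requires extended precision in place of double-precision `numpy`.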

C Finite-volume effects in a model smeared spectral density
In this appendix we present results concerning finite-volume effects in an idealized system where the spectral density is given entirely by ρ^(2)(E) in eq. (2.6) and its finite-volume analog contains only two-particle states.¹⁴ The results are therefore relevant for ρ_ϵ(E) in the O(3) model for values of E and ϵ for which n ≥ 4 particle states do not contribute significantly. Using the notation of section 3, we first define ρ_{L,∞,ϵ}(E) as the full spectral density (with n = 2, 4, 6, · · · contributions) at finite L and infinite T, smeared with a generic resolution function δ_ϵ(x) and written as a sum over finite-volume energies E_n(L) with overlaps c_n(L). In the absence of n ≥ 4 particle states in the sum in eq. (C.1), the E_n(L) and c_n(L) are given by the Lüscher quantization condition [19] and the Lellouch-Lüscher relation [3].¹⁵ This means that (up to corrections exponentially suppressed in L) both the energies and overlaps are dictated by kinematic factors and the two-particle scattering phase shift δ_{I=1}(E). The I = 1 label emphasizes that this scattering phase shift corresponds to two-particle states transforming irreducibly under the fundamental representation of the global O(3) symmetry, and distinguishes the phase shift from the resolution function δ_ϵ(x) and the Dirac δ-function. The phase shift δ_{I=1} can be written explicitly via the analytically-known S-matrix, denoted S(k), with the log defined such that 0 < δ_{I=1}(E) < π. Note that the θ used here matches that of eq. (2.6).

¹⁴ Stated more precisely, this condition means that the finite-volume smeared spectral density is equal to the one in eq. (C.8).
¹⁵ See also ref. [37] for explicit expressions relevant to a two-dimensional spacetime.

The two-particle, finite-volume, smeared spectral density ρ_{L,ϵ}(E) is therefore defined as in eq. (C.8), where we have dropped the T = ∞ label. If δ_ϵ(x) has compact support and ϵ and E are chosen so that only E_n(L) < 4m states contribute, then this corresponds exactly to the smeared finite-volume spectral density in the full theory. By contrast, for non-compact smearing or for E > 4m, ρ_{L,ϵ}(E) is an approximation in which the role of infinite-volume states with more than two particles is ignored.
In the remainder of this appendix, we show that terms scaling as 1/L cancel in ρ^(2)_{L,ϵ}(E) provided that δ_ϵ(x) is differentiable and falls off fast enough as x → ∞; the precise condition is given after eq. (C.18) below. We additionally show the stronger result that ρ^(2)_{L,ϵ}(E) has exponentially suppressed volume effects provided δ_ϵ(x) is analytic in some finite-width strip about the real axis. Before investigating the nature of the scaling at asymptotically large L, we formally evaluate the L → ∞ limit of eq. (C.8). In the final line we have dropped contributions to the finite-volume energy and the Lellouch-Lüscher factor that are 1/L-suppressed, replaced the sum over n with an integral, and performed the change of variables k = 2πn/L. These results hold for any smearing kernel δ_ϵ(x) for which the integral converges. Finally, changing integration variables to ω yields ρ^(2)_{∞,ϵ}(E), as expected.
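The mechanism behind this limit can be illustrated with a free-theory toy in which the phase shift and Lellouch-Lüscher factors are set to their trivial values, so each level with momentum k_n = 2πn/L simply carries weight 2π/L; the level sum then converges to the corresponding k-integral as L grows. All parameter values are illustrative.

```python
import numpy as np

m = 1.0

def gauss(x, eps):
    """Gaussian smearing kernel delta_eps(x)."""
    return np.exp(-0.5 * (x / eps)**2) / (eps * np.sqrt(2.0 * np.pi))

def omega(k):
    """Total energy of two back-to-back particles with momentum k."""
    return 2.0 * np.sqrt(m**2 + k**2)

def rho_fv(E, eps, L, nmax=400):
    k = 2.0 * np.pi * np.arange(1, nmax + 1) / L   # free finite-volume momenta
    return (2.0 * np.pi / L) * np.sum(gauss(E - omega(k), eps))

def rho_inf(E, eps, kmax=60.0, nk=200_000):
    k = np.linspace(0.0, kmax, nk)                 # rectangle-rule k-integral
    return np.sum(gauss(E - omega(k), eps)) * (k[1] - k[0])

E, eps = 4.0 * m, 1.0 * m
rel_diff = abs(rho_fv(E, eps, L=60.0) - rho_inf(E, eps)) / rho_inf(E, eps)
```

For a smooth (here analytic) kernel the residual difference between the sum and the integral is far smaller than the naive level spacing would suggest, which is the statement made precise in sections C.1 and C.2.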

C.1 1/L cancellation
We now show that 1/L contributions to the finite-volume energies and Lellouch-Lüscher factors cancel in the definition of ρ^(2)_{L,ϵ}(E), order by order in the expansion of eq. (C.14).

Contributions to this coefficient arise from the leading-order shifts to both the matrix element and the energy. The former can be written using the substitution ∂k/∂ω = ω/(4k) with k = √(ω²/4 − m²). The corresponding result for the energy is obtained by expanding eq. (C.3), giving eq. (C.16), where here k = 2πn/L. Putting everything together yields eq. (C.17), where the first term in parentheses arises from the energy shift and the second from the matrix element. Evaluating the limit gives eq. (C.18). The precise condition on δ_ϵ(x) is therefore that the integral in eq. (C.18) should vanish. Denoting the quantity in square brackets by f(ω), this holds if df(ω)/dω is integrable, f(ω) is differentiable everywhere, and f(ω) vanishes at both ω = 2m and infinity. All the smearing kernels considered in this work satisfy this condition, as do much more aggressive choices, such as δ_ϵ(x) with compact support in the region [x_−, x_+].

C.2 Exponentially suppressed volume effects
We now demonstrate that for a more restrictive class of smearing kernels, finite-volume effects are indeed exponentially suppressed in L. This result also holds for all of the smearing kernels used in this work. The argument is based on an elegant identity that is closely related to the derivation of the Lellouch-Lüscher formalism. One can combine the appearance of ∂Q(ω)/∂ω with the definition of the finite-volume energies to re-write eq. (C.8) in a form where the 'δ' in the sum over n denotes the Dirac δ-function. The relation δ(f(x)) = |f′(x₀)|⁻¹ δ(x − x₀), valid in the vicinity of a simple zero x₀ of f, can be used to show that this expression is equivalent to eq. (C.8).
Next we use that Q(E) is non-negative¹⁶ to extend the sum over n to include all integers. Applying the Poisson summation formula then yields an expression in whose second line the definition of Q(E) from eq. (C.5) has been substituted.
Taking the expressions for the spectral density and S-matrix (eqs. (2.6) and (C.6), respectively) and changing the variable of integration to k = √(ω²/4 − m²) now results in eq. (C.23). We have also used the invariance of the integrand under the simultaneous replacements n → −n and k → −k (θ → −θ) to 'unfold' the integral to the whole real axis. The function I_n(k) is analytic in the whole complex plane except for two cuts on ±i[m, ∞).
Assuming that δ_ϵ(E − 2√(m² + k²)) is analytic in the complex strip |Im k| < µ for some µ ≤ m, one can shift the integration contour to ℝ + iµ. Concerning the kernels used in this work, µ = m always holds for the Gaussian kernel g, but for the Cauchy kernels c0, c1, and c2 the singularity at (E − 2√(m² + k²))² = −ϵ² can be a distance less than m from the real axis; in this case µ is set by the position of this singularity, and the small-ϵ expansion of eq. (C.25) is included for illustration. The following formulae are also only valid if the segments at infinity do not contribute; we have confirmed this is the case for the kernels used in this work. Extending the integration range and shifting the contour in eq. (C.23) as described, we find that the difference between the finite- and infinite-volume smeared spectral densities, ∆ρ^(2)_{L,ϵ}(E) ≡ ρ^(2)_{L,ϵ}(E) − ρ^(2)_{∞,ϵ}(E), can be written as a sum over Poisson modes, eq. (C.26). The coefficient C_n is oscillatory with an envelope that is either constant or falling with a power of L; it involves the factor e^{inLx} I_n(θ_η(x)), eq. (C.27), with θ_η(x) = 2 sinh⁻¹[(x + iη)/m] and ω_η(x) = 2√((x + iη)² + m²).
We have also studied these results numerically and confirmed that the straightforward expression based on a sum over finite-volume states in eq. (C.8) matches the sum over Poisson modes evaluated both with the original contour (eq. (C.23)) and with the shifted contour which makes the e^{−µL} scaling manifest (eq. (C.26)). In figure 10 we show the results for the Gaussian kernel plotted as a function of E for various values of fixed ϵ and L. In figure 11 we plot |∆ρ^(2)_{L,ϵ}(E)| at fixed E and ϵ versus L for both the Gaussian and Cauchy kernels. We see that the volume effects are oscillatory functions of L with an envelope decaying according to the predicted exponential. The derivations of this appendix therefore suggest that finite-volume effects are small for the numerical results in section 5.1.

Figure 10. Theoretically predicted two-particle spectral densities for three sets of ϵ/m and mL values (as indicated) with the Gaussian smearing kernel. Each panel includes the smeared infinite-volume results (orange), as well as the finite-volume smeared spectral density defined in eq. (C.8) (blue). The points arise from numerically evaluating the n = 1 term of eq. (C.26) and combining it with ρ_{∞,ϵ}(E).

Figure 11. Plots of the finite-volume residue vs. L for the Gaussian kernel (g, left) and Cauchy kernel (c0, right), determined using eq. (C.26). The dashed lines give the predicted e^{−µL} scaling for each (with a pre-factor chosen to position the curve for ease of comparison); µ = m for the Gaussian and µ = 0.265m for the Cauchy, slightly exceeding ϵ/2 as given by eq. (C.25). The Gaussian data include only the n = 1 term of eq. (C.26), while the Cauchy data include the sum over the n = 1, 2, 3 terms, i.e. all terms that fall off slower than e^{−mL}.

D Simulation algorithm
We employ the single-cluster algorithm and associated cluster estimators from ref. [19], which are briefly summarized here. A cluster update proceeds as follows. First, a random vector r ∈ ℝ³, |r| = 1, is drawn uniformly from the unit sphere. Then a 'seed' site is chosen uniformly as the first member of the cluster. Neighbors y of a cluster site x are added to the cluster with the standard Wolff probability, which depends on β and the product σ_r(x)σ_r(y), where σ_r(x) = σ(x) · r. After all cluster neighbors have been considered for addition, all cluster sites are updated according to the reflection σ(x) → σ(x) − 2σ_r(x) r. In order to employ a cluster estimator for C(aτ), which contains four fundamental fields, a second orthogonal cluster update is required. This proceeds by choosing a second random vector u from the unit sphere with the constraint that r · u = 0 and then performing a second cluster update in the same manner. The combination of a single r-update followed by a u-update is henceforth referred to as a 'cluster update', the number of which is tabulated in table 2 for each ensemble. The cluster estimator for C(aτ) is built from the quantity Φ_ru(aτ), where (although not denoted explicitly) time-translation invariance is employed to average over all equivalent time separations on the periodic torus.
The estimator for C(aτ) in eq. (D.5) is not positive definite, in contrast to the cluster estimator for the two-point function of the fundamental fields, which requires only a single cluster. The computational cost of Φ_ru(aτ) for a single update scales only weakly with the lattice size at fixed β, since only pairs of sites which are neighbors in the spatial direction and belong to different clusters contribute. However, this overlap becomes increasingly unlikely, resulting in an increasing statistical error for a fixed number of cluster updates. Despite this 'cluster cutoff', the variance of the estimator in eq. (D.5) decays exponentially with increasing τ, while the one for the standard estimator (given by the middle expression in eq. (D.4)) approaches a constant. The cluster estimator thus results in a significant improvement in the signal-to-noise ratio, which empirically decays with a rate roughly similar to m.
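A compact sketch of the r-reflection update described above (the u-update is identical with r replaced by an orthogonal vector). The lattice size, coupling, and random seeds are arbitrary illustrative choices, and the add probability is taken to be the standard Wolff form 1 − exp(min(0, −2β σ_r(x)σ_r(y))) for the action −β Σ σ(x)·σ(y).

```python
import numpy as np

rng = np.random.default_rng(1)
L, T, beta = 8, 8, 1.0                      # toy lattice extent and coupling

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

sigma = rng.normal(size=(L, T, 3))
sigma /= np.linalg.norm(sigma, axis=-1, keepdims=True)   # random O(3) field

def cluster_update(sigma, r):
    """Single Wolff cluster update with reflection vector r."""
    sr = sigma @ r                          # sigma_r(x) on every site
    seed = (int(rng.integers(L)), int(rng.integers(T)))
    cluster, stack = {seed}, [seed]
    while stack:
        x, t = stack.pop()
        for dx, dt in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = ((x + dx) % L, (t + dt) % T)
            if y in cluster:
                continue
            b = 2.0 * beta * sr[x, t] * sr[y]
            if b > 0.0 and rng.random() < 1.0 - np.exp(-b):
                cluster.add(y)
                stack.append(y)
    for y in cluster:                       # reflect: sigma -> sigma - 2 sigma_r r
        sigma[y] -= 2.0 * sr[y] * r
    return len(cluster)

size = cluster_update(sigma, random_unit())
norms = np.linalg.norm(sigma, axis=-1)      # the reflection preserves |sigma(x)| = 1
```

Since the reflection is an O(3) element, the unit-length constraint on the field is preserved exactly, and the update is rejection-free.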

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. SCOAP 3 supports the goals of the International Year of Basic Sciences for Sustainable Development.