## Abstract

Accurate predictions of inclusive scattering cross sections in the linear response regime require efficient and controllable methods to calculate the spectral density in a strongly-correlated many-body system. In this work we reformulate the recently proposed Gaussian Integral Transform technique in terms of Fourier moments of the system Hamiltonian, which can be computed efficiently on a quantum computer. One of the main advantages of this framework is that it allows for an important reduction of the computational cost by exploiting prior knowledge about the energy moments of the spectral density. For a simple model of a medium-mass nucleus like \(^{40}\)Ca and a target energy resolution of 1 MeV we find an expected speed-up of \(\approx 125\) times for the calculation of the giant dipole response and of \(\approx 50\) times for the simulation of quasi-elastic electron scattering at typical momentum transfers.

## 1 Introduction

Response functions describe the linear response of a many-body system after an excitation and contain the same information as an inclusive reaction cross section. They can be expressed in terms of the spectral density operator \(\delta ({\hat{H}}-\omega )\) of the Hamiltonian \({\hat{H}}\) describing the many-body system. First-principles calculations of response functions are in general extremely challenging for strongly correlated systems. A very powerful approach to study dynamical properties of many-body systems is to employ integral transform techniques, which map the local spectral density into more manageable ground-state expectation values that can be used to infer properties of the response function. This is the approach used in the Lorentz Integral Transform (LIT) method [1, 2] and the more recent Gaussian Integral Transform (GIT) method [3, 4]. Thanks to their ability to efficiently simulate the real-time dynamics of many-body systems, quantum computers offer the possibility to tackle the calculation of scattering cross sections from first principles (see e.g. [5] for a recent review). Interestingly, integral transform techniques are also useful in the design of efficient quantum algorithms for describing inclusive scattering in the linear response regime through the response function [3, 6]. It is therefore important and timely to extend the available techniques in order to reduce as much as possible the computational resources required to calculate the nuclear response function on a quantum device.

Given a Hamiltonian \({\hat{H}}\), an initial state \(\vert \varPsi _0\rangle \) and a Hermitian excitation operator \({\hat{O}}\), our goal is to evaluate the frequency-dependent response function defined as
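In the standard Lehmann form implied by the surrounding discussion (our reconstruction of the missing display equation, writing \(E_m\) for the eigenvalues of \({\hat{H}}\)):

\[
S(\omega ) \;=\; \langle \varPsi _0 \vert \, {\hat{O}}\, \delta ({\hat{H}}-\omega )\, {\hat{O}} \, \vert \varPsi _0\rangle \;=\; \sum _m \big \vert \langle m \vert {\hat{O}} \vert \varPsi _0\rangle \big \vert ^2 \, \delta (E_m-\omega ),
\]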

where in the last line we used the expansion on the eigenbasis \(\{\vert m\rangle \}\) of \({\hat{H}}\). In general, it is difficult to directly use this definition as it requires a complete knowledge of the full energy eigenspectrum. The main idea is, similarly to the LIT and the GIT methods, to consider instead an integral transform with kernel *K* given by
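Explicitly, this is the relation later referred to as Eq. (2) (reconstructed here from the surrounding text):

\[
\varPhi (\nu ) \;=\; \int d\omega \, K(\nu ,\omega )\, S(\omega ).
\]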

Note that both the original response function and its integral transform have units of inverse energy. Furthermore, in this work we will focus on translationally invariant kernel functions for which \(K(\nu ,\omega )=K(\nu -\omega )\), but extensions to the more general case are straightforward.

Once \(\varPhi (\nu )\) has been obtained, one usually attempts an inversion of the integral transform in order to recover \(S(\omega )\); however, this procedure can introduce uncontrollable errors whenever the kernel function has compact support [7, 8]. A variety of approximate inversion techniques that introduce, more or less explicitly, additional smoothing to reduce these artifacts have been proposed in the past [9,10,11,12,13]. However, if the kernel function is chosen appropriately, this last step might not be necessary [3, 4]. In particular, it is convenient to consider kernels such that

A kernel satisfying Eq. (3) is called \(\varSigma \)*-approximate* with *resolution* \(\varDelta \) [3]. The reason for this definition is that, in the commonly encountered situation where one is interested in observables of the form

for some bounded function *f*, one can use directly the integral transform \(\varPhi \) to approximate *Q*(*S*, *f*) with \(Q(\varPhi ,f)\) with controllable error [3]. More intuitively, we can think of \(\varPhi \) as a finite width approximation of the original response function with an energy resolution given by \(\varDelta \) and additional tails controlled by the parameter \(\varSigma \). The original response is then recovered in the limit \(\varDelta ,\varSigma \rightarrow 0\).

This approach was recently used in Ref. [4] to provide a controllable approximation of the spectral density using histograms derived from the integral response (see also [14] for a recent application to the computation of spectral functions in light nuclei using the Coupled Cluster method).

A direct procedure to construct integral kernels satisfying Eq. (3) is to use a polynomial expansion such as

with \(\{\phi _n\}\) an orthonormal basis of polynomials. This leads directly to an alternative expression for the integral transform in this basis

where \(m_n\) denotes the *n*th frequency moment of the response function over these polynomials

If the series in Eq. (6) converges rapidly, one can then estimate the integral transform \(\varPhi (\nu )\) with a small error by keeping only the first *N* terms in the expansion. Since the response function, and thus its integral transform, have units of inverse energy we will quantify the error in its approximation as \(\varepsilon /\varOmega \) with \(\varepsilon >0\) and \(\varOmega \) a suitable energy constant. As shown in [3], we can approximate a response function with a \(\varSigma \)-accurate kernel with resolution \(\varDelta \) using a basis of Chebyshev polynomials and a number of terms given by

with \(\Vert {\hat{H}}\Vert \) the spectral norm of the Hamiltonian. The notation used here neglects subleading logarithmic factors; in particular, we define \(\widetilde{{\mathcal {O}}}(f(x))={\mathcal {O}}(f(x)\log (f(x)))\), in line with previously reported estimates [3].

This result applies to any possible response function and is helpful whenever the Chebyshev moments of the Hamiltonian can be computed efficiently. This can be done efficiently and exactly with quantum computers [15,16,17,18,19] or in an approximate way with classical methods, such as the Coupled Cluster approach [4, 14]. In many applications the spectral function has a number of distinctive features, such as being dominated by low-energy contributions or displaying a distinct peak structure, as e.g. in the quasi-elastic regime. Many of these features are captured by energy moments of the form [20]

For instance, the qualitative shape of a quasi-elastic peak can be characterized with good accuracy from the first few moments alone [21]. The advantage of using moments \(\mu _n\) to characterize the spectral density is that in many situations they can be calculated explicitly using ground-state many-body methods (e.g. with Monte Carlo [22,23,24]).

In this work we employ a Fourier basis for the polynomial expansion of the integral transform Eq. (6) in order to incorporate this information into the calculation of the full spectral density and reduce the overall computational cost of the simulation. Thanks to their ability to efficiently simulate the real-time evolution of many-body systems, quantum computers are expected to be able to give us access to expectation values of the form

for states \(\psi \) which can be efficiently prepared. The function \(g_{\psi }(t)\) contains important information about the many-body system, and several authors have proposed techniques that use it to access a variety of observables such as excitation energies, the local spectral density and even thermodynamic expectation values [25,26,27,28,29].

Under general considerations, we can expect two main time scales to play a role: first, in order to obtain a frequency resolution of order \(\varDelta \), the maximum time required will need to scale as \(T={\mathcal {O}}(1/\varDelta )\); second, since the full energy spectrum is contained in a frequency interval of size \(\Vert {\hat{H}}\Vert \), a time step \(\delta t={\mathcal {O}}(1/\Vert {\hat{H}}\Vert )\) will guarantee a perfect reconstruction without aliasing thanks to the Shannon–Nyquist theorem. The combination of these two time scales predicts a scaling of the number of time-steps required as \(T/\delta t={\mathcal {O}}(\Vert {\hat{H}}\Vert /\varDelta )\). However, as shown recently in Ref. [27], in situations where the spectral function has an energy variance \(\sigma ^2\ll \Vert {\hat{H}}\Vert ^2\), a good Fourier approximation can be obtained with a scaling \(T/\delta t={\mathcal {O}}(\sigma /\varDelta )\) instead. The goal of this work is to formalize this intuition and provide rigorous error bounds that allow prior information about the energy moments of the spectral function to be used to reduce the number of expectation values required.

In Sect. 2 we present the general framework employing Fourier moments to approximate the response function and specialize the treatment to the Gaussian Integral Transform in Sect. 2.1. Section 3 details two example response functions that are reconstructed with this method and the improved efficiency afforded by employing energy moments. In Sect. 4 we conclude by discussing applications of this method to typical scattering properties in nuclear physics, and the estimated improvement in number of Fourier moments required.

## 2 Fourier based reconstruction

In order to use directly a discrete Fourier transform to perform the expansion of the integral kernel in Eq. (5), it is convenient to define a periodic extension of the kernel function as follows [26]

where the period \(P=\chi \Vert {\hat{H}}\Vert \) is expressed here as a function of the spectral norm. This convention allows us to truncate the sum at the first term if we take \(\chi \gg 1\): since the spectrum is bounded, only the term with \(n=0\) gives a nonvanishing contribution. As \(\chi \) is reduced, more terms will start to contribute to the sum and the extended integral kernel \(K^\chi \) will start to deviate from the original. The main advantage of this construction is that we can express directly the periodically-extended kernel as a Fourier series

where the Fourier transformed coefficients are given by

The main advantage of working with \(K^\chi \) and this Fourier series representation is that we can now express a general integral transform of the response function as a linear combination of expectation values of the time-evolution operator. From the definition in Eq. (2) we can in fact express the new integral transform as follows

with finite time-steps of size \(\delta t=2\pi /(\chi \Vert {\hat{H}}\Vert )\). Provided we choose a smooth integral kernel in the \(\omega \) variable, the series expansion converges quickly and we can therefore obtain an accurate approximation of the integral transform by taking a truncation to some finite order

For example, if we consider integral kernels in \(C^\infty \), the convergence will in general be super-polynomial in *N* and, importantly, rigorous bounds on the truncation error can be found with relatively straightforward calculations. It is important to note at this point that the use of a smoothing kernel with a finite energy resolution \(\varDelta \) is critical to ensure that the expansion is well-behaved: a direct approximation of the response function \(S(\omega )\) using a finite Fourier sum would encounter difficulties due to the discreteness of the energy spectrum of a finite system, which creates sharp peaks. What distinguishes the present approach, and the related ones of Refs. [3, 4], from more heuristic smoothing procedures such as the Maximum Entropy Method [9, 11], or from alternative polynomial expansion methods such as the Kernel Polynomial Method [30], is that the energy window over which the smoothing is applied is fully under control and all errors can be accounted for. The price to pay for a higher energy resolution is a corresponding increase in the number of terms in the expansion Eq. (15), which translates into a comparable increase in the maximum time \(T=N\delta t\) over which one needs to be able to simulate the many-body dynamics. We will discuss this in more detail for a specific choice of integral kernel in the next section.
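As a concrete illustration of this construction (our own sketch, not code from the paper), the following reconstructs a Gaussian-smoothed spectral function of a small diagonal "Hamiltonian" from a truncated Fourier series of time-domain expectation values. The system size, kernel width \(\varLambda\), period factor \(\chi\) and truncation order \(N\) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small stand-in for a many-body problem: eigenvalues in [-1, 1]
E = np.sort(rng.uniform(-1.0, 1.0, 64))      # spectrum, ||H|| <= 1
c2 = rng.random(64); c2 /= c2.sum()          # weights |<m|O|Psi_0>|^2, mu_0 = 1

Hnorm = 1.0        # bound on the spectral norm
Lam   = 0.05       # Gaussian kernel width
chi   = 4.0        # period P = chi * ||H||, replicas pushed outside the spectrum
dt    = 2*np.pi/(chi*Hnorm)                  # time step delta_t
N     = 200        # truncation order of the Fourier sum

# Fourier moments m_n = <psi| exp(-i n dt H) |psi> from the spectral data
n  = np.arange(-N, N + 1)
mn = (c2[None, :] * np.exp(-1j*np.outer(n*dt, E))).sum(axis=1)

# Fourier coefficients of the periodized Gaussian kernel (1/P = dt/2pi)
kn = np.exp(-0.5*(n*dt*Lam)**2) * dt/(2*np.pi)

# Truncated integral transform Phi_N^chi(nu)
nu  = np.linspace(-1.0, 1.0, 400)
phi = np.real((kn[:, None]*mn[:, None]*np.exp(1j*np.outer(n*dt, nu))).sum(axis=0))

# Reference: direct Gaussian smoothing of the discrete spectrum
ref = (c2[None, :]*np.exp(-(nu[:, None]-E)**2/(2*Lam**2))).sum(axis=1) \
      / (np.sqrt(2*np.pi)*Lam)

print(np.max(np.abs(phi - ref)))
```

With these parameters both the truncation error and the periodization error are negligible, so the Fourier reconstruction agrees with the directly smoothed spectrum to high precision; shrinking \(\chi\) or \(N\) makes the error contributions discussed below visible.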

### 2.1 Gaussian integral transform

As shown already in Ref. [3], the Gaussian Integral Transform (GIT) is particularly useful for the purpose of obtaining a \(\varSigma \)-accurate integral transform with resolution \(\varDelta \) with a fast converging polynomial expansion like Eq. (5). This result is rather intuitive since a Gaussian envelope is a perfect trade-off between good resolution in frequency and small widths in the time domain. The original construction from Ref. [3] used a Chebyshev expansion for the polynomial basis, however in this work we instead explore an expansion of the Gaussian kernel into Fourier modes.

The first step is to determine an appropriate value for the width \(\varLambda \) of the Gaussian kernel in order to satisfy the condition in Eq. (3). A direct calculation gives

Using the Gaussian upper bound on the complementary error function \(\text {erfc}(x)=1-\text {erf}(x)\)

we can find the following sufficient condition [3]

Following the notation from Eqs. (11) and (12), we can then express the periodically extended Gaussian kernel as

with the corresponding Fourier coefficients

In the expressions above we used \(\delta t=2\pi /(\chi \Vert {\hat{H}}\Vert )\) for the time-step. We can now write the approximate integral transform obtained by truncating the series as
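Written out explicitly, this truncation takes the form (our reconstruction from the periodized Gaussian above; in our conventions the Fourier normalization carries the prefactor \(\delta t/2\pi = 1/P\)):

\[
\varPhi ^{\chi }_{N}(\nu ) \;=\; \frac{\delta t}{2\pi }\sum _{n=-N}^{N} e^{-\frac{1}{2}\varLambda ^{2}(n\,\delta t)^{2}}\, e^{\,i\,n\,\delta t\,\nu }\, m^{\chi }_{n}\,, \qquad m^{\chi }_{n} \;=\; \langle \varPsi _0\vert \,{\hat{O}}\, e^{-i\,n\,\delta t\,{\hat{H}}}\,{\hat{O}}\,\vert \varPsi _0\rangle \,.
\]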

The ideal integral transform \(\varPhi (\nu )\) would be obtained in principle by choosing \(\varLambda \) in order to satisfy Eq. (18), taking \(\chi =2\) and letting \(N\rightarrow \infty \). The error in the approximation \(\varPhi ^\chi _N\) is caused only by taking a finite number of terms *N* and (possibly) reducing the value of \(\chi \) which parametrizes the frequency interval used in the periodic extension. For a fixed value of the frequency \(\nu \) we can then bound the error with

where we simply used the triangle inequality. The advantage is that the two error contributions on the right-hand side can be bounded individually in a simpler way. We will denote the first error on the right-hand side as \(\epsilon _P\) and the second as \(\epsilon _N\). In addition to these two sources of error, we also need to account for the fact that we will estimate the Fourier moment

by computing the expectation value with a finite statistical sample leading to an additional error

where we have denoted by \({\overline{\varPhi }}_N^\chi (\nu )\) the finite sample estimate of \(\varPhi _N^\chi (\nu )\). The total error will be given then by the sum of all contributions \(\epsilon =\epsilon _P+\epsilon _N+\epsilon _S\).

Before presenting our results for the complexity of estimating an accurate approximation of the integral transform \(\varPhi (\nu )\), it is convenient to specify several of the conventions we use; note, however, that the results provided in Appendix A are completely general and do not depend on these conventions. First of all, since the response function (and thus its integral transform) has dimensions of inverse energy, we will consider a dimensionless error parameter \(\varepsilon >0\) obtained using a suitable energy scale \(\varOmega \). That is, we want to find values of \(\chi \) and *N* for which

Second, we consider the situation where we are interested in approximating the response function on a finite energy window \(\omega \in [\omega _{\min },\omega _{\max }]\). For many applications in nuclear physics it is also customary to shift the Hamiltonian by the ground-state energy so that all frequencies become positive. Since both of these considerations apply directly to the integral transform, we will consider the situation where \({\hat{H}}\) has been shifted so that the ground state is at \(\omega _{\min }=-\Vert {\hat{H}}\Vert \) and consider a range \([-\Vert {\hat{H}}\Vert ,-\Vert {\hat{H}}\Vert +\delta \nu ]\) for the energies at which we want the value of \(\varPhi (\nu )\). Finally, since we are interested in computing the Fourier moments \(m^\chi _n\) on a quantum computer, we will consider the situation where the excitation operator has been appropriately rescaled so that the state \({\hat{O}}|\varPsi _0\rangle \) is normalized to one. This directly implies that the zeroth moment \(\mu _0\) will also be equal to one. A rescaling of this form has been employed already in past works on quantum algorithms for the response function [3, 4, 6, 31, 32] and is natural when using efficient methods for preparing the excited state (see [33]).

The only error term that depends directly on the energy variance \(\sigma ^2=\mu _2-\mu _1^2\) is the first one, while the others depend only parametrically on the chosen period \(P=\chi \Vert {\hat{H}}\Vert \). We start with the truncation error, which can be bounded easily using standard techniques, resulting in the following requirement for the number of terms

A full derivation of this result can be found in Appendix A.3. The dependence on the error is typical for Gaussian kernels (cf. [3, 4]). The statistical contribution of the error can be controlled, with confidence level \(1-\delta \), by requiring in the worst case a number of experiments given by

where in the second line we used the estimate from Eq. (26). We present a full derivation of this result in Appendix A.4. As we can see, the sample complexity of the scheme is directly proportional to the factor \(\chi \) controlling the period.
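As a self-contained sanity check on this statistical error (an illustration of ours, not part of the paper's derivation): the real part of a Fourier moment can be estimated from Hadamard-test outcomes with success probability \(p=(1+\mathrm{Re}\,g)/2\), and the spread of the estimator shrinks as \(1/\sqrt{S}\) with the number of shots *S*. The value of \(\mathrm{Re}\,g\) below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative value of Re <psi| exp(-itH) |psi> at some fixed t
g_re = 0.3
p = (1.0 + g_re) / 2.0          # Hadamard-test success probability

errs = []
for S in (10**3, 10**5):
    counts = rng.binomial(S, p, size=4000)   # 4000 repetitions of S shots each
    est = 2.0 * counts / S - 1.0             # estimator of Re g
    errs.append(est.std())

ratio = errs[0] / errs[1]
print(ratio)   # expect roughly sqrt(10^5 / 10^3) = 10
```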

The remaining error \(\epsilon _P\), controlled by the choice of \(\chi \), can be bounded in two separate situations: the general case where we do not use information from known energy moments \(\mu _n\), and the typical case where we have information at least on the energy variance. As mentioned in the text following Eq. (11), in the first situation we want to take \(\chi \gg 1\) in order to minimize the error. In particular, as shown explicitly in Appendix A.1, we find that for \(\epsilon _P\le \varepsilon _P/\varOmega \) the following choice would be sufficient

In the limit \(\varLambda \rightarrow 0\) we recover the Shannon–Nyquist theorem which gives \(\chi =2\) for a perfect reconstruction. At this point we can look for the asymptotic scaling of both *N* and *S* while guaranteeing a total error \(\varepsilon _P+\varepsilon _N+\varepsilon _S\le \varepsilon \). If we take \(\varepsilon _P=\varepsilon _N=\varepsilon _S=\varepsilon /3\) we find immediately

which, apart from a logarithmic correction, is the same result found for the Chebyshev version of the GIT protocol. In turn we find that the required number of samples will scale in general as

again similar to the Chebyshev version (cf. Appendix A.4 and the original work, Ref. [3]). The main difference is that the Chebyshev implementation avoids the (necessary) approximation of the time-evolution operator. If simulation schemes with optimal asymptotic scaling with the error are used, like the one based on Refs. [16, 17], the Chebyshev GIT will have an advantage in terms of the number of operations over its Fourier version described here.

We now turn to the main result of this work, where instead we use information about the first two energy moments to design a scheme with better scaling. The main tool we use is the Chebyshev inequality which, in terms of response functions, states that (assuming \(\mu _0=1\))
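In its standard form (our reconstruction of the missing display equation, with \(\mu _1\) the mean and \(\sigma ^2\) the variance of the spectrum), this reads

\[
\int _{\vert \omega -\mu _1\vert \ge \varGamma } d\omega \, S(\omega ) \;\le \; \frac{\sigma ^{2}}{\varGamma ^{2}}
\]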

for any positive constant energy \(\varGamma \). The idea is to use Eq. (31) to constrain the integrated strength of the spectrum away from the mean. The interested reader can find the full derivation in Appendix A.2; the final result is

which is valid for \(\varLambda \le 2\sigma \) (see Appendix A.2 for an estimate valid also in a lower resolution regime). This immediately gives an estimate for the required number of moments

Several comments are in order at this point. First of all, the scaling of *N* with the target error \(\varepsilon \) is exponentially worse than the original result Eq. (29), which did not use energy moments. For situations that require very small errors \(\varepsilon \lesssim \varOmega \sigma ^2/\Vert {\hat{H}}\Vert ^3\), the general scheme with \(\chi \) from Eq. (28) should be used instead. This drawback can be mitigated if more information is available. For instance, if we know the value of the *n*th central moment \(\widetilde{\mu _n}\), then we can bring the cost down to

See Appendix A.2 for additional details and a proof.

The second important comment we can make about the result Eq. (33), which also applies to its generalization Eq. (34), is that if we want to reconstruct the energy spectrum over a large frequency range scaling as \({\mathcal {O}}(\Vert {\hat{H}}\Vert )\) then the second contribution proportional to \(\delta \nu \) will dominate and we recover again the general result obtained without moments. This somewhat counter-intuitive behavior can be understood by noticing that our error metric in Eq. (25) measures the error in a pointwise fashion: a very narrow but tall peak in the energy spectrum will contribute to the error even if its spectral weight is small. For specific applications, like the calculation of energy histograms presented in Ref. [4], better bounds could be obtained. We leave this development for future work.

Finally we note that all the results presented in this work depend on an arbitrary energy scale \(\varOmega \) which controls the energy dimension of the final error. An appropriate choice for this parameter depends on the specific physical application desired. For example, in applications where one is interested in obtaining histograms of the spectrum like in Ref. [4], the choice \(\varOmega =\varDelta \) seems appropriate.

## 3 Numerical examples

In this section we present some numerical experiments that show in practice the savings afforded by the method presented here. In order to show the general trend, we will employ two simple model response functions \(S_A(\omega )\) and \(S_B(\omega )\). In both cases we produce a peak at low frequencies according to a skewed Gaussian distribution

where \( \xi \) is the location of the peak, \( \beta \) is the scale of the distribution and \( \alpha \) the skewness. In order to explore the impact of a tail at high energies we also use

Here \(\lambda \) is an overall normalization, \(\rho \) sets the scale and \(\gamma \) the exponent of a power-law decaying tail.

In the numerical tests shown here we took \(\Vert {\hat{H}}\Vert =1\) and 512 eigenvalues \(\{\omega _k\}\) distributed over the whole spectrum. The two model response functions we consider are

where the normalization in the denominators is chosen in order to guarantee that \(\mu _0=1\) for both response functions. The parameters for the response functions are

We show the two model response functions in Fig. 1. The main panel shows \(S_A(\omega )\) and \(S_B(\omega )\) over the entire frequency range, where the presence of the tail in \(S_B(\omega )\) is easily seen. In the inset we show instead the two response functions on a linear scale in the energy range of interest \([-1,-0.8]\), corresponding to the choice \(\delta \nu =0.2\). In this range the two responses appear almost identical. In the figure we also report the obtained values for the mean \(\mu \equiv \mu _1\) and the square root of the variance \(\sigma \) for the two models

The presence of the high-frequency tail does not appreciably modify the mean value, while the variance is increased by more than a factor of two.
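A minimal sketch of this construction follows, with a skewed Gaussian peak plus a power-law tail on an equally spaced 512-point spectrum. All parameter values here are our own illustrative assumptions (the paper's actual parameters are not reproduced), and the specific power-law form of the tail is likewise an assumption consistent with the description above:

```python
import math
import numpy as np

# Assumed illustrative parameters (not the paper's values)
xi, beta, alpha = -0.9, 0.05, 4.0   # peak location, scale, skewness
lam, rho, gamma = 0.2, 0.1, 2.0     # tail normalization, scale, exponent

erf = np.vectorize(math.erf)

def skew_gauss(w):
    """Skew-normal density with location xi, scale beta, skewness alpha."""
    z = (w - xi) / beta
    return np.exp(-0.5*z**2)/(beta*np.sqrt(2*np.pi)) * (1 + erf(alpha*z/np.sqrt(2)))

def tail(w):
    """Hypothetical power-law decaying tail above the peak location."""
    return lam / (1 + ((w - xi)/rho)**2)**gamma * (w > xi)

# 512 eigenvalues over the full spectrum [-1, 1] (equally spaced for simplicity)
w = np.linspace(-1.0, 1.0, 512)

SA = skew_gauss(w);            SA /= SA.sum()   # mu_0 = 1
SB = skew_gauss(w) + tail(w);  SB /= SB.sum()

def moments(S):
    mu1 = (w * S).sum()
    sig = np.sqrt((w**2 * S).sum() - mu1**2)
    return mu1, sig

print(moments(SA), moments(SB))
```

With these (assumed) parameters one observes the qualitative behavior described in the text: the tail barely moves the mean while it increases the variance, the size of the increase depending on the tail's weight and exponent.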

Due to the presence of delta function peaks, which are generated by the discreteness of the spectrum of a finite matrix, in general we cannot directly compare the integral transform \(\varPhi \) with the originating response function \(S(\omega )\) since the normalization conventions are different. For the discrete response function we have

while for the continuous integral transform instead

In order to directly compare the two, we approximate the integral over frequencies with the finite difference

and choose \(\delta \omega =2/512\) in order to match the average spacing between the eigenvalues. In the following we will then consider the dimensionless quantity \(\delta \omega \varPhi \) which implies the choice \(\varOmega =\delta \omega \) for the energy scale of the error. We further choose the parameters of the integral kernel as

Figure 2 shows a comparison between \(S_A(\omega )\) and \(S_B(\omega )\) with their rescaled integral transforms \(\delta \omega \varPhi _A(\omega )\) (shown as a solid green curve) and \(\delta \omega \varPhi _B(\omega )\) (shown as a dotted turquoise curve). We can clearly see that the rescaling described above allows us to compare them directly.

Since the truncation error \(\epsilon _N\) and the sampling error \(\epsilon _S\) are more standard and the bounds described in this work are not novel, we concentrate in the following on the analysis of the systematic error \(\epsilon _P\) coming from the need to perform a periodic extension of the integral kernel.

We present in Fig. 3 a direct comparison between the ideal (scaled) integral transform \(\delta \omega \varPhi _B(\omega )\) for model *B* (turquoise solid lines) and the approximations obtained by applying a periodic extension to the integral kernel with different choices of the period *P* (cf. Eq. (11) above). In all cases we set \(\varepsilon _P=0.01\) for the target error. The approximation in the bottom panel (blue dotted line) is obtained using the conservative choice from Eq. (28), which is valid in general. We see clearly that the effect is to push the replicas outside the whole range of the Hamiltonian, which in our case is \([-1,1]\), and therefore no appreciable change affects the transform in the required energy range \([\nu _{\min },\nu _{\max }]=[-1,-0.8]\): the maximum observed deviation is \(\approx 10^{-8}\). To highlight the location of this energy range, we have shaded in grey the region of energies outside of it. Also, for ease of visualization, we have presented results over a larger range of energies in order to show where the replicas are located. The top panel shows instead the periodic extensions obtained using the improved bound on *P* derived in this work, which uses directly prior information about the first and second energy moments. It is apparent that the integral transform outside the energy window of interest is now severely distorted due to the presence of the periodic replicas. However, inside the required region, the maximum deviations are well below the value \(\epsilon _P=0.01\) chosen as target: we observe in fact an error \(\approx 10^{-4}\). Very similar results have been observed for the approximations of the integral transform of model *A*. Before analyzing the relationship between the target error and the empirically observed deviation, we want to point out that the reductions in the value of the period are

for the two model responses respectively. Here we denoted by \(P^{gen}\) the value obtained without information about the energy moments and with \(P^\sigma \) the value obtained using the knowledge of the energy variance. In terms of the required number of moments to guarantee a total approximation error less than \(\varepsilon _P+\varepsilon _N=0.02\) we find using Eq. (26)

In order to better understand the above observation that the measured error is much smaller than the target one, which relies on a possibly not tight bound on the error, we present in Fig. 4 an analysis of the scaling of the empirical error with the one set as target. The bottom panel shows how the approximation error in the scaled integral transform changes as a function of the target error for the cases where we use the improved estimate for the period \(P^\sigma \), which uses the energy variance. The solid black line shows the upper bound employed to control the error, while the solid green and solid turquoise lines show the observed maximum errors for the two models respectively. We can see that, as expected, the real error is always below the bound we use, but the two models show a very different scaling for small target errors: the integral transform of model *A* has an error which decreases super-polynomially with the target error, while for model *B* the bound is saturated for small enough values of the target error. We can understand this behavior in terms of the high-energy tail of the original response functions: as can be seen from the inset of Fig. 2, the response function of model *B* is dominated by the slowly decaying tail for energies outside the range of interest \([\nu _{\min },\nu _{\max }]=[-1,-0.8]\), while the strength of model *A* decays much faster. Once the target error becomes comparable to the strength in the tail of model *B*, one is forced to take \(P^\sigma = P^{gen}\) as for a general response. On the other hand, for model *A*, as soon as the value of \(P^\sigma \) exceeds the range of frequencies of interest, the error decays quickly following the strength of the response. These results show very clearly the important effect produced by slowly decaying tails in the response function and how their presence is automatically captured by the tail bounds employed in this work. 
The observation that the error bound is essentially saturated for small errors in model *B* suggests that it is unlikely that our estimate for the optimal period \(P^\sigma \) can be considerably improved in general. Finally, for not too small values of the target error (\(\gtrsim 10^{-3}\)) our error bound overestimates the real error by more than an order of magnitude. In this regime it is possible that improved estimates using higher central moments [cf. Eqs. (34) and (93)] would be able to increase the savings in computational cost even further.

We have also performed numerical tests on more complex spectral densities, with multiple peaks of varying widths and heights, and obtained comparable results. This is to be expected since the relevant properties of the response function are its energy moments which, together with the required energy resolution, control the efficiency of the approach presented in this work.

## 4 Conclusions

In this work we have extended the Gaussian Integral Transform method [3] by employing a Fourier basis for the polynomial expansion of the integral transform of the spectral density. This replaces the need to estimate Chebyshev moments of the Hamiltonian with analogous Fourier moments that can be evaluated as expectation values of the real-time evolution operator \(\exp (-it{\hat{H}})\) (cf. Eq. (15)). Importantly, evaluation of both of these moments can be achieved efficiently using existing quantum algorithms. The main advantage of employing a Fourier basis is the possibility of using prior information about the spectral density, in the form of its first few energy moments, to reduce the number of observables needed for an accurate reconstruction of the spectrum. Most of our derivation and numerical examples are focused on the reasonable assumption that only the mean and variance of the energy spectrum are available to the user but we also provide improved bounds in case more moments are available.

The technique presented here can allow for orders of magnitude speed-ups in the evaluation of the spectral density. In order to give some concrete examples in nuclear physics, we take the Hamiltonian derived from Lattice EFT and used in previous cost estimates for quantum computations of the response function (see [31]) to model a medium-mass nucleus with \(A=40\). As a first example we consider a calculation of the dipole response of \(^{40}\)Ca for excitation energies up to 100 MeV, with resolution \(\varDelta =1\) MeV, small tail contributions \(\varSigma =0.01\) and approximation error \(\varepsilon =0.01\) (we take \(\varOmega =\varDelta \) as the scale). Using the experimental values from Ref. [34] (see also [35]) we estimate \(\mu ^{GT}_1\approx 20\) MeV and \(\sigma ^{GT}\approx 22\) MeV. With these values we find more than two orders of magnitude reduction in the number of moments required

As a second example we consider instead the simulation of longitudinal response in quasi-elastic electron-nucleus scattering at momentum transfer *q*. Typical values of the moments are \(\mu ^{QE}_1=q^2/2M\), with *M* the nucleon mass, and \(\sigma ^{QE}\approx k_F\approx 250\) MeV, with \(k_F\) the Fermi momentum [36, 37]. For a momentum transfer \(q=400\) MeV and a maximum excitation energy \(\delta \nu =400\) MeV we find

an expected saving by a factor of \(\approx 50\).

It is likely that the simple low-energy model used in Ref. [31] is not suitable for calculations in this regime. Due to the expected increase in the Hamiltonian norm, we would expect an even larger relative saving in the number of moments when higher-resolution Hamiltonians with a larger momentum cut-off are employed.

Lastly, we want to comment on the fact that the total computational cost, controlled by the maximum evolution time \(T=N\delta _t\), still scales as \({\mathcal {O}}(1/\varDelta )\) and we therefore have no violation of the no-fast-forwarding theorem [38]. Interestingly, for Hamiltonians that can be fast-forwarded [38, 39] the computational cost is no longer bounded by *T* but by *N* instead. For applications such as the reconstruction of the spectral function, or the calculation of thermodynamic observables as in Ref. [27], the cost then becomes entirely classical. One can also employ the construction presented here to prepare states in a given energy window by performing the summation in Eq. (12) (or, more correctly, its finite *N* approximation) coherently on a quantum device using the Linear Combination of Unitaries strategy [40]. This is similar to the energy filter proposed in Ref. [41] or the original GIT [3] (if used coherently), and the explicit appearance of the energy variance in our cost estimates can prove useful in reducing the cost in some situations. A similar procedure could be employed as a subroutine of Verified Phase Estimation [42] in order to estimate general expectation values. We leave a more thorough exploration of these possibilities to future work on spectral filters.

## Data Availability

This manuscript has no associated data or the data will not be deposited. [Authors’ comment: The work presented in this manuscript did not produce any new data; the results for the numerical example presented in Sect. 3 can be reproduced easily using the parameters listed in the text.]

## References

1. V.D. Efros, W. Leidemann, G. Orlandini, Response functions from integral transforms with a Lorentz kernel. Phys. Lett. B **338**(2), 130–133 (1994)
2. V.D. Efros, W. Leidemann, G. Orlandini, N. Barnea, The Lorentz integral transform (LIT) method and its applications to perturbation-induced reactions. J. Phys. G: Nucl. Part. Phys. **34**(12), R459–R528 (2007)
3. A. Roggero, Spectral-density estimation with the Gaussian integral transform. Phys. Rev. A **102**, 022409 (2020)
4. J.E. Sobczyk, A. Roggero, Spectral density reconstruction with Chebyshev polynomials. Phys. Rev. E **105**, 055310 (2022)
5. N. Klco, A. Roggero, M.J. Savage, Standard model physics and the digital quantum revolution: thoughts about the interface. Rep. Prog. Phys. **85**(6), 064301 (2022)
6. A. Roggero, J. Carlson, Dynamic linear response quantum algorithm. Phys. Rev. C **100**, 034610 (2019)
7. W. Glöckle, M. Schwamb, On the ill-posed character of the Lorentz integral transform. Few-Body Syst. **46**(1), 55–62 (2009)
8. N. Barnea, V.D. Efros, W. Leidemann, G. Orlandini, The Lorentz integral transform and its inversion. Few-Body Syst. **47**(4), 201–206 (2010)
9. R.N. Silver, D.S. Sivia, J.E. Gubernatis, Maximum-entropy method for analytic continuation of quantum Monte Carlo data. Phys. Rev. B **41**, 2380–2389 (1990)
10. E. Vitali, M. Rossi, L. Reatto, D.E. Galli, Ab initio low-energy dynamics of superfluid and solid \(^{4}\)He. Phys. Rev. B **82**, 174510 (2010)
11. Y. Burnier, A. Rothkopf, Bayesian approach to spectral function reconstruction for Euclidean quantum field theories. Phys. Rev. Lett. **111**, 182003 (2013)
12. L. Kades, J.M. Pawlowski, A. Rothkopf, M. Scherzer, J.M. Urban, S.J. Wetzel, N. Wink, F.P.G. Ziegler, Spectral reconstruction with deep neural networks. Phys. Rev. D **102**, 096001 (2020)
13. K. Raghavan, P. Balaprakash, A. Lovato, N. Rocco, S.M. Wild, Machine-learning-based inversion of nuclear responses. Phys. Rev. C **103**, 035502 (2021)
14. J.E. Sobczyk, S. Bacca, G. Hagen, T. Papenbrock, Spectral function for \(^{4}\)He using the Chebyshev expansion in coupled-cluster theory. Phys. Rev. C **106**, 034310 (2022)
15. A.M. Childs, R. Kothari, R.D. Somma, Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM J. Comput. **46**(6), 1920–1950 (2017)
16. G.H. Low, I.L. Chuang, Optimal Hamiltonian simulation by quantum signal processing. Phys. Rev. Lett. **118**, 010501 (2017)
17. G.H. Low, I.L. Chuang, Hamiltonian simulation by qubitization. Quantum **3**, 163 (2019)
18. A. Gilyén, Y. Su, G.H. Low, N. Wiebe, Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In *Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing*, pp. 193–204 (2019)
19. S. Subramanian, S. Brierley, R. Jozsa, Implementing smooth functions of a Hermitian matrix on a quantum computer. J. Phys. Commun. **3**(6), 065002 (2019)
20. G. Orlandini, M. Traini, Sum rules for electron-nucleus scattering. Rep. Prog. Phys. **54**(2), 257–338 (1991)
21. R. Rosenfelder, Quasielastic electron scattering from nuclei. Ann. Phys. **128**(1), 188–240 (1980)
22. J. Carlson, R. Schiavilla, Structure and dynamics of few-nucleon systems. Rev. Mod. Phys. **70**, 743–841 (1998)
23. J. Carlson, R. Schiavilla, Euclidean proton response in light nuclei. Phys. Rev. Lett. **68**, 3682–3685 (1992)
24. J. Carlson, S. Gandolfi, F. Pederiva, S.C. Pieper, R. Schiavilla, K.E. Schmidt, R.B. Wiringa, Quantum Monte Carlo methods for nuclear physics. Rev. Mod. Phys. **87**, 1067–1118 (2015)
25. T.E. O’Brien, B. Tarasinski, B.M. Terhal, Quantum phase estimation of multiple eigenvalues for small-scale (noisy) experiments. New J. Phys. **21**(2), 023022 (2019)
26. R.D. Somma, Quantum eigenvalue estimation via time series analysis. New J. Phys. **21**(12), 123025 (2019)
27. S. Lu, M.C. Bañuls, J.I. Cirac, Algorithms for quantum simulation at finite energies. PRX Quantum **2**, 020321 (2021)
28. E.A. Ruiz Guzman, D. Lacroix, Calculation of generating function in many-body systems with quantum computers: technical challenges and use in hybrid quantum-classical methods (2021)
29. C.L. Cortes, S.K. Gray, Quantum Krylov subspace algorithms for ground- and excited-state energy estimation. Phys. Rev. A **105**, 022417 (2022)
30. A. Weiße, G. Wellein, A. Alvermann, H. Fehske, The kernel polynomial method. Rev. Mod. Phys. **78**(1), 275–306 (2006)
31. A. Roggero, A.C.Y. Li, J. Carlson, R. Gupta, G.N. Perdue, Quantum computing for neutrino-nucleus scattering. Phys. Rev. D **101**, 074038 (2020)
32. A. Baroni, J. Carlson, R. Gupta, A.C.Y. Li, G.N. Perdue, A. Roggero, Nuclear two point correlation functions on a quantum computer. Phys. Rev. D **105**, 074503 (2022)
33. A. Roggero, C. Gu, A. Baroni, T. Papenbrock, Preparation of excited states for nuclear dynamics on a quantum computer. Phys. Rev. C **102**, 064624 (2020)
34. J. Ahrens, H. Borchert, K.H. Czock, H.B. Eppler, H. Gimm, H. Gundrum, M. Kröning, P. Riehn, G. Sita Ram, A. Zieger, B. Ziegler, Total nuclear photon absorption cross sections for some light elements. Nucl. Phys. A **251**(3), 479–492 (1975)
35. S. Bacca, N. Barnea, G. Hagen, M. Miorelli, G. Orlandini, T. Papenbrock, Giant and pigmy dipole resonances in \(^{4}\)He, \(^{16,22}\)O, and \(^{40}\)Ca from chiral nucleon-nucleon interactions. Phys. Rev. C **90**, 064619 (2014)
36. C.F. Williamson, T.C. Yates, W.M. Schmitt, M. Osborn, M. Deady, P.D. Zimmerman, C.C. Blatchley, K.K. Seth, M. Sarmiento, B. Parker, Y. Jin, L.E. Wright, D.S. Onley, Quasielastic electron scattering from \(^{40}\)Ca. Phys. Rev. C **56**, 3152–3172 (1997)
37. J.E. Sobczyk, B. Acharya, S. Bacca, G. Hagen, Ab initio computation of the longitudinal response function in \(^{40}\)Ca. Phys. Rev. Lett. **127**, 072501 (2021)
38. Y. Atia, D. Aharonov, Fast-forwarding of Hamiltonians and exponentially precise measurements. Nat. Commun. **8**(1), 1572 (2017)
39. S. Gu, R.D. Somma, B. Şahinoğlu, Fast-forwarding quantum evolution. Quantum **5**, 577 (2021)
40. A.M. Childs, N. Wiebe, Hamiltonian simulation using linear combinations of unitary operations. Quantum Inf. Comput. **12**, 0901–0924 (2012)
41. Y. Ge, J. Tura, J.I. Cirac, Faster ground state preparation and high-precision ground energy estimation with fewer qubits. J. Math. Phys. **60**(2), 022202 (2019)
42. T.E. O’Brien, S. Polla, N.C. Rubin, W. Huggins, S. McArdle, S. Boixo, J.R. McClean, R. Babbush, Error mitigation via verified phase estimation. PRX Quantum **2**, 020317 (2021)

## Acknowledgements

We thank Joseph Carlson for discussions about nuclear response functions. This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, Inqubator for Quantum Simulation (IQuS) under Award Number DOE (NP) Award DE-SC0020970.

## Funding

Open access funding provided by Universitá degli Studi di Trento within the CRUI-CARE Agreement.

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

Communicated by Denis Lacroix.

## Derivation of error bounds


In this appendix we provide a full derivation of the error bounds used in the main text. In order to simplify the notation we will denote the Gaussian kernel as

and its periodic extension with period \(P\) as

We start with the error introduced by using the periodic extension of the kernel

### 1.1 Periodic extension in the general case

We can write this difference explicitly as

where in the last line we used the fact that both \(S(\omega )\) and the Gaussian kernel are positive definite. We can now use the fact that \(S(\omega )=0\) for frequencies outside the energy spectrum so that

At this point we can use the fact that \(S(\omega )\) is integrable, and in particular we can write

the zeroth moment (cf. Eq. (9) in the main text). Using this result one can then bound the error with

It is convenient to rewrite the summation as follows

since the second term can be bounded easily using

so that, including the contribution with \(-k\), we find

We can now proceed to use the bound in Eq. (17) of the main text to write

where the constant factor *C* is given by

In order to turn this into a useful bound, we will consider the largest error that can occur for any

which directly implies that

First, notice that one of the two Gaussians in Eq. (58) will always dominate over the other. To see this we consider two cases separately: first let us take

In this range of values we have

For this range, the second Gaussian, centered in \(\nu -P\), dominates and we need to have \(P>\Vert {\hat{H}}\Vert +\nu _{\max }\) in order to prevent the exponent from going to zero. Therefore

If we take the complementary range of values

we find instead that the first Gaussian dominates and, for \(P>\Vert {\hat{H}}\Vert -\nu _{\min }\), we find the useful bound

These results suggest that we should take

for some appropriate \(\eta >0\). For this choice we have in fact

In order to simplify the calculation of a good value for \(\eta \) that would guarantee \(\epsilon _P(\nu )\varOmega <\varepsilon _P\), for some (dimensionless) target error tolerance \(\varepsilon _P>0\), we use the simple lower bound \(P>\Vert {\hat{H}}\Vert \) and find

Neglecting sub-leading logarithmic factors we find therefore the asymptotic scaling

A more practical bound which uses the conventions of the main text is the following one

which is the result quoted in the main text.
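The qualitative behavior of this bound is easy to verify numerically. The sketch below (an invented toy spectrum and kernel width, purely illustrative) evaluates the weight leaking in from the image Gaussians of the periodic extension and confirms that it decreases monotonically as the period \(P\) grows beyond the spectral range:

```python
import numpy as np

# Invented toy spectrum supported on [-Lambda, Lambda] with Lambda = 4.
rng = np.random.default_rng(0)
E = rng.uniform(-4.0, 4.0, size=30)   # eigenvalues
w = rng.random(30)
w /= w.sum()                           # spectral weights, mu0 = 1
Delta = 1.0                            # Gaussian kernel width

def gauss(x):
    return np.exp(-0.5 * (x / Delta) ** 2) / (Delta * np.sqrt(2.0 * np.pi))

def eps_P(nu, P, kmax=50):
    # Periodic-extension error: contribution of the image Gaussians
    # centered at E_j +/- k*P for k != 0.
    k = np.arange(1, kmax + 1)
    shifts = np.concatenate([k * P, -k * P])
    return float(sum(wj * gauss(nu - Ej + shifts).sum()
                     for wj, Ej in zip(w, E)))

errs = [eps_P(1.0, P) for P in (10.0, 12.0, 14.0)]
print(errs)
```

Each increase of \(P\) pushes every image Gaussian further away, so the error drops at a Gaussian rate, consistent with the exponential bounds above.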

### 1.2 Periodic extension with moment information

We now turn to show how to obtain an improved error bound using information about the energy moments. As before, and without loss of generality, we will consider \(\nu \in [\nu _{\min },\nu _{\max }]\). For the following manipulations, it will be convenient to change variables and define

with \(\mu _1\) the first energy moment, which gives us information about the average energy in the spectral function, and \(\varOmega _{\min }\), \(\varOmega _{\max }\) both positive. At this point we start from the expression for the error \(\epsilon _P(\nu )\) obtained in Eq. (51) above and rewrite it as follows

for some \(\alpha >0\) and where we have denoted by \(\sigma \) the square root of the variance

The central integral can be bounded in the same way we obtained the result in the previous section; the main difference is that now we take the interval

In this range we can bound the central contribution with

where now we took the period to be

for some \(\eta >0\). Note that, in order for the exponential to become small, we would need \(P>\eta \alpha \sigma >\varLambda \) so that we can use the simpler bound

which will incur at most a logarithmic cost. In order to guarantee that this term is smaller than \(\varepsilon _P/(2\varOmega )\), for dimensionless \(\varepsilon _P>0\), we can take [cf. Eq. (69)]

In order to bound the other two terms, we first bound the sum over *k* with an integral using

together with a similar one for the negative terms

so that summing them together we find the bound

The sum of the two missing terms \(\epsilon _{13}(\nu )=\epsilon _1(\nu )+\epsilon _3(\nu )\) is thus bounded by the following

Here we used the normalization of \(S(\omega )\) to get to the second line, the Chebyshev bound Eq. (31) in the main text to get to the third, and Eq. (77) for the last one. At this point we can in principle use the expression for \(\eta \) in Eq. (79) to find an appropriate value for \(\alpha \) so that \(\epsilon _{13}(\nu )<\varepsilon _P/(2\varOmega )\). In order to better see the general trend, we instead ensure that \(\alpha \) satisfies

so that \(\eta <1\). We can then solve

This results in the following bound

This result can be inserted directly into Eq. (77) to obtain the final estimate for the period *P* required.

We now use the conventions described in the main text to obtain a more intuitive result and distinguish explicitly two different regimes. For \(2\sigma \ge \varLambda \) the second term will dominate for all \(\varepsilon _P\lesssim 0.6\), and for all values of \(\varepsilon _P\) if we increase the numerical factor to 1.9. In this case we can take

as quoted in the main text. Here we used the fact that

For large values of \(\varLambda >2\sigma \) we would need to use the full expression in Eq. (86) above instead.

Finally, for cases where the values of the central moments

for *n* higher than 2 are known, one can use the following generalization of the Chebyshev inequality

Using this additional information we can take instead

and can therefore choose \(\alpha \) as follows

For \(\widetilde{\mu _n^{1/n}}\ge \varLambda \) we have the following simpler result

valid up to \(n=15\). This can be advantageous for small errors, provided the central moments do not grow too quickly with *n*.
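The advantage of higher central moments can be illustrated with a small numerical experiment. The sketch below (invented Gaussian samples standing in for the spectrum) compares the standard Chebyshev tail bound with the analogous bound built from the fourth central moment; since Markov's inequality holds for any distribution, including the empirical one, both bounds are guaranteed to hold, and the higher-moment bound is tighter here:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in "spectral" samples; any distribution with finite moments works.
x = rng.normal(0.0, 1.0, size=200_000)

mu = x.mean()
m2 = np.mean(np.abs(x - mu) ** 2)    # variance
m4 = np.mean(np.abs(x - mu) ** 4)    # fourth central moment (~3 for a Gaussian)

t = 4.0                               # tail threshold
tail = np.mean(np.abs(x - mu) >= t)   # empirical tail weight
bound2 = m2 / t ** 2                  # standard Chebyshev bound
bound4 = m4 / t ** 4                  # higher-moment (n = 4) bound
print(tail, bound4, bound2)
```

For distributions whose central moments grow rapidly with *n* the ordering of the two bounds can reverse, which is why the gain depends on the moment growth noted above.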

### 1.3 Truncation error

Here we provide the details underlying the bound for the truncation error

used in the main text. In order to simplify the notation we will use \(m^\chi _n\) to denote the moments (see Eq. (7)). For our Fourier polynomials these are

In addition, note that \(|m^\chi _n|\le \mu _0\) with \(\mu _0\) the zeroth energy moment. This can be easily seen by normalizing the state \({\hat{O}}|\varPsi _0\rangle \) with \(\sqrt{\mu _0}\) and using the fact that the evolution operator is unitary. We can now express explicitly the truncation as follows

with \(g^\chi _n(\nu )\) the Fourier coefficients from Eq. (20). Using the bound on the moments \(m^\chi _n\) we then find immediately

where we used the triangle inequality on the first line, the bound of the sum by an integral in the third, and Eq. (17) in the last. Note that the bound no longer depends on the frequency \(\nu \). We can now find the value for *N* that guarantees \(\epsilon _N(\nu )\varOmega \le \varepsilon _N\) for some (dimensionless) target error tolerance \(\varepsilon _N>0\). The result is

This is the result used in the main text.
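The super-polynomial decay of the truncation error underlying this bound can be checked directly. The sketch below (invented kernel width and period) reconstructs a Gaussian kernel from the truncated Fourier series of its periodic extension, whose coefficients decay as \(\exp (-n^2\delta _t^2\varDelta ^2/2)\), and shows the maximum reconstruction error dropping rapidly with *N*:

```python
import numpy as np

Delta, P = 0.5, 20.0            # kernel width and period (invented values)
dt = 2.0 * np.pi / P
x = np.linspace(-5.0, 5.0, 1001)

def G(x):
    # Target Gaussian kernel.
    return np.exp(-0.5 * (x / Delta) ** 2) / (Delta * np.sqrt(2.0 * np.pi))

def G_trunc(x, N):
    # Truncated Fourier series of the periodized kernel; coefficients
    # decay at a Gaussian rate in n.
    n = np.arange(-N, N + 1)
    coef = np.exp(-0.5 * (n * dt * Delta) ** 2)
    return (dt / (2.0 * np.pi)) * np.real(np.exp(1j * np.outer(x, n) * dt) @ coef)

errs = [np.max(np.abs(G_trunc(x, N) - G(x))) for N in (10, 20, 40)]
print(errs)
```

Doubling *N* squares the leading truncation coefficient, which is the mechanism behind the logarithmic dependence on \(\varepsilon _N\) in the bound above.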

### 1.4 Statistical error

We now turn to the discussion of the bound on the statistical error in the evaluation of the Fourier moments \(m^\chi _n\). For the moment we will assume we have estimated each moment with a fixed error \(\varepsilon _M\), which for simplicity we take to be equal for all moments (thanks to the rapid decrease of the coefficients, an adaptive strategy may be more advantageous). We also neglect the error on the zeroth moment \(m^\chi _0\) since its value is known beforehand. Since errors on different moments are independent, we add the error contributions in quadrature to find

where we used the fact that the variance of the moments is less than \(\mu ^2_0\) and the same procedure employed in Eq. (97). In order to attain a total expected error \(\epsilon _S\le \varepsilon _S/\varOmega \) we then need to take

resulting in an expected number of samples

The additional factor of 2 in the numerator comes from the need to evaluate the real and imaginary parts of each moment separately. If we want to ensure this is sufficient with high probability, we can use Markov's inequality to ensure the probability that the error is below \(\epsilon _S^2\) is larger than 2/3 by increasing the target error \(\varepsilon _S\) by a factor of at least \(\sqrt{3}\), and then use the Chernoff bound and majority voting to increase the probability to \(1-\delta \) with logarithmic effort. For instance

will be enough for a confidence level \(1-\delta \). Together with the bound from Eq. (98) this shows that the number of samples is independent of the number of terms *N*.
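The quadrature addition of independent errors assumed here can be checked with a short simulation (all parameters invented), comparing the observed total error against both the uncorrelated estimate and the fully correlated worst case:

```python
import numpy as np

rng = np.random.default_rng(3)
N, eps_M, trials = 400, 0.01, 2000   # invented parameters

# Independent errors of size eps_M on each of N moments, combined with
# worst-case unit-magnitude coefficients; repeated over many trials.
noise = rng.normal(0.0, eps_M, size=(trials, N))
total = noise.sum(axis=1)

rms = total.std()                    # observed total error
quad = eps_M * np.sqrt(N)            # quadrature (uncorrelated) estimate
worst = eps_M * N                    # fully correlated worst case
print(rms, quad, worst)
```

The observed error tracks the \(\sqrt{N}\) quadrature estimate, far below the linear-in-*N* bound used for the conservative estimate discussed next.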

The treatment above assumes that errors are completely uncorrelated, which might not necessarily be the case due to the need to control systematic errors with e.g. error mitigation techniques. For a more conservative error estimate we consider instead a bound on \(\epsilon _S\) obtained by summing the individual errors in absolute value. Following the same procedure used above we find the final result

quoted in the main text. We want to conclude this appendix with a similar result regarding the original Chebyshev based GIT from Ref. [3]. The estimate of the sample complexity reported there did not use the strategy employed here and, as a result, gave a somewhat pessimistic estimate \(S={\mathcal {O}}(N^3)\). Using the present strategy, together with the results in Appendix D.3 of [3], and restoring the energy dimensions in order to be compatible with our current conventions, we can show that a number of samples given by

is enough to control the statistical errors.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Hartse, J., Roggero, A. Faster spectral density calculation using energy moments.
*Eur. Phys. J. A* **59**, 41 (2023). https://doi.org/10.1140/epja/s10050-023-00952-6
