1 Introduction

In 1905, Lorentz [33] introduced a kinetic model for electron transport in metals, which he argued should, in the limit of low scatterer density, be described by the linear Boltzmann equation. Although Lorentz’ paper predates the discovery of quantum mechanics, the Lorentz gas has since served as a fundamental model for chaotic transport in both the classical and quantum setting, with applications to radiative transfer, neutron transport, semiconductor physics, and other models of transport in low-density matter. There has been significant progress in the derivation of the linear Boltzmann equation from first principles in the case of classical transport, ranging from the pioneering works [12, 25, 46] for random scatterer configurations to the more recent derivation of new, generalised kinetic transport equations that highlight the limited validity of the Boltzmann equation for periodic [13, 39] and other aperiodic scatterer configurations [40].

In the quantum setting, the only complete derivation of the linear Boltzmann equation in the low-density limit is for random scatterer configurations [19], which followed analogous results in the weak-coupling limit [20, 45]. The theory of quantum transport in periodic potentials, on the other hand, is well developed in condensed matter physics. The general consensus is that transport in periodic potentials, in the absence of any disorder, is ballistic: particles move almost freely, with minimal interaction with the scatterers, and there is no diffusion. In this paper we propose that this picture changes in the low-density limit, where under suitable rescaling of space and time units the quantum dynamics is asymptotically described by a random flight process with strong scattering, similar to the setting of random potentials in the work of Eng and Erdős [19]. Our work is motivated by Castella’s important studies [14,15,16] of both the weak-coupling and low-density limits for periodic potentials in the case of zero Bloch vector. Castella shows that the weak-coupling limit gives rise to a linear Boltzmann equation with memory [14]. The low-density limit, on the other hand, diverges [17], and only the introduction of physically motivated off-diagonal damping terms leads to a limit, which for small damping is compatible with the linear Boltzmann equation. As we will show here, the case of random or generic Bloch vector does not diverge in the Boltzmann–Grad limit, without any requirement for damping, but the limit process differs significantly from that described by the linear Boltzmann equation. Our results complement our recent paper [28], which establishes convergence rigorously up to second order in perturbation theory. The aim of the present paper is thus to give a derivation of all higher order terms and identify the full random flight process, conditional on an assumption on the distribution of lattice points in a particular scaling limit.
A rigorous verification of this hypothesis seems currently out of reach.

We will here focus on the case when the particle wavelength h is comparable to the potential range r, and much smaller than the fundamental cell of the lattice. This choice of scaling means that a wave-packet will evolve semiclassically far away from the scatterers, but that any interaction with the potential is truly quantum. This scaling is not traditionally discussed in homogenisation theory, where one usually assumes the characteristic wavelength is either much larger than the period (low-frequency homogenisation) or of the same or smaller order (high-frequency homogenisation); see for example [4,5,6, 8, 18, 26, 27, 30, 42, 43]. Our scaling is also different from that leading to the classic point scatterer (or s-wave scatterer, Fermi pseudo-potential), where the potential scale r is taken to zero with an appropriate renormalisation of the potential strength. In contrast to the setting of smooth finite-range potentials discussed in the present paper, periodic (and other) superpositions of point scatterers are exactly solvable [1,2,3, 24, 29, 31].

Our set-up is as follows. We assume throughout that the space dimension d is three or higher. Consider the Schrödinger equation

$$\begin{aligned} \frac{\mathrm {i}h}{2 \pi } \partial _t \psi (t,{{\varvec{x}}}) = H_{h,\lambda } \psi (t,{{\varvec{x}}}) \end{aligned}$$
(1.1)

where

$$\begin{aligned} H_{h,\lambda }=-\frac{h^2}{8\pi ^2}\; \Delta + \lambda \,{\text {Op}}(V) , \end{aligned}$$
(1.2)

\(\Delta \) is the standard d-dimensional Laplacian and \({\text {Op}}(V)\) denotes multiplication by the \({{\mathcal {L}}}\)-periodic potential

$$\begin{aligned} V({{\varvec{x}}}) = \sum _{{{\varvec{b}}}\in {{\mathcal {L}}}} W(r^{-1} ({{\varvec{x}}}-{{\varvec{b}}})), \end{aligned}$$
(1.3)

with W the single-site potential, scaled by \(r>0\), and \({{\mathcal {L}}}\) a full-rank Euclidean lattice in \({{\mathbb {R}}}^d\). We re-scale space units by a constant factor so that the co-volume of \({{\mathcal {L}}}\) (i.e. the volume of its fundamental cell) is one. (One example to keep in mind is the cubic lattice \({{\mathcal {L}}}= {{\mathbb {Z}}}^d\).) The coupling constant \(\lambda >0\) will remain fixed throughout, and h is a scaling parameter which measures the characteristic wavelength of the quantum particle. We will assume throughout that h is comparable with the potential scaling r.

We assume that W is in the Schwartz class \({{\mathcal {S}}}({{\mathbb {R}}}^d)\) and real-valued. This short-range assumption on the potential is key: a small wavepacket moving through this potential should experience long stretches of almost free evolution, followed by occasional interactions with localised scatterers. One could also consider adding an external potential living on the macroscopic scale, although we will not pursue that idea here.

We denote by

$$\begin{aligned} H_\lambda ^{\text {loc}}=-\frac{1}{8\pi ^2}\; \Delta + \lambda {\text {Op}}(W) \end{aligned}$$
(1.4)

the single-scatterer Hamiltonian for the unscaled potential W with \(h=1\). Its resolvent is denoted by \(G_\lambda (E)=(E-H_\lambda ^{\text {loc}})^{-1}\), and the corresponding T-operator is defined as

$$\begin{aligned} T(E)=\lambda {\text {Op}}(W) + \lambda ^2 {\text {Op}}(W)\, G_\lambda (E)\, {\text {Op}}(W). \end{aligned}$$
(1.5)

Rather than consider solutions of the Schrödinger equation directly, we instead consider the time evolution of a quantum observable A given by the Heisenberg evolution

$$\begin{aligned} A(t)=U_{h,\lambda }(t)\, A\, U_{h,\lambda }(-t) . \end{aligned}$$
(1.6)

Here \(U_{h,\lambda }(t)= \mathrm {e}^{-\frac{2\pi \mathrm {i}}{h} H_{h,\lambda } t}\) is the propagator corresponding to the Hamiltonian \(H_{h,\lambda }\).

Let us now take an observable \(A={\text {Op}}(a)\) given by the quantisation of a classical phase-space density \(a=a({{\varvec{x}}},{{\varvec{y}}})\), with \({{\varvec{x}}}\) denoting particle position and \({{\varvec{y}}}\) momentum. The question now is whether, in the low-density limit \(r= h\rightarrow 0\) and with the appropriate rescaling of length and time units, the phase-space density of the time-evolved quantum observable A(t) (i.e. its principal symbol) can be described asymptotically by a function \(f(t,{{\varvec{x}}},{{\varvec{y}}})\) governed by a random flight process. Eng and Erdős [19] confirmed this in the case of random scatterer configurations, and established that the density \(f(t,{{\varvec{x}}},{{\varvec{y}}})\) is a solution of the linear Boltzmann equation,

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \bigl (\partial _t+{{\varvec{y}}}\cdot \nabla _{{\varvec{x}}}\bigr ) f(t,{{\varvec{x}}},{{\varvec{y}}}) = \int _{{{\mathbb {R}}}^d} \big [ \Sigma ({{\varvec{y}}},{{\varvec{y}}}') f(t,{{\varvec{x}}},{{\varvec{y}}}') - \Sigma ({{\varvec{y}}}',{{\varvec{y}}}) f(t,{{\varvec{x}}},{{\varvec{y}}}) \big ] \,d{{\varvec{y}}}' &{} \\ f(0,{{\varvec{x}}},{{\varvec{y}}})=a({{\varvec{x}}},{{\varvec{y}}}) &{} \end{array}\right. } \end{aligned}$$
(1.7)

where \(\Sigma ({{\varvec{y}}},{{\varvec{y}}}')\) is the collision kernel of the single site potential \(W({{\varvec{x}}})\) and \({{\varvec{y}}}'\) and \({{\varvec{y}}}\) denote the incoming and outgoing momenta, respectively. The collision kernel is given by the formula

$$\begin{aligned} \Sigma ({{\varvec{y}}},{{\varvec{y}}}') =4 \pi ^2 \, |T({{\varvec{y}}},{{\varvec{y}}}')|^2 \delta \left( \tfrac{1}{2} \Vert {{\varvec{y}}}\Vert ^2-\tfrac{1}{2} \Vert {{\varvec{y}}}'\Vert ^2\right) \end{aligned}$$
(1.8)

where the T-matrix \(T({{\varvec{y}}},{{\varvec{y}}}')\) is the kernel of T(E) in momentum representation, with \(E=\tfrac{1}{2} \Vert {{\varvec{y}}}\Vert ^2\) (“on-shell”). The total scattering cross section is defined as

$$\begin{aligned} \Sigma _{{\text {tot}}}({{\varvec{y}}}) = \int _{{{\mathbb {R}}}^d} \Sigma ({{\varvec{y}}}',{{\varvec{y}}}) \, \mathrm {d}{{\varvec{y}}}'. \end{aligned}$$
(1.9)
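To make the collision kernel concrete, the following sketch evaluates (1.8)–(1.9) in the first Born approximation, where \(T({{\varvec{y}}},{{\varvec{y}}}')\approx \lambda {\hat{W}}({{\varvec{y}}}-{{\varvec{y}}}')\), for an illustrative Gaussian single-site potential (a convenient Schwartz choice, not the paper's general W). The energy-shell delta reduces \(\Sigma _{{\text {tot}}}\) to a sphere integral, estimated here by Monte Carlo.

```python
import math
import numpy as np

# First Born approximation: T(y, y') ≈ λ Ŵ(y - y'), so by (1.8)-(1.9) the
# total cross section reduces to an integral over the energy sphere ||y'|| = ||y||:
#   Σ_tot(y) ≈ 4π²λ² ||y||^(d-2) ∫_{S^{d-1}} |Ŵ(y - ||y||ω)|² dω.
# Illustrative choice: d = 3 and the Gaussian W(x) = exp(-π||x||²), whose
# Fourier transform in the e(z) = e^{2πi z} convention is Ŵ(k) = exp(-π||k||²).

rng = np.random.default_rng(0)
lam, d = 0.1, 3

def sigma_tot_born(y, n_samples=400_000):
    """Monte Carlo average of |Ŵ(y - ||y||ω)|² over the unit sphere."""
    speed = np.linalg.norm(y)
    omega = rng.normal(size=(n_samples, d))
    omega /= np.linalg.norm(omega, axis=1, keepdims=True)
    w_hat_sq = np.exp(-2 * np.pi * np.sum((y - speed * omega) ** 2, axis=1))
    sphere_area = 2 * np.pi ** (d / 2) / math.gamma(d / 2)
    return 4 * np.pi**2 * lam**2 * speed ** (d - 2) * sphere_area * w_hat_sq.mean()

sigma = sigma_tot_born(np.array([1.0, 0.0, 0.0]))
```

For this Gaussian the sphere integral has the closed form \((1-\mathrm {e}^{-8\pi })/2\), giving \(\Sigma _{{\text {tot}}}\approx 2\pi ^2\lambda ^2\), against which the Monte Carlo estimate can be checked.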

Solutions of the linear Boltzmann equation can be written in terms of the collision series

$$\begin{aligned} f_{\mathrm {LB}}(t,{{\varvec{x}}},{{\varvec{y}}}) = \sum _{k=1}^\infty f_{\mathrm {LB}}^{(k)}(t,{{\varvec{x}}},{{\varvec{y}}}) \end{aligned}$$
(1.10)

with the zero-collision term

$$\begin{aligned} f_{\mathrm {LB}}^{(1)}(t,{{\varvec{x}}},{{\varvec{y}}}) = a( {{\varvec{x}}}- t {{\varvec{y}}},{{\varvec{y}}})\, \mathrm {e}^{- t \Sigma _{{\text {tot}}}({{\varvec{y}}})} , \end{aligned}$$
(1.11)

and the \((k-1)\)-collision term

$$\begin{aligned} f_{\mathrm {LB}}^{(k)}(t,{{\varvec{x}}},{{\varvec{y}}}) =&\int _{({{\mathbb {R}}}^d)^k}\int _{{{\mathbb {R}}}_{\ge 0}^k} \delta ({{\varvec{y}}}-{{\varvec{y}}}_1)\, a\bigg ( {{\varvec{x}}}-\sum _{j=1}^k u_j {{\varvec{y}}}_j ,{{\varvec{y}}}_k \bigg ) \nonumber \\&\times \rho _{\mathrm {LB}}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)\, \delta \bigg (t-\sum _{j=1}^k u_j\bigg ) \, \mathrm {d}{{\varvec{u}}}\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k \end{aligned}$$
(1.12)

with

$$\begin{aligned} \rho _{\mathrm {LB}}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)= \prod _{i=1}^k \mathrm {e}^{-u_i \Sigma _{{\text {tot}}}({{\varvec{y}}}_i)}\; \prod _{j=1}^{k-1} \Sigma ({{\varvec{y}}}_j,{{\varvec{y}}}_{j+1}) . \end{aligned}$$
(1.13)

The product form of the density \(\rho _{\mathrm {LB}}^{(k)}\) shows that the corresponding random flight process is Markovian, and describes a particle moving along a random piecewise linear curve with momenta \({{\varvec{y}}}_i\) and exponentially distributed flight times \(u_i\).
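The Markovian flight process just described is straightforward to simulate. The sketch below uses a hypothetical constant total cross section and isotropic scattering on the energy shell (rather than the kernel \(\Sigma \) of a concrete W) and checks the zero-collision factor of (1.11).

```python
import numpy as np

# Monte Carlo sketch of the Markovian random flight process encoded by
# (1.11)-(1.13): exponential flight times with rate Σ_tot and, purely for
# illustration, isotropic scattering on the shell ||y|| = 1; the actual
# scattering kernel Σ(y, y') of a potential W is not used here.

rng = np.random.default_rng(1)
sigma_tot = 1.0            # hypothetical constant total cross section
t, d = 1.5, 3

def sample_path(t):
    """Run one path up to time t; return (position, momentum, #collisions)."""
    x = np.zeros(d)
    y = np.array([1.0, 0.0, 0.0])
    remaining, collisions = t, 0
    while True:
        u = rng.exponential(1.0 / sigma_tot)   # free flight time
        if u >= remaining:                     # no further collision before t
            return x + remaining * y, y, collisions
        x = x + u * y
        remaining -= u
        collisions += 1
        y = rng.normal(size=d)                 # new momentum, isotropic on shell
        y /= np.linalg.norm(y)

n = 50_000
survived = sum(sample_path(t)[2] == 0 for _ in range(n)) / n
# The zero-collision fraction should match the factor e^{-t Σ_tot} in (1.11).
```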

The principal result of this paper is that, for periodic potentials of the form (1.3) and using the same scaling as in the random setting [19], there exists a limiting random flight process describing macroscopic transport. The derivation requires a hypothesis on the fine-scale distribution of lattice points which is discussed in detail as Assumption 1 in Sect. 6.

Theorem 1

(Main result) Under Assumption 1 on the distribution of lattice points, there exists an evolution operator L(t), distinct from that of the linear Boltzmann equation, such that for any \(a\in {{\mathcal {S}}}({{\mathbb {R}}}^d\times {{\mathbb {R}}}^d)\) we have, in the limit \(r=h\rightarrow 0\),

$$\begin{aligned} \Vert U_{h,\lambda }(t r^{1-d})\, {\text {Op}}_{r,h}(a)\, U_{h,\lambda }(-t r^{1-d}) - {\text {Op}}_{r,h}(L(t) a) \Vert _{{\text {HS}}}\rightarrow 0 \end{aligned}$$
(1.14)

where \({\text {Op}}_{r,h}(a)\) is the Weyl quantisation of the phase-space symbol a in the Boltzmann–Grad scaling and \(\Vert \cdot \Vert _{{{\text {HS}}}}\) is the Hilbert–Schmidt norm.

The precise scaling of the quantum observable \({\text {Op}}_{r,h}(a)\) is explained in Sect. 2. The limiting evolution operator L(t) is given by the series

$$\begin{aligned} L(t)a({{\varvec{x}}},{{\varvec{y}}})= f(t,{{\varvec{x}}},{{\varvec{y}}}) = \sum _{k=1}^\infty f^{(k)}(t,{{\varvec{x}}},{{\varvec{y}}}) , \end{aligned}$$
(1.15)

where \(f^{(k)}\) coincides with \(f_{\mathrm {LB}}^{(k)}\) for \(k=1\),

$$\begin{aligned} f^{(1)}(t,{{\varvec{x}}},{{\varvec{y}}}) = a( {{\varvec{x}}}- t {{\varvec{y}}},{{\varvec{y}}})\, \mathrm {e}^{- t \Sigma _{{\text {tot}}}({{\varvec{y}}})} , \end{aligned}$$
(1.16)

but deviates significantly at higher order. For \(k\ge 2\), the \((k-1)\)-collision term is given by

$$\begin{aligned} f^{(k)}(t,{{\varvec{x}}},{{\varvec{y}}}) =&\frac{1}{k!} \sum _{\ell ,m=1}^k \int _{({{\mathbb {R}}}^d)^k}\int _{{{\mathbb {R}}}_{\ge 0}^k} \delta ({{\varvec{y}}}-{{\varvec{y}}}_\ell )\, a\bigg ( {{\varvec{x}}}-\sum _{j=1}^k u_j {{\varvec{y}}}_j ,{{\varvec{y}}}_m \bigg ) \nonumber \\&\times \rho _{\ell m}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)\, \delta \bigg (t-\sum _{j=1}^k u_j\bigg ) \, \mathrm {d}{{\varvec{u}}}\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k , \end{aligned}$$
(1.17)

with the collision densities

$$\begin{aligned} \rho _{\ell m}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = \big | g_{\ell m}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \big |^2 \,\omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)\, \prod _{i=1}^{k} \mathrm {e}^{- u_i \Sigma _{{\text {tot}}}({{\varvec{y}}}_i)} . \end{aligned}$$
(1.18)

Here

$$\begin{aligned} \omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)= \prod _{j=1}^{k-1} \delta \big (\tfrac{1}{2} \Vert {{\varvec{y}}}_j\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_{j+1}\Vert ^2\big ) \end{aligned}$$
(1.19)

and \(g_{\ell m}^{(k)}\) are the coefficients of the matrix valued function

$$\begin{aligned} {{\mathbb {G}}}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = \frac{1}{(2\pi \mathrm {i})^k} \oint \cdots \oint \mathrm {e}^{u_1 z_1+\cdots +u_k z_k}\, \big ( {{\mathbb {D}}}({{\varvec{z}}})-{{\mathbb {W}}}\big )^{-1} \, \mathrm {d}z_1\cdots \mathrm {d}z_k , \end{aligned}$$
(1.20)

where \({{\mathbb {D}}}({{\varvec{z}}})={\text {diag}}(z_1,\ldots ,z_k)\) and \({{\mathbb {W}}}= {{\mathbb {W}}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)\) with entries

$$\begin{aligned} w_{ij} = {\left\{ \begin{array}{ll} 0 &{} (i=j) \\ -2\pi \mathrm {i}T({{\varvec{y}}}_i,{{\varvec{y}}}_j) &{} (i\ne j) . \end{array}\right. } \end{aligned}$$
(1.21)

The paths of integration in (1.20) are circles around the origin with radius strictly greater than \(r_0= k \max |w_{ij}|\). The matrix \({{\mathbb {G}}}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)\) is in fact the derivative \(\partial _{u_1}\cdots \partial _{u_k}\) of the Borel transform of the function \(F({{\varvec{u}}})=({{\mathbb {D}}}({{\varvec{u}}})^{-1} - {{\mathbb {W}}})^{-1}\). We furthermore note that the above formulas are independent of the choice of scatterer configuration \({{\mathcal {L}}}\), as in the classical setting. For the one-collision terms we will moreover derive the following explicit representation in terms of the linear Boltzmann density (1.13) and J-Bessel functions,

$$\begin{aligned} \rho _{1 1}^{(2)}({{\varvec{u}}},{{\varvec{y}}}_1,{{\varvec{y}}}_2) = \rho _\mathrm {LB}^{(2)}({{\varvec{u}}},{{\varvec{y}}}_1,{{\varvec{y}}}_2) \, \bigg | \frac{u_1 T({{\varvec{y}}}_2,{{\varvec{y}}}_1)}{u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2)}\bigg | \, \big | J_1\big (4\pi [u_1 u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1)]^{1/2} \big ) \big |^2 , \end{aligned}$$
(1.22)

and

$$\begin{aligned} \rho _{1 2}^{(2)}({{\varvec{u}}},{{\varvec{y}}}_1,{{\varvec{y}}}_2) = \rho _\mathrm {LB}^{(2)}({{\varvec{u}}},{{\varvec{y}}}_1,{{\varvec{y}}}_2) \, \big | J_0\big (4\pi [u_1 u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1)]^{1/2} \big ) \big |^2 . \end{aligned}$$
(1.23)

The remaining matrix elements can be computed via the identities

$$\begin{aligned}&\rho _{22}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) = \rho _{11}^{(2)}(u_2,u_1,{{\varvec{y}}}_2,{{\varvec{y}}}_1), \end{aligned}$$
(1.24)
$$\begin{aligned}&\rho _{21}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) = \rho _{12}^{(2)}(u_2,u_1,{{\varvec{y}}}_2,{{\varvec{y}}}_1) . \end{aligned}$$
(1.25)
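For \(k=2\), the contour-integral description of \({{\mathbb {G}}}^{(k)}\) as the \({{\varvec{u}}}\)-derivative of the Borel transform of \(F({{\varvec{u}}})=({{\mathbb {D}}}({{\varvec{u}}})^{-1}-{{\mathbb {W}}})^{-1}\) can be checked numerically against the Bessel series underlying (1.23). The sketch below uses arbitrary illustrative complex values for \(w_{12}, w_{21}\) in place of \(-2\pi \mathrm {i}\,T({{\varvec{y}}}_i,{{\varvec{y}}}_j)\).

```python
import math
import numpy as np

# For k = 2 the (1,2) coefficient of the Borel-transform derivative is
#   g12(u) = (2πi)^{-2} ∮∮ e^{u1 z1 + u2 z2} [(D(z) - W)^{-1}]_{12} dz1 dz2
#          = w12 Σ_n (w12 w21 u1 u2)^n / (n!)²   (a J0 Bessel series),
# with circles of radius > r0 = 2 max|w_ij|. w12, w21 are illustrative
# complex stand-ins for -2πi T(y_i, y_j).

w12, w21 = 0.3 - 0.2j, -0.1 + 0.4j
u1, u2 = 0.7, 1.3
R = 2 * max(abs(w12), abs(w21)) + 1.0            # safely beyond r0

N = 400
z = R * np.exp(2j * np.pi * np.arange(N) / N)    # quadrature nodes on the circle
dz = 2j * np.pi * z / N                          # dz for the trapezoidal rule

g12_contour = 0.0 + 0.0j
for z1, dz1 in zip(z, dz):
    entry12 = w12 / (z1 * z - w12 * w21)         # [(D(z)-W)^{-1}]_{12}, z2-vectorised
    g12_contour += np.sum(np.exp(u1 * z1 + u2 * z) * entry12 * dz1 * dz)
g12_contour /= (2j * np.pi) ** 2

g12_series = sum(w12 * (w12 * w21 * u1 * u2) ** n / math.factorial(n) ** 2
                 for n in range(25))
```

Squaring the series form and multiplying by \(\omega _2\) and the exponential damping factors recovers, with \(w_{ij}=-2\pi \mathrm {i}\,T({{\varvec{y}}}_i,{{\varvec{y}}}_j)\), exactly the \(J_0\) factor of (1.23).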

A notable difference from the solution (1.12) of the linear Boltzmann equation is that in (1.17) there is a non-zero probability that the final momentum \({{\varvec{y}}}_\ell \) is equal to the initial momentum \({{\varvec{y}}}_m\).

The paper is organised as follows. We will first explain in Sect. 2 the precise scaling needed to observe our limiting process and state the main result. In Sect. 3 we recall the well-known Floquet–Bloch decomposition for periodic potentials and in Sect. 4 we recall an explicit formula for the T-operator in our specific setting. Section 5 explains the perturbative approach to calculate the series expansion for the time evolution of A(t). This is followed by a discussion of the main hypothesis of this study in Sect. 6, which in brief can be viewed as a phase-space generalisation of the Berry–Tabor conjecture for the statistics of quantum energy levels of integrable systems [7, 37]. In Sect. 7 we provide an explicit computation of terms appearing in the formal series, and in Sect. 8 we prove that the series is absolutely convergent provided \(\lambda \) is small enough. In Sect. 9 we take the low-density limit using the formulas from Sect. 7 and show how the limiting object can be written in terms of the T-operator described in Sect. 4. In Sect. 11 we establish positivity of this limiting expansion and derive the formulas for (1.17). A key observation is that the one-collision term is distinctly different from the corresponding term for the linear Boltzmann equation. We conclude the paper with a discussion and outlook in Sect. 12. The appendix provides detailed background on the combinatorial structures used in this paper.

2 Microlocal Boltzmann–Grad Scaling

The phase space of the underlying classical Hamiltonian dynamics is \({\text {T}}({{\mathbb {R}}}^d)={{\mathbb {R}}}^d\times {{\mathbb {R}}}^d\), where the first component parametrises the position \({{\varvec{x}}}\) and the second the momentum \({{\varvec{y}}}\) of the particle. Given a function \(a:{\text {T}}({{\mathbb {R}}}^d)\rightarrow {{\mathbb {R}}}\), we associate with it the observable \(A={\text {Op}}(a)\) acting on functions \(f\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\) through the Weyl quantisation

$$\begin{aligned} {\text {Op}}(a) f({{\varvec{x}}}) = \int _{{\text {T}}({{\mathbb {R}}}^d)} a\left( \tfrac{1}{2}({{\varvec{x}}}+{{\varvec{x}}}'),{{\varvec{y}}}\right) \, \mathrm {e}(({{\varvec{x}}}-{{\varvec{x}}}')\cdot {{\varvec{y}}})\, f({{\varvec{x}}}')\, \mathrm {d}{{\varvec{x}}}' \mathrm {d}{{\varvec{y}}}, \end{aligned}$$
(2.1)

where we have used the shorthand \(\mathrm {e}(z)=\mathrm {e}^{2\pi \mathrm {i}z}\). The Weyl quantisation is useful for capturing the phase-space distribution of quantum states. In the case of free quantum dynamics, with \(\lambda =0\) and \(h=1\), we have for example the well-known quantum-classical correspondence principle

$$\begin{aligned} U_{1,0}(t) {\text {Op}}(a) U_{1,0}(-t) = {\text {Op}}(L_0(t)a) \end{aligned}$$
(2.2)

with the classical free evolution \([L_0(t) a]({{\varvec{x}}},{{\varvec{y}}})=a({{\varvec{x}}}-t{{\varvec{y}}},{{\varvec{y}}})\). It is convenient to incorporate the scaling parameter \(h>0\) in (1.2) by setting

$$\begin{aligned} {\text {Op}}_h(a) f({{\varvec{x}}}) = h^{-d/2} \int _{{\text {T}}({{\mathbb {R}}}^d)} a\left( \tfrac{1}{2}({{\varvec{x}}}+{{\varvec{x}}}'),{{\varvec{y}}}\right) \, \mathrm {e}_h(({{\varvec{x}}}-{{\varvec{x}}}')\cdot {{\varvec{y}}})\, f({{\varvec{x}}}')\, \mathrm {d}{{\varvec{x}}}' \mathrm {d}{{\varvec{y}}}\end{aligned}$$
(2.3)

with \(\mathrm {e}_h(z)=\mathrm {e}^{\tfrac{2\pi \mathrm {i}}{h} z}\). Note that we have \({\text {Op}}_h(a)={\text {Op}}(D_{1,h} a)\) for \(D_{1,h} a({{\varvec{x}}},{{\varvec{y}}}) = h^{d/2} \, a({{\varvec{x}}}, h {{\varvec{y}}})\). We refer to \(D_{1,h}\) as the microlocal scaling. In particular, (2.2) becomes

$$\begin{aligned} U_{h,0}(t) {\text {Op}}_h(a) U_{h,0}(-t) = {\text {Op}}_h(L_0(t)a). \end{aligned}$$
(2.4)

The mean free path length of a particle travelling in a potential of the form (1.3) is asymptotic (for r small) to the inverse total scattering cross section of the single-site potential W [35, 40]; the total scattering cross section in turn equals \(r^{d-1}\), up to a constant factor. In the low-density limit it is natural to measure length in units of the mean free path length or, equivalently, in units of \(r^{1-d}\). We refer to the corresponding scaling \(D_{r,1}\) defined by \(D_{r,1} a({{\varvec{x}}},{{\varvec{y}}}) = r^{d(d-1)/2} \, a( r^{d-1} {{\varvec{x}}}, {{\varvec{y}}})\) as the Boltzmann–Grad scaling, and the combined scaling

$$\begin{aligned} D_{r,h} a({{\varvec{x}}},{{\varvec{y}}}) = r^{d(d-1)/2} h^{d/2} \, a( r^{d-1} {{\varvec{x}}}, h {{\varvec{y}}}), \end{aligned}$$
(2.5)

as the microlocal Boltzmann–Grad scaling. We define the corresponding scaled Weyl quantisation by \({\text {Op}}_{r,h}={\text {Op}}\circ D_{r,h}\). The quantum-classical correspondence (2.2) for the free dynamics reads in this scaling

$$\begin{aligned} U_{h,0}(t r^{1-d}) {\text {Op}}_{r,h}(a) U_{h,0}(-t r^{1-d}) = {\text {Op}}_{r,h} (L_0(t)a). \end{aligned}$$
(2.6)
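The free correspondence (2.2)/(2.6) can be illustrated numerically. The sketch below works in \(d=1\) (the paper assumes \(d\ge 3\), but the free evolution formula is dimension-generic) and propagates a Gaussian wave packet with the Fourier multiplier \(\mathrm {e}^{-\pi \mathrm {i}t\xi ^2}\) that corresponds to \(H_{1,0}\) in the conventions above, checking that the packet is transported at its mean momentum, as in \([L_0(t) a]({{\varvec{x}}},{{\varvec{y}}})=a({{\varvec{x}}}-t{{\varvec{y}}},{{\varvec{y}}})\).

```python
import numpy as np

# 1D illustration of the free quantum-classical correspondence (2.2):
# with H = -Δ/(8π²), h = 1 and e(z) = e^{2πi z}, the propagator U_{1,0}(t)
# acts in Fourier space as multiplication by e^{-πi t ξ²}; a wave packet of
# mean momentum y0 should therefore drift to x ≈ t*y0.

N, box = 2048, 40.0
x = (np.arange(N) - N // 2) * (box / N)
xi = np.fft.fftfreq(N, d=box / N)

y0, t = 1.5, 3.0
psi = np.exp(-np.pi * x**2) * np.exp(2j * np.pi * y0 * x)  # packet at 0, momentum y0

psi_t = np.fft.ifft(np.exp(-1j * np.pi * t * xi**2) * np.fft.fft(psi))
centre = np.sum(x * np.abs(psi_t) ** 2) / np.sum(np.abs(psi_t) ** 2)
# Free classical transport predicts centre ≈ t * y0.
```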

The key point here is that we require an extra scaling in time relative to the mean free path. The challenge for the present study is thus to understand the asymptotics of \(A(t r^{1-d})\) as in (1.6), for every fixed \(t>0\), with initial data \(A={\text {Op}}_{r,h}(a)\) and a in the Schwartz class \({{\mathcal {S}}}({\text {T}}({{\mathbb {R}}}^d))\) (i.e. a is infinitely differentiable and all its derivatives decay rapidly as \(\Vert {{\varvec{x}}}\Vert ,\Vert {{\varvec{y}}}\Vert \rightarrow \infty \)). The question is, more precisely, whether there is a family of linear operators L(t) so that

$$\begin{aligned} \Vert U_{h,\lambda }(t r^{1-d})\, {\text {Op}}_{r,h}(a)\, U_{h,\lambda }(-t r^{1-d}) - {\text {Op}}_{r,h}(L(t) a) \Vert _{{\text {HS}}}\rightarrow 0 \end{aligned}$$
(2.7)

in the Hilbert–Schmidt norm, defined as

$$\begin{aligned} \Vert X \Vert _{{\text {HS}}}= \langle X, X\rangle _{{\text {HS}}}^{1/2} , \qquad \langle X, Y\rangle _{{\text {HS}}}= {\text {Tr}}(X^\dagger \; Y). \end{aligned}$$
(2.8)

To understand (2.7), it is sufficient to establish the convergence of

$$\begin{aligned} \langle B, A(t r^{1-d}) \rangle _{{\text {HS}}}\rightarrow \langle b, L(t) a\rangle \end{aligned}$$
(2.9)

with A(t) as in (1.6), \(A={\text {Op}}_{r,h}(a)\), \(B={\text {Op}}_{r,h}(b)\), and \(a,b\in {{\mathcal {S}}}({\text {T}}({{\mathbb {R}}}^d))\). The inner product on the right hand side of (2.9) is defined by

$$\begin{aligned} \langle f, g\rangle = \int _{{\text {T}}({{\mathbb {R}}}^d)} \overline{f({{\varvec{x}}},{{\varvec{y}}})}\, g({{\varvec{x}}},{{\varvec{y}}})\, \mathrm {d}{{\varvec{x}}}\mathrm {d}{{\varvec{y}}}. \end{aligned}$$
(2.10)

We direct the reader towards [28, Appendix A] for an explanation of how (2.9) can be reformulated as a statement about solutions of the Schrödinger equation. As mentioned previously, we will here restrict our attention to the case when r is of the same order of magnitude as h, i.e. \(r=h\, c_0\) for a fixed effective scattering radius \(c_0\). By adjusting W, we may in fact assume without loss of generality that \(c_0=1\). This is precisely the scaling used in [19] for the case of random potentials, although in a slightly different formulation in terms of Husimi functions for the phase-space representation of quantum states.

3 Floquet–Bloch Decomposition

Floquet–Bloch theory allows us to reduce the quantum evolution in periodic potentials to invariant Hilbert spaces \({{\mathcal {H}}}_{{\varvec{\alpha }}}\) of quasiperiodic functions \(\psi \), satisfying

$$\begin{aligned} \psi ({{\varvec{x}}}+{{\varvec{b}}}) = \mathrm {e}({{\varvec{b}}}\cdot {{\varvec{\alpha }}}) \psi ({{\varvec{x}}}) , \end{aligned}$$
(3.1)

for all \({{\varvec{b}}}\in {{\mathcal {L}}}\), where \({{\varvec{\alpha }}}\in {{\mathbb {T}}}^*={{\mathbb {R}}}^d/{{\mathcal {L}}}^*\) is the quasimomentum and

$$\begin{aligned} {{\mathcal {L}}}^*=\{ {{\varvec{k}}}\in {{\mathbb {R}}}^d \mid {{\varvec{k}}}\cdot {{\varvec{b}}}\in {{\mathbb {Z}}}\text { for all } {{\varvec{b}}}\in {{\mathcal {L}}}\} \end{aligned}$$
(3.2)

is the dual (or reciprocal) lattice of \({{\mathcal {L}}}\). We denote by \({{\mathcal {H}}}_{{\varvec{\alpha }}}\) the Hilbert space of such functions that have finite \({\text {L}}^2\)-norm with respect to the inner product

$$\begin{aligned} \langle \psi , \varphi \rangle _{{\varvec{\alpha }}}= \int _{{{\mathbb {T}}}} \overline{\psi ({{\varvec{x}}})}\, \varphi ({{\varvec{x}}})\, \mathrm {d}{{\varvec{x}}}, \end{aligned}$$
(3.3)

with \({{\mathbb {T}}}={{\mathbb {R}}}^d/{{\mathcal {L}}}\). We define the corresponding Hilbert–Schmidt product for linear operators on \({{\mathcal {H}}}_{{\varvec{\alpha }}}\) by

$$\begin{aligned} \langle X, Y \rangle _{{{\text {HS}}},{{\varvec{\alpha }}}} = {\text {Tr}}(X^\dagger \, Y). \end{aligned}$$
(3.4)

For a given quasi-momentum \({{\varvec{\alpha }}}\in {{\mathbb {T}}}^*\), consider the Bloch functions

$$\begin{aligned} \varphi _{{\varvec{k}}}^{{\varvec{\alpha }}}({{\varvec{x}}})=\mathrm {e}(({{\varvec{k}}}+{{\varvec{\alpha }}})\cdot {{\varvec{x}}}), \qquad {{\varvec{k}}}\in {{\mathcal {L}}}^*, \end{aligned}$$
(3.5)

and define the Bloch projection \(\Pi _{{\varvec{\alpha }}}: {{\mathcal {S}}}({{\mathbb {R}}}^d) \rightarrow {{\mathcal {H}}}_{{\varvec{\alpha }}}\) by

$$\begin{aligned} \Pi _{{\varvec{\alpha }}}f({{\varvec{x}}}) = \sum _{{{\varvec{k}}}\in {{\mathcal {L}}}^*} \langle \varphi _{{\varvec{k}}}^{{\varvec{\alpha }}}, f\rangle \; \varphi _{{\varvec{k}}}^{{\varvec{\alpha }}}({{\varvec{x}}}) \end{aligned}$$
(3.6)

with inner product

$$\begin{aligned} \langle f, g\rangle = \int _{{{\mathbb {R}}}^d} \overline{f({{\varvec{x}}})}\, g({{\varvec{x}}})\, \mathrm {d}{{\varvec{x}}}. \end{aligned}$$
(3.7)

Note that, by Poisson summation,

$$\begin{aligned} \Pi _{{\varvec{\alpha }}}f({{\varvec{x}}}) = \sum _{{{\varvec{b}}}\in {{\mathcal {L}}}} \mathrm {e}({{\varvec{b}}}\cdot {{\varvec{\alpha }}}) f({{\varvec{x}}}-{{\varvec{b}}}), \end{aligned}$$
(3.8)

and hence, by integrating over \({{\varvec{\alpha }}}\in {{\mathbb {T}}}^*\), one regains \(f({{\varvec{x}}})\). The kernel of \(\Pi _{{\varvec{\alpha }}}\) is thus

$$\begin{aligned} \Pi _{{\varvec{\alpha }}}({{\varvec{x}}},{{\varvec{x}}}') = \sum _{{{\varvec{b}}}\in {{\mathcal {L}}}} \mathrm {e}({{\varvec{b}}}\cdot {{\varvec{\alpha }}}) \delta _{{\varvec{b}}}({{\varvec{x}}}-{{\varvec{x}}}'). \end{aligned}$$
(3.9)
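The equality of the eigenfunction expansion (3.6) and the Poisson-summation form (3.8) is easy to verify numerically. The sketch below does so in \(d=1\) with \({{\mathcal {L}}}={{\mathbb {Z}}}\) (hence \({{\mathcal {L}}}^*={{\mathbb {Z}}}\)) for a Gaussian test function.

```python
import numpy as np

# Check in d = 1 with L = Z that the Bloch expansion (3.6) agrees with the
# Poisson-summation form (3.8). With the convention e(z) = e^{2πi z}, the
# coefficient <φ_k^α, f> equals f̂(k + α), the Fourier transform of f; the
# Gaussian e^{-π u²} is self-dual under this transform.

alpha, x = 0.3, 0.45
f = lambda u: np.exp(-np.pi * u**2)        # test function
f_hat = lambda k: np.exp(-np.pi * k**2)    # its (self-dual) Fourier transform

K = np.arange(-30, 31)
bloch_sum = np.sum(f_hat(K + alpha) * np.exp(2j * np.pi * (K + alpha) * x))

B = np.arange(-30, 31)
poisson_sum = np.sum(np.exp(2j * np.pi * B * alpha) * f(x - B))
```

Both truncated sums converge extremely fast for the Gaussian, so the two expressions agree to machine precision.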

Instead of (2.9) the plan is now to consider the convergence

$$\begin{aligned} \langle B,\Pi _{{\varvec{\alpha }}}A(t r^{1-d}) \rangle _{{\text {HS}}}\rightarrow \langle b,L(t) a\rangle \end{aligned}$$
(3.10)

for typical \({{\varvec{\alpha }}}\). The advantage is that we are now working in a Hilbert space with a discrete basis. One can then obtain information on (2.9) by integrating over \({{\varvec{\alpha }}}\). In fact we will argue that the right-hand side of (3.10) is independent of \({{\varvec{\alpha }}}\) for almost every \({{\varvec{\alpha }}}\).

4 The T-operator for a Single Scatterer

Recall from (1.5) that the T-operator for the single scatterer potential W is defined by

$$\begin{aligned} T(E)=\lambda \,{\text {Op}}(W) + \lambda ^2 {\text {Op}}(W)\, G_\lambda (E)\, {\text {Op}}(W) \end{aligned}$$
(4.1)

in the half-plane \({\text {Im}}E>0\) where the resolvent \(G_\lambda (E)\) is resonance-free, and then extended by analytic continuation. The Born series for \(G_\lambda (E)\) leads to the formal series expansion

$$\begin{aligned} T(E) = \lambda \,{\text {Op}}(W) \sum _{n=0}^\infty (\lambda G_0(E){\text {Op}}(W))^n . \end{aligned}$$
(4.2)

Using \(\varphi _{{\varvec{y}}}({{\varvec{x}}})=\mathrm {e}({{\varvec{y}}}\cdot {{\varvec{x}}})\) as a basis for the momentum representation, the free resolvent \(G_0(E)\) has the kernel

$$\begin{aligned} G_0({{\varvec{y}}},{{\varvec{y}}}',E) = \langle \varphi _{{\varvec{y}}}, G_0(E) \varphi _{{{\varvec{y}}}'}\rangle = \frac{\delta ({{\varvec{y}}}-{{\varvec{y}}}')}{E - \tfrac{1}{2} \Vert {{\varvec{y}}}\Vert ^2} , \end{aligned}$$
(4.3)

and similarly \({\text {Op}}(W)\) has kernel \(\langle \varphi _{{\varvec{y}}}, {\text {Op}}(W) \varphi _{{{\varvec{y}}}'}\rangle ={\hat{W}}({{\varvec{y}}}-{{\varvec{y}}}')\). The T-matrix is defined as the kernel of T(E) in momentum representation, i.e.,

$$\begin{aligned} T({{\varvec{y}}},{{\varvec{y}}}',E)=\langle \varphi _{{\varvec{y}}}, T(E) \varphi _{{{\varvec{y}}}'}\rangle . \end{aligned}$$
(4.4)

It will be convenient to set \(E=\tfrac{1}{2}\Vert {{\varvec{y}}}\Vert ^2+\mathrm {i}\gamma \), with \({\text {Re}}\gamma \ge 0\), and define

$$\begin{aligned} T^\gamma ({{\varvec{y}}},{{\varvec{y}}}')= T\left( {{\varvec{y}}},{{\varvec{y}}}',\tfrac{1}{2}\Vert {{\varvec{y}}}\Vert ^2+\mathrm {i}\gamma \right) , \qquad g^\gamma ({{\varvec{y}}},{{\varvec{y}}}')= \frac{1}{\tfrac{1}{2}\Vert {{\varvec{y}}}\Vert ^2 - \tfrac{1}{2} \Vert {{\varvec{y}}}'\Vert ^2 + \mathrm {i}\gamma } . \end{aligned}$$
(4.5)

The corresponding perturbation series is

$$\begin{aligned} T^\gamma ({{\varvec{y}}},{{\varvec{y}}}')= \sum _{n=1}^\infty \lambda ^n T_n^\gamma ({{\varvec{y}}},{{\varvec{y}}}'), \end{aligned}$$
(4.6)

where the \(T_n^\gamma \) are defined by

$$\begin{aligned} T_1^\gamma ({{\varvec{y}}}_0,{{\varvec{y}}}_1) = {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_1) \end{aligned}$$
(4.7)

and for \(n\ge 2\)

$$\begin{aligned}&T_n^\gamma ({{\varvec{y}}}_0,{{\varvec{y}}}_n) = \int _{({{\mathbb {R}}}^d)^{n-1}} {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_1) \cdots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_n) \nonumber \\&\quad \times \bigg ( \prod _{j=1}^{n-1} g^\gamma ({{\varvec{y}}}_0,{{\varvec{y}}}_j) \bigg ) \,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_{n-1} . \end{aligned}$$
(4.8)

The analytic continuation of \(T^\gamma \) from \({\text {Re}}\,\gamma >0\) to the boundary \({\text {Re}}\,\gamma =0\) is obtained via the integral representation

$$\begin{aligned} T_n^\gamma ({{\varvec{y}}}_0,{{\varvec{y}}}_n)= & {} (-2 \pi \mathrm {i})^{n-1}\int _{{{\mathbb {R}}}_{\ge 0}^{n-1}} \bigg \{ \int _{({{\mathbb {R}}}^d)^{n-1}} {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_1) \cdots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_n) \nonumber \\&\times e\bigg [\sum _{j=1}^{n-1} \theta _j \bigg ( \tfrac{1}{2}\Vert {{\varvec{y}}}_0\Vert ^2 - \tfrac{1}{2} \Vert {{\varvec{y}}}_j\Vert ^2 + \mathrm {i}\gamma \bigg ) \bigg ] \,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_{n-1} \bigg \} \,\mathrm {d}\theta _1\cdots \mathrm {d}\theta _{n-1}.\nonumber \\ \end{aligned}$$
(4.9)
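The passage from the product of resolvent factors in (4.8) to the \(\theta \)-integral representation (4.9) rests on the elementary identity \((-2\pi \mathrm {i})\int _0^\infty \mathrm {e}\big (\theta (a+\mathrm {i}\gamma )\big )\,\mathrm {d}\theta = (a+\mathrm {i}\gamma )^{-1}\), valid for \({\text {Re}}\,\gamma >0\) and applied in each variable \(\theta _j\) with \(a=\tfrac{1}{2}\Vert {{\varvec{y}}}_0\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_j\Vert ^2\). A direct quadrature check with sample values:

```python
import numpy as np

# Verify (-2πi) ∫_0^∞ e^{2πi θ (a + iγ)} dθ = 1/(a + iγ) = g^γ for Re γ > 0,
# the identity linking (4.8) to (4.9), by composite trapezoidal quadrature
# with a truncated θ-range (sample values a = 0.7, γ = 0.2).

a, gamma = 0.7, 0.2
n, theta_max = 400_000, 40.0          # e^{-2πγθ} is far below 1e-20 at θ = 40
theta = np.linspace(0.0, theta_max, n + 1)
vals = np.exp(2j * np.pi * theta * (a + 1j * gamma))
integral = (theta_max / n) * (np.sum(vals) - 0.5 * (vals[0] + vals[-1]))
lhs = -2j * np.pi * integral
rhs = 1.0 / (a + 1j * gamma)
```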

The choice \(\gamma =0\) is referred to as “on-shell”. We drop the superscript if \(\gamma =0\), i.e., \(T({{\varvec{y}}},{{\varvec{y}}}')=T^0({{\varvec{y}}},{{\varvec{y}}}')\), \(T_n({{\varvec{y}}},{{\varvec{y}}}')=T_n^0({{\varvec{y}}},{{\varvec{y}}}')\). The T-matrix \(T({{\varvec{y}}},{{\varvec{y}}}')\) is then related to the on-shell scattering matrix via

$$\begin{aligned} S({{\varvec{y}}},{{\varvec{y}}}') = \delta ({{\varvec{y}}}-{{\varvec{y}}}') - 2\pi \mathrm {i}\, \delta \left( \tfrac{1}{2} \Vert {{\varvec{y}}}\Vert ^2-\tfrac{1}{2} \Vert {{\varvec{y}}}'\Vert ^2\right) \,T({{\varvec{y}}},{{\varvec{y}}}') . \end{aligned}$$
(4.10)

The unitarity of the S-matrix is equivalent to the relation, for \(\Vert {{\varvec{y}}}\Vert =\Vert {{\varvec{y}}}'\Vert \),

$$\begin{aligned} T({{\varvec{y}}},{{\varvec{y}}}')-{\overline{T}}({{\varvec{y}}}',{{\varvec{y}}}) = -2\pi \mathrm {i}\int _{{{\mathbb {R}}}^d} \delta \left( \tfrac{1}{2} \Vert {{\varvec{y}}}\Vert ^2-\tfrac{1}{2} \Vert {{\varvec{y}}}''\Vert ^2\right) \, T({{\varvec{y}}},{{\varvec{y}}}'') {\overline{T}}({{\varvec{y}}}',{{\varvec{y}}}'') \mathrm {d}{{\varvec{y}}}''. \end{aligned}$$
(4.11)

This in particular implies the optical theorem

$$\begin{aligned} {\text {Im}}\, T({{\varvec{y}}},{{\varvec{y}}}) = -\frac{1}{4\pi } \Sigma _{{\text {tot}}}({{\varvec{y}}}). \end{aligned}$$
(4.12)

We will now prove that the integrals defining (4.9) converge uniformly in \({\text {Re}}\gamma \ge 0\) in dimensions \(d > 2\). For \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^{dk})\) and \(S \subset \{1,\dots ,k\}\) we denote by \(f_S \in {{\mathcal {S}}}({{\mathbb {R}}}^{dk})\) the partial inverse Fourier transform of f in the variables \({{\varvec{y}}}_i\) for \(i \in S\):

$$\begin{aligned} f_S({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = \int _{{{\mathbb {R}}}^{dk}} f({{\varvec{z}}}_1,\dots ,{{\varvec{z}}}_k) \left[ \prod _{i \in S} \mathrm {e}({{\varvec{z}}}_i \cdot {{\varvec{y}}}_i)\right] \, \left[ \prod _{i\notin S} \delta ({{\varvec{z}}}_i -{{\varvec{y}}}_i)\right] \, \mathrm {d}{{\varvec{z}}}_1 \cdots \mathrm {d}{{\varvec{z}}}_k.\nonumber \\ \end{aligned}$$
(4.13)

We use the notation \(\langle x \rangle = \sqrt{1+x^2}\) and \(\langle {{\varvec{x}}}\rangle = \sqrt{1+\Vert {{\varvec{x}}}\Vert ^2}\).

Lemma 1

Let \(f \in {{\mathcal {S}}}({{\mathbb {R}}}^{dk})\). Then,

$$\begin{aligned}&\left| \int _{{{\mathbb {R}}}^{dk}} f({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \mathrm {e}\left( -\tfrac{1}{2}\theta _1\Vert {{\varvec{y}}}_1\Vert ^2-\cdots -\tfrac{1}{2}\theta _k\Vert {{\varvec{y}}}_k\Vert ^2\right) \, \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \right| \nonumber \\&\quad \le 2^{dk/4}\langle \theta _1 \rangle ^{-d/2} \cdots \langle \theta _k\rangle ^{-d/2} \, \sup _{S \subset \{1,\dots ,k\} } \Vert f_S \Vert _{L^1}. \end{aligned}$$
(4.14)

Proof

We partition \({{\mathbb {R}}}^k\) into \(2^k\) regions according to whether \(|\theta _i| \le 1\) or \(|\theta _i|>1\). Take \(S \subset \{1,\dots ,k\}\) and assume that for \(i \in S\), \(|\theta _i|>1\) and for \(i \notin S\), \(|\theta _i| \le 1\). We have that

$$\begin{aligned}&\int _{{{\mathbb {R}}}^{dk}} f({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \mathrm {e}\left( -\tfrac{1}{2}\theta _1\Vert {{\varvec{y}}}_1\Vert ^2-\cdots -\tfrac{1}{2}\theta _k\Vert {{\varvec{y}}}_k\Vert ^2\right) \, \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \nonumber \\&\quad = \int _{{{\mathbb {R}}}^{dk}} \int _{{{\mathbb {R}}}^{dk}} f_S({{\varvec{\eta }}}_1,\dots ,{{\varvec{\eta }}}_k) \left[ \prod _{i \in S} \mathrm {e}\left( \tfrac{1}{2} \theta _i^{-1} \Vert {{\varvec{\eta }}}_i\Vert ^2\right) \, \mathrm {e}\left( -\tfrac{1}{2} \theta _i \Vert {{\varvec{y}}}_i + \theta _i^{-1} {{\varvec{\eta }}}_i\Vert ^2\right) \right] \nonumber \\&\qquad \times \left[ \prod _{i\notin S} \mathrm {e}\left( -\tfrac{1}{2} \theta _i \Vert {{\varvec{y}}}_i\Vert ^2\right) \delta ({{\varvec{y}}}_i-{{\varvec{\eta }}}_i)\right] \,\mathrm {d}{{\varvec{\eta }}}_1 \cdots \mathrm {d}{{\varvec{\eta }}}_k \, \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k. \end{aligned}$$
(4.15)

Therefore, using the identity

$$\begin{aligned} \left| \int _{{{\mathbb {R}}}^d} \mathrm {e}\left( -\tfrac{1}{2} \theta _i \, \Vert {{\varvec{y}}}_i\Vert ^2\right) \, \mathrm {d}{{\varvec{y}}}_i \right| = |\theta _i|^{-d/2} \end{aligned}$$
(4.16)

we obtain

$$\begin{aligned}&\left| \int _{{{\mathbb {R}}}^{dk}} f({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \mathrm {e}\bigg (-\tfrac{1}{2}\theta _1\Vert {{\varvec{y}}}_1\Vert ^2-\cdots - \tfrac{1}{2}\theta _k\Vert {{\varvec{y}}}_k\Vert ^2\bigg ) \, \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \right| \nonumber \\&\quad \le \bigg (\prod _{i \in S} |\theta _i|^{-d/2} \bigg ) \Vert f_S\Vert _{L^1}. \end{aligned}$$
(4.17)

Taking a supremum over all \(2^k\) regions we see

$$\begin{aligned}&\left| \int _{{{\mathbb {R}}}^{dk}} f({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \mathrm {e}\left( -\tfrac{1}{2}\theta _1\Vert {{\varvec{y}}}_1\Vert ^2-\cdots -\tfrac{1}{2}\theta _k\Vert {{\varvec{y}}}_k\Vert ^2\right) \, \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \right| \nonumber \\&\quad \le \min \{1,|\theta _1|^{-d/2} \} \cdots \min \{1 , |\theta _k|^{-d/2} \}\sup _{S \subset \{1,\dots ,k\}} \Vert f_S \Vert _{L^1}. \end{aligned}$$
(4.18)

The result then follows since

$$\begin{aligned} \min \{1, |\theta _i|^{-d/2} \} = (\max \{1,|\theta _i|\})^{-d/2} \end{aligned}$$
(4.19)

and \(\max \{1,x\} \ge \tfrac{1}{\sqrt{2}} \langle x \rangle \). \(\square \)
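The two elementary bounds used at the end of the proof, identity (4.19) and \(\max \{1,x\} \ge \tfrac{1}{\sqrt{2}}\langle x \rangle \), are easy to check numerically. The following short Python sketch (illustrative only, not part of the argument) verifies them together with the resulting bound \(\min \{1,|\theta |^{-d/2}\} \le 2^{d/4}\langle \theta \rangle ^{-d/2}\) that produces the constant in (4.14):

```python
import math

def bracket(x):
    """<x> = sqrt(1 + x^2)."""
    return math.sqrt(1.0 + x * x)

d = 3
for theta in (0.0, 0.1, 0.5, 1.0, 2.0, 10.0, 1e3):
    lhs = 1.0 if theta == 0.0 else min(1.0, theta ** (-d / 2))
    # (4.19): min{1, t^{-d/2}} = (max{1, t})^{-d/2} for t >= 0
    assert abs(lhs - max(1.0, theta) ** (-d / 2)) < 1e-15
    # max{1, t} >= <t>/sqrt(2), which is where the 2^{d/4} factor comes from
    assert max(1.0, theta) >= bracket(theta) / math.sqrt(2.0)
    assert lhs <= 2 ** (d / 4) * bracket(theta) ** (-d / 2) + 1e-15
print("elementary bounds verified")
```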

Let us apply this Lemma in our situation, in particular to the inner integral in (4.9). For multi-indices \({{\varvec{\alpha }}}, {{\varvec{\beta }}}\) we define

$$\begin{aligned} {{\varvec{x}}}^{{\varvec{\beta }}}= x_1^{\beta _1} \cdots x_d^{\beta _d}, \quad D^{{\varvec{\alpha }}}= D_{{\varvec{x}}}^{{\varvec{\alpha }}}= \left( \tfrac{1}{2\pi \mathrm {i}} \partial _{x_1}\right) ^{\alpha _1} \cdots \left( \tfrac{1}{2\pi \mathrm {i}} \partial _{x_d}\right) ^{\alpha _d} \end{aligned}$$

and the norm

$$\begin{aligned} \Vert f \Vert _{M,N,p} = \sup _{\begin{array}{c} |{{\varvec{\alpha }}}|\le M\\ |{{\varvec{\beta }}}| \le N \end{array}} \Vert {{\varvec{x}}}^{{\varvec{\beta }}}(D^{{\varvec{\alpha }}}f)({{\varvec{x}}}) \Vert _{L^p}. \end{aligned}$$
(4.20)

Proposition 1

There exists a constant \(C_d\) depending only on the dimension d such that for all \({{\varvec{y}}}_0,{{\varvec{y}}}_n \in {{\mathbb {R}}}^d\), \(W\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), \({{\varvec{\theta }}}\in {{\mathbb {R}}}^{n-1}\), \({\text {Re}}\,\gamma \ge 0\),

$$\begin{aligned}&\bigg | \int _{({{\mathbb {R}}}^d)^{n-1}} {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_1) \cdots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_n) \nonumber \\&\qquad \times e\bigg [\sum _{j=1}^{n-1} \theta _j \bigg ( \tfrac{1}{2}\Vert {{\varvec{y}}}_0\Vert ^2 - \tfrac{1}{2} \Vert {{\varvec{y}}}_j\Vert ^2+\mathrm {i}\gamma \bigg ) \bigg ] \,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_{n-1} \bigg | \nonumber \\&\quad \le C_d^n\, \Vert W \Vert _{2d+2,d+1,1}^n \,\mathrm {e}^{-2\pi (\theta _1+\cdots +\theta _{n-1}) {\text {Re}}\,\gamma }\, \langle \theta _1 \rangle ^{-d/2} \cdots \langle \theta _{n-1}\rangle ^{-d/2}. \end{aligned}$$
(4.21)

Proof

The exponentially decaying factors can be pulled outside immediately. We then want to apply Lemma 1. Let \(S = \{s_1,\dots ,s_k\} \subset \{1,\dots ,n-1\}\) and consider the norm \( \left\| f_S\right\| _{L^1} \) where \(f({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_{n-1}):= {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_1) \cdots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_n)\) with \({{\varvec{y}}}_0\) and \({{\varvec{y}}}_n\) constant. By definition we have that

$$\begin{aligned} \Vert f_S\Vert _{L^1} =&\int _{{{\mathbb {R}}}^{d(n-1)}} \bigg | \int _{{{\mathbb {R}}}^{d|S|}} {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_{1}) \dots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_{n}) \nonumber \\&\times \left[ \prod _{i\in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot {{\varvec{x}}}_{i}) \mathrm {d}{{\varvec{y}}}_{i}\right] \bigg | \left[ \prod _{i\notin S} \mathrm {d}{{\varvec{y}}}_i\right] \left[ \prod _{i\in S} \mathrm {d}{{\varvec{x}}}_{i}\right] . \end{aligned}$$
(4.22)

Let \({{\varvec{m}}}= (m_1,\dots ,m_d) \in {{\mathbb {Z}}}_{\ge 0}^{d}\) and put \(|{{\varvec{m}}}| = m_1 +\cdots +m_d\). We define the multinomial coefficient

$$\begin{aligned} \begin{pmatrix} N \\ {{\varvec{m}}}\end{pmatrix} = \frac{N!}{m_1!\cdots m_d! (N-m_1-\cdots -m_d)!}. \end{aligned}$$

and use that

$$\begin{aligned} \langle {{\varvec{x}}}\rangle ^N \le (1+|x_1|+\cdots +|x_d|)^N = \sum _{| {{\varvec{m}}}| \le N} \begin{pmatrix} N \\ {{\varvec{m}}}\end{pmatrix} \prod _{j=1}^d |x_j|^{m_j} \end{aligned}$$
(4.23)

to bound (4.22) above by

$$\begin{aligned} \begin{aligned} \Vert f_S\Vert _{L^1} \le&\sum _{|{{\varvec{m}}}_{s_1}|\le d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_1} \end{pmatrix} \cdots \sum _{|{{\varvec{m}}}_{s_k}|\le d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_k} \end{pmatrix} \\&\times \int _{{{\mathbb {R}}}^{d(n-1)}} \bigg | \left[ \prod _{i \in S} {{\varvec{x}}}_{i}^{{{\varvec{m}}}_{i}}\right] \int _{{{\mathbb {R}}}^{d|S|}} {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_{1}) \dots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_{n}) \\&\times \left[ \prod _{i\in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot {{\varvec{x}}}_{i}) \mathrm {d}{{\varvec{y}}}_{i}\right] \bigg | \left[ \prod _{i\notin S} \mathrm {d}{{\varvec{y}}}_i\right] \left[ \prod _{i \in S} \langle {{\varvec{x}}}_i \rangle ^{-d-1} \mathrm {d}{{\varvec{x}}}_{i}\right] . \end{aligned} \end{aligned}$$
(4.24)

Integrating by parts with respect to \({{\varvec{y}}}_{i}\) for \(i \in S\) and pulling the absolute value inside the integral yields the upper bound

$$\begin{aligned} \Vert f_S\Vert _{L^1} \le&\bigg ( \int _{{{\mathbb {R}}}^d} \langle {{\varvec{x}}}\rangle ^{-d-1} \mathrm {d}{{\varvec{x}}}\bigg )^k \sum _{|{{\varvec{m}}}_{s_1}| \le d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_1} \end{pmatrix} \cdots \sum _{|{{\varvec{m}}}_{s_k}| \le d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_k} \end{pmatrix}\nonumber \\&\times \int _{{{\mathbb {R}}}^{d(n-1)}} \bigg | \bigg [\prod _{i\in S} D_{{{\varvec{y}}}_i}^{{{\varvec{m}}}_i}\bigg ] {\hat{W}}({{\varvec{y}}}_0-{{\varvec{y}}}_{1}) \dots {\hat{W}}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_{n}) \bigg | \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_{n-1}.\qquad \end{aligned}$$
(4.25)

Using the triangle inequality, the \({{\varvec{y}}}_i\) integral can be bounded above by a sum of \(\prod _{i\in S} 2^{|{{\varvec{m}}}_i|}\) terms of the form

$$\begin{aligned}&\int _{{{\mathbb {R}}}^{d(n-1)}} \big | \varphi _0({{\varvec{y}}}_0-{{\varvec{y}}}_{1}) \cdots \varphi _{n-1}({{\varvec{y}}}_{n-1}-{{\varvec{y}}}_{n}) \big | \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_{n-1} \nonumber \\&\quad = \int _{{{\mathbb {R}}}^{d(n-1)}} \big | \varphi _0({{\varvec{y}}}_0-{{\varvec{y}}}_{1}-\cdots -{{\varvec{y}}}_n) \varphi _1({{\varvec{y}}}_1) \cdots \varphi _{n-1}({{\varvec{y}}}_{n-1}) \big | \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_{n-1}\qquad \end{aligned}$$
(4.26)

where each \(\varphi _i\) is a derivative of \({\hat{W}}\) of order \(\le 2d+2\). Pulling absolute values inside tells us that (4.26) is bounded above by

$$\begin{aligned} \Vert \varphi _0 \Vert _{{\text {L}}^\infty } \Vert \varphi _1 \Vert _{{\text {L}}^1} \cdots \Vert \varphi _{n-1} \Vert _{{\text {L}}^1} . \end{aligned}$$
(4.27)

We have that

$$\begin{aligned} \begin{aligned} \Vert \varphi _0\Vert _{L^\infty }&\le \sup _{|{{\varvec{\alpha }}}|\le 2d+2} \sup _{{{\varvec{y}}}\in {{\mathbb {R}}}^d} \left| D_{{{\varvec{y}}}}^{{\varvec{\alpha }}}\int _{{{\mathbb {R}}}^d} W({{\varvec{x}}}) \, \mathrm {e}(-{{\varvec{x}}}\cdot {{\varvec{y}}}) \mathrm {d}{{\varvec{x}}}\right| \\&= \sup _{|{{\varvec{\alpha }}}| \le 2d+2} \sup _{{{\varvec{y}}}\in {{\mathbb {R}}}^d} \left| \int _{{{\mathbb {R}}}^d} {{\varvec{x}}}^{{\varvec{\alpha }}}W({{\varvec{x}}}) \, \mathrm {e}(-{{\varvec{x}}}\cdot {{\varvec{y}}}) \mathrm {d}{{\varvec{x}}}\right| \le \Vert W \Vert _{2d+2,0,1}. \end{aligned} \end{aligned}$$
(4.28)

For the \(L^1\) norms we similarly have

$$\begin{aligned} \begin{aligned} \Vert \varphi _i\Vert _{L^1}&\le \sup _{|{{\varvec{\alpha }}}|\le 2d+2} \int _{{{\mathbb {R}}}^d} \left| D_{{{\varvec{y}}}}^{{\varvec{\alpha }}}\int _{{{\mathbb {R}}}^d} W ({{\varvec{x}}}) \mathrm {e}(-{{\varvec{x}}}\cdot {{\varvec{y}}}) \mathrm {d}{{\varvec{x}}}\right| \, \mathrm {d}{{\varvec{y}}}\\&= \sup _{|{{\varvec{\alpha }}}|\le 2d+2} \int _{{{\mathbb {R}}}^d} \left| \int _{{{\mathbb {R}}}^d}{{\varvec{x}}}^{{{\varvec{\alpha }}}} W ({{\varvec{x}}}) \mathrm {e}(-{{\varvec{x}}}\cdot {{\varvec{y}}}) \mathrm {d}{{\varvec{x}}}\right| \, \mathrm {d}{{\varvec{y}}}\\&\le \sup _{|{{\varvec{\alpha }}}|\le 2d+2} \left( \int _{{{\mathbb {R}}}^d} \langle {{\varvec{y}}}\rangle ^{-d-1} \mathrm {d}{{\varvec{y}}}\right) \sum _{|{{\varvec{m}}}|\le d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}\end{pmatrix} \int _{{{\mathbb {R}}}^d} \left| D_{{\varvec{x}}}^{{{\varvec{m}}}} {{\varvec{x}}}^{{{\varvec{\alpha }}}} W ({{\varvec{x}}})\right| \mathrm {d}{{\varvec{x}}}. \end{aligned} \end{aligned}$$
(4.29)

Note that \(D_{{\varvec{x}}}^{{\varvec{m}}}{{\varvec{x}}}^{{\varvec{\alpha }}}W({{\varvec{x}}})\) yields a sum of up to \(2^{|{{\varvec{m}}}|}\) terms, each of which is of the form \({{\varvec{x}}}^{{\tilde{{{\varvec{\alpha }}}}}} D_{{{\varvec{x}}}}^{{\tilde{{{\varvec{m}}}}}} W({{\varvec{x}}})\) where \(|{\tilde{{{\varvec{\alpha }}}}}|\le |{{\varvec{\alpha }}}|\) and \(|{\tilde{{{\varvec{m}}}}}| \le |{{\varvec{m}}}|\). Hence, using the identity

$$\begin{aligned} \sum _{|{{\varvec{m}}}|\le d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}\end{pmatrix} 2^{|{{\varvec{m}}}|} = (2d+1)^{d+1} \end{aligned}$$
(4.30)

we obtain

$$\begin{aligned} \Vert \varphi _i\Vert _{L^1} \le \left( \int _{{{\mathbb {R}}}^d} \langle {{\varvec{y}}}\rangle ^{-d-1} \mathrm {d}{{\varvec{y}}}\right) (2d+1)^{d+1} \, \Vert W \Vert _{2d+2,d+1,1}. \end{aligned}$$
(4.31)
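The identity (4.30) is an instance of the multinomial theorem, namely \((2+\cdots +2+1)^{d+1}\) with d twos. A brute-force check over all multi-indices (a quick Python sketch, not needed for the proof) confirms it for small d:

```python
import itertools
from math import factorial

def multinomial(N, m):
    """N! / (m_1! ... m_d! (N - |m|)!) for a multi-index m with |m| <= N."""
    c = factorial(N) // factorial(N - sum(m))
    for mi in m:
        c //= factorial(mi)
    return c

def lhs(d, N):
    # sum over all multi-indices m in Z_{>=0}^d with |m| <= N
    return sum(
        multinomial(N, m) * 2 ** sum(m)
        for m in itertools.product(range(N + 1), repeat=d)
        if sum(m) <= N
    )

for d in (1, 2, 3):
    N = d + 1
    assert lhs(d, N) == (2 * d + 1) ** N   # (2 + ... + 2 + 1)^N with d twos
print("identity (4.30) verified for d = 1, 2, 3")
```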

Finally, combining these bounds with (4.25) and applying (4.30), we obtain

$$\begin{aligned} \Vert f_S\Vert _{L^1} \le \left( \int _{{{\mathbb {R}}}^d} \langle {{\varvec{x}}}\rangle ^{-d-1} \mathrm {d}{{\varvec{x}}}\right) ^{k+n-1} (2d+1)^{(k+n-1)(d+1)} \, \Vert W \Vert _{2d+2,d+1,1}^n. \end{aligned}$$
(4.32)

\(\square \)

This proposition shows that

$$\begin{aligned} |T_n^\gamma ({{\varvec{y}}}_0,{{\varvec{y}}}_n) | \le (2 \pi )^{n-1} C_d^n\, \Vert W \Vert _{2d+2,d+1,1}^n \bigg (\int _0^\infty \langle \theta \rangle ^{-d/2} \,\mathrm {d}\theta \bigg )^{n-1} \end{aligned}$$
(4.33)

for all \({\text {Re}}\gamma \ge 0\), provided \(d >2 \). Therefore, the series (4.6) converges absolutely, uniformly for all \({\text {Re}}\gamma \ge 0\), if

$$\begin{aligned} |\lambda | < \left( 2\pi C_d\, \Vert W \Vert _{2d+2,d+1,1} \int _0^\infty \langle \theta \rangle ^{-d/2} \,\mathrm {d}\theta \right) ^{-1}. \end{aligned}$$
(4.34)
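The integral \(\int _0^\infty \langle \theta \rangle ^{-d/2}\,\mathrm {d}\theta \) appearing in (4.33) and (4.34) is finite precisely for \(d>2\), and the substitution \(\theta = \tan u\) turns it into a Beta integral with value \(\tfrac{\sqrt{\pi }}{2}\,\Gamma (\tfrac{d-2}{4})/\Gamma (\tfrac{d}{4})\). The following Python sketch (a numerical sanity check only, not part of the argument) confirms this for a few even dimensions, where the substituted integrand is smooth:

```python
import math

def theta_integral(d, n=200_000):
    """Midpoint rule for the integral of <theta>^{-d/2} over [0, infinity);
    theta = tan(u) turns the integrand into cos(u)^{(d-4)/2} on (0, pi/2)."""
    h = (math.pi / 2) / n
    return h * sum(math.cos((k + 0.5) * h) ** ((d - 4) / 2) for k in range(n))

def closed_form(d):
    """Beta-integral evaluation, valid for d > 2."""
    return 0.5 * math.sqrt(math.pi) * math.gamma((d - 2) / 4) / math.gamma(d / 4)

for d in (4, 6, 8):          # even d keeps the substituted integrand bounded
    assert abs(theta_integral(d) - closed_form(d)) < 1e-6
assert abs(closed_form(4) - math.pi / 2) < 1e-12
print("closed form for d = 3:", closed_form(3))
```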

5 The Perturbation Series

We set \(H_\lambda =H_{1,\lambda }\), \(U_\lambda (t)=U_{1,\lambda }(t)\), and note that

$$\begin{aligned} H_{h,\lambda }= h^2 H_{\lambda /h^2}, \qquad U_{h,\lambda }(t)= U_{\lambda /h^2}(h t) . \end{aligned}$$
(5.1)

The first term in (2.7) thus can be written

$$\begin{aligned} U_{\lambda /h^2}(t h r^{1-d}) \, {\text {Op}}_{r,h}(a) \, U_{\lambda /h^2}(-t h r^{1-d}) , \end{aligned}$$

which is the quantity of interest in this work. To simplify notation, in the following discussion we write \(\lambda \) for \(\lambda /h^2\) and t for \(t h r^{1-d}\), and re-substitute when taking limits. Furthermore, we can declutter our expressions by passing to the so-called interaction picture,

$$\begin{aligned} U_{\lambda }(t) U_0(-t) \, {\text {Op}}_{r,h}(a) \, U_0(t) U_{\lambda }(-t). \end{aligned}$$

After the relevant calculations we then simply replace a by \(L_0(t) a\) due to (2.4). Because of the gauge invariance of A(t) in (1.6) under the substitution \(H_{h,\lambda }\mapsto H_{h,\lambda }+E\) for any \(E\in {{\mathbb {R}}}\), we may replace the potential V by \(V-\int _{{\mathcal {L}}}V({{\varvec{x}}})\mathrm {d}{{\varvec{x}}}\) in the following. This means that the potential now has the Fourier series

$$\begin{aligned} V({{\varvec{x}}})= r^d \sum _{{{\varvec{b}}}\in {{\mathcal {L}}}^*\setminus \{{{\varvec{0}}}\}} {\hat{W}}(r {{\varvec{b}}}) \,\mathrm {e}({{\varvec{b}}}\cdot {{\varvec{x}}}) , \end{aligned}$$
(5.2)

with \({\hat{W}}({{\varvec{y}}}) = \int W({{\varvec{x}}})\, \mathrm {e}(-{{\varvec{x}}}\cdot {{\varvec{y}}})\,\mathrm {d}{{\varvec{x}}}\). Thus in the following expansions we may ignore all terms containing \({\hat{W}}({{\varvec{0}}})\); note, however, that we have not assumed \({\hat{W}}({{\varvec{0}}})=0\).

We proceed using Duhamel’s principle. In particular one has that

$$\begin{aligned} U_{\lambda }(t) = U_0(t) - 2 \pi \mathrm {i}\lambda \int _0^t U_\lambda (s)\, {\text {Op}}(V)\, U_0(t-s) \, \mathrm {d}s. \end{aligned}$$
(5.3)

Iterating this expression yields a formal perturbative expansion for \(U_\lambda (t) U_0(-t)\) and \(U_0(t) U_{\lambda }(-t)\). After multiplying these two series together one obtains as in [28, §5] the formal perturbative expansion

$$\begin{aligned}&{\text {Tr}}[ \Pi _{{\varvec{\alpha }}}U_\lambda (t ) U_0(-t) {\text {Op}}_{r,h}( a) U_0(t ) U_\lambda (-t ) {\text {Op}}_{r,h}(b) ] \nonumber \\&\quad = \sum _{n=0} ^{\infty } (2 \pi \mathrm {i}\lambda )^n {{\mathcal {I}}}_n(t) + O(r^\infty ), \end{aligned}$$
(5.4)

with \({{\mathcal {I}}}_0(t)= {\text {Tr}}[ \Pi _{{\varvec{\alpha }}}{\text {Op}}_{r,h}( a) {\text {Op}}_{r,h}(b) ]\) and for \(n\ge 1\)

$$\begin{aligned} {{\mathcal {I}}}_n(t) = \sum _{\ell =0}^{n} (-1)^\ell {{\mathcal {I}}}_{\ell ,n}(t) . \end{aligned}$$
(5.5)

For \(1\le \ell \le n-1\) we have, with the shorthand \({{\mathcal {P}}}={{\mathcal {L}}}^*+{{\varvec{\alpha }}}\),

$$\begin{aligned} \begin{aligned} {{\mathcal {I}}}_{\ell ,n}(t) =&r^{nd} h^d \sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_n={{\varvec{p}}}_0\in {{\mathcal {P}}}\\ \text {non-consec} \end{array}} {{\mathcal {W}}}(r{{\varvec{p}}}_0,\ldots ,r{{\varvec{p}}}_n) \int _{{{\mathbb {R}}}^d} \int _{\begin{array}{c} 0<s_1<\cdots<s_\ell<t \\ 0<s_n<\cdots<s_{\ell +1}<t \end{array} } \\&\times \mathrm {e}\bigg (-\tfrac{1}{2} \, s_1\Vert {{\varvec{p}}}_0\Vert ^2+\tfrac{1}{2} \sum _{j=1}^{\ell -1} (s_j-s_{j+1}) \Vert {{\varvec{p}}}_j \Vert ^2 + \tfrac{1}{2} \, s_\ell \Vert {{\varvec{p}}}_\ell \Vert ^2 -\tfrac{1}{2} \, s_{\ell +1}\Vert {{\varvec{p}}}_\ell + r^{d-1} {{\varvec{\eta }}}\Vert ^2 \bigg ) \\&\times \mathrm {e}\bigg (\tfrac{1}{2} \sum _{j=\ell +1}^{n-1} (s_j-s_{j+1}) \Vert {{\varvec{p}}}_j +r^{d-1} {{\varvec{\eta }}}\Vert ^2 + \tfrac{1}{2} \, s_n \Vert {{\varvec{p}}}_n+ r^{d-1} {{\varvec{\eta }}}\Vert ^2\bigg ) \\&\quad \times {\tilde{a}} (-{{\varvec{\eta }}}, h ({{\varvec{p}}}_\ell +\tfrac{1}{2} r^{d-1}{{\varvec{\eta }}})) \, {\tilde{b}}( {{\varvec{\eta }}}, h ({{\varvec{p}}}_n+ \tfrac{1}{2} r^{d-1} {{\varvec{\eta }}})) \, \mathrm {d}{{\varvec{s}}}\, \mathrm {d}{{\varvec{\eta }}}, \end{aligned}\nonumber \\ \end{aligned}$$
(5.6)

where

$$\begin{aligned} {{\mathcal {W}}}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n)= \prod _{j=0}^{n-1} {\hat{W}}({{\varvec{y}}}_j-{{\varvec{y}}}_{j+1}) , \end{aligned}$$
(5.7)

and the summation “non-consec” is restricted to terms with \({{\varvec{p}}}_j\ne {{\varvec{p}}}_{j+1}\); recall the comment after (5.2). We make the variable substitutions \(u_0= s_1\), \(u_j = s_{j+1}-s_j\) for \(j = 1,\ldots , \ell -1\), and \(u_j = s_j - s_{j+1}\) for \(j = \ell +1,\ldots , n-1\) and \(u_n = s_n\). Let \(\Box _{\ell ,n}(t)\) denote the simplex

$$\begin{aligned}&\Box _{\ell ,n}(t)=\{ {{\varvec{u}}}=(u_0,\ldots ,u_n) \in {{\mathbb {R}}}_{\ge 0}^{n+1} \mid \nonumber \\&\quad u_0+\cdots +u_{\ell -1}< t,\; u_\ell =0,\; u_{\ell +1} + \cdots + u_n < t\} , \end{aligned}$$
(5.8)

and let \(\mathrm {d}^\perp {{\varvec{u}}}=\prod _{j\ne \ell } \mathrm {d}u_j\). Then

$$\begin{aligned} \begin{aligned} {{\mathcal {I}}}_{\ell ,n}(t)&=r^{nd} h^d \sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_n={{\varvec{p}}}_0\in {{\mathcal {P}}}\\ \text {non-consec} \end{array}} {{\mathcal {W}}}(r{{\varvec{p}}}_0,\ldots ,r{{\varvec{p}}}_n)\int _{{{\mathbb {R}}}^d} \int _{\Box _{\ell ,n}(t)} \\&\quad \times \mathrm {e}\bigg (\tfrac{1}{2} \, \sum _{j=0}^{\ell -1} u_j (\Vert {{\varvec{p}}}_\ell \Vert ^2-\Vert {{\varvec{p}}}_j \Vert ^2) + \tfrac{1}{2} \sum _{j=\ell +1}^{n} u_j (\Vert {{\varvec{p}}}_j + r^{d-1} {{\varvec{\eta }}}\Vert ^2-\Vert {{\varvec{p}}}_\ell +r^{d-1}{{\varvec{\eta }}}\Vert ^2) \bigg ) \\&\quad \times {\tilde{a}} \bigg (-{{\varvec{\eta }}}, h \bigg ({{\varvec{p}}}_\ell +\tfrac{1}{2} r^{d-1}{{\varvec{\eta }}}\bigg )\bigg ) \, {\tilde{b}}\bigg ( {{\varvec{\eta }}}, h \bigg ({{\varvec{p}}}_n+ \tfrac{1}{2} r^{d-1} {{\varvec{\eta }}}\bigg )\bigg ) \, \mathrm {d}^\perp {{\varvec{u}}}\, \mathrm {d}{{\varvec{\eta }}}. \end{aligned}\nonumber \\ \end{aligned}$$
(5.9)

The terms \({{\mathcal {I}}}_{0,n}(t)\) and \({{\mathcal {I}}}_{n,n}(t)\) have an analogous representation.
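The phase bookkeeping in the passage from (5.6) to (5.9) can be verified mechanically. In the Python sketch below (illustrative only; the \(\mathrm {i}\gamma \) terms are omitted, and \(\Vert {{\varvec{p}}}_j\Vert ^2\) and \(\Vert {{\varvec{p}}}_j + r^{d-1}{{\varvec{\eta }}}\Vert ^2\) are treated as independent variables `xi[j]` and `zeta[j]`), the two phases agree identically under the stated substitution:

```python
import random

def phase_s(n, ell, s, xi, zeta):
    """Phase of (5.6); xi[j] stands for ||p_j||^2, zeta[j] for ||p_j + r^{d-1} eta||^2."""
    val = -0.5 * s[1] * xi[0]
    val += 0.5 * sum((s[j] - s[j + 1]) * xi[j] for j in range(1, ell))
    val += 0.5 * s[ell] * xi[ell]
    val += -0.5 * s[ell + 1] * zeta[ell]
    val += 0.5 * sum((s[j] - s[j + 1]) * zeta[j] for j in range(ell + 1, n))
    val += 0.5 * s[n] * zeta[n]
    return val

def phase_u(n, ell, u, xi, zeta):
    """Phase of (5.9) after the substitution (i*gamma terms omitted)."""
    a = sum(u[j] * (xi[ell] - xi[j]) for j in range(ell))
    b = sum(u[j] * (zeta[j] - zeta[ell]) for j in range(ell + 1, n + 1))
    return 0.5 * (a + b)

rng = random.Random(0)
for n in (2, 3, 4, 5):
    for ell in range(1, n):                      # the case 1 <= ell <= n-1
        s = [rng.random() for _ in range(n + 1)]  # s[0] unused (1-based times)
        xi = [rng.random() for _ in range(n + 1)]
        zeta = [rng.random() for _ in range(n + 1)]
        # u_0 = s_1, u_j = s_{j+1}-s_j (j < ell), u_j = s_j-s_{j+1} (ell < j < n), u_n = s_n
        u = {0: s[1], n: s[n]}
        for j in range(1, ell):
            u[j] = s[j + 1] - s[j]
        for j in range(ell + 1, n):
            u[j] = s[j] - s[j + 1]
        assert abs(phase_s(n, ell, s, xi, zeta) - phase_u(n, ell, u, xi, zeta)) < 1e-12
print("phase substitution (5.6) -> (5.9) verified")
```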

As in the case of the T-matrix, it will be useful to embed these quantities in an analytic family by extending to complex energy. To this end, we define for \({\text {Re}}\gamma \ge 0\),

$$\begin{aligned} {{\mathcal {I}}}_{\ell ,n}^\gamma (t)= & {} r^{nd} h^d \sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_n={{\varvec{p}}}_0\in {{\mathcal {P}}}\\ \text {non-consec} \end{array}} {{\mathcal {W}}}(r{{\varvec{p}}}_0,\ldots ,r{{\varvec{p}}}_n)\int _{{{\mathbb {R}}}^d} \int _{\Box _{\ell ,n}(t)}\nonumber \\&\times \mathrm {e}\bigg (\tfrac{1}{2} \, \sum _{j=0}^{\ell -1} u_j (\Vert {{\varvec{p}}}_\ell \Vert ^2-\Vert {{\varvec{p}}}_j \Vert ^2+\mathrm {i}\gamma ) \nonumber \\&\quad + \tfrac{1}{2} \sum _{j=\ell +1}^{n} u_j (\Vert {{\varvec{p}}}_j + r^{d-1} {{\varvec{\eta }}}\Vert ^2-\Vert {{\varvec{p}}}_\ell +r^{d-1}{{\varvec{\eta }}}\Vert ^2+\mathrm {i}\gamma ) \bigg ) \nonumber \\&\quad \times {\tilde{a}} \bigg (-{{\varvec{\eta }}}, h \bigg ({{\varvec{p}}}_\ell +\tfrac{1}{2} r^{d-1}{{\varvec{\eta }}}\bigg )\bigg ) \, {\tilde{b}}\bigg ( {{\varvec{\eta }}}, h \bigg ({{\varvec{p}}}_n+ \tfrac{1}{2} r^{d-1} {{\varvec{\eta }}}\bigg )\bigg ) \, \mathrm {d}^\perp {{\varvec{u}}}\, \mathrm {d}{{\varvec{\eta }}}.\qquad \end{aligned}$$
(5.10)

In the following, we will drop the superscript \(\gamma \) in the case \(\gamma =0\).

We wish to consider the limit of this quantity as \(h=r \rightarrow 0\), uniformly for \({\text {Re}}\gamma \ge 0\). The first simplification we make is to replace the second argument of \({\tilde{a}}\) and \({\tilde{b}}\) by \(h {{\varvec{p}}}_\ell \) and \(h {{\varvec{p}}}_n\) respectively, which incurs an error of order \(r^d\). Recall now that, in view of (2.9) and (5.1), we are interested in the quantity \(h^{-2n}{{\mathcal {I}}}_{\ell ,n}^\gamma (t h r^{1-d})\). Since \(h^{-2n} r^{nd} h^d (h r^{1-d})^n = h^{d-n} r^n = r^d\) for \(h=r\), we see that

$$\begin{aligned}&h^{-2n}{{\mathcal {I}}}_{\ell ,n}^\gamma (t h r^{1-d}) \nonumber \\&\quad = r^d \sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_n={{\varvec{p}}}_0\in {{\mathcal {P}}}\\ \text {non-consec} \end{array}} H_{t,\ell ,n}^{r^{2-d}\gamma } \nonumber \\&\qquad \big ( r^{2-d} \big (\tfrac{1}{2}\Vert {{\varvec{p}}}_0\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{p}}}_n\Vert ^2\big ), r {{\varvec{p}}}_0,\ldots ,r{{\varvec{p}}}_n \big ) (1+O(r^d)) , \end{aligned}$$
(5.11)

where

$$\begin{aligned} \begin{aligned}&H_{t,\ell ,n}^\gamma ({{\varvec{\xi }}},{{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) ={{\mathcal {W}}}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) \int _{{{\mathbb {R}}}^{d}} \int _{\Box _{\ell ,n}(t)} \\&\quad \times \mathrm {e}\bigg (- \sum _{j=0}^{\ell -1} u_j (\xi _j-\xi _\ell -\mathrm {i}\gamma ) + \sum _{j=\ell +1}^{n} u_j (\xi _j-\xi _\ell +\mathrm {i}\gamma +({{\varvec{y}}}_j-{{\varvec{y}}}_\ell )\cdot {{\varvec{\eta }}})\bigg ) \\&\quad \times {\tilde{a}} (-{{\varvec{\eta }}}, {{\varvec{y}}}_\ell )\, {\tilde{b}}( {{\varvec{\eta }}}, {{\varvec{y}}}_n) \, \mathrm {d}^\perp {{\varvec{u}}}\, \mathrm {d}{{\varvec{\eta }}}. \end{aligned} \end{aligned}$$
(5.12)

Using the definition of \({\tilde{a}}\) and \({\tilde{b}}\) and integrating over \({{\varvec{\eta }}}\) yields

$$\begin{aligned} \begin{aligned}&H_{t,\ell ,n}^\gamma ({{\varvec{\xi }}},{{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n)= {{\mathcal {W}}}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) \int _{\Box _{\ell ,n}(t)} \\&\quad \times \mathrm {e}\bigg (- \sum _{j=0}^{\ell -1} u_j (\xi _j-\xi _\ell -\mathrm {i}\gamma ) + \sum _{j=\ell +1}^{n} u_j (\xi _j-\xi _\ell +\mathrm {i}\gamma ) \bigg ) \\&\quad \times {{\mathcal {A}}}_{\ell ,n}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n,{{\varvec{u}}})\, \mathrm {d}^\perp {{\varvec{u}}}, \end{aligned} \end{aligned}$$
(5.13)

where

$$\begin{aligned} {{\mathcal {A}}}_{\ell ,n}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n,{{\varvec{u}}})= \int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{j=\ell +1}^n u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_\ell ),{{\varvec{y}}}_\ell \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_n) \, \mathrm {d}{{\varvec{x}}}. \end{aligned}$$
(5.14)

We recall here that we are working in the interaction picture. To return to the original lab frame, we replace a by the evolved symbol (i.e., replacing \({{\varvec{x}}}\) by \({{\varvec{x}}}-t{{\varvec{y}}}_\ell \)), so that

$$\begin{aligned} a\bigg ( {{\varvec{x}}}- \sum _{j=\ell +1}^n u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_\ell ),{{\varvec{y}}}_\ell \bigg ) \end{aligned}$$

becomes

$$\begin{aligned} a\bigg ( {{\varvec{x}}}- \bigg (t-\sum _{j=\ell +1}^n u_j\bigg ){{\varvec{y}}}_\ell - \sum _{j=\ell +1}^n u_j {{\varvec{y}}}_j,{{\varvec{y}}}_\ell \bigg ) . \end{aligned}$$

One can interpret this as corresponding to a classical trajectory in which the particle initially has momentum \({{\varvec{y}}}_\ell \), undergoes straight line motion for time \((t-u_{\ell +1}-\cdots -u_n)\), and then experiences \(n-\ell \) collisions separated by straight line motion for times \(u_j\) with momenta \({{\varvec{y}}}_{j}\) for \(j= \ell +1,\dots ,n\).
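In a minimal sketch (assuming nothing beyond the formula above; the helper `endpoint` and the one-dimensional toy data are purely illustrative), the first argument of the evolved symbol is precisely the starting point of such a piecewise-linear trajectory that ends at \({{\varvec{x}}}\) at time t:

```python
def endpoint(x0, t, y_ell, flights):
    """Position at time t: momentum y_ell for time t - sum(u_j), then momentum
    y_j for time u_j for each (u_j, y_j) in `flights` (the post-collision legs)."""
    u_total = sum(u for u, _ in flights)
    x = x0 + (t - u_total) * y_ell
    for u, y in flights:
        x += u * y
    return x

# one-dimensional toy data; the vector case is identical componentwise
x, y_ell, t = 5.0, 2.0, 1.3
flights = [(0.2, 3.0), (0.5, -4.0)]
# the first argument of the evolved symbol a:
x0 = x - (t - sum(u for u, _ in flights)) * y_ell - sum(u * y for u, y in flights)
assert abs(endpoint(x0, t, y_ell, flights) - x) < 1e-12
```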

6 The Poisson Model

We note that the momenta \({{\varvec{p}}}_j\) in the summation (5.11) are of order \(r^{-1}\), and that for \(r\rightarrow 0\)

$$\begin{aligned} |\{ {{\varvec{p}}}\in {{\mathcal {P}}}: \Vert {{\varvec{p}}}\Vert ^2 \le r^{-2} \}| \sim {\text {vol}}({{\mathcal {B}}}_1^d) \, r^{-d} \end{aligned}$$
(6.1)

where \({{\mathcal {B}}}_1^d\) is the d-dimensional unit ball. This means that the average spacing between consecutive values of the set \(\{ \Vert {{\varvec{p}}}\Vert ^2 : {{\varvec{p}}}\in {{\mathcal {P}}},\; \Vert {{\varvec{p}}}\Vert ^2 \le r^{-2}\}\) is of the order \(r^{d-2}\). Thus (5.11) measures correlations between the \(\Vert {{\varvec{p}}}_j\Vert ^2\) precisely on the scale of the mean spacing. Starting with the influential work of Berry and Tabor in the context of quantum chaos [7], it has been conjectured that the statistics on this correlation scale should be governed by a one-dimensional Poisson process. Rigorous results towards a proof of this conjecture are mostly limited to two-point statistics, where the problem reduces to quantitative variants of the Oppenheim conjecture [9, 23, 34, 36, 38, 44]; results on higher correlation functions are obtained in [47].
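The counting asymptotics (6.1) can be illustrated numerically. The following Python sketch (for \(d=2\) and a concrete shift; illustrative only) enumerates the shifted lattice points in a disc and compares the count with \({\text {vol}}({{\mathcal {B}}}_1^2)R^2 = \pi R^2\):

```python
import math

def lattice_ball_count(alpha, R):
    """|{ p in Z^2 + alpha : ||p|| <= R }| by direct enumeration."""
    a1, a2 = alpha
    N = int(R) + 2
    return sum(
        1
        for n1 in range(-N, N + 1)
        for n2 in range(-N, N + 1)
        if (n1 + a1) ** 2 + (n2 + a2) ** 2 <= R * R
    )

alpha = (math.sqrt(2), math.sqrt(3))
for R in (50.0, 100.0):
    count = lattice_ball_count(alpha, R)
    # (6.1) with d = 2: count ~ vol(B_1^2) R^2 = pi R^2
    assert abs(count / (math.pi * R * R) - 1.0) < 0.01
print("lattice counts match pi R^2 to within 1%")
```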

Assumption 1

We assume in the following that in the \(r=h\rightarrow 0\) asymptotics of (5.11) the lattice \({{\mathcal {P}}}= {{\mathcal {L}}}^* + {{\varvec{\alpha }}}\) with \({{\mathcal {L}}}\) fixed (arbitrary) and \({{\varvec{\alpha }}}\) random can be replaced by a Poisson process in \({{\mathbb {R}}}^d\) with unit intensity.

This assumption should be thought of as a generalisation of the Berry–Tabor conjecture on the Poisson distribution of energy levels of quantum systems with integrable classical Hamiltonian. We assume both that the lengths of lattice vectors (which represent the energy levels) behave as if they belonged to a Poisson process, and that the angular distribution of the lattice vectors is uniform on the \((d-1)\)-sphere and independent of the length (on the correct scale).

Assumptions of this kind have previously been used in modeling spectral correlations of diffractive systems, see for instance [10, 11, 32]. To formulate Assumption 1 in precise terms, define \({{\mathcal {J}}}_{\ell ,n}^\gamma (t)\) via the relation

$$\begin{aligned}&h^{-2n}{{\mathcal {J}}}_{\ell ,n}^\gamma (t h r^{1-d}) \nonumber \\&\quad = r^d\; {{\mathbb {E}}}\sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_n={{\varvec{p}}}_0\in {{\mathcal {P}}}\\ \text {non-consec} \end{array}} H_{t,\ell ,n}^{r^{2-d}\gamma }\big ( r^{2-d} \big (\tfrac{1}{2}\Vert {{\varvec{p}}}_0\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{p}}}_n\Vert ^2\big ), r {{\varvec{p}}}_0,\ldots ,r{{\varvec{p}}}_n \big ) , \end{aligned}$$
(6.2)

where \({{\mathcal {P}}}\) is a Poisson point process in \({{\mathbb {R}}}^d\) with intensity one, and \({{\mathbb {E}}}\) denotes expectation. Then Assumption 1 should be understood as

$$\begin{aligned} \lim _{r=h\rightarrow 0} \int _{{{\mathbb {R}}}^d/{{\mathcal {L}}}^*} \sum _{n=1}^\infty h^{-2n} \sum _{\ell =0}^n (-1)^{\ell } \big [ {{\mathcal {I}}}_{\ell ,n}^\gamma (t h r^{1-d}) - {{\mathcal {J}}}_{\ell ,n}^\gamma (t h r^{1-d}) \big ] \mathrm {d}{{\varvec{\alpha }}}= 0 \end{aligned}$$
(6.3)

for all \({\text {Re}}\,\gamma \ge 0\), \(t>0\). Similarly, it is a conjecture that

$$\begin{aligned} \lim _{r=h\rightarrow 0} \sum _{n=1}^\infty h^{-2n} \sum _{\ell =0}^n (-1)^{\ell } \big [ {{\mathcal {I}}}_{\ell ,n}^\gamma (t h r^{1-d}) - {{\mathcal {J}}}_{\ell ,n}^\gamma (t h r^{1-d}) \big ] = 0 \end{aligned}$$
(6.4)

for Lebesgue almost every \({{\varvec{\alpha }}}\), and indeed for \({{\varvec{\alpha }}}\) satisfying a mild diophantine condition as in [36, 38]. Statement (6.4) is more subtle than (6.3), though the implication (6.4) \(\Rightarrow \) (6.3) would require uniform upper bounds for dominated convergence; cf. [28, Sect. 12]. Statement (6.3) is the only heuristic assumption made in this study.

As an illustrative example, fix \({{\varvec{\alpha }}}=(\sqrt{2},\sqrt{3})\) and consider the sequence \((\lambda _i, \theta _i)_{i \in {{\mathbb {N}}}}\) of elements of the set \(\{( \pi \Vert {{\varvec{n}}}+{{\varvec{\alpha }}}\Vert ^2, \frac{1}{2\pi } \arg ({{\varvec{n}}}+{{\varvec{\alpha }}}) ) \in {{\mathbb {R}}}_{\ge 0} \times [0,1) \mid {{\varvec{n}}}\in {{\mathbb {Z}}}^2 \} \) arranged in increasing order according to the first component, where \(0\le \arg {{\varvec{z}}}< 2 \pi \) is the polar angle of \({{\varvec{z}}}\). Our assumption is concerned with the distribution of points \((\lambda _i,\theta _i)\) restricted to a strip \([R- \Delta R,R) \times [0,1)\) for \(\Delta R > 0\) fixed and \(R \rightarrow \infty \). Due to the choice of normalisation, a strip of this form contains roughly \(\Delta R\) points. Broadly speaking, the points contained in the strip should behave more and more randomly as R increases—see Fig. 1.

Fig. 1

Scatter plots of \((\lambda _i,\theta _i)\) in the strip \([R-\Delta R,R) \times [0,1)\) for \(R = \pi \times 100^2\) and \(R = \pi \times 500^2\), respectively, with \(\Delta R = 10^4\). For large R we expect the point set to be modelled by a Poisson point process, cf. Assumption 1

The Berry–Tabor conjecture, and by extension our assumption, is more readily expressed in terms of the gap distribution. Consider the sequence \((\lambda _{i+1}-\lambda _i,\theta _i)\) for all points in the window \([R-\Delta R,R) \times [0,1)\). The Berry–Tabor conjecture states that in the limit \(R \rightarrow \infty \), the sequence of gaps \(\lambda _{i+1}-\lambda _i\) has an exponential distribution with mean 1. Our Assumption 1 then implies that in the limit \(R\rightarrow \infty \), the sequence of pairs \((\lambda _{i+1}-\lambda _i,\theta _i)\) is distributed according to the product of an exponential distribution with mean 1 and the uniform distribution on [0, 1)—see Fig. 2.

Fig. 2

A scatter plot and histogram for the sequence \((\lambda _{i+1}-\lambda _i,\theta _i)\) for \(R=\pi \times 500^2\) and \(\Delta R=10^4\). The surface superimposed on the histogram is the density of the conjectured limiting distribution under Assumption 1
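Experiments of this kind are straightforward to reproduce. The Python sketch below (illustrative only; it checks summary statistics rather than the full distribution) generates the points \((\lambda _i,\theta _i)\) in a window, confirms that the mean gap is \(\approx 1\) by the normalisation, and compares the empirical tail \({{\mathbb {P}}}(\text {gap}>1)\) with the exponential prediction \(\mathrm {e}^{-1}\):

```python
import math

def window_points(alpha, R, dR):
    """Pairs (lambda_i, theta_i) with lambda_i = pi*||n+alpha||^2 in [R-dR, R)."""
    a1, a2 = alpha
    N = int(math.sqrt(R / math.pi)) + 2
    pts = []
    for n1 in range(-N, N + 1):
        for n2 in range(-N, N + 1):
            x, y = n1 + a1, n2 + a2
            lam = math.pi * (x * x + y * y)
            if R - dR <= lam < R:
                pts.append((lam, (math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi)))
    pts.sort()
    return pts

pts = window_points((math.sqrt(2), math.sqrt(3)), math.pi * 500 ** 2, 1.0e4)
gaps = [q[0] - p[0] for p, q in zip(pts, pts[1:])]
mean_gap = sum(gaps) / len(gaps)
assert abs(mean_gap - 1.0) < 0.05          # the window contains ~ dR points
tail = sum(g > 1.0 for g in gaps) / len(gaps)
print(f"mean gap {mean_gap:.3f}; P(gap > 1) = {tail:.3f} vs exp(-1) = {math.exp(-1):.3f}")
```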

Let \({{\mathcal {J}}}_n^\gamma = \sum _{\ell =0}^n (-1)^\ell {{\mathcal {J}}}_{\ell ,n}^\gamma \). The principal objective of this paper is now to prove that the limit

$$\begin{aligned} \lim _{h=r\rightarrow 0} \sum _{n=1}^\infty (2 \pi \mathrm {i}\lambda h^{-2})^n \; {{\mathcal {J}}}_n^\gamma (t h r^{1-d}) \end{aligned}$$
(6.5)

exists for \(\lambda >0\) sufficiently small but fixed, every \(t>0\) and \({\text {Re}}\gamma >0\), and to evaluate the limit at \(\gamma =0\). The convergence is stated in Proposition 5 and explicit formulas for the limit are discussed in Sect. 11.

Now, in order to take the expectation value of the sum over the momenta in (5.11), with \({{\mathcal {P}}}\) a Poisson point process, one needs to keep track of terms where the various \({{\varvec{p}}}_i\) and \({{\varvec{p}}}_j\) are equal or distinct. This is best done through the notion of set partitions, which are presented in detail in Appendix A. We denote by \(\Pi (n,k)\) the collection of set partitions \(\underline{F}=[F_1,\ldots ,F_k]\) of the set \(\{0,\ldots ,n\}\) into k blocks \(F_1,\ldots ,F_k\), and by \(\Pi _\circ (n,k)\subset \Pi (n,k)\) the subcollection of partitions in which no two consecutive indices lie in the same block, reflecting the non-consecutive restriction in the sum. Let us then define, for a given set partition \(\underline{F}\in \Pi (n,k)\), \({{\varvec{\xi }}}\in {{\mathbb {R}}}^{k}\), \({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k\in {{\mathbb {R}}}^d\)

$$\begin{aligned} H_{t,\ell ,\underline{F}}^\gamma ({{\varvec{\xi }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) = H_{t,\ell ,n}^\gamma (\iota _{\underline{F}}({{\varvec{\xi }}}),\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) , \end{aligned}$$
(6.6)

where \(\iota _{\underline{F}}({{\varvec{\xi }}})\) is the embedding \(\iota _{\underline{F}}: {{\mathbb {R}}}^k\rightarrow {{\mathbb {R}}}^{n+1}\) defined by \({{\varvec{\xi }}}=(\xi _1,\ldots ,\xi _k)\mapsto (x_0,\ldots ,x_n)\) where \((x_j=\xi _i \iff j\in F_i)\), and \(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)\) is the vector analogue \(\iota _{\underline{F}}: ({{\mathbb {R}}}^d)^k\rightarrow ({{\mathbb {R}}}^d)^{n+1}\). See Sect. A.3 for details.
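To make the set-partition bookkeeping concrete, here is a minimal sketch (an illustration of the definitions above, not the formalism of Appendix A) that enumerates \(\Pi (n,k)\) via the standard Stirling recursion and implements the embedding \(\iota _{\underline{F}}\):

```python
def set_partitions(elements, k):
    """Yield all partitions of the list `elements` into exactly k nonempty blocks."""
    if not elements:
        if k == 0:
            yield []
        return
    first, rest = elements[0], elements[1:]
    # Either `first` joins one of the k blocks of a partition of the rest...
    for part in set_partitions(rest, k):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
    # ...or `first` forms a block of its own.
    for part in set_partitions(rest, k - 1):
        yield [[first]] + part

def iota(part, xi):
    """The embedding iota_F: assign the value xi_i to every coordinate j in block F_i."""
    n = sum(len(F) for F in part) - 1
    x = [None] * (n + 1)
    for i, F in enumerate(part):
        for j in F:
            x[j] = xi[i]
    return x

# Pi(3, 2): partitions of {0,1,2,3} into two blocks; Stirling number S(4, 2) = 7.
Pi_3_2 = list(set_partitions(list(range(4)), 2))
print(len(Pi_3_2))
print(iota([[0, 2], [1, 3]], ["xi1", "xi2"]))
```

The recursion mirrors the identity \(S(n+1,k)=k\,S(n,k)+S(n,k-1)\) for Stirling numbers of the second kind.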

With this we can write

$$\begin{aligned}&\sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_n={{\varvec{p}}}_0\in {{\mathcal {P}}}\\ \text {non-consec} \end{array}} H_{t,\ell ,n}^{r^{2-d}\gamma }\big ( r^{2-d} \big (\tfrac{1}{2}\Vert {{\varvec{p}}}_0\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{p}}}_n\Vert ^2\big ), r {{\varvec{p}}}_0,\ldots ,r{{\varvec{p}}}_n \big ) \nonumber \\&\quad = \sum _{k=1}^{n+1} \sum _{\underline{F}\in \Pi _\circ (n,k)} \sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_k\in {{\mathcal {P}}}\\ \text {distinct} \end{array}} H_{t,\ell ,\underline{F}}^{r^{2-d}\gamma }\big ( r^{2-d} \big (\tfrac{1}{2}\Vert {{\varvec{p}}}_1\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{p}}}_k\Vert ^2\big ), r {{\varvec{p}}}_1,\ldots ,r{{\varvec{p}}}_k \big ) .\nonumber \\ \end{aligned}$$
(6.7)

Note that for \({{\varvec{e}}}=(1,1,\ldots ,1)\in {{\mathbb {R}}}^k\) and any \(\omega \in {{\mathbb {R}}}\)

$$\begin{aligned} H_{t,\ell ,\underline{F}}^\gamma ({{\varvec{\xi }}}+\omega {{\varvec{e}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)=H_{t,\ell ,\underline{F}}^\gamma ({{\varvec{\xi }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k). \end{aligned}$$
(6.8)

This decomposition allows us to compute the expectation over \({{\mathcal {P}}}\). Specifically, Campbell’s theorem yields, for \({{\mathcal {P}}}\) a Poisson process with intensity one,

$$\begin{aligned} \begin{aligned}&{{\mathbb {E}}}\sum _{\begin{array}{c} {{\varvec{p}}}_1,\dots ,{{\varvec{p}}}_k\in {{\mathcal {P}}}\\ \text {distinct} \end{array}} H_{t,\ell ,\underline{F}}^{r^{2-d}\gamma }\big ( r^{2-d} \big (\tfrac{1}{2}\Vert {{\varvec{p}}}_1\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{p}}}_k\Vert ^2\big ), r {{\varvec{p}}}_1,\ldots ,r{{\varvec{p}}}_k \big ) \\&\quad =\int _{({{\mathbb {R}}}^d)^k} H_{t,\ell ,\underline{F}}^{r^{2-d}\gamma }\big ( r^{2-d} \big (\tfrac{1}{2}\Vert {{\varvec{y}}}_1\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{y}}}_k\Vert ^2\big ), r {{\varvec{y}}}_1,\ldots ,r{{\varvec{y}}}_k \big )\,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k\\&\quad =r^{-kd} \int _{({{\mathbb {R}}}^d)^k} H_{t,\ell ,\underline{F}}^{r^{-d}\gamma }\big ( r^{-d} \big (\tfrac{1}{2}\Vert {{\varvec{y}}}_1\Vert ^2,\ldots ,\tfrac{1}{2}\Vert {{\varvec{y}}}_k\Vert ^2\big ), {{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k \big )\,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k . \end{aligned} \end{aligned}$$
(6.9)
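The first equality in (6.9) is the factorial-moment (Campbell) formula for sums over k distinct points of a Poisson process. It can be checked numerically in a simple one-dimensional setting with \(k=2\); the test function \(f(x)=\mathrm {e}^{-x}\) and the interval length below are arbitrary illustrative choices:

```python
import math
import random

random.seed(7)

def poisson_points(L):
    """One sample of a unit-intensity Poisson point process on [0, L],
    built from independent exponential inter-arrival times."""
    pts, x = [], random.expovariate(1.0)
    while x <= L:
        pts.append(x)
        x += random.expovariate(1.0)
    return pts

# Factorial-moment identity for two distinct points:
#   E sum_{p1 != p2 in P} f(p1) f(p2) = (int f)^2.
# With f(x) = exp(-x) and L large, the right-hand side is approximately 1.
L, trials = 15.0, 40000
total = 0.0
for _ in range(trials):
    w = [math.exp(-p) for p in poisson_points(L)]
    s1 = sum(w)                  # sum_p f(p)
    s2 = sum(v * v for v in w)   # diagonal terms p1 = p2
    total += s1 * s1 - s2        # sum over distinct ordered pairs
estimate = total / trials
print(round(estimate, 2))
```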

Due to its translation invariance, \(H_{t,\ell ,\underline{F}}^\gamma \) is determined by its values in the \(l^\text {th}\) coordinate plane \(\Sigma ^\perp =\{{{\varvec{x}}}\in {{\mathbb {R}}}^k: x_l=0\}\). It will be convenient in our calculations below to fix l so that \(\ell \in F_l\). We define the corresponding Fourier transform of \(H_{t,\ell ,\underline{F}}^\gamma \) restricted to \(\Sigma ^\perp \) by

$$\begin{aligned} G_{t,\ell ,\underline{F}}^\gamma \big ({{\varvec{\theta }}}, {{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) = \int _{\Sigma ^\perp } H_{t,\ell ,\underline{F}}^\gamma \big ({{\varvec{\xi }}}, {{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \, e(-{{\varvec{\xi }}}\cdot {{\varvec{\theta }}})\, \mathrm {d}^\perp {{\varvec{\xi }}}, \end{aligned}$$
(6.10)

where \({{\varvec{\theta }}}\in \Sigma ^\perp \) and \(\mathrm {d}^\perp {{\varvec{\xi }}}= \prod _{\begin{array}{c} j = 1\\ j\ne l \end{array}}^k \mathrm {d}\xi _j\) denotes the standard Lebesgue measure on \(\Sigma ^\perp \). For \({{\varvec{\xi }}}\in \Sigma ^\perp \) we then have

$$\begin{aligned} H_{t,\ell ,\underline{F}}^\gamma \big ({{\varvec{\xi }}}, {{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k\big ) = \int _{\Sigma ^\perp } G_{t,\ell ,\underline{F}}^\gamma \big ( {{\varvec{\theta }}}, {{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k\big ) \, e({{\varvec{\xi }}}\cdot {{\varvec{\theta }}})\, \mathrm {d}^\perp {{\varvec{\theta }}}. \end{aligned}$$
(6.11)

Define the \((k\times (n+1))\)-matrix \(\Delta _{\ell ,\underline{F}}\) by

$$\begin{aligned} (\Delta _{\ell ,\underline{F}})_{ij}= {\left\{ \begin{array}{ll} 0 &{} \text {if } i=l \\ -1 &{} \text {if } j\in F_i \cap [0,\ell -1] \\ 1 &{} \text {if } j\in F_i \cap [\ell +1, n]\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(6.12)

In particular \((\Delta _{\ell ,\underline{F}})_{i\ell } = 0\) for all \(i=1,\dots ,k\). In view of (5.13) and (6.10) we find in the case \(\gamma =0\) that

$$\begin{aligned} G_{t,\ell ,\underline{F}}\big ( {{\varvec{\theta }}}, {{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k\big ) = {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k))\, {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \end{aligned}$$
(6.13)

with

$$\begin{aligned} {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) = \int _{\Box _{\ell ,n}(t)} {{\mathcal {A}}}_{\ell ,n}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}),{{\varvec{u}}}) \, \delta ({{\varvec{\theta }}}-\Delta _{\ell ,\underline{F}}{{\varvec{u}}})\, \mathrm {d}^\perp {{\varvec{u}}},\nonumber \\ \end{aligned}$$
(6.14)

and

$$\begin{aligned} \delta ({{\varvec{\theta }}}-\Delta _{\ell ,\underline{F}}{{\varvec{u}}})= \prod _{\begin{array}{c} i=1\\ \ell \notin F_i \end{array}}^k \delta \bigg (\theta _i+\sum _{j=0}^{\ell -1} u_j {\mathbb {1}}(j\in F_i)-\sum _{j=\ell +1}^{n} u_j {\mathbb {1}}(j\in F_i) \bigg ) . \end{aligned}$$
(6.15)

Thus if \(\ell \in F_l\)

$$\begin{aligned}&H_{t,\ell ,\underline{F}}({{\varvec{\xi }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) ={{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \int _{\Sigma ^\perp } \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \theta _i (\xi _i-\xi _l) \bigg ) \nonumber \\&\quad \times {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \, \mathrm {d}^\perp {{\varvec{\theta }}}. \end{aligned}$$
(6.16)

More generally, for \({\text {Re}}\,\gamma \ge 0\), we have the formula

$$\begin{aligned}&H_{t,\ell ,\underline{F}}^\gamma ({{\varvec{\xi }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) ={{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \nonumber \\&\quad \times \int _{\Sigma ^\perp } \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [ \theta _i (\xi _i-\xi _l) + \mathrm {i}|\theta _i| \gamma \big ] \bigg ) {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \, \mathrm {d}^\perp {{\varvec{\theta }}}. \end{aligned}$$
(6.17)

Our task is now to determine the convergence of

$$\begin{aligned} \begin{aligned}&\sum _{n=1}^\infty (2 \pi \mathrm {i}\lambda h^{-2})^n\; {{\mathcal {J}}}_{n}^\gamma (t h r^{1-d}) \\&\quad = \sum _{n=1}^\infty (2 \pi \mathrm {i}\lambda )^n\sum _{\ell =0}^n (-1)^{\ell } \sum _{k=1}^n \sum _{\underline{F}\in \Pi _\circ (n,k)} \int _{{{\mathbb {R}}}^{dk}}{{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \\&\qquad \times \int _{\Sigma ^\perp } \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [ \theta _i \big (\tfrac{1}{2}\Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_l\Vert ^2\big ) + \mathrm {i}|\theta _i| \gamma \big ] \bigg ) {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \, \mathrm {d}^\perp {{\varvec{\theta }}}\, \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \end{aligned} \end{aligned}$$
(6.18)

as \(r=h\rightarrow 0\) and calculate the limit.

7 Explicit Formulas

In this section we provide more explicit formulas for \({\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k})\). The main results are the expression (7.12) for general \({{\varvec{\theta }}}\) and (7.15) for \({{\varvec{\theta }}}= {{\varvec{0}}}\). We assume that \(\underline{F}\) is some set partition into k blocks with \(\ell \in F_l\). We define

$$\begin{aligned} \begin{aligned} I_-(\underline{F})&= \{ i \in \{1,\dots ,k\}\setminus \{ l\} \, \mid F_i \cap [0,\ell -1] \ne \emptyset \} , \\ I_+(\underline{F})&= \{ i \in \{1,\dots ,k\}\setminus \{l\} \, \mid F_i \cap [\ell +1,n] \ne \emptyset \}, \end{aligned} \end{aligned}$$
(7.1)

and similarly

$$\begin{aligned} \begin{aligned} I_-^*(\underline{F})&= \{ i \in \{1,\dots ,k\}\setminus \{l\} \, \mid F_i \cap [\ell +1,n] =\emptyset \}, \\ I_+^*(\underline{F})&= \{ i \in \{1,\dots ,k\}\setminus \{l\} \, \mid F_i \cap [0,\ell -1] =\emptyset \}. \end{aligned} \end{aligned}$$
(7.2)

We also write

$$\begin{aligned} J_- = I_- \cap I_-^*,\quad J_+ = I_+ \cap I_+^*,\quad J = I_- \cap I_+. \end{aligned}$$
(7.3)

Note that \(J_-\) (resp. \(J_+\)) contains the indices of one-sided blocks of the partition, i.e. blocks all of whose elements are less than or equal to (resp. greater than or equal to) \(\ell \). The set J contains the indices of blocks which are not one-sided, i.e. which contain elements both less than and greater than \(\ell \). This provides a complete categorisation of the blocks \(F_i\):

$$\begin{aligned} \{1,\dots ,k\} = \{l\} \cup J_- \cup J_+ \cup J. \end{aligned}$$
(7.4)

Let us furthermore define

$$\begin{aligned} \mu _i=\mu _{i,\ell ,\underline{F}} = |F_i \cap [0,\ell ]|-1, \qquad \nu _i=\nu _{i,\ell ,\underline{F}} = |F_i \cap [\ell ,n]|-1 . \end{aligned}$$
(7.5)
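A small worked example may help fix the notation of (7.1)–(7.5); the marked partition below is chosen arbitrarily for illustration, and the final assertion records the elementary counting identity \(|J| + \sum _{i} ({\tilde{\mu }}_i+{\tilde{\nu }}_i) = n-k+1\) (with \({\tilde{\mu }}_i=\max \{0,\mu _i\}\)) that is used in Sect. 8:

```python
n, ell = 6, 3
# A marked partition of {0,...,6}: block F_1 contains ell (so l = 1);
# it also contains 0 and n, as for the partitions summed over above.
F = {1: {0, 3, 6}, 2: {1, 5}, 3: {2}, 4: {4}}
k, l = len(F), 1

left = set(range(0, ell))            # [0, ell-1]
right = set(range(ell + 1, n + 1))   # [ell+1, n]

I_minus = {i for i in F if i != l and F[i] & left}
I_plus = {i for i in F if i != l and F[i] & right}
J_minus = I_minus - I_plus   # one-sided blocks entirely to the left of ell
J_plus = I_plus - I_minus    # one-sided blocks entirely to the right of ell
J = I_minus & I_plus         # blocks straddling ell

# The decomposition (7.4): the four classes exhaust {1,...,k}.
assert {l} | J_minus | J_plus | J == set(F)

mu = {i: len(F[i] & set(range(0, ell + 1))) - 1 for i in F}
nu = {i: len(F[i] & set(range(ell, n + 1))) - 1 for i in F}

# Counting identity used in the estimates of Sect. 8.
assert len(J) + sum(max(0, mu[i]) + max(0, nu[i]) for i in F) == n - k + 1
print(sorted(J_minus), sorted(J_plus), sorted(J))
```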

Combining (5.14) and (6.14) we have

$$\begin{aligned}&{\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \nonumber \\&\quad =\int _{\Box _{\ell ,n}(t)} \left( \int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k({{\varvec{y}}}_i-{{\varvec{y}}}_l)\sum _{j \in F_i \cap [\ell +1,n]} u_j ,{{\varvec{y}}}_l \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \, \mathrm {d}{{\varvec{x}}}\right) \nonumber \\&\qquad \times \prod _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \delta \bigg (\theta _i+\sum _{j \in F_i \cap [0,\ell -1]} u_j -\sum _{j \in F_i \cap [\ell +1,n]} u_j \bigg ) \mathrm {d}^\perp {{\varvec{u}}}. \end{aligned}$$
(7.6)

First we can freely integrate over all \(u_j\) for \(j \in F_l\). This yields a factor of

$$\begin{aligned} \frac{(t- \sum _{j=0, j\notin F_l}^{\ell -1} u_j)_+^{\mu _l} (t- \sum _{j=\ell +1, j\notin F_l}^{n} u_j)_+^{\nu _l}}{\mu _l! \nu _l!} \end{aligned}$$
(7.7)

where \((x)_+ = \max \{0 , x \}\). If we use the convention \((x)_+^0 = \mathbb {1}[x > 0]\) then, writing \(r_i = \sum _{j \in F_i \cap [0,\ell -1]} u_j\) and \(s_i = \sum _{j \in F_i \cap [\ell +1,n]} u_j\), we obtain

$$\begin{aligned} \begin{aligned}&{\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \\&\quad = \int _{{{\mathbb {R}}}_{\ge 0}^{|I_-|}}\int _{{{\mathbb {R}}}_{\ge 0}^{|I_+|}} \bigg ( \int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{i\in I_+}({{\varvec{y}}}_i-{{\varvec{y}}}_l) s_i ,{{\varvec{y}}}_l \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \, \mathrm {d}{{\varvec{x}}}\bigg ) \\&\qquad \times \frac{(t- \sum _{i \in I_- } r_i)_+^{\mu _l} (t- \sum _{i \in I_+ } s_i)_+^{\nu _l}}{\mu _l! \nu _l!} \prod _{i\in J_-} \Lambda _i^-(r_i) \delta (\theta _i+r_i)\,\mathrm {d}r_i\\&\qquad \times \prod _{i\in J_+} \Lambda _i^+(s_i) \delta (\theta _i -s_i ) \,\mathrm {d}s_i \prod _{i\in J} \Lambda _i^-(r_i) \Lambda _i^+(s_i) \delta (\theta _i+r_i -s_i ) \,\mathrm {d}r_i\,\mathrm {d}s_i , \end{aligned} \end{aligned}$$
(7.8)

with

$$\begin{aligned} \Lambda _i^-(r_i) = \int _{{{\mathbb {R}}}_{\ge 0}^{\mu _i+1}} \delta \bigg ( r_i - \sum _{j \in F_i \cap [0,\ell -1]} u_j \bigg ) \prod _{j \in F_i \cap [0,\ell -1]} \mathrm {d}u_j = \frac{r_i^{\mu _i}}{\mu _i !} \end{aligned}$$
(7.9)

and

$$\begin{aligned} \Lambda _i^+(s_i) = \int _{{{\mathbb {R}}}_{\ge 0}^{\nu _i+1}} \delta \bigg ( s_i - \sum _{j \in F_i \cap [\ell +1,n]} u_j \bigg ) \prod _{j \in F_i \cap [\ell +1,n]} \mathrm {d}u_j = \frac{s_i^{\nu _i}}{\nu _i !}. \end{aligned}$$
(7.10)
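As a quick check of (7.9) in the simplest nontrivial case, take \(\mu _i = 2\), so that the integral runs over three variables; iterating the delta constraint gives

$$\begin{aligned} \int _{{{\mathbb {R}}}_{\ge 0}^{3}} \delta \big ( r_i - u_1 - u_2 - u_3 \big ) \, \mathrm {d}u_1\,\mathrm {d}u_2\,\mathrm {d}u_3 = \int _0^{r_i}\int _0^{r_i-u_1} \mathrm {d}u_2\, \mathrm {d}u_1 = \int _0^{r_i} (r_i-u_1)\, \mathrm {d}u_1 = \frac{r_i^2}{2}, \end{aligned}$$

in agreement with \(r_i^{\mu _i}/\mu _i!\); the general cases of (7.9) and (7.10) follow in the same way by induction.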

In other words

$$\begin{aligned} \begin{aligned}&{\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \\&\quad =\int _{{{\mathbb {R}}}_{\ge 0}^{|I_-|}}\int _{{{\mathbb {R}}}_{\ge 0}^{|I_+|}} \bigg ( \int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{i\in I_+}({{\varvec{y}}}_i-{{\varvec{y}}}_l) s_i ,{{\varvec{y}}}_l \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \, \mathrm {d}{{\varvec{x}}}\bigg ) \\&\qquad \times \frac{(t- \sum _{i \in I_- } r_i)_+^{\mu _l} (t- \sum _{i \in I_+ } s_i)_+^{\nu _l}}{\mu _l!\nu _l!} \prod _{i\in J_-} \frac{r_i^{\mu _i}}{\mu _i !} \delta (\theta _i+r_i)\,\mathrm {d}r_i\\&\qquad \times \prod _{i\in J_+} \frac{s_i^{\nu _i}}{\nu _i !} \delta (\theta _i -s_i ) \,\mathrm {d}s_i \prod _{i\in J} \frac{r_i^{\mu _i} s_i^{\nu _i}}{\mu _i ! \nu _i !} \delta (\theta _i+r_i -s_i ) \,\mathrm {d}r_i\,\mathrm {d}s_i . \end{aligned} \end{aligned}$$
(7.11)

Integrating over \(r_i\) for \(i \in I_- = J_- \cup J\) and \(s_i\) for \(i \in J_+\) yields

$$\begin{aligned} \begin{aligned}&{\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \\&\quad =\int _{{{\mathbb {R}}}_{\ge 0}^{|J|}} \bigg ( \int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{i\in J_+}({{\varvec{y}}}_i-{{\varvec{y}}}_l) \theta _i - \sum _{i\in J}({{\varvec{y}}}_i-{{\varvec{y}}}_l) s_i ,{{\varvec{y}}}_l \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \, \mathrm {d}{{\varvec{x}}}\bigg ) \\&\qquad \times \frac{1}{\mu _l!\nu _l!} \left( t + \sum _{i \in J_- } \theta _i - \sum _{i \in J} (s_i-\theta _i) \right) _+^{\mu _l} \left( t- \sum _{i \in J_+ } \theta _i - \sum _{i \in J} s_i \right) _+^{\nu _l}\\&\qquad \times \prod _{i\in J_-} \frac{(-\theta _i)_+^{\mu _i}}{\mu _i !} \prod _{i\in J_+} \frac{(\theta _i)_+^{\nu _i}}{\nu _i !} \prod _{i\in J} \frac{(s_i-\theta _i)_+^{\mu _i} s_i^{\nu _i}}{\mu _i ! \nu _i !}\mathrm {d}{{\varvec{s}}}. \end{aligned} \end{aligned}$$
(7.12)

When \({{\varvec{\theta }}}\rightarrow {{\varvec{0}}}\), note that the integrand vanishes unless \(\mu _i = 0\) for all \(i \in J_-\) and \(\nu _i = 0\) for all \(i \in J_+\); that is, a partition \(\underline{F}\) contributes in the limit only if all of its one-sided blocks are singletons. This motivates the definition of \(\Omega (n,k) \subset \{0,\ldots , n\}\times \Pi _\circ (n,k)\) as the set of marked partitions \((\ell ,\underline{F})\) such that every block \(F_i\) not containing \(\ell \) either (i) is a singleton or (ii) contains at least one number strictly less than \(\ell \) and at least one number strictly greater than \(\ell \) (cf. Appendix A.2).

The support of \({\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k})\) as a function of \({{\varvec{\theta }}}\) is contained in the domain

$$\begin{aligned} \{{{\varvec{\theta }}}\in \Sigma ^\perp \mid -t\le \theta _i\le 0\, \forall i\in J_-,\; 0\le \theta _i\le t\, \forall i\in J_+ \}; \end{aligned}$$
(7.13)

and is continuous on this domain in a sufficiently small neighbourhood of the origin. Setting \({{\varvec{\theta }}}={{\varvec{0}}}\) in (7.12) yields

$$\begin{aligned} {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) =&\int _{{{\mathbb {R}}}_{\ge 0}^{|J|}} \bigg ( \int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{i\in J}({{\varvec{y}}}_i-{{\varvec{y}}}_l) s_i ,{{\varvec{y}}}_l \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \, \mathrm {d}{{\varvec{x}}}\bigg ) \nonumber \\&\times \frac{1}{\mu _l!\nu _l!} \left( t - \sum _{i \in J} s_i \right) _+^{\mu _l+\nu _l}\prod _{i\in J_- \cup J_+ } \mathbb {1}(|F_i|=1) \prod _{i\in J} \frac{s_i^{\mu _i+\nu _i}}{\mu _i ! \nu _i !} \,\mathrm {d}{{\varvec{s}}}.\nonumber \\ \end{aligned}$$
(7.14)

In other words, for \((\ell ,\underline{F})\in \Omega (n,k)\) we have

$$\begin{aligned}&{\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) =\int _{{{\mathbb {R}}}_{\ge 0}^k} \bigg (\int _{{{\mathbb {R}}}^d} a\bigg ( {{\varvec{x}}}- \sum _{j=1}^k u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_l),{{\varvec{y}}}_l \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \, \mathrm {d}{{\varvec{x}}}\bigg ) \nonumber \\&\quad \times \bigg (\prod _{i=1}^k \frac{u_i^{\mu _{i}+\nu _{i}} }{\mu _{i}! \nu _{i}!}\bigg ) \, \delta (u_1+\cdots +u_k-t) \, \mathrm {d}{{\varvec{u}}}, \end{aligned}$$
(7.15)

whereas for \((\ell ,\underline{F})\notin \Omega (n,k)\) we have that \({\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k})=0\).

8 Decay Estimates

In this section we prove Proposition 3 which establishes the absolute and uniform convergence of the series defining (6.18). The proof will be similar in spirit to that of Proposition 1, and the key ingredient is the following decay estimate.

Proposition 2

Let \(a,b\in {{\mathcal {S}}}({\text {T}}({{\mathbb {R}}}^d))\). There exists a constant \(C_{a,b,d}\) such that for all \(W\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), \(t>0\), \({\text {Re}}\gamma \ge 0\), \(r>0\), \(n\ge 1\), \(1\le k\le n+1\), set partitions \(\underline{F}\in \Pi _\circ (n,k)\) with \(\ell \in F_l\), and all \({{\varvec{\theta }}}\in \Sigma ^\perp \),

$$\begin{aligned} \begin{aligned}&\bigg |\int _{({{\mathbb {R}}}^d)^k} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [\theta _i (\tfrac{1}{2}\Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_l\Vert ^2) + \mathrm {i}|\theta _i| \gamma \big ] \bigg ) \\[-0.3cm]&\qquad \times {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k \bigg | \\&\quad \le C_{a,b,d}^n \langle t \rangle ^n \, \Vert W\Vert _{2d+2,d+1,1}^n\, \bigg (\prod _{i=1}^k \frac{1}{|F_i|!} \bigg ) \, \bigg (\prod _{\begin{array}{c} i=1\\ i \ne l \end{array}}^k \mathrm {e}^{-2 \pi |\theta _i| \gamma } \langle \theta _i \rangle ^{-d/2} \bigg ). \end{aligned} \end{aligned}$$
(8.1)

Proof

We first write

$$\begin{aligned} \begin{aligned}&\bigg |\int _{({{\mathbb {R}}}^d)^k} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [\theta _i (\tfrac{1}{2}\Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_l\Vert ^2) + \mathrm {i}|\theta _i| \gamma \big ] \bigg ) \\&\qquad \times {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k \bigg | \\&\quad \le \mathrm {e}^{-2 \pi \sum _{i \ne l} |\theta _i| \gamma } \bigg | \int _{({{\mathbb {R}}}^d)^{k}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \mathrm {e}\bigg (\tfrac{1}{2} \sum _{i=1}^k \theta _i \Vert {{\varvec{y}}}_i\Vert ^2 \bigg ) \\&\qquad \times {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k\bigg | \end{aligned} \end{aligned}$$
(8.2)

where we have used the shorthand \(\theta _l := -\sum _{i=1,i \ne l}^k \theta _i\). The inner integral can be treated using Lemma 1, so it remains to compute the relevant norm appearing in that lemma. Let \(K=\{ 1,\dots ,k \}\) and \(S =\{s_1,\dots ,s_p\} \subset K\). We need to compute the norm \(\Vert f_S\Vert _{L^1}\) where

$$\begin{aligned} f({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \, {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k). \end{aligned}$$
(8.3)

We have

$$\begin{aligned}&\Vert f_S\Vert _{L^1} = \int _{{{\mathbb {R}}}^{dk}}\bigg | \int _{{{\mathbb {R}}}^{d|S|}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \, {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \nonumber \\&\quad \times \left[ \prod _{i \in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot {{\varvec{x}}}_{i}) \mathrm {d}{{\varvec{y}}}_{i}\right] \bigg | \left[ \prod _{i\in K\setminus S} \mathrm {d}{{\varvec{y}}}_i\right] \left[ \prod _{i\in S} \mathrm {d}{{\varvec{x}}}_{i}\right] \end{aligned}$$
(8.4)

and, more explicitly,

$$\begin{aligned} \begin{aligned} \Vert f_S\Vert _{L^1} =&\int _{{{\mathbb {R}}}^{dk}}\bigg | \int _{{{\mathbb {R}}}^{d|S|}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \, \int _{{{\mathbb {R}}}_{\ge 0}^{|J|}} \int _{{{\mathbb {R}}}^d} {\tilde{a}}({{\varvec{\eta }}},{{\varvec{y}}}_l) \, {\tilde{b}}(-{{\varvec{\eta }}},{{\varvec{y}}}_1) \\&\times \frac{1}{\mu _l! \nu _l!} \bigg ( t + r^d\sum _{i \in J_- } \theta _i - \sum _{i \in J} (s_i-r^d\theta _i) \bigg )_+^{\mu _l} \bigg (t- r^d\sum _{i \in J_+ } \theta _i - \sum _{i \in J} s_i \bigg )_+^{\nu _l}\\&\times \prod _{i\in J_-} \frac{(-r^d\theta _i)_+^{\mu _i}}{\mu _i !} \prod _{i\in J_+} \frac{(r^d\theta _i)_+^{\nu _i}}{\nu _i !} \prod _{i\in J} \frac{(s_i-r^d\theta _i)_+^{\mu _i} s_i^{\nu _i}}{\mu _i ! \nu _i !}\\&\times \bigg [\prod _{i \in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot ({{\varvec{x}}}_{i}-{{\varvec{\eta }}}\tau _{i})) \mathrm {d}{{\varvec{y}}}_{i}\bigg ] \,\mathrm {d}{{\varvec{s}}}\, \mathrm {d}{{\varvec{\eta }}}\bigg | \bigg [\prod _{i\in K\setminus S} \mathrm {d}{{\varvec{y}}}_i \bigg ] \bigg [\prod _{i\in S} \mathrm {d}{{\varvec{x}}}_{i}\bigg ] \end{aligned} \end{aligned}$$
(8.5)

where we write

$$\begin{aligned} \tau _i = {\left\{ \begin{array}{ll} r^d \theta _i &{} i \in J_+ \\ s_i &{} i \in J . \end{array}\right. } \end{aligned}$$
(8.6)

We pull the integral over \({{\varvec{\eta }}}\) and \({{\varvec{s}}}\) outside the absolute value and bound \(\Vert f_S\Vert _{L^1}\) above by

$$\begin{aligned} \begin{aligned}&\int _{{{\mathbb {R}}}^{d (k+1)}}\bigg | \int _{{{\mathbb {R}}}^{d|S|}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \,{\tilde{a}}({{\varvec{\eta }}},{{\varvec{y}}}_l) \, {\tilde{b}}(-{{\varvec{\eta }}},{{\varvec{y}}}_1) \\&\quad \times \bigg [\prod _{i \in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot {{\varvec{x}}}_{i} )\mathrm {d}{{\varvec{y}}}_{i}\bigg ] \bigg | \bigg [\prod _{i\in K\setminus S} \mathrm {d}{{\varvec{y}}}_i\bigg ] \bigg [\prod _{i\in S} \mathrm {d}{{\varvec{x}}}_{i}\bigg ] \, \mathrm {d}{{\varvec{\eta }}}\\&\quad \times \int _{{{\mathbb {R}}}_{\ge 0}^{|J|} } \frac{1}{\mu _l! \nu _l!} \bigg ( t + r^d\sum _{i \in J_- } \theta _i - \sum _{i \in J} (s_i-r^d\theta _i) \bigg )_+^{\mu _l} \bigg (t- r^d\sum _{i \in J_+ } \theta _i - \sum _{i \in J} s_i \bigg )_+^{\nu _l}\\&\quad \times \prod _{i\in J_-} \frac{(-r^d\theta _i)_+^{\mu _i}}{\mu _i !} \prod _{i\in J_+} \frac{(r^d\theta _i)_+^{\nu _i}}{\nu _i !} \prod _{i\in J} \frac{(s_i-r^d\theta _i)_+^{\mu _i} s_i^{\nu _i}}{\mu _i ! \nu _i !} \, \mathrm {d}{{\varvec{s}}}. \end{aligned} \end{aligned}$$
(8.7)

This can be bounded above by

$$\begin{aligned}&\frac{t^{|J|}}{|J|!}\left[ \prod _{i=1}^k \frac{t^{{\tilde{\mu }}_i+ {\tilde{\nu }}_i}}{{\tilde{\mu }}_i! {\tilde{\nu }}_i!}\right] \int _{{{\mathbb {R}}}^{d (k+1)}} \bigg | \int _{{{\mathbb {R}}}^{d|S|}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \, {\tilde{a}}({{\varvec{\eta }}},{{\varvec{y}}}_l) \, {\tilde{b}}(-{{\varvec{\eta }}},{{\varvec{y}}}_1) \nonumber \\&\quad \times \left[ \prod _{i \in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot {{\varvec{x}}}_{i} )\mathrm {d}{{\varvec{y}}}_{i}\right] \bigg | \left[ \prod _{i\in K\setminus S} \mathrm {d}{{\varvec{y}}}_i\right] \left[ \prod _{i\in S} \mathrm {d}{{\varvec{x}}}_{i}\right] \, \mathrm {d}{{\varvec{\eta }}}\end{aligned}$$
(8.8)

where we use the convention \({\tilde{\mu }}_i = (\mu _i)_+ = \max \{0,\mu _i\}\), and similarly for \({\tilde{\nu }}_i\). As in the proof of Proposition 1 we bound this above by

$$\begin{aligned}&\frac{t^{|J|}}{|J|!}\left[ \prod _{i=1}^k \frac{t^{{\tilde{\mu }}_i+ {\tilde{\nu }}_i}}{{\tilde{\mu }}_i! {\tilde{\nu }}_i!}\right] \,\sum _{|{{\varvec{m}}}_{s_1}|<d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_1} \end{pmatrix} \cdots \sum _{|{{\varvec{m}}}_{s_p}| < d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_p} \end{pmatrix} \nonumber \\&\quad \int _{{{\mathbb {R}}}^{d (k+1)}} \bigg | \left[ \prod _{i \in S} {{\varvec{x}}}_i^{{{\varvec{m}}}_i}\right] \int _{{{\mathbb {R}}}^{d|S|}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \, {\tilde{a}}({{\varvec{\eta }}},{{\varvec{y}}}_l) \, {\tilde{b}}(-{{\varvec{\eta }}},{{\varvec{y}}}_1) \nonumber \\&\qquad \times \left[ \prod _{i \in S} \mathrm {e}({{\varvec{y}}}_{i}\cdot {{\varvec{x}}}_{i} )\mathrm {d}{{\varvec{y}}}_{i}\right] \bigg | \left[ \prod _{i\in K\setminus S} \mathrm {d}{{\varvec{y}}}_i\right] \left[ \prod _{i\in S} \langle {{\varvec{x}}}_i\rangle ^{-d-1} \mathrm {d}{{\varvec{x}}}_{i}\right] \, \mathrm {d}{{\varvec{\eta }}}. \end{aligned}$$
(8.9)

Integrating by parts with respect to \({{\varvec{y}}}_i\) for \(i\in S\) and pulling the absolute value inside the integral gives the upper bound

$$\begin{aligned}&\frac{t^{|J|}}{|J|!}\left[ \prod _{i=1}^k \frac{t^{{\tilde{\mu }}_i+ {\tilde{\nu }}_i}}{{\tilde{\mu }}_i! {\tilde{\nu }}_i!}\right] \, \left( \int _{{{\mathbb {R}}}^d} \langle {{\varvec{x}}}\rangle ^{-d-1} \mathrm {d}{{\varvec{x}}}\right) ^{p} \,\sum _{|{{\varvec{m}}}_{s_1}|<d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_1} \end{pmatrix} \cdots \sum _{|{{\varvec{m}}}_{s_p}| < d+1} \begin{pmatrix} d+1 \\ {{\varvec{m}}}_{s_p} \end{pmatrix} \nonumber \\&\quad \int _{{{\mathbb {R}}}^{d (k+1)}} \bigg | \left[ \prod _{i \in S} D_{{{\varvec{y}}}_i}^{{{\varvec{m}}}_i}\right] {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \, {\tilde{a}}({{\varvec{\eta }}},{{\varvec{y}}}_l) \, {\tilde{b}}(-{{\varvec{\eta }}},{{\varvec{y}}}_1) \bigg | \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \, \mathrm {d}{{\varvec{\eta }}}. \end{aligned}$$
(8.10)

If \(l \ne 1\), each \({{\varvec{y}}}_i\) appears \(2|F_i|\) times in the product of \({{\mathcal {W}}}\), \({\tilde{a}}\) and \({\tilde{b}}\), except for \({{\varvec{y}}}_1\) and \({{\varvec{y}}}_l\), which appear \(2|F_1| + 1\) and \(2|F_l|+1\) times respectively (due to their appearance in \({\tilde{b}}\) and \({\tilde{a}}\)). If \(l=1\), then each \({{\varvec{y}}}_i\) appears \(2|F_i|\) times, except for \({{\varvec{y}}}_1\), which appears \(2|F_1|+2\) times. Either way we can find a constant \(c_d\) such that the number of terms inside the absolute value can be bounded above by

$$\begin{aligned} c_d^{pd} |F_1|^{d+1} \cdots |F_k|^{d+1}. \end{aligned}$$

Applying the triangle inequality yields a sum of terms of the form

$$\begin{aligned} \int _{{{\mathbb {R}}}^{d (k+1)}} \bigg | \Phi (\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) \,{\mathfrak {a}} ({{\varvec{\eta }}},{{\varvec{y}}}_l) {\mathfrak {b}} (-{{\varvec{\eta }}},{{\varvec{y}}}_1) \bigg | \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \, \mathrm {d}{{\varvec{\eta }}}\end{aligned}$$
(8.11)

where

$$\begin{aligned} \Phi ({{\varvec{\eta }}}_0,{{\varvec{\eta }}}_1,\dots ,{{\varvec{\eta }}}_n) = \varphi _1({{\varvec{\eta }}}_0-{{\varvec{\eta }}}_1) \cdots \varphi _{n }({{\varvec{\eta }}}_{n-1}-{{\varvec{\eta }}}_n), \end{aligned}$$
(8.12)

each \(\varphi _i\) is a derivative of \({\hat{W}}\) of order \(\le 2d+2\) and \({\mathfrak {a}}\) and \({\mathfrak {b}}\) are derivatives of \({\tilde{a}}\) and \({\tilde{b}}\) with respect to the second argument of order \(\le d+1\). Define the map \(\kappa : \{0,\dots ,n\} \rightarrow \{1,\dots ,k\}\) implicitly dependent on \(\underline{F}\) by \(\kappa (i) = j\) if \(i \in F_j\). We then have that

$$\begin{aligned} \Phi (\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) ) = \varphi _1({{\varvec{y}}}_{\kappa (0)}-{{\varvec{y}}}_{\kappa (1)}) \cdots \varphi _{n}({{\varvec{y}}}_{\kappa (n-1)} - {{\varvec{y}}}_{\kappa (n)}). \end{aligned}$$
(8.13)

By the definition of \(\Pi _\circ (n,k)\) we have that \(\kappa (0)= \kappa (n) = 1\). For \(i = 2,\dots ,k\), we define the partial inverse

$$\begin{aligned} \kappa ^{-1}(i) = \min \{ j \in \{1,\dots ,n-1\} \mid \kappa (j) = i \}. \end{aligned}$$

For \(i =2,\dots ,k\), the first factor of \(\Phi (\iota _{\underline{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k))\) in which \({{\varvec{y}}}_i\) appears is then by definition

$$\begin{aligned} \varphi _{\kappa ^{-1}(i)}({{\varvec{y}}}_{\kappa (\kappa ^{-1}(i)-1)} -{{\varvec{y}}}_i). \end{aligned}$$

We define \({{\mathcal {K}}}\subset \{1,\dots ,n\}\) to be the image of \(\{2,\dots , k\}\setminus \{ l\}\) under \(\kappa ^{-1}\). Equation (8.11) can thus be bounded above by

$$\begin{aligned} \begin{aligned}&\left[ \prod _{i \in \{1,\dots ,n\} \setminus {{\mathcal {K}}}} \Vert \varphi _i \Vert _{L^\infty } \right] \\&\quad \times \int _{{{\mathbb {R}}}^{d (k+1)}} \bigg | {\mathfrak {a}} ({{\varvec{\eta }}},{{\varvec{y}}}_l) {\mathfrak {b}}(-{{\varvec{\eta }}},{{\varvec{y}}}_1) \left[ \prod _{i\in {{\mathcal {K}}}} \varphi _{i}({{\varvec{y}}}_{\kappa (i-1)} - {{\varvec{y}}}_{\kappa (i)}) \right] \bigg | \mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \, \mathrm {d}{{\varvec{\eta }}}. \end{aligned} \end{aligned}$$
(8.14)

Making the variable substitutions \({{\varvec{y}}}_{\kappa (i)} \rightarrow {{\varvec{y}}}_{\kappa (i-1)}-{{\varvec{y}}}_{\kappa (i)}\) for \(i\in {{\mathcal {K}}}\), we bound this above by

$$\begin{aligned} \bigg [\prod _{i \in \{1,\dots ,n\} \setminus {{\mathcal {K}}}} \Vert \varphi _i \Vert _{L^\infty } \bigg ] \left( \int _{{{\mathbb {R}}}^{3d}} | {\mathfrak {a}}({{\varvec{\eta }}},{{\varvec{y}}}) {\mathfrak {b}}(-{{\varvec{\eta }}},{{\varvec{y}}}')| \mathrm {d}{{\varvec{y}}}\mathrm {d}{{\varvec{y}}}'\mathrm {d}{{\varvec{\eta }}}\right) \, \bigg [\prod _{i\in {{\mathcal {K}}}} \Vert \varphi _i\Vert _{L^1} \bigg ]. \end{aligned}$$
(8.15)

As in (4.28) we have that

$$\begin{aligned} \Vert \varphi _i\Vert _{L^\infty } \le \Vert W \Vert _{2d+2,0,1} \end{aligned}$$
(8.16)

and as in (4.31) we have that

$$\begin{aligned} \Vert \varphi _i\Vert _{L^1} \le \left( \int _{{{\mathbb {R}}}^d} \langle {{\varvec{y}}}\rangle ^{-d-1} \mathrm {d}{{\varvec{y}}}\right) \, (2d+1)^{d+1} \, \Vert W \Vert _{2d+2,d+1,1}. \end{aligned}$$
(8.17)

Combining this with the fact that the functions \({\tilde{a}}\) and \({\tilde{b}}\) are of Schwartz class, we conclude that there exists a constant \( c_{a,b,d}\) such that

$$\begin{aligned} \Vert f_S\Vert _{L^1} \le c_{a,b,d}^n \Vert W \Vert _{2d+2,d+1,2}^n \, \frac{t^{|J|}}{|J|!}\left[ \prod _{i=1}^k \frac{t^{{\tilde{\mu }}_i+{\tilde{\nu }}_i}}{{\tilde{\mu }}_i! {\tilde{\nu }}_i!} |F_i|^{d+1}\right] . \end{aligned}$$
(8.18)

We then use the fact that

$$\begin{aligned} |F_i| = {\left\{ \begin{array}{ll} \mu _i+1 &{} i \in J_- \\ \nu _i + 1 &{} i \in J_+ \\ \mu _i+\nu _i+2 &{} i \in J \\ \mu _i+\nu _i+1 &{} i = l \end{array}\right. } \end{aligned}$$
(8.19)

so in particular for \(i \in J_-\) or \(i \in J_+\)

$$\begin{aligned} \frac{1}{\mu _i!} = \frac{|F_i|}{|F_i|!} \quad \text { or } \quad \frac{1}{\nu _i!} = \frac{|F_i|}{|F_i|!} \end{aligned}$$
(8.20)

respectively. For \(i \in J\) we have

$$\begin{aligned} \frac{1}{\mu _i! \nu _i!} \le \frac{|F_i|^2}{|F_i|!} 2^{|F_i|-2} \end{aligned}$$
(8.21)

and for \(i = l\) we have

$$\begin{aligned} \frac{1}{\mu _l! \nu _l!} \le \frac{|F_l|}{|F_l|!} 2^{|F_l|-1}. \end{aligned}$$
(8.22)

For simplicity we bound all of these uniformly by

$$\begin{aligned} \frac{|F_i|^2}{|F_i|!} 2^{|F_i|}. \end{aligned}$$
(8.23)
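Each of the bounds (8.20)–(8.23) is an instance of the binomial estimate \(\binom{\mu +\nu }{\mu }\le 2^{\mu +\nu }\). As a quick numerical sanity check, separate from the proof, the following Python sketch (over an arbitrary range of \(\mu ,\nu \)) confirms (8.21), (8.22) and the uniform majorant (8.23):

```python
from math import factorial

def lhs(mu, nu):
    return 1.0 / (factorial(mu) * factorial(nu))

def uniform_bound(F):
    # the uniform majorant (8.23): |F|^2 2^{|F|} / |F|!
    return F**2 * 2**F / factorial(F)

for mu in range(8):
    for nu in range(8):
        F = mu + nu + 2          # case i in J, bound (8.21)
        assert lhs(mu, nu) <= F**2 * 2**(F - 2) / factorial(F) <= uniform_bound(F)
        F = mu + nu + 1          # case i = l, bound (8.22)
        assert lhs(mu, nu) <= F * 2**(F - 1) / factorial(F) <= uniform_bound(F)
```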

Combining this with the fact that \(|J| + \sum _{i=1}^k ({\tilde{\mu }}_i+{\tilde{\nu }}_i) = n-k+1\) yields

$$\begin{aligned} \Vert f_S\Vert _{L^1} \le c_{a,b,d}^n \frac{t^{n-k+1}}{|J|!} \, \Vert W \Vert _{2d+2,d+1,2}^n \left[ \prod _{i=1}^k \frac{|F_i|^{d+3}}{|F_i|!} \, 2^{|F_i|} \right] . \end{aligned}$$
(8.24)

Finally, observe that, since \(\sum _{i=1}^k |F_i| = n+1\), we have that

$$\begin{aligned} \prod _{i=1}^k 2^{|F_i|} = 2^{n+1}, \quad \prod _{i=1}^k |F_i|^{d+3} < 2^{(n+1)(d+3)}. \end{aligned}$$
(8.25)

This completes the proof. \(\square \)
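The counting facts used in the final step, \(\prod _i 2^{|F_i|}=2^{n+1}\) and \(\prod _i |F_i|^{d+3}<2^{(n+1)(d+3)}\) for any block sizes summing to \(n+1\), can be confirmed by brute force over all compositions; the parameters \(n=6\), \(d=3\) in this sketch are arbitrary:

```python
def compositions(total, k):
    # all ways to write `total` as an ordered sum of k positive parts
    if k == 1:
        yield (total,)
        return
    for first in range(1, total - k + 2):
        for rest in compositions(total - first, k - 1):
            yield (first,) + rest

n, d = 6, 3
for k in range(1, n + 2):
    for sizes in compositions(n + 1, k):
        prod2 = 1
        prod_pow = 1
        for F in sizes:
            prod2 *= 2**F
            prod_pow *= F**(d + 3)
        assert prod2 == 2**(n + 1)                  # first identity in (8.25)
        assert prod_pow < 2**((n + 1) * (d + 3))    # strict bound in (8.25)
```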

This upper bound allows us to ensure convergence of the series (6.18).

Proposition 3

Let \(a,b\in {{\mathcal {S}}}({\text {T}}({{\mathbb {R}}}^d))\), \(W\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), \(t>0\) and set

$$\begin{aligned} R= \bigg (2 \pi \mathrm {e}\, C_{a,b,d} \, \langle t \rangle \Vert W\Vert _{2d+2,d+1,1} \max \left\{ 1 , \int _{{{\mathbb {R}}}} \langle \theta \rangle ^{-d/2} \mathrm {d}\theta \right\} \bigg )^{-1}. \end{aligned}$$
(8.26)

Then the series

$$\begin{aligned} \sum _{n=1}^\infty (2 \pi \mathrm {i}\lambda h^{-2})^n \; {{\mathcal {J}}}_{\ell ,n}^\gamma (t r^{2-d}) \end{aligned}$$
(8.27)

in (6.18) converges absolutely for all \(|\lambda | < R\), uniformly in \({\text {Re}}\gamma \ge 0\), \(0< r\le 1\).

Proof

First, by integrating over \({{\varvec{\theta }}}\) we obtain from Proposition 2

$$\begin{aligned} \begin{aligned}&\sum _{n=1}^\infty (2 \pi \mathrm {i}|\lambda |)^n\sum _{\ell =0}^n \sum _{k=1}^n \sum _{\underline{F}\in \Pi _\circ (n,k)} \bigg | \int _{{{\mathbb {R}}}^{dk}}{{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \\&\qquad \times \int _{\Sigma ^\perp } \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [\theta _i \big (\tfrac{1}{2}\Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_l\Vert ^2\big ) + \mathrm {i}|\theta _i| \gamma \big ] \bigg )\\&\qquad {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \, \mathrm {d}^\perp {{\varvec{\theta }}}\mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \bigg | \\&\quad \le \sum _{n=1}^\infty (n+1) (2 \pi A |\lambda | \langle t \rangle \Vert W\Vert _{2d+2,d+1,1})^n \sum _{k=1}^n \sum _{\underline{F}\in \Pi _\circ (n,k)} \left( \prod _{i=1}^k \frac{1}{|F_i|!} \right) , \end{aligned} \end{aligned}$$
(8.28)

where

$$\begin{aligned} A = C_{a,b,d} \,\max \left\{ 1 , \int _{{{\mathbb {R}}}} \langle \theta \rangle ^{-d/2} \mathrm {d}\theta \right\} . \end{aligned}$$

We can replace the set \(\Pi _\circ (n,k)\) by the set of all partitions of \(\{0,\dots ,n\}\) into k blocks to obtain the upper bound

$$\begin{aligned} \sum _{\underline{F}\in \Pi _\circ (n,k)} \bigg ( \prod _{i=1}^k \frac{1}{|F_i|!} \bigg ) < \frac{1}{(n+1)!} \sum _{m_1+\dots +m_k = n+1} \begin{pmatrix} n+1 \\ m_1, m_2, \dots , m_{k}\end{pmatrix} =\frac{ k^{n+1}}{(n+1)!}.\nonumber \\ \end{aligned}$$
(8.29)

Inserting this into our upper bound yields

$$\begin{aligned} (8.28) < \sum _{n=1}^\infty (2 \pi A |\lambda | \langle t \rangle \Vert W\Vert _{2d+2,d+1,1})^n \frac{ n^{n+1}}{(n-1)!} \end{aligned}$$
(8.30)

which converges for \(2 \pi A |\lambda | \langle t \rangle \Vert W\Vert _{2d+2,d+1,1} < \mathrm {e}^{-1}\) by Stirling’s formula. \(\square \)
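The convergence criterion in the last step can also be seen numerically: for the majorant \(\sum _n x^n n^{n+1}/(n-1)!\) the ratio of consecutive terms tends to \(x\mathrm {e}\), so the series converges precisely when \(x<\mathrm {e}^{-1}\). A log-space sketch (the value of \(x\) is an arbitrary choice below the threshold):

```python
from math import e, exp, log, lgamma

x = 0.9 / e   # arbitrary choice strictly below the radius e^{-1}

def log_term(n):
    # log of x^n * n^(n+1) / (n-1)!; lgamma(n) = log((n-1)!)
    return n * log(x) + (n + 1) * log(n) - lgamma(n)

ratios = [exp(log_term(n + 1) - log_term(n)) for n in range(1, 200)]
assert abs(ratios[-1] - x * e) < 0.02   # ratio-test limit is x*e = 0.9 < 1
assert all(r < 1.0 for r in ratios[20:])
```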

9 The Microlocal Boltzmann–Grad Limit

In this section we combine the results of Sects. 7 and 8 to prove Proposition 4, which establishes the limit of the full perturbation series. Given \((\ell ,\underline{F})\in \Omega (n,k)\), let \({{\mathcal {C}}}^\perp _{\ell ,\underline{F}}\) be the set of \({{\varvec{\theta }}}\in \Sigma ^\perp \) whose \(i\)th coordinate \(\theta _i\) ranges over

$$\begin{aligned} {\left\{ \begin{array}{ll} {{\mathbb {R}}}_{\le 0} &{} \text {if } F_i=\{j\} \hbox { with } j<\ell \\ {{\mathbb {R}}}_{\ge 0} &{} \text {if } F_i=\{j\} \hbox { with } j>\ell \\ 0 &{} \text {if } \ell \in F_i\\ {{\mathbb {R}}}&{} \text {if } \ell \notin F_i \hbox { and } |F_i|>1. \end{array}\right. } \end{aligned}$$
(9.1)

For \((\ell ,\underline{F})\in \Omega (n,k)\) we define

$$\begin{aligned} D_{\ell ,\underline{F}}^\gamma ({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) = \int _{{{\mathcal {C}}}^\perp _{\ell ,\underline{F}}} \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [\theta _i \big (\tfrac{1}{2}\Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_l\Vert ^2\big ) + \mathrm {i}|\theta _i| \gamma \big ] \bigg ) \, \mathrm {d}^\perp {{\varvec{\theta }}}, \end{aligned}$$
(9.2)

which converges for \({\text {Re}}\gamma >0\) and can be extended (in the distributional sense) by analytic continuation to \({\text {Re}}\gamma \ge 0\). In other words, we have that

$$\begin{aligned} D_{\ell ,\underline{F}}^\gamma ({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)= \prod _{\begin{array}{c} i=1\\ \ell \notin F_i \end{array}}^k d_{\ell ,\underline{F},i}^\gamma ({{\varvec{y}}}_l,{{\varvec{y}}}_i) \end{aligned}$$
(9.3)

with

$$\begin{aligned} d_{\ell ,\underline{F},i}^\gamma ({{\varvec{y}}},{{\varvec{y}}}')=\frac{1}{2\pi \mathrm {i}} \times {\left\{ \begin{array}{ll} -g^{\gamma }({{\varvec{y}}},{{\varvec{y}}}') &{} \text {if } F_i=\{j\} \hbox { with } j<\ell \\ \overline{g^{\gamma }({{\varvec{y}}},{{\varvec{y}}}')} &{} \text {if } F_i=\{j\} \hbox { with } j>\ell \\ \overline{g^\gamma ({{\varvec{y}}},{{\varvec{y}}}')} - g^\gamma ({{\varvec{y}}},{{\varvec{y}}}') &{} \text {if } |F_i|>1, \end{array}\right. } \end{aligned}$$
(9.4)

with \(g^\gamma \) as in (4.5). Note furthermore that

$$\begin{aligned} \frac{\overline{g^\gamma ({{\varvec{y}}},{{\varvec{y}}}')} - g^{\gamma }({{\varvec{y}}},{{\varvec{y}}}')}{2\pi \mathrm {i}} = \frac{1}{\pi }\;\frac{\gamma }{\left( \tfrac{1}{2}\Vert {{\varvec{y}}}\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}'\Vert ^2\right) ^2+\gamma ^2} \rightarrow \delta \left( \tfrac{1}{2}\Vert {{\varvec{y}}}\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}'\Vert ^2\right) , \end{aligned}$$
(9.5)

as \(\gamma \rightarrow 0\).
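The statement (9.5) says that the rescaled Lorentzian is an approximate identity: it has unit mass for every \(\gamma \), and the mass concentrates at the origin as \(\gamma \rightarrow 0\). Both facts can be checked exactly with the antiderivative \(\frac{1}{\pi }\arctan (x/\gamma )\); the window sizes below are arbitrary:

```python
from math import pi, atan

def mass_within(delta, gamma):
    # exact integral of (1/pi) * gamma / (x^2 + gamma^2) over [-delta, delta]
    return (2.0 / pi) * atan(delta / gamma)

assert abs(mass_within(1e6, 0.5) - 1.0) < 1e-5      # total mass is 1
for delta in (1.0, 0.1, 0.01):
    masses = [mass_within(delta, g) for g in (1.0, 0.1, 1e-4)]
    assert masses == sorted(masses)   # mass in a fixed window grows as gamma shrinks
    assert masses[-1] > 0.99          # and eventually captures almost everything
```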

Proposition 4

Let \(a,b\in {{\mathcal {S}}}({\text {T}}({{\mathbb {R}}}^d))\), \(W\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), \(t>0\), \(|\lambda | < R\) with R as in (8.26), and \({\text {Re}}\gamma \ge 0\). Then

$$\begin{aligned} \begin{aligned}&\lim _{h=r\rightarrow 0} \sum _{n=1}^\infty (2 \pi \mathrm {i}\lambda h^{-2})^n \; {{\mathcal {J}}}_n^\gamma (t h r^{1-d}) \\&\quad = \sum _{n=1}^\infty (2 \pi \mathrm {i}\lambda )^n \sum _{k=1}^n \sum _{(\ell ,\underline{F})\in \Omega (n,k)} (-1)^\ell \int _{({{\mathbb {R}}}^d)^k} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \\&\qquad \times \, {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) D_{\ell ,\underline{F}}^\gamma ({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)\,\mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k . \end{aligned} \end{aligned}$$
(9.6)

Proof

In view of the uniform convergence of the series in n (Proposition 3), it is sufficient to establish convergence term by term. Now

$$\begin{aligned} \begin{aligned}&h^{2n}\; {{\mathcal {J}}}_{\ell ,n}^\gamma (t h r^{1-d}) \\&\quad = \sum _{k=1}^n \sum _{\underline{F}\in \Pi _\circ (n,k)} \int _{\Sigma ^\perp } \bigg ( \int _{{{\mathbb {R}}}^{dk}}{{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \\&\qquad \times \mathrm {e}\bigg (\sum _{\begin{array}{c} i=1\\ i\ne l \end{array}}^k \big [\theta _i \big (\tfrac{1}{2}\Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2}\Vert {{\varvec{y}}}_l\Vert ^2 \big ) + \mathrm {i}|\theta _i| \gamma \big ] \bigg ) {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}(r^d {{\varvec{\theta }}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \,\mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \bigg )\, \mathrm {d}^\perp {{\varvec{\theta }}}. \end{aligned} \end{aligned}$$
(9.7)

Due to the uniform decay \(\prod _{i\ne l} \langle \theta _i \rangle ^{-d/2}\) guaranteed by Proposition 2, the outer integral converges uniformly in \(r>0\) (and \({\text {Re}}\gamma \ge 0\)), and we can therefore take the limit \(r\rightarrow 0\) inside. Relations (7.12) and (7.15) tell us that the only non-zero contributions come from the marked partitions \((\ell ,\underline{F})\in \Omega (n,k)\) with \({{\varvec{\theta }}}\in {{\mathcal {C}}}^\perp _{\ell ,\underline{F}}\). \(\square \)

10 The Collision Series

The main result of this section is Proposition 5 which specialises Proposition 4 to the case of \(\gamma =0\). Let us define

$$\begin{aligned}&{{\mathcal {T}}}_{n}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) = (-2 \pi \mathrm {i})^n \prod _{j=0}^{n-1} T({{\varvec{y}}}_j,{{\varvec{y}}}_{j+1}) , \qquad {{\mathcal {T}}}_{0}({{\varvec{y}}}_0)=1, \end{aligned}$$
(10.1)
$$\begin{aligned}&{{\mathcal {T}}}_{\ell ,n}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) = {{\mathcal {T}}}_{\ell }({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_\ell ) \overline{{{\mathcal {T}}}_{n-\ell }}({{\varvec{y}}}_n,\ldots ,{{\varvec{y}}}_{\ell }) \nonumber \\&\quad = (2 \pi \mathrm {i})^n (-1)^\ell \prod _{j=0}^{\ell -1} T({{\varvec{y}}}_j,{{\varvec{y}}}_{j+1}) \prod _{j=\ell }^{n-1} T^\dagger ({{\varvec{y}}}_{j},{{\varvec{y}}}_{j+1}) , \end{aligned}$$
(10.2)

and for \({{\varvec{m}}}\in {{\mathbb {Z}}}_{> 0}^{n}\),

$$\begin{aligned} {{\mathcal {T}}}_{\ell ,n,{{\varvec{m}}}} ( {{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) = (2 \pi \mathrm {i})^n (-1)^\ell \prod _{j=0}^{\ell -1} T_{m_j}({{\varvec{y}}}_j,{{\varvec{y}}}_{j+1})\prod _{j=\ell }^{n-1} T_{m_j}^\dagger ({{\varvec{y}}}_{j},{{\varvec{y}}}_{j+1}) . \end{aligned}$$
(10.3)

Note that

$$\begin{aligned} \sum _{{{\varvec{m}}}\in {{\mathbb {Z}}}_{> 0}^{n}}\lambda ^{m_1+\ldots +m_{n}} {{\mathcal {T}}}_{\ell ,n,{{\varvec{m}}}}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n) = {{\mathcal {T}}}_{\ell ,n}({{\varvec{y}}}_0,\ldots ,{{\varvec{y}}}_n). \end{aligned}$$
(10.4)
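Identity (10.4) simply records that summing each Born index \(m_j\) independently factorises the multi-sum into a product, one full Born series \(T=\sum _{m\ge 1}\lambda ^m T_m\) per factor. A scalar toy check, with hypothetical numbers \(t_m\) standing in for the kernels \(T_m\) and the series truncated so that the identity is exact:

```python
from itertools import product

t = {1: 0.7, 2: -0.3, 3: 0.05}   # hypothetical scalar stand-ins; t_m = 0 for m > 3
lam = 0.5
n = 4                            # number of factors in the product

lhs = 0.0
for m in product(t.keys(), repeat=n):
    term = lam**sum(m)
    for mj in m:
        term *= t[mj]            # lam^(m_1+...+m_n) * t_{m_1} ... t_{m_n}
    lhs += term

T = sum(lam**m * tm for m, tm in t.items())   # scalar Born series
rhs = T**n
assert abs(lhs - rhs) < 1e-12
```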

Define furthermore

$$\begin{aligned}&{{\mathcal {R}}}^{(k)}(t) = \sum _{n=k-1}^\infty \sum _{(\ell ,\underline{F})\in {\widehat{\Omega }}(n,k) } \int _{({{\mathbb {R}}}^d)^{k}} {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k})\nonumber \\&\quad \times {{\mathcal {T}}}_{\ell ,n}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \, \omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k. \end{aligned}$$
(10.5)

Note that in (10.5) for \(k>1\) only terms with \(n\ge k\) contribute.

Proposition 5

Let \(a,b\in {{\mathcal {S}}}({\text {T}}({{\mathbb {R}}}^d))\), \(W\in {{\mathcal {S}}}({{\mathbb {R}}}^d)\), \(t>0\), \(|\lambda | < R\) with R as in (8.26). Then

$$\begin{aligned} \lim _{h=r\rightarrow 0} \sum _{n=0}^\infty (2 \pi \mathrm {i}\lambda h^{-2})^n \; {{\mathcal {J}}}_n (t h r^{1-d}) = \sum _{k=1}^\infty {{\mathcal {R}}}^{(k)}(t) . \end{aligned}$$
(10.6)

Proof

We begin from the result of Proposition 4. Let \((\ell ,\underline{F}) \in \Omega (n,k)\) and let \((\ell ',\underline{F}') \in {\widehat{\Omega }}(n',k')\) be the corresponding reduced marked partition. Order the blocks of \((\ell ,\underline{F})\) such that the following three conditions hold:

  (1) \(|F_i|>1\) for \(i=1,\dots ,k'-1\),

  (2) \(\ell \in F_{k'}\),

  (3) \(|F_i|=1\) for \(i=k'+1,\dots ,k\).

Define \(i_{k'+1},\dots ,i_k\) so that \(F_j = \{i_j\}\) for \(j=k'+1,\dots ,k\). We can then write

$$\begin{aligned} \delta (\Delta _{\ell ,\underline{F}}{{\varvec{u}}})= \bigg (\prod _{i=1}^{k'-1} \delta \bigg (\sum _{j=0}^{\ell -1} u_j {\mathbb {1}}(j\in F_i)-\sum _{j=\ell +1}^{n} u_j {\mathbb {1}}(j\in F_i) \bigg ) \bigg ) \bigg ( \prod _{j=k'+1}^k \delta (u_{i_j}) \bigg ).\nonumber \\ \end{aligned}$$
(10.7)

By first integrating over \(u_{i_{k'+1}},\dots ,u_{i_{k}}\), and then relabelling the remaining \(u_i\) variables with the indices \(0,\dots ,n'\) (preserving their order) we obtain

$$\begin{aligned}&\int _{\Box _{\ell ,n}(t)} {{\mathcal {A}}}_{\ell ,n}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k),{{\varvec{u}}}) \, \delta (\Delta _{\ell ,\underline{F}} {{\varvec{u}}}) \, \mathrm {d}^\perp {{\varvec{u}}}\nonumber \\&\quad = \int _{\Box _{\ell ',n'}(t)} {{\mathcal {A}}}_{\ell ',n'}(\iota _{\underline{F}'}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k'}), {{\varvec{u}}}) \, \delta (\Delta _{\ell ',\underline{F}'} {{\varvec{u}}}) \, \mathrm {d}^\perp {{\varvec{u}}}, \end{aligned}$$
(10.8)

or in other words

$$\begin{aligned} {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) = {\widehat{{{\mathcal {A}}}}}_{t,\ell ',\underline{F}'}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k'}). \end{aligned}$$
(10.9)

Furthermore, by (9.4) every non-singleton block of \(\underline{F}\) contains indices both to the left and to the right of \(\ell \), so every such block yields a delta function, and we see that the distribution \(D_{\ell ,\underline{F}}\) is given by

$$\begin{aligned} D_{\ell , \underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) = \bigg (\prod _{i=1}^{k'-1} \delta \big (\tfrac{1}{2} \Vert {{\varvec{y}}}_i\Vert ^2-\tfrac{1}{2} \Vert {{\varvec{y}}}_{k'}\Vert ^2\big )\bigg )\bigg (\prod _{ i = k'+1}^k d_{\ell ,\underline{F},i}({{\varvec{y}}}_{k'},{{\varvec{y}}}_i)\bigg ).\nonumber \\ \end{aligned}$$
(10.10)

This allows us to replace instances of \(\Vert {{\varvec{y}}}_{k'}\Vert ^2\) with \(\Vert {{\varvec{y}}}_j\Vert ^2\) for any \(j = 1,\ldots ,k'-1\). Let

$$\begin{aligned} 0=j_1<\cdots<j_\mu<\ell = j_{\mu +1}< \cdots < j_{\mu +\nu +1} = n \end{aligned}$$

be the list of elements of \(\{0,\ldots ,n\}\) that lie in non-singleton blocks. For \(i=1,\dots ,\mu +\nu \) we define \(m_i = j_{i+1} - j_i -1\) as the number of singletons between \(j_i\) and \(j_{i+1}\), and set \(M_i=m_1+\ldots +m_i\). Thus \(M_{\mu +\nu }=M\) is the total number of singletons in \(\underline{F}\). From the definition (9.4) one can see that

$$\begin{aligned} \prod _{ i = k'+1}^k d_{\ell ,\underline{F},i}^\gamma ({{\varvec{y}}}_{k'},{{\varvec{y}}}_i) =\frac{(-1)^{M_\mu }}{(2 \pi \mathrm {i})^M} \bigg ( \prod _{i=k'+1}^{k'+M_\mu } g^\gamma ({{\varvec{y}}}_{k'},{{\varvec{y}}}_i) \bigg )\bigg ( \prod _{i=k'+M_\mu +1}^{k'+M} \overline{ g^\gamma ({{\varvec{y}}}_{k'},{{\varvec{y}}}_i)} \bigg ).\nonumber \\ \end{aligned}$$
(10.11)
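The gap bookkeeping above is mechanical and easy to check: given the ordered list \(0=j_1<\cdots <j_{\mu +\nu +1}=n\) of elements lying in non-singleton blocks, the gaps \(m_i=j_{i+1}-j_i-1\) must sum to the total number of singletons. A small sketch with a hypothetical example list:

```python
def singleton_counts(nonsingleton, n):
    """Gaps m_i = j_{i+1} - j_i - 1 between consecutive non-singleton elements."""
    assert nonsingleton[0] == 0 and nonsingleton[-1] == n
    return [b - a - 1 for a, b in zip(nonsingleton, nonsingleton[1:])]

# hypothetical example: n = 9, non-singleton elements {0, 2, 3, 7, 9}
m = singleton_counts([0, 2, 3, 7, 9], 9)
assert m == [1, 0, 3, 1]
# M = sum of the gaps = (n + 1) minus the number of non-singleton elements
assert sum(m) == 9 + 1 - 5
```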

Combining the above with the definition of the T-matrix allows us to obtain

$$\begin{aligned}&\int _{{{\mathbb {R}}}^{Md}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) ) D_{\ell , \underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \prod _{ i = k'+1}^k \mathrm {d}{{\varvec{y}}}_i \nonumber \\&\quad = \frac{(-1)^{M_\mu }}{(2\pi \mathrm {i})^M} \int _{{{\mathbb {R}}}^{Md}} {{\mathcal {W}}}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) ) \omega _{k'}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k'}) \nonumber \\&\qquad \times \bigg ( \prod _{i=k'+1}^{k'+M_\mu } g({{\varvec{y}}}_{k'},{{\varvec{y}}}_i) \bigg ) \bigg ( \prod _{i=k'+M_\mu +1}^{k'+M} \overline{ g({{\varvec{y}}}_{k'},{{\varvec{y}}}_i)} \bigg ) \, \prod _{ i = k'+1}^k \mathrm {d}{{\varvec{y}}}_i. \end{aligned}$$
(10.12)

Due to the Dirac delta functions appearing in \(\omega _{k'}\), for \(i \in [k'+1, k'+M_\mu ]\) we can replace \(g({{\varvec{y}}}_{k'},{{\varvec{y}}}_i)\) with \(g({{\varvec{y}}}_{i_-},{{\varvec{y}}}_i)\) where \(i_-\) is defined to be the largest non-singleton element smaller than i. Similarly, for \(i \in [k'+M_\mu + 1, k'+M]\) we replace \(\overline{g({{\varvec{y}}}_{k'},{{\varvec{y}}}_i)}\) with \(\overline{g({{\varvec{y}}}_{i_+},{{\varvec{y}}}_i)}\) where \(i_+\) is defined to be the smallest non-singleton element larger than i. This allows us to conclude that (10.12) is equal to

$$\begin{aligned} \frac{(-1)^{M_\mu } }{(2\pi \mathrm {i})^M} \frac{(-1)^{\ell '}}{(2\pi \mathrm {i})^{n'}} {{\mathcal {T}}}_{\ell ',n',(m_1+1,\dots ,m_{n'}+1)}(\iota _{\underline{F}'}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k'}) ) \, \omega _{k'}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_{k'}). \end{aligned}$$
(10.13)

Using the facts that \(M+n' = n\) and \(M_\mu +\ell ' = \ell \), we can write

$$\begin{aligned} \frac{(-1)^{M_\mu } }{(2\pi \mathrm {i})^M} \frac{(-1)^{\ell '}}{(2\pi \mathrm {i})^{n'}}=\frac{(-1)^\ell }{(2\pi \mathrm {i})^n} \end{aligned}$$
(10.14)

and so

$$\begin{aligned} \begin{aligned}&\lim _{h=r\rightarrow 0} \sum _{n=0}^\infty (2 \pi \mathrm {i}\lambda h^{-2})^n \; {{\mathcal {J}}}_n(t h r^{1-d}) \\&\quad = \sum _{n=0}^\infty \sum _{k=1}^{n} \sum _{(\ell ,\underline{F})\in {\widehat{\Omega }}(n,k)} \sum _{{{\varvec{m}}}\in {{\mathbb {Z}}}_{\ge 0}^n}\lambda ^{n+m_1+\ldots +m_n} \int _{({{\mathbb {R}}}^d)^{k}} {\widehat{{{\mathcal {A}}}}}_{t,\ell ,\underline{F}}({{\varvec{0}}},{{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_{k}) \\&\qquad \times {{\mathcal {T}}}_{\ell ,n,(m_1+1,\dots ,m_n+1)}(\iota _{\underline{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \, \omega _{k}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \;\mathrm {d}{{\varvec{y}}}_1 \cdots \mathrm {d}{{\varvec{y}}}_k \end{aligned} \end{aligned}$$
(10.15)

where on the right hand side we now write k, \(\ell \), n instead of \(k'\), \(\ell '\), \(n'\). The result then follows by changing the summation variables \(m_j \rightarrow m_j -1\) and using (10.4). \(\square \)

11 The Limit Process

In this section we derive explicit formulas for \({{\mathcal {R}}}^{(k)}(t)\), assuming throughout that \(k\ge 1\). These show in particular that \({{\mathcal {R}}}^{(k)}(t)\) can be expressed as the \((k-1)\)-collision term with a real and non-negative kernel. The main results are equations (11.15) and (11.26) which together yield the formula (1.17), as well as equations (11.17) and (11.28) which respectively yield the expressions (1.22) and (1.23) for \(\rho _{11}^{(2)}\) and \(\rho _{12}^{(2)}\) in terms of Bessel functions.

Let us write \({{\mathcal {R}}}^{(k)}(t) = {{\mathcal {R}}}_{\mathrm {d}}^{(k)}(t) + {{\mathcal {R}}}_{\mathrm {off}}^{(k)}(t)\), where \({{\mathcal {R}}}_{\mathrm {d}}^{(k)}(t)\), \({{\mathcal {R}}}_{\mathrm {off}}^{(k)}(t)\) are as in the definition of \({{\mathcal {R}}}^{(k)}(t)\) (10.5), with \({\widehat{\Omega }}(n,k)\) replaced by \({\widehat{\Omega }}_\mathrm {d}(n,k)\) and \({\widehat{\Omega }}_\mathrm {off}(n,k)\), respectively.

11.1 Diagonal Terms

This is the case \(0,\ell \in F_1\) and so

$$\begin{aligned}&{{\mathcal {R}}}_{\mathrm {d}}^{(k)}(t) = \frac{1}{(k-1)!} \int _{{{\mathbb {R}}}_{\ge 0}^k} \int _{({{\mathbb {R}}}^d)^{k+1}} a\bigg ( {{\varvec{x}}}- \sum _{j=1}^k u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_1),{{\varvec{y}}}_1 \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \nonumber \\&\quad \times \rho _{11}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \, \delta (u_1+\cdots +u_k-t) \, \mathrm {d}{{\varvec{x}}}\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k\, \mathrm {d}{{\varvec{u}}}\end{aligned}$$
(11.1)

with the function

$$\begin{aligned}&\rho _{11}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \nonumber \\&\quad = \omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \sum _{n=k-1}^\infty \sum _{(\ell ,\underrightarrow{F})\in \underrightarrow{{\widehat{\Omega }}}_{\mathrm {d}}(n,k) } {{\mathcal {T}}}_{\ell ,n}(\iota _{\underrightarrow{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \prod _{i=1}^k \frac{u_i^{\mu _{i}+\nu _{i}} }{\mu _{i}! \nu _{i}!}. \end{aligned}$$
(11.2)

We have here used the symmetry of the integrand under permutation of the indices of the \({{\varvec{y}}}_i\) with \(i\ge 2\), and taken an average over all ordered partitions in \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {d}}(n,k)\) rather than the original sum over the unordered partitions in \({\widehat{\Omega }}_{\mathrm {d}}(n,k)\).

We now apply the bijection \((\ell ,\underrightarrow{F})\mapsto (\underrightarrow{F}^+,\underrightarrow{F}^-)\) in (A.9) (Lemma 4), which, together with the relation

$$\begin{aligned} {{\mathcal {T}}}_{\ell ,n}(\iota _{\underrightarrow{F}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)) = {{\mathcal {T}}}_n(\iota _{\underrightarrow{F}^+}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) ) \, \overline{{{\mathcal {T}}}_n(\iota _{\underrightarrow{F}^-}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) )} , \end{aligned}$$
(11.3)

yields

(11.4)

Next we use the bijection \(\underrightarrow{F}\mapsto (\underrightarrow{F}',{{\varvec{m}}})\) in (A.7) to non-consecutive ordered partitions, which yields

(11.5)

with \(|F_i|_{{{\varvec{m}}}} = \sum _{j\in F_i} m_j\).

We identify \({{\varvec{m}}}\in {{\mathbb {Z}}}_{\ge 0}^n\) with \(({{\varvec{m}}}^{(1)},\ldots ,{{\varvec{m}}}^{(k)})\in {{\mathbb {Z}}}_{\ge 0}^{|F_1|}\times \cdots \times {{\mathbb {Z}}}_{\ge 0}^{|F_k|}\). For each i we then use the identity

$$\begin{aligned} \sum _{m_1,\dots ,m_p=0}^\infty f(m_1+\dots +m_p) = \sum _{\mu =0}^\infty \frac{(\mu +p-1)!}{\mu ! (p-1)!} f(\mu ) \end{aligned}$$
(11.6)

with \(p=|F_i|\) to write

$$\begin{aligned}&\sum _{{{\varvec{m}}}\in {{\mathbb {Z}}}_{\ge 0}^n} \bigg ( \prod _{i=1}^k \frac{u_i^{|F_i|+|F_i|_{{{\varvec{m}}}} -1} (-2\pi \mathrm {i}T({{\varvec{y}}}_i,{{\varvec{y}}}_i))^{|F_i|_{{{\varvec{m}}}} }}{(|F_i|+|F_i|_{{{\varvec{m}}}} -1)!} \bigg ) \nonumber \\&\quad = \prod _{i=1}^k \bigg ( \sum _{\mu =0}^\infty \frac{u_i^{|F_i|+\mu -1} (-2\pi \mathrm {i}T({{\varvec{y}}}_i,{{\varvec{y}}}_i))^{\mu }}{\mu ! (|F_i|-1)!} \bigg ) \end{aligned}$$
(11.7)
$$\begin{aligned}&\quad =\prod _{i=1}^k \bigg ( \frac{u_i^{|F_i|-1}\mathrm {e}^{-2\pi \mathrm {i}u_i T({{\varvec{y}}}_i,{{\varvec{y}}}_i)}}{(|F_i|-1)!} \bigg ). \end{aligned}$$
(11.8)
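The resummation step rests on the stars-and-bars identity (11.6), which can be verified exactly for any finitely supported \(f\); the support values and the range of \(p\) below are arbitrary:

```python
from math import comb
from itertools import product

def lhs(f_support, p):
    # sum over (m_1, ..., m_p) in Z_{>=0}^p of f(m_1 + ... + m_p),
    # for f supported on {0, ..., N}
    N = len(f_support) - 1
    return sum(f_support[s] for m in product(range(N + 1), repeat=p)
               if (s := sum(m)) <= N)

def rhs(f_support, p):
    # sum over mu of binom(mu + p - 1, p - 1) * f(mu)
    return sum(comb(mu + p - 1, p - 1) * f for mu, f in enumerate(f_support))

f_support = [1.0, -2.0, 0.5, 3.0]   # arbitrary values of f(0), ..., f(3); f = 0 beyond
for p in (1, 2, 3, 4):
    assert abs(lhs(f_support, p) - rhs(f_support, p)) < 1e-12
```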

In view of the optical theorem (4.12), this yields

(11.9)

When \(k=1\) we are summing over partitions into 1 block. Note that \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {d}}(n,1)\) is empty unless \(n=0\), in which case it contains only the partition \([\{0\}]\). Formula (11.9) therefore yields

$$\begin{aligned} \rho _{11}^{(1)}(u_1,{{\varvec{y}}}_1) = \mathrm {e}^{-u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)}. \end{aligned}$$
(11.10)

When \(k\ge 2\), using the results in Appendix A.4, we can write (11.9) as

$$\begin{aligned} \rho _{11}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = \omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \big | g_{11}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \big |^2 \prod _{i=1}^{k} \mathrm {e}^{- u_i \Sigma _{{\text {tot}}}({{\varvec{y}}}_i)} . \end{aligned}$$
(11.11)

Here \(g_{11}^{(k)}\) is the 11 coefficient of the \(k\times k\) matrix-valued function (recall (1.20)),

(11.12)

with the diagonal matrix \({{\mathbb {D}}}({{\varvec{z}}})={\text {diag}}(z_1,\ldots ,z_k)\) and the matrix \({{\mathbb {W}}}={{\mathbb {W}}}({{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)\) with coefficients

$$\begin{aligned} w_{ij} = {\left\{ \begin{array}{ll} 0 &{} (i=j) \\ -2\pi \mathrm {i}T({{\varvec{y}}}_i,{{\varvec{y}}}_j) &{} (i\ne j) . \end{array}\right. } \end{aligned}$$
(11.13)

If we extend the definition (11.11) to

$$\begin{aligned} \rho _{\ell m}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = \omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \big | g_{\ell m}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \big |^2 \prod _{i=1}^{k} \mathrm {e}^{- u_i \Sigma _{{\text {tot}}}({{\varvec{y}}}_i)} ,\nonumber \\ \end{aligned}$$
(11.14)

we see that, by symmetry under the permutation of indices, the function (11.1) can be written as

$$\begin{aligned}&{{\mathcal {R}}}_{\mathrm {d}}^{(k)}(t) = \frac{1}{k!} \sum _{\ell =1}^k \int _{{{\mathbb {R}}}_{\ge 0}^k} \int _{({{\mathbb {R}}}^d)^{k+1}} a\bigg ( {{\varvec{x}}}- \sum _{j=1}^k u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_\ell ),{{\varvec{y}}}_\ell \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_\ell ) \nonumber \\&\quad \times \rho _{\ell \ell }^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \, \delta (u_1+\cdots +u_k-t) \, \mathrm {d}{{\varvec{x}}}\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k\, \mathrm {d}{{\varvec{u}}}, \end{aligned}$$
(11.15)

which yields the diagonal part of the expression (1.17).

In the case \(k=2\), Eq. (A.28) yields an explicit formula in terms of the \(J_1\)-Bessel function (assuming here \(u_1,u_2>0\)),

$$\begin{aligned} g_{11}^{(2)}(u_1,u_2) = -2\pi {\sqrt{\frac{u_1}{u_2}}}\, (T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^{1/2} \,J_1(4\pi (u_1 u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^{1/2}) .\nonumber \\ \end{aligned}$$
(11.16)

We conclude

$$\begin{aligned} \rho _{11}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) =&4\pi ^2 \, |T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1)|\, \omega _2({{\varvec{y}}}_1,{{\varvec{y}}}_2)\, \mathrm {e}^{- u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)- u_2 \Sigma _{{\text {tot}}}({{\varvec{y}}}_2)} \nonumber \\&\times \frac{u_1}{u_2}\, \big | J_1\big (4\pi (u_1 u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^{1/2} \big ) \big |^2 . \end{aligned}$$
(11.17)

This formula can also be obtained directly from the combinatorial expression (11.9): note that for \(n\) even the only ordered partition occurring in \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {d}}(n,2)\) is

$$\begin{aligned} \underrightarrow{F}= [ \{0,2,4,\ldots ,n-2,n\}, \{1,3,5,\ldots ,n-1\}]; \end{aligned}$$
(11.18)

and for \(n\) odd \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {d}}(n,2)\) is empty. Hence

$$\begin{aligned} \rho _{11}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) =&\omega _2({{\varvec{y}}}_1,{{\varvec{y}}}_2)\,\mathrm {e}^{- u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)- u_2 \Sigma _{{\text {tot}}}({{\varvec{y}}}_2)} \nonumber \\&\times \bigg | \sum _{m=1}^\infty (-4\pi ^2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^m \frac{u_1^{m} u_2^{m-1}}{m! (m-1)!} \bigg |^2. \end{aligned}$$
(11.19)

By shifting the summation index this can be written

$$\begin{aligned}&\rho _{11}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) = \omega _2({{\varvec{y}}}_1,{{\varvec{y}}}_2)\,\mathrm {e}^{- u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)- u_2 \Sigma _{{\text {tot}}}({{\varvec{y}}}_2)} \nonumber \\&\quad \times \bigg | 4\pi ^2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1) u_1 \sum _{m=0}^\infty (-4\pi ^2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^m \frac{u_1^{m} u_2^{m}}{m! (m+1)!} \bigg |^2.\nonumber \\ \end{aligned}$$
(11.20)

Equation (11.17) then follows from the series representation of the Bessel function

$$\begin{aligned} J_n(z) = (\tfrac{1}{2} z)^n \sum _{k=0}^\infty \frac{(-\tfrac{1}{4} z^2)^k}{k! (k+n)!}. \end{aligned}$$
(11.21)
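As a sanity check on the series (11.21), its partial sums can be compared with the standard integral representation \(J_n(z)=\frac{1}{\pi }\int _0^\pi \cos (nt-z\sin t)\,\mathrm {d}t\); the truncation orders and tolerances below are ad hoc:

```python
from math import cos, sin, pi, factorial

def J_series(n, z, terms=40):
    # truncation of the series representation (11.21)
    return (0.5 * z)**n * sum((-0.25 * z * z)**k / (factorial(k) * factorial(k + n))
                              for k in range(terms))

def J_integral(n, z, steps=20000):
    # midpoint rule for (1/pi) * int_0^pi cos(n t - z sin t) dt
    h = pi / steps
    return sum(cos(n * (i + 0.5) * h - z * sin((i + 0.5) * h))
               for i in range(steps)) * h / pi

for n in (0, 1):
    for z in (0.3, 1.7, 5.0):
        assert abs(J_series(n, z) - J_integral(n, z)) < 1e-8
```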

11.2 Off-Diagonal Terms

Here \(0\in F_1\) and \(\ell \in F_k\), and thus

$$\begin{aligned}&{{\mathcal {R}}}_{\mathrm {off}}^{(k)}(t) = \frac{1}{(k-2)!} \int _{{{\mathbb {R}}}_{\ge 0}^k} \int _{({{\mathbb {R}}}^d)^{k+1}} a\bigg ( {{\varvec{x}}}- \sum _{j=1}^k u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_k),{{\varvec{y}}}_k \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_1) \nonumber \\&\quad \times \rho _{1k}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \, \delta (u_1+\cdots +u_k-t) \, \mathrm {d}{{\varvec{x}}}\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k\, \mathrm {d}{{\varvec{u}}}\end{aligned}$$
(11.22)

with the function

$$\begin{aligned}&\rho _{1k}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \nonumber \\&\quad =\omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \sum _{n=k-1}^\infty \sum _{(\ell ,\underrightarrow{F})\in \underrightarrow{{\widehat{\Omega }}}_{\mathrm {off}}(n,k) } {{\mathcal {T}}}_{\ell ,n}(\iota _{\underrightarrow{F}}({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k)) \prod _{i=1}^k \frac{u_i^{\mu _{i}+\nu _{i}} }{\mu _{i}! \nu _{i}!}. \end{aligned}$$
(11.23)

The argument is identical to the diagonal case, and we obtain

(11.24)

In the case \(k=1\), there is no off-diagonal term, as \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {off}}(n,1)\) is empty for every \(n\).

Furthermore, for \(k\ge 2\) (again using the results in Appendix A.4),

$$\begin{aligned} \rho _{1k}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) = \omega _k({{\varvec{y}}}_1,\ldots ,{{\varvec{y}}}_k) \big | g_{1k}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \big |^2 \prod _{i=1}^{k} \mathrm {e}^{- u_i \Sigma _{{\text {tot}}}({{\varvec{y}}}_i)} , \end{aligned}$$
(11.25)

where \(g_{1k}^{(k)}\) denotes the \(1k\) coefficient of \({{\mathbb {G}}}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k)\) as in (11.12). Again, by symmetry under the permutation of indices, (11.22) can be expressed as

$$\begin{aligned}&{{\mathcal {R}}}_{\mathrm {off}}^{(k)}(t) = \frac{1}{k!} \sum _{\begin{array}{c} \ell ,m=1\\ \ell \ne m \end{array}}^k \int _{{{\mathbb {R}}}_{\ge 0}^k} \int _{({{\mathbb {R}}}^d)^{k+1}} a\bigg ( {{\varvec{x}}}- \sum _{j=1}^k u_j ({{\varvec{y}}}_j-{{\varvec{y}}}_m),{{\varvec{y}}}_m \bigg ) b( {{\varvec{x}}}, {{\varvec{y}}}_\ell ) \nonumber \\&\quad \times \rho _{\ell m}^{(k)}({{\varvec{u}}},{{\varvec{y}}}_1,\dots ,{{\varvec{y}}}_k) \, \delta (u_1+\cdots +u_k-t) \, \mathrm {d}{{\varvec{x}}}\, \mathrm {d}{{\varvec{y}}}_1\cdots \mathrm {d}{{\varvec{y}}}_k\, \mathrm {d}{{\varvec{u}}}, \end{aligned}$$
(11.26)

which yields the off-diagonal part of (1.17).

For \(k=2\), Eq. (A.29) yields

$$\begin{aligned} g_{12}^{(2)}(u_1,u_2) = -2\pi \mathrm {i}T({{\varvec{y}}}_1,{{\varvec{y}}}_2) \, J_0(4\pi (u_1 u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^{1/2} ) . \end{aligned}$$
(11.27)

We conclude

$$\begin{aligned}&\rho _{12}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) = 4\pi ^2 \, |T({{\varvec{y}}}_1,{{\varvec{y}}}_2)|^2\, \omega _2({{\varvec{y}}}_1,{{\varvec{y}}}_2)\, \mathrm {e}^{- u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)- u_2 \Sigma _{{\text {tot}}}({{\varvec{y}}}_2)} \nonumber \\&\quad \times \big | J_0\big (4\pi (u_1 u_2 T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^{1/2} \big ) \big |^2 . \end{aligned}$$
(11.28)

Let us derive this formula also directly from the combinatorial expression (11.24): for \(n\) odd the only ordered partition occurring in \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {off}}(n,2)\) is

$$\begin{aligned} \underrightarrow{F}= [\{0,2,4,\ldots ,n-1\}, \{1,3,5,\ldots ,n\} ]; \end{aligned}$$
(11.29)

and for \(n\) even \(\underrightarrow{{\widehat{\Omega }}}_{\mathrm {off}}(n,2)\) is empty. Hence

$$\begin{aligned}&\rho _{12}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) =\omega _2({{\varvec{y}}}_1,{{\varvec{y}}}_2)\, \mathrm {e}^{- u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)- u_2 \Sigma _{{\text {tot}}}({{\varvec{y}}}_2)}\,\nonumber \\&\quad \times \bigg | \sum _{m=0}^\infty \frac{u_1^{m}u_2^{m} }{(m!)^2} (-2\pi \mathrm {i})^{2m+1} T({{\varvec{y}}}_1,{{\varvec{y}}}_2) (T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1))^{m} \bigg |^2 . \end{aligned}$$
(11.30)

This can be written

$$\begin{aligned}&\rho _{12}^{(2)}(u_1,u_2,{{\varvec{y}}}_1,{{\varvec{y}}}_2) = 4 \pi ^2 |T({{\varvec{y}}}_1,{{\varvec{y}}}_2)|^2 \,\omega _2({{\varvec{y}}}_1,{{\varvec{y}}}_2)\, \mathrm {e}^{- u_1 \Sigma _{{\text {tot}}}({{\varvec{y}}}_1)- u_2 \Sigma _{{\text {tot}}}({{\varvec{y}}}_2)}\nonumber \\&\quad \times \bigg | \sum _{m=0}^\infty \frac{1 }{(m!)^2} (-1)^{m} \left( 2 \pi \sqrt{u_1 u_2 \, T({{\varvec{y}}}_1,{{\varvec{y}}}_2) T({{\varvec{y}}}_2,{{\varvec{y}}}_1)}\right) ^{2m} \bigg |^2 . \end{aligned}$$
(11.31)

Identifying the summation as a \(J_0\) Bessel function yields the result.

12 Discussion

The main conclusion of this work is that quantum transport in a periodic potential converges, in the microlocal Boltzmann–Grad limit, to a limiting random flight process. Unlike in the random setting, there is a positive probability that a path of the limit process revisits the same momentum several times. This is ultimately a consequence of the Floquet–Bloch reduction to discrete Hilbert spaces. The only hypothesis in our derivation, Assumption 1, is that Bloch momenta have asymptotically the same fine-scale distribution as a Poisson point process. This assumption can be viewed as a phase-space extension of the Berry–Tabor conjecture in quantum chaos [7, 37], which to date has been confirmed only in special cases [9, 23, 34, 36, 38, 44, 47]. In the setting discussed in this paper, present techniques permit a rigorous analysis up to second order in perturbation theory which, perhaps surprisingly, is consistent with the linear Boltzmann equation as well as with our limit process. Extending the perturbative analysis to higher order terms unconditionally is thus an important open challenge. This would require a rigorous understanding of higher-order correlation functions for lattice point statistics, and we refer the reader to [47] for the best current results in this direction.

It follows from standard invariance principles for Markov processes that for large times the solution of the linear Boltzmann equation is governed by Brownian motion with the standard diffusive mean-square displacement (i.e., linear in time) [22, 46]. Therefore, the work of Eng and Erdös [19] for random potentials implies convergence to Brownian motion, if we first take the Boltzmann–Grad and then the diffusive limit. (Note that Erdös, Salmhofer and Yau [21, 22] have established convergence to Brownian motion in long-time/weak-coupling scaling limits directly, i.e., without first taking the weak-coupling limit to obtain the linear Boltzmann equation as in [20].) An immediate challenge is thus to understand the diffusive nature of the random flight process derived in the present paper. Recall that in the classical setting the Boltzmann–Grad limit of the periodic Lorentz gas does not satisfy the linear Boltzmann equation [13, 39], and we have superdiffusion with a \(t\log t\) mean-square displacement [41]. A further challenge is to expand our current understanding to more singular single-site potentials (such as hard core and/or long-range potentials) and to include background electromagnetic fields.