1 Introduction and Results

The Polaron models the slow movement of an electron in a polar crystal. The electron drags a cloud of polarization along and thus appears heavier; it has an “effective mass” larger than its mass without the interaction. In the Fröhlich model of the Polaron, the Hamiltonian describing the interaction of the electron with the lattice vibrations is the operator given by

$$\begin{aligned} H_{\alpha } = \frac{1}{2} p^2 \otimes \mathbbm {1} + \mathbbm {1}\otimes \textbf{N} + \frac{\sqrt{\alpha }}{\sqrt{2} \pi } \int _{{\mathbb {R}}^3} \frac{1}{|k|}\big (e^{i k \cdot x} a_k + e^{-i k \cdot x} a^*_k \big ) \, \textrm{d} k \end{aligned}$$

that acts on \(L^2(\mathbb {R}^3) \otimes \mathcal F\) (where \(\mathcal F\) is the bosonic Fock space over \(L^2(\mathbb {R}^3)\)). Here x and p are the position and momentum operators of the electron, \(\textbf{N}\equiv \int _{{\mathbb {R}}^3} a^*_k a_k\, \mathrm d k\) is the number operator, the creation and annihilation operators \(a_k^*\) and \(a_k\) satisfy the canonical commutation relations \([a^*_k, a_{k'}] = \delta (k-k')\) and \(\alpha >0\) is the coupling constant. The Hamiltonian commutes with the total momentum operator \(p\otimes \mathbbm {1} + \mathbbm {1} \otimes P_f\), where \(P_f \equiv \int _{\mathbb {R}^3} k a_k^* a_k \mathrm dk\) is the momentum operator of the field, and has a fiber decomposition in terms of the Hamiltonians

$$\begin{aligned} H_\alpha (P) = \frac{1}{2} (P - P_f)^2 + \textbf{N}+ \frac{\sqrt{\alpha }}{\sqrt{2} \pi } \int _{{\mathbb {R}}^3} \frac{1}{|k|}(a_k + a^*_k) \, \mathrm dk \end{aligned}$$

at fixed total momentum P (which act on \(\mathcal F\)). Of particular interest is the energy-momentum relation \(E_\alpha (P) :=\inf {\text {spec}}(H_\alpha (P))\). It is known that \(E_\alpha \) is rotationally symmetric and has a global minimum at \(P=0\), which is believed to be unique [DS20]. The effective mass \(m_{\text {eff}}(\alpha )\) is defined as the inverse curvature at \(P=0\), i.e.

$$\begin{aligned} E_\alpha (P) - E_\alpha (0) = \frac{1}{2m_{\text {eff}}(\alpha )} |P|^2 + o(|P|^2). \end{aligned}$$

While the asymptotics of the ground state energy \(E_\alpha (0)\) in the strong coupling limit \(\alpha \rightarrow \infty \) are known, the asymptotics of the effective mass \(m_\text {eff}(\alpha )\) still remain an important open problem. For the ground state energy, Donsker and Varadhan [DV83] gave a probabilistic proof (using the path integral formulation) of Pekar's conjecture [Pek49], which states that

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \frac{E_\alpha (0)}{\alpha ^2} = \min _{\psi \in H^1, \Vert \psi \Vert _{L^2} = 1} \Big [ \Vert \nabla \psi \Vert ^2_{L^2} - \sqrt{2}\iint _{\mathbb {R}^3 \times \mathbb {R}^3} \mathrm dx \mathrm dy \, \frac{|\psi (x) \psi (y)|^2}{|x-y|} \Big ]. \end{aligned}$$
(1.1)
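As a numerical illustration (not part of the original argument), the right-hand side of (1.1) can be bounded from above by restricting to Gaussian trial states, for which both terms have closed forms: for \(\psi (x) = (\lambda ^2/\pi )^{3/4} e^{-\lambda ^2 |x|^2/2}\) the gradient term is \(\tfrac{3}{2}\lambda ^2\) and the Coulomb self-energy is \(\sqrt{2/\pi }\,\lambda \). The following sketch (function names ours) minimizes over the width \(\lambda \); since the Pekar minimizer is not Gaussian, the result is only an upper bound for the limit in (1.1).

```python
import math

def pekar_gaussian(lam: float) -> float:
    """Pekar functional of (1.1) at the normalized Gaussian
    psi(x) = (lam^2/pi)^(3/4) exp(-lam^2 |x|^2 / 2):
    gradient term (3/2) lam^2, Coulomb self-energy sqrt(2/pi) lam."""
    return 1.5 * lam**2 - math.sqrt(2.0) * math.sqrt(2.0 / math.pi) * lam

def minimize_scalar(f, lo, hi, iters=200):
    """Golden-section search for the minimum of a unimodal f on [lo, hi]."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    x = (a + b) / 2.0
    return x, f(x)

lam_star, val = minimize_scalar(pekar_gaussian, 0.0, 5.0)
# Analytically, the optimum over Gaussians is lam = 2/(3 sqrt(pi))
# with value -2/(3 pi) ~ -0.2122, an upper bound for the limit in (1.1).
print(lam_star, val)
```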

There also exists a functional analytic proof by Lieb and Thomas [LT97], which additionally provides explicit error bounds. Formal computations by Landau and Pekar [LP48] suggest that

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \frac{m_{\text {eff}}(\alpha )}{\alpha ^4} = \frac{16 \sqrt{2}\pi }{3} \int _{\mathbb {R}^3} \mathrm dx \, |\psi (x)|^4, \quad \psi \text { minimizer of (1.1)}, \end{aligned}$$
(1.2)

and Feynman [Fey55] obtains the same \(\alpha ^4\)-asymptotics (but not the prefactor) by completely different methods. Rigorous results on the effective mass were obtained only much later, and they are significantly weaker: Lieb and Seiringer [LS20] show that \( \lim _{\alpha \rightarrow \infty } m_\text {eff}(\alpha ) = \infty \) by estimating

$$\begin{aligned} \frac{1}{2m_\text {eff}(\alpha )} \;\leqslant \;\lim _{P\rightarrow 0} \frac{1}{|P|^2} \big ( \langle \phi _{\alpha , P}, H_\alpha (P) \phi _{\alpha , P} \rangle - E_\alpha (0)\big ) \end{aligned}$$

with suitably chosen trial states \(\phi _{\alpha , P}\). For models with stronger regularity assumptions (excluding the Fröhlich Polaron), quantitative estimates on the effective mass were recently obtained in [MS21]. Our goal is to show that there exists some constant \(c>0\) such that for all \(\alpha >0\)

$$\begin{aligned} m_\text {eff}(\alpha ) \;\geqslant \;c \alpha ^{2/5} \end{aligned}$$
(1.3)

holds, thereby giving a first quantitative growth bound for the effective mass of the Fröhlich Polaron.

We give a short introduction to the probabilistic representation of the effective mass. We refer the reader to [DS20] for details on this representation, and to [Moe06] for a review covering functional analytic properties of the Polaron. An application of the Feynman-Kac formula to the semigroup \((e^{-TH_\alpha })_{T \;\geqslant \;0}\) leads to the path measure

$$\begin{aligned} \mathbb P_{\alpha , T}(\mathrm d\textbf{x}) = \frac{1}{Z_{\alpha , T}}\exp \bigg ( \frac{\alpha }{2} \int _{0}^T\int _{0}^T \frac{e^{-|t-s|}}{|\textbf{x}_{t}- \textbf{x}_{s}|} \, \mathrm ds \mathrm dt \bigg ) \mathcal W(\mathrm d \textbf{x}) \end{aligned}$$
(1.4)

in finite volume \(T>0\), where \(\mathcal W\) is the distribution of three-dimensional Brownian motion and the partition function \(Z_{\alpha , T}\) is a normalization constant. In particular, the partition function can be expressed as the matrix element \(Z_{\alpha , T} = \langle \Omega , e^{-T H_\alpha (0)} \Omega \rangle \), where \(\Omega \) is the Fock vacuum. This leads to \(E_\alpha (0) = - \lim _{T\rightarrow \infty } \log (Z_{\alpha , T})/T\), which was applied (up to boundary conditions) in the aforementioned proof of Pekar's conjecture by Donsker and Varadhan. Spohn conjectured [Spo87] that the path measure (in infinite volume) converges under diffusive rescaling to Brownian motion with some diffusion constant \(\sigma ^2(\alpha )\), and showed that the effective mass then has the representation

$$\begin{aligned} m_{\text {eff}}(\alpha ) = (\sigma ^2(\alpha ))^{-1}. \end{aligned}$$

The existence of the infinite volume measure and the validity of the central limit theorem were shown by Mukherjee and Varadhan [MV19] for a restricted range of coupling parameters. The proof relies on a representation of the measures (1.4) as a mixture of Gaussian measures, the mixing measure being the distribution of a point process on \(\{(s, t)\in \mathbb {R}^2: s<t\} \times (0, \infty )\) which has a renewal structure. This approach was extended in [BP21] to a broader class of path measures and a functional central limit theorem, and, for the Fröhlich Polaron, to all coupling parameters (relying on known spectral properties of \(H_\alpha (0)\)). We also refer the reader to [MV21, BMPV22], where proofs are given that do not need the spectral properties of \(H_\alpha (0)\). We use this point process representation and derive a “variational like” formula for the effective mass (see Proposition 3.2). We obtain the estimate (1.3) by minimizing over sub-processes of the full point process.

Without additional effort, we can allow for a bit more generality and look at the probability measures \(\mathbb P_{\alpha , T}\) on \(C([0, \infty ), \mathbb {R}^d)\) defined by

$$\begin{aligned} \mathbb P_{\alpha , T}(\mathrm d\textbf{x}) = \frac{1}{Z_{\alpha , T}}\exp \bigg ( \frac{\alpha }{2} \int _{0}^T\int _{0}^T g(t-s) v(\textbf{x}_{s, t}) \, \mathrm ds \mathrm dt \bigg ) \mathcal W(\mathrm d \textbf{x}) \end{aligned}$$
(1.5)

where \(d\;\geqslant \;2\), \(v(x) = 1/|x|^\gamma \) with \(\gamma \in [1, 2)\) and \(g:[0, \infty ) \rightarrow (0, \infty )\) is a probability density with finite first moment satisfying \(\sup _{t\;\geqslant \;0} (1+t)g(t) <\infty \), and \(\textbf{x}_{s, t} :=\textbf{x}_t - \textbf{x}_s\) for \(\textbf{x}\in C([0, \infty ), \mathbb {R}^d)\) and \(s, t\;\geqslant \;0\). By the results in [BMPV22], these conditions are sufficient for the existence of an infinite volume measure and the validity of a functional central limit theorem in infinite volume. By the proof of the central limit theorem given in [MV19], the respective diffusion constant \(\sigma ^2(\alpha )\) can also be obtained by taking the “diagonal limit”, that is

$$\begin{aligned} \sigma ^2(\alpha ) = \lim _{T \rightarrow \infty } \frac{1}{dT} \mathbb E_{\alpha , T}[|X_{0, T}|^2]. \end{aligned}$$

Here \(X_t(\textbf{x}) :=\textbf{x}_t\) for any \(\textbf{x}\in C([0, \infty ), \mathbb {R}^d)\) and \(t\;\geqslant \;0\); that is, we denote by X the random variable given by the identity on \(C([0, \infty ) , {\mathbb {R}}^d)\).

Theorem 1.1

There exists a constant \(C>0\) (depending on \(\gamma \), g and d) such that

$$\begin{aligned} \sigma ^2(\alpha ) \;\leqslant \;C \alpha ^{-2/(2+d)} \end{aligned}$$

for all \(\alpha >0\). Consequently, there exists a constant \(c>0\) such that the effective mass of the Fröhlich Polaron satisfies \(m_{{\text {eff}}}(\alpha ) \;\geqslant \;c \alpha ^{2/5}\) for all \(\alpha >0\).

As we will point out in Remark 2.1, the results obtained in [BMPV22] imply, with minor additional effort, that for the Fröhlich Polaron there exists some \(\tilde{c}>0\) such that \(\sigma ^2(\alpha ) \;\geqslant \;e^{-\tilde{c}\alpha ^2}\) for all sufficiently large \(\alpha \).

2 Point Process Representation

By using the identity \( \frac{1}{|x|} = \sqrt{\frac{2}{\pi }} \int _0^\infty \mathrm du \, e^{-u^2 |x|^2/2} \) and expanding the exponential into a series, Mukherjee and Varadhan [MV19] represented the path measure of the Fröhlich Polaron as a mixture of Gaussian measures \(\textbf{P}_{\xi , u}\), the mixing measure \(\widehat{\Theta }_{\alpha , T}\) being the distribution of a suitable point process on \(\{(s, t)\in \mathbb {R}^2:\, s<t\} \times [0, \infty )\). In the following, we give an introduction to the point process representation in the form given in [BP21]. We will additionally use the invariance of (1.5) under replacing v by \(v_\varepsilon (x) :=v(x) + \varepsilon \) for some \(\varepsilon \;\geqslant \;0\). While this transformation does not change the measure \(\mathbb P_{\alpha , T}\), we will obtain different point process representations depending on the choice of \(\varepsilon \).

Let \(\Gamma _{\alpha , T}\) be the distribution of a Poisson point process on \(\triangle :=\{(s, t)\in \mathbb {R}^2:\, s<t\}\) with intensity measure

$$\begin{aligned} \mu _{\alpha , T}(\mathrm ds \mathrm dt) :=\alpha g(t-s) \mathbbm {1}_{\{0 \;\leqslant \;s<t\;\leqslant \;T\}}\mathrm ds \mathrm dt. \end{aligned}$$

By expanding the exponential into a series and interchanging the order of summation and integration, we obtain for any \(A\in \mathcal B(C([0, \infty ), \mathbb {R}^d))\) and any \(\varepsilon \;\geqslant \;0\)

$$\begin{aligned} \mathbb P_{\alpha , T}(A) = \frac{e^{c_{\alpha , T}}}{Z_{\alpha , T}^\varepsilon } \int _{\textbf{N}_f(\triangle )} \Gamma _{\alpha , T}(\mathrm d\xi ) \, \mathbb E_\mathcal W\Big [ \mathbbm {1}_A \prod _{(s, t)\in {\text {supp}}(\xi )} v_\varepsilon (X_{s, t})\Big ], \qquad c_{\alpha , T} :=\mu _{\alpha , T}(\triangle ), \end{aligned}$$
(2.1)

where \(\textbf{N}_f(\triangle )\) is the set of finite integer valued measures on \(\triangle \). We will view \(\xi = \sum _{i=1}^n \delta _{(s_i, t_i)}\in \textbf{N}_f(\triangle )\) as a collection of intervals \(\{[s_i, t_i]: 1\;\leqslant \;i \;\leqslant \;n\}\).

The measure \(\Gamma _{\alpha , T}\) can be interpreted in the following way: Consider an \(M/G/\infty \)-queue started empty at time 0 where the arrival process \(\sum _{n=1}^\infty \delta _{s_n}\) is a homogeneous Poisson process with rate \(\alpha \), and the service times \((\tau _n)_n\) are iid with density g and are independent of the arrival process. Consider the process \(\eta = \sum _{n=1}^\infty \delta _{(s_n, t_n)}\) where \(t_n :=s_n + \tau _n\) is for \(n\in {\mathbb N}\) the departure time of customer n. Then the process of all customers in \(\eta \) that arrive and depart before T has distribution \(\Gamma _{\alpha , T}\).
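The queueing description above translates directly into a sampler for \(\Gamma _{\alpha , T}\). The following sketch (names ours) uses the Fröhlich case \(g(t) = e^{-t}\), i.e. Exp(1) service times, and compares the empirical number of kept intervals with the total mass \(\mu _{\alpha , T}(\triangle ) = \alpha (T - 1 + e^{-T})\) of the intensity measure.

```python
import math
import random

def sample_gamma_alpha_T(alpha: float, T: float, rng: random.Random):
    """Sample the point process Gamma_{alpha,T}: arrivals of a rate-alpha
    Poisson process on [0, T], each with an independent Exp(1) service
    time (the density g(t) = e^{-t} of the Frohlich case); keep only the
    customers that also depart before T."""
    xi, s = [], 0.0
    while True:
        s += rng.expovariate(alpha)        # next arrival time
        if s > T:
            break
        t = s + rng.expovariate(1.0)       # departure = arrival + service
        if t <= T:
            xi.append((s, t))
    return xi

rng = random.Random(0)
alpha, T, n_samples = 2.0, 10.0, 2000
counts = [len(sample_gamma_alpha_T(alpha, T, rng)) for _ in range(n_samples)]
mean_count = sum(counts) / n_samples
# The intensity measure mu_{alpha,T} has total mass
# alpha * int_0^T (1 - e^{-(T-s)}) ds = alpha * (T - 1 + e^{-T}).
expected = alpha * (T - 1.0 + math.exp(-T))
print(mean_count, expected)
```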

For \(\xi \in \textbf{N}_f(\triangle )\) we define

$$\begin{aligned} F_\varepsilon (\xi ) :=\mathbb E_\mathcal W\Big [\prod _{(s, t)\in {\text {supp}}(\xi ) }v_\varepsilon (X_{s, t}) \Big ] \end{aligned}$$

and the perturbed point process \(\widehat{\Gamma }_{\alpha , T}^\varepsilon \) by

$$\begin{aligned} \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d\xi ) :=\frac{e^{c_{\alpha , T}}}{Z_{\alpha , T}^\varepsilon } F_\varepsilon (\xi ) \Gamma _{\alpha , T}(\mathrm d\xi ). \end{aligned}$$

Then Eq. (2.1) yields the representation \(\mathbb P_{\alpha , T}(\cdot ) = \int \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d \xi ) \textbf{P}^\varepsilon _\xi (\cdot )\) of \(\mathbb P_{\alpha , T}\) as a mixture of the perturbed path measures

$$\begin{aligned} \textbf{P}^\varepsilon _\xi (\mathrm d \textbf{x}) :=\frac{1}{F_\varepsilon (\xi )} \prod _{(s, t)\in {\text {supp}}(\xi )}v_\varepsilon (\textbf{x}_{s, t}) \, \mathcal W(\mathrm d \textbf{x}). \end{aligned}$$

The function \(F_\varepsilon \) depends in an intricate manner on the configuration \(\xi \): on the number, the lengths and the relative positions of the intervals. Nevertheless, the perturbed measure \(\widehat{\Gamma }^\varepsilon _{\alpha , T}\) can still be interpreted as a queuing process by identifying \((s,t) \in {\text {supp}}(\xi )\) with a service interval \([s, t]\) as indicated above. Under the given assumptions on g and v, \(\widehat{\Gamma }^\varepsilon _{\alpha , T}\) can then be expressed in terms of an iid sequence of “clusters” of overlapping intervals, separated by exponentially distributed dormant periods not covered by any interval. The processes of increments can be drawn independently along these dormant periods and clusters (according to the kernel \((\xi , A) \mapsto \textbf{P}^\varepsilon _\xi (A)\)). This yields the existence of the infinite volume limit as well as the functional central limit theorem [BP21, BMPV22].

In order to make the connection to the representation used in [MV19], we notice that for all \(x\in \mathbb {R}^d \setminus \{0\}\)

$$\begin{aligned} v_\varepsilon (x) = 1/|x|^\gamma + \varepsilon = \int _{[0, \infty )} \nu _\varepsilon (\mathrm du) e^{-u^2 |x|^2/2} \end{aligned}$$

where

$$\begin{aligned} \nu _\varepsilon (\mathrm du) = \frac{2^{(2-\gamma )/2}}{\Gamma (\gamma /2)} u^{\gamma -1} \mathrm du + \varepsilon \delta _0(\mathrm du). \end{aligned}$$
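The identity defining \(\nu _\varepsilon \) can be verified numerically; the following sketch (names and parameters ours, purely illustrative) integrates the absolutely continuous part by the midpoint rule and adds the contribution \(\varepsilon \) of the atom at 0.

```python
import math

def v_eps_from_mixture(x_norm: float, gamma: float, eps: float) -> float:
    """Evaluate int nu_eps(du) exp(-u^2 |x|^2 / 2): midpoint rule for the
    density (2^{(2-gamma)/2} / Gamma(gamma/2)) u^{gamma-1} on (0, U],
    plus the atom eps * delta_0, which contributes exactly eps."""
    c = 2.0 ** ((2.0 - gamma) / 2.0) / math.gamma(gamma / 2.0)
    U, n = 40.0 / x_norm, 100_000      # the integrand is ~0 beyond U
    h = U / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += c * u ** (gamma - 1.0) * math.exp(-0.5 * (u * x_norm) ** 2)
    return h * total + eps

# Frohlich case gamma = 1, |x| = 2: the mixture should give 1/|x| + eps.
val = v_eps_from_mixture(2.0, 1.0, 0.3)
print(val)  # close to 0.8
```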

It will turn out to be useful to make the atoms of our point process distinguishable. We will abuse notation and identify probability measures on \(\textbf{N}_f(\triangle )\) with symmetric probability measures on \(\triangle ^\cup :=\bigcup _{n=0}^\infty \triangle ^n\). With Eq. (2.1), we obtain

$$\begin{aligned} \mathbb P_{\alpha , T}(A) = \frac{e^{c_{\alpha , T}}}{Z_{\alpha , T}^\varepsilon } \int _{\triangle ^\cup } \Gamma _{\alpha , T}(\mathrm d\xi ) \int _{[0, \infty )^{N(\xi )}} \nu _\varepsilon ^{\otimes N(\xi )}(\mathrm du) \int _A \mathcal W( \mathrm d \textbf{x}) e^{-\sum _{i=1}^{N(\xi )} u_i^2 |\textbf{x}_{s_i, t_i}|^2/2} \end{aligned}$$

where \(N :=\sum _{n=0}^\infty n\cdot \mathbbm {1}_{\triangle ^n}\). We define for \(\xi = ((s_i, t_i))_{1\;\leqslant \;i \;\leqslant \;N(\xi )} \in \triangle ^\cup \) and \(u \in [0, \infty )^{N(\xi )}\) the Gaussian measures

$$\begin{aligned} \textbf{P}_{\xi , u}(\mathrm d\textbf{x}) :=\frac{1}{\phi (\xi , u)} e^{-\sum _{i=1}^{N(\xi )} u_i^2 |\textbf{x}_{s_i, t_i}|^2/2} \, \mathcal W(\mathrm d \textbf{x}) \end{aligned}$$
(2.2)

where \(\phi (\xi , u) :=\mathbb E_\mathcal W\big [e^{-\sum _{i=1}^{N(\xi )} u_i^2 |X_{s_i, t_i}|^2/2}\big ]\) and, for \(\xi \in \triangle ^\cup \) with \(F_\varepsilon (\xi ) < \infty \), the measures

$$\begin{aligned} \kappa _\varepsilon (\xi , \mathrm du) :=\frac{\phi (\xi , u)}{F_\varepsilon (\xi )} \nu _\varepsilon ^{\otimes N(\xi )}(\mathrm du) \end{aligned}$$

and obtain

$$\begin{aligned} \mathbb P_{\alpha , T}(\cdot ) = \int _{\triangle ^\cup } \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d\xi ) \int _{[0, \infty )^{N(\xi )}} \kappa _\varepsilon (\xi , \mathrm du) \, \textbf{P}_{\xi , u}(\cdot ). \end{aligned}$$
(2.3)

We will view \((s, t, u) \in \triangle \times [0, \infty )\) as an interval \([s, t]\) equipped with the mark u. In contrast to the representation in [MV19], we draw a sample of the mixing measure \(\widehat{\Theta }_{\alpha , T}\) (which is the distribution of a point process on \(\triangle \times [0, \infty )\)) in two steps: first, draw a sample \(\xi \) according to \(\widehat{\Gamma }_{\alpha , T}^0\), and then draw marks u according to the kernel \(\kappa _0(\xi , \cdot )\). This added flexibility will be useful later. If we define

$$\begin{aligned} \sigma ^2_T(\xi , u) :=\frac{1}{dT}\mathbb E_{\textbf{P}_{\xi , u}}[|X_{0, T}|^2] \end{aligned}$$

then the previous considerations yield

$$\begin{aligned} \frac{1}{dT}\mathbb E_{\alpha , T}[|X_{0, T}|^2]= \int _{\triangle ^\cup } \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d\xi ) \int _{[0, \infty )^{N(\xi )}} \kappa _\varepsilon (\xi , \mathrm du) \, \sigma ^2_T(\xi , u). \end{aligned}$$
(2.4)

Taking the limit \(T\rightarrow \infty \) in (2.4) then yields the diffusion constant, i.e. the inverse of the effective mass.

One can also express the perturbed measure \(\widehat{\Gamma }_{\alpha , T}^\varepsilon \) in terms of the path measures \(\mathbb P_{\alpha , T}\) (a similar calculation was already made in [MV21, (4.18)]). The Laplace transform of \(\widehat{\Gamma }^\varepsilon _{\alpha , T}\) evaluated at the measurable function \(f: \triangle \rightarrow [0, \infty )\) is

$$\begin{aligned} \int _{\textbf{N}_f(\triangle )} \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d\xi ) e^{-\int _\triangle f \mathrm d \xi }&= \frac{1}{Z_{\alpha , T}^\varepsilon } \sum _{n=0}^\infty \frac{1}{n!} \int _{\triangle ^n} \mu _{\alpha , T}^{\otimes n}(\mathrm ds \mathrm dt) \mathbb E_\mathcal W\Big [ \prod _{i=1}^{n} v_\varepsilon (X_{s_i, t_i}) e^{-f(s_i, t_i)}\Big ] \\&= \frac{1}{Z_{\alpha , T}^\varepsilon } \mathbb E_\mathcal W\bigg [ \sum _{n=0}^\infty \frac{1}{n!} \bigg (\int _{\triangle } \mu _{\alpha , T}(\mathrm ds \mathrm dt) v_\varepsilon (X_{s, t}) e^{-f(s, t)} \bigg )^n \bigg ] \\&= \frac{1}{Z_{\alpha , T}^\varepsilon } \mathbb E_\mathcal W\bigg [\exp \bigg (\int _{\triangle } \mu _{\alpha , T}(\mathrm ds \mathrm dt) v_\varepsilon (X_{s, t}) e^{-f(s, t)} \bigg ) \bigg ] \\&= \mathbb E_{\alpha , T}\bigg [\exp \bigg (\int _{\triangle } \mu _{\alpha , T}(\mathrm ds \mathrm dt) v_\varepsilon (X_{s, t}) \big (e^{-f(s, t)}-1\big ) \bigg ) \bigg ]. \end{aligned}$$

That is, \(\widehat{\Gamma }_{\alpha , T}^\varepsilon \) is the distribution of a Cox process with driving measure \(\mu _{\alpha , T}(\mathrm ds \mathrm dt) v_\varepsilon (X_{s, t})\), where \((X_t)_{t} \sim \mathbb P_{\alpha , T}\).

The introduction of \(v_\varepsilon \) with \(\varepsilon > 0\), which in the context of (1.5) is a simple shift of energy, thus creates a component of \(\widehat{\Gamma }_{\alpha , T}^\varepsilon \) that is quite easy to understand. Indeed, \(\widehat{\Gamma }_{\alpha , T}^\varepsilon \) is the distribution of the sum \(\eta _{\alpha , T} + \eta _{\alpha , T}^\varepsilon \) where \(\eta _{\alpha , T} \sim \widehat{\Gamma }_{\alpha , T}^0\) and where the Poisson point process \(\eta _{\alpha , T}^\varepsilon \sim \Gamma _{\varepsilon \alpha , T}\) is independent of \(\eta _{\alpha , T}\). The process \(\eta _{\alpha , T}^\varepsilon \) leads to additional intervals; on the other hand, for given \(\xi \), under the measure \(\kappa _\varepsilon (\xi , \cdot )\) the vector u now has components that are equal to zero with positive probability. Intervals of \(\xi \) corresponding to these components will be invisible to the measure \(\textbf{P}_{\xi ,u}\), as can be seen directly from its definition (2.2). These two effects balance since \(\mathbb P_{\alpha ,T}\) is independent of \(\varepsilon \). However, given a realization \(\xi \) of \(\eta _{\alpha , T} + \eta _{\alpha , T}^\varepsilon \), it is not the case that \(\kappa _\varepsilon (\xi , \cdot )\) produces zero components of u precisely for those intervals coming from \(\eta _{\alpha , T}^\varepsilon \); on the contrary, \(\kappa _\varepsilon (\xi ,\cdot )\) does not depend on whether any given interval in \(\xi \) was produced by \(\eta _{\alpha , T}\) or by \(\eta _{\alpha , T}^\varepsilon \). The proof of Theorem 1.1 now uses the following strategy:

  1. (1)

    We show in Lemma 3.1 that \(u \mapsto \sigma ^2_T(\xi ,u)\) is decreasing along each coordinate axis. This in particular implies that \(\sigma ^2_T(\xi ,u)\) increases if intervals are deleted from \(\xi \), which corresponds to setting the suitable components of u to zero. While not necessary for the proof of Theorem 1.1, we also give a characterization of \(\sigma ^2_T(\xi ,u)\) in terms of an \(L^2\)-distance which emphasizes the variational flavor of our approach and might be useful for further refining the method.

  2. (2)

    The dependence of the distribution \(\kappa _\varepsilon (\xi , \cdot )\) on the whole configuration \(\xi \) prevents us from simply deleting intervals from \(\xi \) in order to obtain an estimate on the variance in (2.4). To get around this, we fix some \(C(\alpha )\) (to be determined later), and show that \(\kappa _\varepsilon (\xi ,\cdot )\) stochastically dominates a kernel that assigns the marks 0 and \(C(\alpha )\) independently to each interval in \(\xi \) with suitable probabilities. This is done in Sect. 4.

  3. (3)

    The two previous items mean that we obtain an upper bound on the value of (2.4) when we first replace \(\kappa _\varepsilon \) by our new kernel and then delete intervals from \(\xi \). On the other hand, for interval configurations \(\xi \) that consist only of non-overlapping intervals, \(\sigma ^2_T(\xi ,u)\) can be computed explicitly, see Lemma 3.3. By deleting a sufficient number of intervals (including all those coming from \(\eta _{\alpha ,T}\)), and by optimizing over the parameter C, we obtain our result.

Remark 2.1

For the Fröhlich Polaron, i.e. for \(d=3\), \(\gamma =1\) and \(g(t) = e^{-t}\) for all \(t\;\geqslant \;0\), the point process representation implies the existence of some constant \(\tilde{c}>0\) such that \(\sigma ^2(\alpha ) \;\geqslant \;e^{-\tilde{c} \alpha ^2}\) for all sufficiently large \(\alpha >0\). Given \(\xi \in \textbf{N}_f(\triangle )\) and \(r\in \mathbb {R}\), let \(N_r(\xi ) :=\xi (\{(s,t): s\;\leqslant \;r \;\leqslant \;t\})\) be the number of intervals that contain r. We call t dormant under \(\xi \) if \(N_t(\xi ) = 0\), and active under \(\xi \) otherwise. Under \(\xi \), the interval [0, T] can be decomposed into alternating dormant and active periods. Let

$$\begin{aligned} \mathcal D_T(\xi ) :=\lambda (\{t \in [0, T]: N_t(\xi ) = 0\}) \end{aligned}$$

(where \(\lambda \) is the Lebesgue measure) be the sum of the lengths of all dormant periods in [0, T]. One can easily convince oneself that \( \sigma ^2_T(\xi , u) \;\geqslant \;\frac{1}{T}\mathcal D_T(\xi ) \) for all \(u\in [0, \infty )^n\) (this also follows from Proposition 3.2 below). By applying Jensen's inequality to the measure \( \frac{1}{T} \mathrm dt \, \mathbb P_{\alpha , T}(\mathrm d \textbf{x})\) and using Brownian scaling in order to interpret the resulting expression in terms of the derivative of \(\psi (\alpha ) :=\lim _{T\rightarrow \infty } \log (Z_{\alpha , T})/T = -E_\alpha (0)\), it was shown in [BMPV22] that

$$\begin{aligned} \liminf _{T \rightarrow \infty } \frac{1}{T}\int _0^T \widehat{\Gamma }_{\alpha , T}^0(N_t = 0) \, \mathrm dt = \liminf _{T \rightarrow \infty } \frac{1}{T}\int _0^T \frac{Z_{\alpha , t} Z_{\alpha , T-t}}{Z_{\alpha , T}} \, \mathrm dt \;\geqslant \;e^{-\tfrac{3}{2} \alpha \psi '(\alpha )}. \end{aligned}$$

By Fubini’s Theorem

$$\begin{aligned} \int _0^T \widehat{\Gamma }_{\alpha , T}^0(N_t = 0) \, \mathrm dt = \mathbb E_{\widehat{\Gamma }^0_{\alpha , T}}[\mathcal D_T] \end{aligned}$$

and thus \(\sigma ^2(\alpha ) \;\geqslant \;e^{-\tfrac{3}{2} \alpha \psi '(\alpha )}\) by (2.4). By convexity of \(\psi \) and since \(\lim _{\alpha \rightarrow \infty } \psi (\alpha )/\alpha ^2 \in (0, \infty )\) we have

$$\begin{aligned} \psi '(\alpha ) \;\leqslant \;\frac{\psi (2\alpha ) - \psi (\alpha )}{\alpha } \in \mathcal O(\alpha ) \end{aligned}$$

and hence there exist some \(\tilde{c}>0\) and \(\alpha _0 >0\) such that \(\sigma ^2(\alpha ) \;\geqslant \;e^{-\tilde{c}\alpha ^2}\) for all \(\alpha \;\geqslant \;\alpha _0\).
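The dormant time \(\mathcal D_T(\xi )\) used above is a simple interval-union computation; the following sketch (names ours) computes it by a left-to-right sweep over the sorted intervals.

```python
def dormant_time(xi, T):
    """Lebesgue measure of {t in [0, T] : N_t(xi) = 0}, i.e. the total
    length of the dormant periods of the configuration xi in [0, T]."""
    covered, end = 0.0, 0.0                      # end of the merged cover so far
    for s, t in sorted(xi):                      # sweep intervals left to right
        s, t = max(s, 0.0), min(t, T)
        if t > end:
            covered += t - max(s, end)           # only the newly covered part
            end = t
    return T - covered

# Example: [1,3] and [2,5] overlap into one active period [1,5]; together
# with [7,8] the dormant set is [0,1) u (5,7) u (8,10], of total length 5.
print(dormant_time([(1.0, 3.0), (2.0, 5.0), (7.0, 8.0)], 10.0))  # 5.0
```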

3 Properties of \(\sigma _T^2(\xi , u)\)

We derive a few properties of \(\sigma _T^2(\xi , u)\). First, we will show that \(\sigma _T^2(\xi , \cdot )\) is decreasing with respect to the partial order on \([0, \infty )^{N(\xi )}\) defined by \(u\;\leqslant \;\tilde{u}\) iff \(u_i\;\leqslant \;\tilde{u}_i\) for all \(1\;\leqslant \;i \;\leqslant \;N(\xi )\). We will then express \(\sigma _T^2(\xi , u)\) in terms of an \(L^2\)-distance and calculate \(\sigma _T^2(\xi , u)\) in the case that \(\xi \) consists of pairwise disjoint intervals.

Lemma 3.1

For fixed \(\xi \in \triangle ^\cup \), the function \(\sigma ^2_T(\xi , \cdot )\) is decreasing on \([0, \infty )^{N(\xi )}\).

Proof

Let \( \xi = ((s_i, t_i))_{1\;\leqslant \;i \;\leqslant \;N(\xi )} \in \triangle ^\cup \). Then

$$\begin{aligned} \frac{\partial \sigma ^2_T(\xi , u)}{\partial u_i}&= \frac{1}{dT}\frac{\partial }{\partial u_i} \frac{\mathbb E_\mathcal W\big [|X_{0, T}|^2 e^{-\sum _{j=1}^{N(\xi )} u_j^2 |X_{s_j, t_j}|^2/2}\big ]}{\mathbb E_\mathcal W\big [e^{-\sum _{j=1}^{N(\xi )} u_j^2 |X_{s_j, t_j}|^2/2}\big ]} \\&=-\frac{u_i}{dT} {\text {Cov}}_{\textbf{P}_{\xi , u}}(|X_{0, T}|^2,|X_{s_i, t_i}|^2). \end{aligned}$$

By Isserlis’ theorem (and as the coordinate processes are iid under the centered Gaussian measure \(\textbf{P}_{\xi , u}\))

$$\begin{aligned} {\text {Cov}}_{\textbf{P}_{\xi , u}}(|X_{0, T}|^2,|X_{s_i, t_i}|^2)&= d \cdot {\text {Cov}}_{\textbf{P}_{\xi , u}}((X_{0, T}^1)^2,(X_{s_i, t_i}^1)^2) \\&= 2d\cdot \mathbb E_{\textbf{P}_{\xi , u}}\big [X_{0, T}^1 X_{s_i, t_i}^1 \big ]^2 \;\geqslant \;0. \end{aligned}$$

\(\square \)

In particular, \(\sigma ^2_{T}(\xi , u)\) is increasing under deletion of a marked interval \((s_i, t_i, u_i)\) (set \(u_i = 0\)).
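The Isserlis identity \({\text {Cov}}(X^2, Y^2) = 2\,\mathbb E[XY]^2\) for a centered jointly Gaussian pair, which drives the proof above, can be sanity-checked by a seeded Monte Carlo experiment; a sketch (names and parameters ours, purely illustrative):

```python
import random

def isserlis_check(a: float, b: float, c: float, n: int, seed: int):
    """Monte Carlo estimate of Cov(X^2, Y^2) for the centered Gaussian
    pair X = a Z1, Y = b Z1 + c Z2 with Z1, Z2 iid N(0,1); Isserlis'
    theorem predicts Cov(X^2, Y^2) = 2 E[XY]^2 = 2 (a b)^2."""
    rng = random.Random(seed)
    sx2 = sy2 = sx2y2 = 0.0          # running sums for the moment estimates
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x, y = a * z1, b * z1 + c * z2
        sx2 += x * x
        sy2 += y * y
        sx2y2 += (x * x) * (y * y)
    cov = sx2y2 / n - (sx2 / n) * (sy2 / n)
    return cov, 2.0 * (a * b) ** 2

cov_mc, cov_exact = isserlis_check(1.0, 0.5, 1.0, 200_000, 0)
print(cov_mc, cov_exact)   # both close to 0.5
```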

As already used in [MV19], the normalization constant \(\phi (\xi , u)\) can also be expressed in terms of the determinant of the covariance matrix of a suitable Gaussian vector. To see this, let \((B_t)_{t\;\geqslant \;0}\) be a one-dimensional Brownian motion and \((Z_n)_n\) an iid sequence of \(\mathcal N(0, 1)\)-distributed random variables, independent of \((B_t)_{t\;\geqslant \;0}\). Then, by Lemma 6.1 of the appendix, we have

$$\begin{aligned} \phi (\xi , u) = \frac{1}{\det ( C(\xi , u))^{d/2}} \end{aligned}$$

where \(C(\xi , u)\) is the covariance matrix of the Gaussian vector \(\big (u_i B_{s_i, t_i} + Z_i\big )_{1\;\leqslant \;i \;\leqslant \;N(\xi )}\).

Proposition 3.2

The diffusion constant has the representation

$$\begin{aligned} \sigma ^2(\alpha )= \lim _{T \rightarrow \infty } \int _{\triangle ^\cup } \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d\xi ) \int _{[0, \infty )^{N(\xi )}} \kappa _\varepsilon (\xi , \mathrm du) \, \sigma ^2_T(\xi , u) \end{aligned}$$

where for \(\xi = ((s_i, t_i))_{1\;\leqslant \;i \;\leqslant \;N(\xi )} \in \triangle ^\cup \) and \(u\in [0, \infty )^{N(\xi )}\)

$$\begin{aligned} \sigma ^2_T(\xi , u) = \frac{1}{T}{\text {dist}}_{L^2}\Big (B_{0, T}, {\text {span}}\{u_i B_{s_i, t_i} + Z_i: \, 1\;\leqslant \;i \;\leqslant \;N(\xi )\}\Big )^2. \end{aligned}$$
(3.1)

Proof

It remains to show equality (3.1). We have

$$\begin{aligned} T \sigma ^2_T(\xi , u) = -\frac{2/d}{\phi (\xi , u)}\frac{\partial }{ \partial v } \mathbb E_\mathcal W\Big [ e^{- \tfrac{1}{2} v |X_{0, T}|^2- \sum _{i=1}^{N(\xi )} \tfrac{1}{2} u_i^2 |X_{s_i, t_i}|^2 } \Big ] \Big |_{v=0}. \end{aligned}$$
(3.2)

For \(v, w \;\geqslant \;0\) let \(C_T(\xi , u, v, w) \in \mathbb {R}^{({N(\xi )}+1)\times ({N(\xi )}+1)}\) be the covariance matrix of the Gaussian vector \(\big (\sqrt{v}\, B_{0, T} + \sqrt{w}\, Z_0,\, u_1 B_{s_1, t_1} + Z_1, \ldots , u_{N(\xi )} B_{s_{N(\xi )}, t_{N(\xi )}} + Z_{N(\xi )}\big )\), where \(Z_0\) is a standard Gaussian independent of everything else. Then, with Lemma 6.1, Eq. (3.2) becomes

$$\begin{aligned} T \sigma _T^2(\xi , u)&= - \frac{2}{d} \det (C(\xi , u))^{d/2} \frac{\partial }{ \partial v }\frac{1}{\det (C_T(\xi , u, v, 1))^{d/2}}\Big |_{v=0} \\&= \frac{1}{\det (C(\xi , u))} \frac{\partial }{ \partial v } \det (C_T(\xi , u, v, 1)) \Big |_{v=0} \end{aligned}$$

since \(\det (C_T(\xi , u, 0, 1)) = \det (C(\xi , u))\). By using that the determinant is multilinear in the rows and columns, we have

$$\begin{aligned} \det (C_T(\xi , u, v, 1)) = \det (C(\xi , u)) + v\det (C_T(\xi , u, 1, 0)) \end{aligned}$$

and hence

$$\begin{aligned} \frac{\partial }{ \partial v } \det (C_T(\xi , u, v, 1)) \Big |_{v=0} = \det (C_T(\xi , u, 1, 0)). \end{aligned}$$

Finally, by Lemma 6.2

$$\begin{aligned} \frac{\det (C_T(\xi , u, 1, 0))}{\det (C(\xi , u))} = {\text {dist}}_{L^2}\Big (B_{0, T}, {\text {span}}\{u_i B_{s_i, t_i} + Z_i: \, 1\;\leqslant \;i \;\leqslant \;{N(\xi )}\}\Big )^2 \end{aligned}$$

which concludes the proof.

Lemma 3.3

Let \(\xi = ((s_i, t_i))_{1\;\leqslant \;i \;\leqslant \;N(\xi )} \in \triangle ^\cup \) with \([s_i, t_i] \subseteq [0, T]\) for all \(1\;\leqslant \;i \;\leqslant \;N(\xi )\) and \([s_i, t_i] \cap [s_j, t_j] = \emptyset \) for \(i\ne j\). Then, for \(u \in [0, \infty )^{N(\xi )}\)

$$\begin{aligned} \sigma _T^2(\xi , u) = 1- \frac{1}{T} \sum _{i=1}^{N(\xi )} \tau _i + \frac{1}{T}\sum _{i=1}^{N(\xi )} \frac{\tau _i}{\tau _i u_i^2 + 1} \end{aligned}$$

where \(\tau _i :=t_i - s_i\) for \(1\;\leqslant \;i \;\leqslant \;{N(\xi )}\).

Proof

We project \(B_{0, T}\) onto \({\text {span}} \{u_iB_{s_i, t_i} + Z_i:\, 1 \;\leqslant \;i \;\leqslant \;{N(\xi )}\}\) and obtain

$$\begin{aligned} \sum _{i=1}^{N(\xi )} \mathbb E[B_{0, T}\cdot (u_i B_{s_i, t_i} + Z_i)] \frac{u_i B_{s_i, t_i} + Z_i}{\mathbb E[(u_i B_{s_i, t_i} + Z_i)^2]} = \sum _{i=1}^{N(\xi )} \frac{\tau _iu_i}{u_i^2\tau _i + 1} (u_i B_{s_i, t_i} + Z_i). \end{aligned}$$

Hence

$$\begin{aligned} T \sigma ^2_T(\xi , u)&={\text {dist}}\big (B_{0, T}, {\text {span}} \{u_iB_{s_i, t_i} + Z_i:\, 1\;\leqslant \;i \;\leqslant \;{N(\xi )}\} \big )^2 \\&= \mathbb E\Big [\Big (B_{0, T} - \sum _{i=1}^{N(\xi )} B_{s_i, t_i} + \sum _{i=1}^{N(\xi )} \Big (\frac{1}{u_i^2\tau _i + 1} B_{s_i, t_i} - \frac{u_i\tau _i}{u_i^2\tau _i + 1}Z_i \Big )\Big )^2 \Big ] \\&= \mathbb E\Big [\Big (B_{0, T} - \sum _{i=1}^{N(\xi )} B_{s_i, t_i} \Big )^2 \Big ] + \sum _{i=1}^{N(\xi )} \Big (\frac{\tau _i}{(u_i^2\tau _i + 1)^2} + \frac{u_i^2 \tau _i^2}{(u_i^2\tau _i + 1)^2}\Big ) \\&= T - \sum _{i=1}^{N(\xi )} \tau _i + \sum _{i=1}^{N(\xi )} \frac{\tau _i}{\tau _i u_i^2 + 1}. \end{aligned}$$

\(\square \)
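For disjoint configurations, the closed formula of Lemma 3.3 can be checked numerically against the determinant representation \(\det C_T(\xi , u, 1, 0)/\det C(\xi , u)\) behind Proposition 3.2; a sketch (names ours), using that \({\text {Cov}}(B_{s, t}, B_{s', t'})\) is the length of the overlap of the two intervals:

```python
def det(m):
    """Determinant via Laplace expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def t_sigma2_det(intervals, u, T):
    """T * sigma_T^2(xi, u) as the ratio det C_T(xi,u,1,0) / det C(xi,u),
    with C the covariance matrix of (u_i B_{s_i,t_i} + Z_i)_i and C_T the
    same matrix with the row and column of B_{0,T} added."""
    n = len(intervals)
    def ov(a, b):          # Cov(B_{s,t}, B_{s',t'}) = length of the overlap
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    C = [[u[i] * u[j] * ov(intervals[i], intervals[j]) + (1.0 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    top = [T] + [u[j] * ov((0.0, T), intervals[j]) for j in range(n)]
    CT = [top] + [[top[i + 1]] + C[i] for i in range(n)]
    return det(CT) / det(C)

def t_sigma2_closed(intervals, u, T):
    """T * sigma_T^2(xi, u) by the closed formula of Lemma 3.3
    (valid for pairwise disjoint intervals contained in [0, T])."""
    taus = [t - s for s, t in intervals]
    return T - sum(taus) + sum(tau / (tau * ui ** 2 + 1.0) for tau, ui in zip(taus, u))

intervals, u, T = [(0.5, 2.0), (3.0, 3.5), (4.0, 6.0)], [2.0, 0.7, 1.5], 7.0
print(t_sigma2_det(intervals, u, T), t_sigma2_closed(intervals, u, T))  # agree
```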

In the situation above, \(1 - \frac{1}{T}\sum _{i=1}^{N(\xi )} \tau _i = \frac{1}{T} \lambda \big ([0, T]\setminus \bigcup _{i=1}^{N(\xi )}[s_i, t_i]\big )\), where \(\lambda \) denotes the Lebesgue measure. Assume that \(\tau _i \;\geqslant \;1\) and \(u_i \;\geqslant \;C\) for all \(1\;\leqslant \;i \;\leqslant \;{N(\xi )}\) where \(C>0\). Then

$$\begin{aligned} \frac{1}{T}\sum _{i=1}^{N(\xi )} \frac{\tau _i}{\tau _i u_i^2 + 1} \;\leqslant \;\frac{1}{C^2 + 1} \frac{1}{T}\sum _{i=1}^{N(\xi )} \tau _i \;\leqslant \;\frac{1}{C^2 + 1}. \end{aligned}$$
(3.3)

For a general configuration \((\xi , u)\), the previous considerations allow us to obtain estimates on \(\sigma _{T}^2(\xi , u)\) by considering subconfigurations \((\xi ', u')\) of pairwise disjoint marked intervals with interval lengths exceeding 1 and marks exceeding C. In combination with an application of renewal theory, this implies the following technical lemma:

Lemma 3.4

Let \(f:[0, \infty ) \rightarrow [0, \infty )\) be measurable with \(\int _0^\infty (1+t)f(t) \, \mathrm dt <\infty \) and \(f(t) = 0\) for almost all \(t\in [0, 1]\). For \(T>0\) let \(\Xi _T\) be the law of a Poisson point process with intensity measure \( f(t-s) \mathbbm {1}_{\{0< s<t<T\}} \mathrm ds \mathrm dt \) and let \(C>0\). Then

$$\begin{aligned} \limsup _{T \rightarrow \infty } \int _{\triangle ^\cup } \Xi _T(\mathrm d \xi ) \int _{[0, \infty )^{N(\xi )}} \delta _C^{\otimes N(\xi )}(\mathrm du)\, \sigma ^2_T(\xi , u) \;\leqslant \;\frac{1}{1+ \int _0^\infty t f(t) \, \mathrm dt} + \frac{1}{C^2 + 1}. \end{aligned}$$

Proof

If \(f=0\) a.e., the statement is trivial, so assume \(\int _0^\infty f(t) \, \mathrm dt >0\). Let

$$\begin{aligned} \beta :=\int _{0}^{\infty } f(r) \mathrm dr, \quad \rho (t) :=\frac{f(t)}{\int _{0}^{\infty } f(r) \mathrm dr} \end{aligned}$$

for \(t\;\geqslant \;0\). We consider the Poisson point process \(\eta = \sum _{n=1}^\infty \delta _{(s_n, t_n)}\) of an \(M/G/\infty \)-queue, where the arrival process \(\sum _{n=1}^\infty \delta _{s_n}\) is a homogeneous Poisson point process with intensity \(\beta \), the service times \((\tau _n)_n\) are iid with density \(\rho \) and independent of the arrival process, and \(t_n :=s_n + \tau _n\) is the departure time of customer n for \(n\in {\mathbb N}\). Then the restriction of \(\eta \) to the process of all customers that arrive and depart before T has distribution \(\Xi _T\). We inductively define \(i_0 :=0\), \(t_0 :=0\) and

$$\begin{aligned} i_{n+1} :=\inf \{j> i_n: s_{j} > t_{i_n}\} \end{aligned}$$

for \(n\;\geqslant \;0\), i.e. for \(n\;\geqslant \;1\) customer \(i_{n+1}\) is the first customer that arrives after the departure of customer \(i_n\). The waiting times \((s_{i_{n}} - t_{i_{n-1}})_{n\;\geqslant \;1}\) are iid \({\text {Exp}}(\beta )\)-distributed and independent of the iid service times \((t_{i_n} - s_{i_n})_{n\;\geqslant \;1}\), which have density \(\rho \). Notice that \((t_{i_n})_{n\;\geqslant \;0}\) defines a renewal process. Let \(N_T :=\inf \{n: t_{i_n} \;\geqslant \;T\}\). By the considerations after Lemma 3.3, we have

$$\begin{aligned} \int _{\triangle ^\cup } \Xi _{T}(\mathrm d \xi ) \int _{[0, \infty )^{N(\xi )}} \delta _C^{\otimes N(\xi )}(\mathrm du)\, \sigma ^2_T(\xi , u) \;\leqslant \;\mathbb E\Big [\frac{B_T}{T} + \frac{1}{T}\sum _{n=1}^{N_T-1} \big (s_{i_n} - t_{i_{n-1}}\big ) \Big ] + \frac{1}{C^2 +1} \end{aligned}$$

where \(B_T :=T - t_{i_{N_T-1}}\) is the backward recurrence time of the renewal process \((t_{i_n})_{n}\) at time T. By renewal theory, \((B_T)_{T\;\geqslant \;0}\) converges in distribution as \(T\rightarrow \infty \). Let \((T_k)_k\) be a sequence in \([0, \infty )\) that converges to infinity. Then \(B_{T_k}/T_k \rightarrow 0\) in probability as \(k\rightarrow \infty \). Hence, there exists a subsequence \((T_{k_j})_j\) such that \(B_{T_{k_j}}/T_{k_j} \rightarrow 0\) almost surely as \(j\rightarrow \infty \). By dominated convergence, \(\mathbb E[B_{T_{k_j}}/T_{k_j}] \rightarrow 0\) as \(j\rightarrow \infty \). As \((T_k)_k\) was arbitrary, we have \(\mathbb E[B_{T}/T] \rightarrow 0\) as \(T\rightarrow \infty \). By renewal theory,

$$\begin{aligned} N_T/T \rightarrow \frac{1}{\mathbb E[t_{i_1}]} = \frac{1}{\beta ^{-1} + \int _0^\infty r \rho (r) \, \mathrm dr} \end{aligned}$$
(3.4)

almost surely as \(T\rightarrow \infty \) and by the law of large numbers we thus have

$$\begin{aligned} \frac{1}{T}\sum _{n=1}^{N_T-1} \big (s_{i_n} - t_{i_{n-1}}\big ) \rightarrow \frac{\beta ^{-1}}{\beta ^{-1} + \int _0^\infty r \rho (r) \, \mathrm dr} = \frac{1}{1 + \int _0^\infty r f(r) \, \mathrm dr} \end{aligned}$$

almost surely as \(T\rightarrow \infty \). By the dominated convergence theorem, we have convergence in \(L^1\) as well and the statement follows. \(\square \)
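The renewal limit used above can be sanity-checked by direct simulation of the alternating cycle structure (a sketch under illustrative assumptions: \(\beta = 2\) and service times of the form \(1 + {\text {Exp}}(1)\), so that the support condition \(f = 0\) on [0, 1] holds and \(\int _0^\infty r f(r)\, \mathrm dr = \beta \, \mathbb E[\tau ] = 4\); all names are ad hoc):

```python
import random

# Monte Carlo sketch of the renewal structure in the proof: each cycle is an
# Exp(beta) waiting time followed by an independent service time with density
# rho. The fraction of time spent waiting converges to
#   beta^{-1} / (beta^{-1} + E[tau]) = 1 / (1 + beta * E[tau]).
random.seed(1)

beta = 2.0           # arrival intensity (illustrative choice)
mean_service = 2.0   # E[tau] for service times 1 + Exp(1) (illustrative choice)

n_cycles = 200_000
total_wait = total_time = 0.0
for _ in range(n_cycles):
    wait = random.expovariate(beta)           # s_{i_n} - t_{i_{n-1}} ~ Exp(beta)
    service = 1.0 + random.expovariate(1.0)   # t_{i_n} - s_{i_n} ~ rho
    total_wait += wait
    total_time += wait + service

limit = (1.0 / beta) / (1.0 / beta + mean_service)  # = 1/(1 + beta * E[tau]) = 0.2
print(abs(total_wait / total_time - limit))  # small by the law of large numbers
```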

4 Estimation of the Kernel \(\kappa _\varepsilon \)

While the component \(\eta _{\alpha , T}^\varepsilon \sim \Gamma _{\varepsilon \alpha , T}\) is far easier to understand than the whole process, the distribution of the marks of \(\eta _{\alpha , T}^\varepsilon \) under \(\kappa _\varepsilon \) still depends on the whole process. We circumvent this problem by estimating \(\kappa _\varepsilon \) by a suitable product kernel that marks the intervals independently of each other. For \(t, C>0\) we define

$$\begin{aligned} h_\varepsilon (t) :=\mathbb E_\mathcal W[v_\varepsilon (X_t)] = \int _{[0, \infty )} \nu _\varepsilon (\mathrm du) \frac{1}{(1+u^2t)^{d/2}} \end{aligned}$$

and

$$\begin{aligned} p_\varepsilon (C) :=\frac{2^{-\gamma /2} }{(1+4C^2)^{d/2}}\big (1- \varepsilon /h_\varepsilon (2)\big ). \end{aligned}$$

Notice that there exists some constant \(c_{\gamma , d}\) such that \(h_\varepsilon (t) = c_{\gamma , d} t^{-\gamma /2} + \varepsilon \) for all \(t>0\). In particular, \(\varepsilon /h_\varepsilon (2)<1\) and

$$\begin{aligned} \mathcal P_{\varepsilon , C}(t, \cdot ) :={\left\{ \begin{array}{ll} \delta _0 \quad &{}\text { if } t> 2 \\ p_\varepsilon (C) \delta _C + (1-p_\varepsilon (C))\delta _0 &{} \text{ if } t\;\leqslant \;2 \end{array}\right. } \end{aligned}$$

defines for \(t>0\) a probability measure. Fix \(\xi = ((s_i, t_i))_{1\;\leqslant \;i\;\leqslant \;N(\xi )} \in \triangle ^\cup \) with \(F_\varepsilon (\xi )< \infty \). We will show that \(\kappa _\varepsilon (\xi , \cdot )\) stochastically dominates

$$\begin{aligned} \tilde{\kappa }_{\varepsilon , C}(\xi , \cdot ) :=\bigotimes _{i=1}^{N(\xi )} \mathcal P_{\varepsilon , C}(t_i-s_i, \, \cdot \,) \end{aligned}$$

that is

$$\begin{aligned} \int _{[0, \infty )^{N(\xi )}} \kappa _\varepsilon (\xi , \mathrm du) f(u) \;\leqslant \;\int _{[0, \infty )^{N(\xi )}} \tilde{\kappa }_{\varepsilon , C}(\xi , \mathrm du) f(u) \end{aligned}$$

for any decreasing function \(f: [0, \infty )^{N(\xi )} \rightarrow \mathbb {R}\). By Proposition 3.2 and the monotonicity of \(\sigma _T^2(\xi , \cdot )\), we then obtain

$$\begin{aligned} \sigma ^2(\alpha ) \;\leqslant \;\limsup _{T \rightarrow \infty } \int _{\triangle ^\cup } \widehat{\Gamma }_{\alpha , T}^\varepsilon (\mathrm d\xi ) \int _{\{0, C\}^{N(\xi )}} \tilde{\kappa }_{\varepsilon , C}(\xi , \mathrm du) \, \sigma _{T}^2(\xi , u). \end{aligned}$$

Since \(\sigma _{T}^2(\xi , u)\) is increasing under deletion of marked intervals we can further estimate

$$\begin{aligned} \sigma ^2(\alpha ) \;\leqslant \;\limsup _{T \rightarrow \infty } \int _{\triangle ^\cup } \Gamma _{\varepsilon \alpha , T}(\mathrm d\xi ) \int _{\{0, C\}^{N(\xi )}} \tilde{\kappa }_{\varepsilon , C}(\xi , \mathrm du) \, \sigma _{T}^2(\xi , u). \end{aligned}$$
(4.1)

Notice that intervals with zero mark do not contribute to \(\sigma ^2_T(\xi , u)\). Estimate (4.1) in combination with Lemma 3.4 will yield Theorem 1.1.

Lemma 4.1

Let \(C>0\) and \(\xi = ((s_i, t_i))_{1 \;\leqslant \;i \;\leqslant \;N(\xi )} \in \triangle ^\cup \) with \(F_\varepsilon (\xi )<\infty \). Then \(\kappa _\varepsilon (\xi ,\, \cdot \,)\) stochastically dominates \(\tilde{\kappa }_{\varepsilon , C}(\xi , \cdot )\).

Proof

Let \( J(\xi ) :=\{1\;\leqslant \;i \;\leqslant \;N(\xi ):\, t_i - s_i \;\leqslant \;2\} \). The statement is trivial for \(J(\xi ) = \emptyset \), so assume \(J(\xi ) \ne \emptyset \). We define

$$\begin{aligned} \chi :[0, \infty )^{N(\xi )} \rightarrow \{0, C\}^{J(\xi )}, \quad \chi _i :=C \cdot \mathbbm {1}_{\{u_i\;\geqslant \;C\}} \end{aligned}$$

for \(i \in J(\xi )\). Let \(\pi : [0, \infty )^{N(\xi )} \rightarrow [0, \infty )^{J(\xi )}\) be the restriction map \(u \mapsto (u_i)_{i \in J(\xi )}\) and let \(\phi : [0, \infty )^{N(\xi )} \rightarrow [0, \infty )^{N(\xi )}\) be defined by

$$\begin{aligned} \phi _i(u) :={\left\{ \begin{array}{ll} u_i &{}\text { if }i\in J(\xi ) \\ 0 &{}\text { else} \end{array}\right. } \end{aligned}$$

for \(u \in [0, \infty )^{N(\xi )}\). Let \(f: [0, \infty )^{N(\xi )} \rightarrow \mathbb {R}\) be decreasing and let \(\tilde{f}:[0, \infty )^{J(\xi )} \rightarrow \mathbb {R}\) be such that \(\tilde{f} \circ \pi = f \circ \phi \). Then

$$\begin{aligned}&\int _{[0, \infty )^{N(\xi )}} \kappa _\varepsilon (\xi , \mathrm du) f(u) \;\leqslant \;\int _{[0, \infty )^{N(\xi )}} \kappa _\varepsilon (\xi , \mathrm du) \tilde{f}(\chi (u)) \\&\int _{[0, \infty )^{N(\xi )}} \tilde{\kappa }_{\varepsilon , C}(\xi , \mathrm du) f(u) = \int _{[0, \infty )^{N(\xi )}} \tilde{\kappa }_{\varepsilon , C}(\xi , \mathrm du) \tilde{f}(\chi (u)). \end{aligned}$$

Hence, it is sufficient to show that the distribution of \(\chi \) under \(\mu :=\kappa _\varepsilon (\xi , \cdot )\) stochastically dominates the distribution of \(\chi \) under \(\tilde{\mu }:=\tilde{\kappa }_{\varepsilon , C}(\xi , \cdot )\). In order to show this, it is sufficient (see e.g. [FV17, pp. 137–138]) to show that for any \(\omega \in \{0, C\}^{J(\xi )}\) and any \(i \in J(\xi )\)

$$\begin{aligned} \mu (\chi _i = C| \chi _j = \omega _j \forall j \ne i) \;\geqslant \;p_{\varepsilon }(C) = \tilde{\mu }(\chi _i = C| \chi _j = \omega _j \forall j \ne i). \end{aligned}$$

To simplify notation, we assume w.l.o.g. that \(i=1 \in J(\xi )\) and set \(\tau _1 :=t_1-s_1\). Let \(\omega \in \{0, C\}^{J(\xi )}\) be fixed and let \(A \subseteq [0, \infty )^{{N(\xi )}-1}\) be such that \( \{\chi _j = \omega _j \, \forall j \ne 1\} = [0, \infty ) \times A. \) For \(u_1\in [0, \infty )\) we have, by Corollary 6.3 of the appendix and since \((C+u_1)^2/2 \;\leqslant \;C^2 + u_1^2\),

$$\begin{aligned} \phi (\xi , (u_1+C, u))&= \mathbb E_\mathcal W \Big [e^{-(C + u_1)^2 |X_{s_1, t_1}|^2/2} \prod _{i=2}^{N(\xi )}e^{-u_i^2 |X_{s_i, t_i}|^2/2} \Big ] \\&\;\geqslant \;\mathbb E_\mathcal W \Big [e^{-C^2 |X_{s_1, t_1}|^2} e^{-u_1^2|X_{s_1, t_1}|^2} \prod _{i=2}^{N(\xi )}e^{-u_i^2 |X_{s_i, t_i}|^2/2} \Big ] \\&\;\geqslant \;\frac{1}{(1+ 4C^2)^{d/2}} \phi (\xi , (\sqrt{2}u_1, u)) \end{aligned}$$

since \(\tau _1 \;\leqslant \;2\) as \(1\in J(\xi )\). We get (by using the fact that the density of \(\nu _\varepsilon |_{(0, \infty )}\) with respect to the Lebesgue measure is increasing)

$$\begin{aligned}&\mu \big (\chi _1 = C, \chi _j = \omega _j \, \forall j\ne 1\big ) \\&\quad = \frac{1}{F_\varepsilon (\xi )}\int _{(C, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (u_1, u)) \\&\quad \;\geqslant \;\frac{1}{F_\varepsilon (\xi )}\int _{(0, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (u_1+C, u)) \\&\quad \;\geqslant \;\frac{1}{(1+ 4C^2)^{d/2}} \frac{1}{F_\varepsilon (\xi )}\int _{(0, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (\sqrt{2}u_1, u)) \\&\quad = \frac{2^{-\gamma /2} }{(1+ 4C^2)^{d/2}} \frac{1}{F_\varepsilon (\xi )}\int _{(0, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (u_1, u)) \\&\quad =\frac{2^{-\gamma /2} }{(1+ 4C^2)^{d/2}} \frac{1}{F_\varepsilon (\xi )}\bigg (\int _{[0, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (u_1, u))\\&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad - \varepsilon \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi ', u) \bigg ) \end{aligned}$$

where \(\xi ' :=((s_i, t_i))_{2\;\leqslant \;i\;\leqslant \;N(\xi )}\) denotes the configuration \(\xi \) with the first interval removed. By Corollary 6.3, for any \(u_1\in [0, \infty ), u\in [0, \infty )^{{N(\xi )}-1}\)

$$\begin{aligned} \phi (\xi , (u_1, u)) \;\geqslant \;\frac{ \phi (\xi ', u)}{(1+u_1^2 \tau _1)^{d/2}} \;\geqslant \;\frac{ \phi (\xi ', u)}{(1+2u_1^2)^{d/2}} \end{aligned}$$

which yields

$$\begin{aligned}&\int _{[0, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (u_1, u))\\ {}&\quad \;\geqslant \;\int _{[0, \infty )} \frac{\nu _\varepsilon (\mathrm du_1)}{(1+2u_1^2)^{d/2}} \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi ', u) \\&\quad = h_\varepsilon (2) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi ', u). \end{aligned}$$

Hence, we get

$$\begin{aligned}&\mu \big (\chi _1 = C, \chi _j = \omega _j \,\forall j\ne 1\big ) \\&\quad \;\geqslant \;\frac{2^{-\gamma /2}}{(1+ 4C^2)^{d/2}} \Big (1- \varepsilon /h_\varepsilon (2) \Big ) \frac{1}{F_\varepsilon (\xi )} \int _{[0, \infty )}\nu _\varepsilon (\mathrm du_1) \int _{A} \nu _\varepsilon ^{\otimes ({N(\xi )}-1)}(\mathrm du) \phi (\xi , (u_1, u)) \\&\quad = p_\varepsilon (C) \mu \big (\chi _j = \omega _j \, \, \forall j\ne 1\big ) \end{aligned}$$

which yields the claim. \(\square \)

5 Finishing the Proof of Theorem 1.1

Proof of Theorem 1.1

For \(C>0\) let \(\zeta _{\alpha , T}^{\varepsilon , C}\) be a Poisson point process on \(\triangle \times \{0, C\}\) with intensity measure

$$\begin{aligned} \alpha \varepsilon g(t-s) \mathbbm {1}_{\{0< s< t < T\}} \mathrm ds \mathrm dt \, \mathcal P_{\varepsilon , C}(t-s, \mathrm du). \end{aligned}$$

Then Estimate (4.1) becomes

$$\begin{aligned} \sigma ^2(\alpha ) \;\leqslant \;\limsup _{T \rightarrow \infty } \mathbb E[\sigma _T^2(\zeta ^{\varepsilon , C}_{\alpha , T})] \;\leqslant \;\limsup _{T \rightarrow \infty } \mathbb E[\sigma _T^2(\tilde{\zeta }^{\varepsilon , C}_{\alpha , T})] \end{aligned}$$
(5.1)

where \(\tilde{\zeta }_{\alpha , T}^{\varepsilon , C}\) is the restriction of \(\zeta _{\alpha , T}^{\varepsilon , C}\) to \(\{(s, t): 1 \;\leqslant \;t-s \;\leqslant \;2\} \times \{C\}\) and thus a Poisson point process with intensity measure

$$\begin{aligned} \alpha \varepsilon p_\varepsilon (C) g(t-s) \mathbbm {1}_{\{1\;\leqslant \;t-s \;\leqslant \;2\}} \mathbbm {1}_{\{0< s< t < T\}} \mathrm ds \mathrm dt \, \delta _{C}(\mathrm du). \end{aligned}$$

By Lemma 3.4 we have

$$\begin{aligned} \limsup _{T \rightarrow \infty } \mathbb E[\sigma _T^2(\tilde{\zeta }^{\varepsilon , C}_{\alpha , T})] \;\leqslant \;\frac{1}{1 + \alpha \varepsilon p_\varepsilon (C) \int _1^2 r g(r)\, \mathrm dr} + \frac{1}{C^2 + 1}. \end{aligned}$$

Choosing \(C = \alpha ^{1/(2+d)}\) yields the claim.
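To see heuristically why this choice balances the two terms: since \(p_\varepsilon (C)\) decays like a constant multiple of \(C^{-d}\) as \(C \rightarrow \infty \), the two summands behave, up to constants depending on \(\varepsilon \), \(\gamma \), d and g, like

```latex
\frac{1}{1 + \alpha \varepsilon p_\varepsilon(C) \int_1^2 r g(r)\,\mathrm{d}r}
  \asymp \frac{C^{d}}{\alpha}
\qquad \text{and} \qquad
\frac{1}{C^{2} + 1} \asymp \frac{1}{C^{2}},
```

and equating the rates \(C^d/\alpha = C^{-2}\) gives \(C = \alpha ^{1/(2+d)}\), for which both terms are of order \(\alpha ^{-2/(2+d)}\).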

Remark 5.1

Replacing the arbitrarily chosen cut-off parameters 1 and 2 for the lifespan by general \(A<B\) and optimizing over \(A, B, C, \varepsilon \) leads to

$$\begin{aligned} \sigma ^2(\alpha ) \;\leqslant \;\inf _{0<A< B,\, 0<C,\, 0<\varepsilon } \bigg (1+ \frac{2^{-\gamma /2} \alpha }{(1 + 2BC^2)^{d/2}}\frac{\varepsilon h_0(B)}{\varepsilon + h_0(B)} \int _A^B r g(r) \mathrm dr\bigg )^{-1} + (1+ AC^2)^{-1} \end{aligned}$$

which leads to the same exponent \(-2/(2+d)\) for the decay in \(\alpha \). That this exponent does not depend on \(\gamma \) presumably stems from the estimate \((C+ r)^{\gamma -1} \;\geqslant \;r^{\gamma - 1}\) used in the proof of Lemma 4.1.

6 Appendix: Facts About Gaussian Measures

In the following, we collect a few facts about Gaussian measures (which were applied in a similar form already in [MV19]) in order to be self-contained.

Lemma 6.1

Let \(X = (X_1, \ldots , X_n)\) be a centered Gaussian vector with covariance matrix C. Then

$$\begin{aligned} \mathbb E\Big [\prod _{i=1}^{n} e^{-X_i^2/2}\Big ] = \frac{1}{\sqrt{\det (I_{n}+ C)}}. \end{aligned}$$

Proof

Write \(C = O \Lambda O^T\) with \(O \in \mathbb {R}^{n\times n}\) orthogonal and \(\Lambda \in \mathbb {R}^{n\times n}\) diagonal with non-negative entries \(\lambda _1, \ldots , \lambda _n\). Then \(C = A A^T\) with \(A :=O \Lambda ^{1/2}\). Let \(Z\) be a standard Gaussian vector on \(\mathbb {R}^n\). Then \(AZ \overset{d}{=}\ X\) and thus

$$\begin{aligned} \mathbb E\Big [\prod _{i=1}^{n} e^{-X_i^2/2}\Big ] = \mathbb E\Big [e^{-|X|^2/2}\Big ]&= \frac{1}{(2 \pi )^{n/2}} \int _{\mathbb {R}^n} e^{-|Az|^2/2} e^{-|z|^2/2} \mathrm dz \\&= \frac{1}{(2 \pi )^{n/2}} \int _{\mathbb {R}^n} e^{-\sum _{i=1}^n (1 + \lambda _i )z_i^2/2} \mathrm dz \\&= \prod _{i=1}^n (1+ \lambda _i)^{-1/2}. \end{aligned}$$

Since \(\prod _{i=1}^n (1+ \lambda _i) = \det (I + \Lambda )= \det (I + C)\) the statement follows. \(\square \)
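Lemma 6.1 is easy to check numerically; here is a small Monte Carlo sketch in dimension \(n = 2\) (the matrix entries are arbitrary illustrative choices):

```python
import math
import random

# Monte Carlo check of Lemma 6.1 for n = 2: with X = A Z for standard
# Gaussian Z, the covariance is C = A A^T and
#   E[exp(-(X_1^2 + X_2^2)/2)] = 1 / sqrt(det(I_2 + C)).
random.seed(7)

a11, a21, a22 = 1.0, 0.8, 0.6  # arbitrary illustrative lower-triangular A
C = [[a11 * a11, a11 * a21],
     [a21 * a11, a21 * a21 + a22 * a22]]  # C = A A^T

# exact right-hand side
det_I_plus_C = (1 + C[0][0]) * (1 + C[1][1]) - C[0][1] * C[1][0]
exact = 1.0 / math.sqrt(det_I_plus_C)

n_samples = 200_000
acc = 0.0
for _ in range(n_samples):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1, x2 = a11 * z1, a21 * z1 + a22 * z2  # X = A Z
    acc += math.exp(-(x1 * x1 + x2 * x2) / 2)
mc = acc / n_samples

print(abs(mc - exact))  # small Monte Carlo error
```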

Lemma 6.2

Let \(X = (X_1, \ldots , X_n)\) be a centered Gaussian vector with covariance matrix C and let \(C'\) be the covariance matrix of \((X_1, \ldots , X_{n-1})\). Then

$$\begin{aligned} \det (C) \;\leqslant \;\mathbb E[X_n^2] \det (C'). \end{aligned}$$

Proof

Write \(X_n = Y_n + Z_n\), where \(Y_n\) is the orthogonal projection in \(L^2\) of \(X_n\) onto the linear span of \(X_1, \ldots , X_{n-1}\) and \(Z_n :=X_n - Y_n\). Define \(Y_i :=X_i\) for \(1\;\leqslant \;i \;\leqslant \;n-1\). Let \(C'\) and \(C''\) be the covariance matrices of \((Y_1, \ldots , Y_{n-1})\) and \((Y_1, \ldots , Y_n)\), respectively. Then, by the Leibniz formula for the determinant

$$\begin{aligned} \det (C)&= \sum _{\sigma \in S_n} {\text {sgn}}(\sigma ) \prod _{i=1}^{n} \mathbb E[X_{i} X_{\sigma (i)}] \\&= \mathbb E[X_n^2]\sum _{\sigma : \sigma (n) = n} {\text {sgn}}(\sigma )\prod _{i=1}^{n-1} \mathbb E[X_{i} X_{\sigma (i)}] + \sum _{\sigma : \sigma (n)\ne n } {\text {sgn}}(\sigma ) \prod _{i=1}^{n} \mathbb E[X_{i} X_{\sigma (i)}] \\&= \big (\mathbb E[Y_n^2] + \mathbb E[Z_n^2]\big )\sum _{\sigma :\sigma (n) = n} {\text {sgn}}(\sigma )\prod _{i=1}^{n-1} \mathbb E[Y_{i} Y_{\sigma (i)}] + \sum _{\sigma : \sigma (n)\ne n } {\text {sgn}}(\sigma ) \prod _{i=1}^{n} \mathbb E[Y_{i} Y_{\sigma (i)}] \\&= \mathbb E[Z_n^2] \det (C') + \det (C''). \end{aligned}$$

Since \(Y_n\) lies in the linear span of \(Y_1, \ldots , Y_{n-1}\), the Gaussian vector \((Y_1, \ldots , Y_n)\) is degenerate, so \(\det (C'') = 0\). As moreover \(\mathbb E[Z_n^2] \;\leqslant \;\mathbb E[X_n^2]\) by orthogonality, the statement follows. \(\square \)
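The determinant identity \(\det (C) = \mathbb E[Z_n^2] \det (C')\) obtained in the proof can be checked exactly in a small example (a sketch; the lower-triangular matrix A is an arbitrary illustrative choice, for which the residual of \(X_3\) after projecting onto \({\text {span}}(X_1, X_2)\) is simply \(A_{33} Z_3\)):

```python
# Exact check of det(C) = E[Z_n^2] * det(C') for n = 3 and X = A Z with
# lower-triangular A, so that the residual variance is A[2][2]**2.
A = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.5, -1.0, 1.5]]

# covariance C = A A^T
C = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

E_Zn_sq = A[2][2] ** 2                               # variance of the residual
C_prime = [[C[0][0], C[0][1]], [C[1][0], C[1][1]]]   # covariance of (X_1, X_2)

assert abs(det3(C) - E_Zn_sq * det2(C_prime)) < 1e-9
```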

Corollary 6.3

For any centered Gaussian vector \(X = (X_1, \ldots , X_n)\) we have

$$\begin{aligned} \mathbb E\Big [\prod _{i=1}^{n} e^{-X_i^2/2}\Big ] \;\geqslant \;\frac{1}{\sqrt{1 + \mathbb E[X_n^2]}} \mathbb E\Big [\prod _{i=1}^{n-1} e^{-X_i^2/2}\Big ]. \end{aligned}$$

Proof

Let \(Z\) be a standard Gaussian vector on \(\mathbb {R}^n\), independent of X. Then the covariance matrix of \(Z+X\) is \(I_n+C\), where C is the covariance matrix of X. By the previous two lemmas,

$$\begin{aligned} \mathbb E\Big [\prod _{i=1}^{n} e^{-X_i^2/2}\Big ] = \frac{1}{\sqrt{\det (I_n+ C)}} \;\geqslant \;\frac{1}{\sqrt{(1 + \mathbb E[X_n^2])\det (I_{n-1}+ C')}}, \end{aligned}$$

where \(C'\) is the covariance matrix of \((X_1, \ldots , X_{n-1})\). The statement follows from

$$\begin{aligned} \mathbb E\Big [\prod _{i=1}^{n-1} e^{-X_i^2/2}\Big ] = \frac{1}{\sqrt{\det (I_{n-1}+ C')}}. \end{aligned}$$

\(\square \)
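In determinant form, the corollary amounts to the inequality \(\det (I_n + C) \;\leqslant \;(1+\mathbb E[X_n^2]) \det (I_{n-1}+C')\); a quick numerical sketch for \(n = 3\) with random covariance matrices (names ad hoc):

```python
import random

# Check det(I_3 + C) <= (1 + C[2][2]) * det(I_2 + C') for random C = A A^T,
# where C' is the top-left 2x2 block of C.
random.seed(3)

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

for _ in range(1000):
    A = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    C = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]
    IC = [[C[i][j] + (1.0 if i == j else 0.0) for j in range(3)]
          for i in range(3)]
    IC2 = [[IC[i][j] for j in range(2)] for i in range(2)]  # I_2 + C'
    assert det3(IC) <= (1.0 + C[2][2]) * det2(IC2) + 1e-12
```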