1 Introduction

The phenomenon of thermal ionization can be viewed as a positive temperature generalization of the photoelectric effect: an atom is exposed to thermal radiation emitted by a black body of temperature \(T > 0\). Then photons of energy \(\omega \), with density given by Planck’s law of black-body radiation,

$$\begin{aligned} \frac{1}{e^{\beta \omega }-1}, \end{aligned}$$

where \(\beta = T^{-1}\), will interact with the electrons of the atom. Since photons of arbitrarily high energy occur with positive probability, eventually a photon whose energy exceeds the ionization threshold of the atom will appear.

For the zero temperature situation (the photoelectric effect), qualitative and quantitative statements for various simplified models of atoms coupled to quantized fields can be found in [3, 11, 32]. If one replaces the atom by a finite-dimensional small subsystem, the model usually exhibits return to equilibrium, see for example [2]. Here the existence of a Gibbs state of the atom leads to the existence of an equilibrium state of the whole system. One is confronted with a similar mathematical problem, namely disproving the degeneracy of the eigenvalue zero. The most common technique for handling this is complex dilation, or its infinitesimal analogue, the positive commutator method, which goes back to work of Mourre (cf. [23]). A number of papers also use positive commutators in the context of return to equilibrium, see for example [10, 13, 15, 19].

A first rigorous treatment of thermal ionization was given by Fröhlich and Merkli in [7] and in a subsequent paper [8] by the same authors together with Sigal. The ionization appears mathematically as the absence of a time-invariant state in a suitable von Neumann algebra describing the complete system. The time-invariant states can be shown to be in one-to-one correspondence with the elements of the kernel of the Liouvillian. For the proof they used a global positive commutator method first established for Liouvillians in [19] and [6] with a conjugate operator on the field space as in [15]. Furthermore, they developed a new virial theorem.

A similar situation in a mathematical sense occurs if one considers a small finite-dimensional or confined system coupled to multiple reservoirs at different temperatures. Here one can prove as well the absence of time-invariant normal states, which translates into the absence of zero as an eigenvalue of the Liouvillian (cf. [5, 14, 20]). However, one can show the existence of so-called non-equilibrium steady states, which correspond to resonances of the Liouvillian (cf. [4, 14, 21]).

In [7] an abstract Hamiltonian, diagonalized with respect to its energy and possessing a single negative eigenvalue, was considered. For the proof, certain regularity assumptions on the interaction with respect to the energy variable were imposed. However, it is not clear how these assumptions translate to a more concrete setting. The reference [8], on the other hand, covers the case of a Schrödinger operator with a long-range potential, but with only finitely many modes coupled via the interaction. Moreover, only a compact interval of the continuous spectrum, bounded away from zero, was coupled.

The purpose of this paper is to transfer the results of [7, 8] to a more specific model, a Schrödinger operator with a compactly supported potential and finitely many eigenvalues. We consider a typical coupling term, and we have to impose a spatial cutoff. However, we do not need any restriction on the coupling to the continuous subspace of the atomic operator as in [8]. Moreover, in contrast to [7, 8], our result holds uniformly for bounded positive temperatures. This is achieved by considering a finite approximation of the so-called level shift operator. For the proof we use the same commutator on the space of the bosonic field as in [7, 8, 15], and we also reuse the original virial theorem of [7, 8]. On the other hand, we work with a different commutator on the atomic space, namely the generator of dilations in the space of scattering functions.

The organization of the paper is as follows. In Sect. 2 we introduce the model and define the Liouvillian. In addition, we state the precise form of our main result in Theorem 2.6 together with all the necessary assumptions. We also give a more detailed outline of the proof in Sect. 2.6. Section 3 recalls the abstract virial theorem of [7, 8] and some related technical methods. Then we verify the requirements of the abstract virial theorem in our setting in Sect. 4. We recall the definition of scattering states (Sect. 4.1) and use them for the concrete choice of the commutators. The major difficulty here is to check that the commutators with the interaction terms are bounded. This requires bounds involving the scattering functions and is elaborated in Sect. 5. The application of the virial theorem then yields a concrete version, Theorem 4.5. This is the first key element for the proof of the main theorem. The second one is the actual proof that the commutator, together with a suitable auxiliary term, is positive. This and the concluding proof of Theorem 2.6 can be found in Sect. 6.

2 Model and the Main Result

A model of a small subsystem interacting with a bosonic field at positive temperature is usually represented as a suitable \(C^*\)- or \(W^*\)-dynamical system on a tensor product of algebras describing the field and the atom, respectively. Note that by \(\mathcal {L}(\mathcal {H})\) we shall denote the space of all bounded linear operators on a Hilbert space \(\mathcal {H}\).

The field is defined by a Weyl algebra with infinitely many degrees of freedom. To implement black-body radiation at a specific temperature \(T > 0\) the GNS representation with respect to a KMS (equilibrium) state depending on T is considered. For the atom the whole algebra \(\mathcal {L}(\mathcal {H}_{\mathrm p})\) for the atomic Hilbert space \(\mathcal {H}_{\mathrm p}= L^2({\mathbb {R}}^3)\) is used and the GNS representation with respect to an arbitrary mixed reference state is performed. The combined representation of the whole system generates a \(W^*\)-algebra where the interacting dynamics can be defined by means of a Dyson series in a canonical way. Its self-adjoint generator—the Liouvillian—is of great interest when studying such systems. The details of this construction can be found in [7, 8, 24]. Furthermore, it is shown in [7, 8] that the absence of zero as an eigenvalue of the Liouvillian implies the absence of time-invariant states of the \(W^*\)-dynamical system.

In this paper we start directly with the definition of the Liouvillian, without repeating its derivation and the underlying algebraic construction. The only difference between our setting and that of [7, 8] is the coupling term, which can be realized as an approximation of the step function couplings considered in their work.

The purpose of this section is the definition of the concrete Liouvillian, the precise statement of the result—absence of zero as an eigenvalue—and the required conditions. At the end in Sect. 2.6, we explain the basic structure of the proof.

We start with the three ingredients of the model: the atom, the field and the interaction, which we first discuss separately, stating the required assumptions for each.

2.1 The Atom

For the atom we consider a Schrödinger operator on the Hilbert space \(\mathcal {H}_{\mathrm p}:= L^2({\mathbb {R}}^3)\),

$$\begin{aligned} H_{\mathrm p}= -\varDelta + V, \end{aligned}$$

where we assume that

  1. (H1)

    \(V \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\),

  2. (H2)

    if \(\psi \in L^2({\mathbb {R}}^3)\) satisfies

    $$\begin{aligned} \psi (x) = - \frac{1}{4\pi } \int \frac{\left| V(x) \right| ^{\frac{1}{2}} V(y)^{\frac{1}{2}} }{\left| x-y \right| } \psi (y) {\mathrm d}y , \quad \text { for a.e. } x \in {\mathbb {R}}^3, \end{aligned}$$
    (2.1)

    where \(V^{1/2} := |V|^{1/2} \text {sgn} V\), then \(\psi = 0\).

We note that the above assumptions have the following immediate consequences.

Proposition 2.1

If (H1) holds, then

  1. (a)

    \(H_{\mathrm p}\) is essentially self-adjoint on \(C_{\mathrm c}^\infty ({\mathbb {R}}^3)\) with domain \(\mathcal {D}(\varDelta )\),

  2. (b)

    \(H_{\mathrm p}\) has essential spectrum \([0,\infty )\),

  3. (c)

    the discrete spectrum of \(H_{\mathrm p}\), denoted by \(\sigma _{\mathrm d}(H_{\mathrm p})\), is finite,

  4. (d)

    \(H_{\mathrm p}\) has no positive eigenvalues,

  5. (e)

    \(H_{\mathrm p}\) has no singular spectrum.

If (H1) and (H2) hold, then

  1. (f)

    zero is not an eigenvalue of \(H_{\mathrm p}\).

Proof

(a) follows from Kato–Rellich since V is infinitesimally bounded with respect to \(-\varDelta \). (b) is shown for example in Example 6 of Section XIII.4 in [27]. (c) follows from [27, Theorem XIII.6]. (d) follows from the Kato-Agmon-Simon theorem ([27, Theorem XIII.58]). (e) follows from [27, Theorem XIII.21]. (f) Suppose \(\varphi \) were an eigenvector with eigenvalue zero. Then using the integral representation of the resolvent of the Laplacian, see for example [26, IX.7], we find \(\varphi (x) = - (4\pi )^{-1} \int _{\mathbb {R}^3} |x-y|^{-1} V(y) \varphi (y) dy\). From this it is straightforward to verify that \(\psi = |V|^{1/2} \varphi \) would be a nonzero solution of (2.1). \(\square \)

We shall denote by \(P_{\mathsf {ess} }\) the spectral projection onto the essential spectrum of \(H_{\mathrm p}\). As an immediate consequence of the above proposition we find that (H1) and (H2) imply \(P_{\mathsf {ess} }= P_\text {ac}\), where \(P_\text {ac} \) denotes the projection onto the absolutely continuous spectral subspace of \(H_{\mathrm p}\). As we are in a statistical physics setting, we have to consider density matrices as states of our system. They form a subset of the Hilbert-Schmidt operators, so we will work in a space isomorphic to the latter,

$$\begin{aligned} \mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}. \end{aligned}$$

Remark 2.2

More details about the condition (H2) and the behavior at the threshold of zero energy can be found in [16, 17]. In particular, we would like to point out that (H2) is satisfied for most of the potentials which we consider. To this end, assume (H1) and let \(V_\alpha = \alpha V\) for \(\alpha \ge 0\). Let \(K_\alpha \) be the operator with integral kernel

$$\begin{aligned} -|V_\alpha (x)|^{1/2} V_\alpha ^{1/2}(y) /(4\pi |x-y|). \end{aligned}$$

Clearly \(K_\alpha = \alpha K_1\), and it is straightforward to see that \(K_1\) is Hilbert-Schmidt and hence a compact operator. We conclude that for any \(k > 0\) there exist only finitely many \(\alpha \in [0,k]\) such that \(K_\alpha \) has eigenvalue one, or equivalently, such that (H2) is violated for \(V_\alpha \). Furthermore, we note that (H2) for \(V_\alpha \) is equivalent to the statement that the so-called Fredholm determinant \(\text {det}_2(1 - K_\alpha )\) is nonzero, see [25]. For the definition of \(\text {det}_2\) we refer the reader to [27]. Now \(\alpha \mapsto \text {det}_2(1 - K_\alpha )\) is an entire analytic function (cf. [29]) and \(\text {det}_2(1) \ne 0\), which could be used to obtain a second argument why potentials which do not satisfy (H2) are rather rare. Finally, we note that condition (H2) states precisely that zero energy is not an exceptional point in the terminology of [25].
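To substantiate the Hilbert-Schmidt claim, one may estimate (a sketch, with \(B \subset {\mathbb {R}}^3\) any ball containing the support of V)

$$\begin{aligned} \left\| K_1 \right\| _{\mathrm {HS}}^2 = \frac{1}{16 \pi ^2} \int \int \frac{\left| V(x) \right| \left| V(y) \right| }{\left| x-y \right| ^2} {\mathrm d}x {\mathrm d}y \le \frac{\left\| V \right\| _\infty ^2}{16 \pi ^2} \int _B \int _B \frac{{\mathrm d}x {\mathrm d}y}{\left| x-y \right| ^2} < \infty , \end{aligned}$$

since \(\int _B \left| x-y \right| ^{-2} {\mathrm d}x\) is bounded uniformly in \(y \in B\). The same bound applies verbatim to the operators \(L_\kappa \) of Sect. 4, because \(|e^{\mathrm {i}\kappa \left| x-y \right| }| = 1\).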

Remark 2.3

For the result and our proof one actually needs only finitely many derivatives of V. Therefore, one could weaken (H1). However, it seems somewhat tedious to keep track in the proof of exactly which order of derivatives is required.

2.2 The Quantized Field

The quantized field will be described by operators on Fock spaces. Let \(\mathfrak {F}(\mathfrak {h})\) denote the bosonic Fock space over a Hilbert space \(\mathfrak {h}\), that is,

$$\begin{aligned} \mathfrak {F}(\mathfrak {h}) := \bigoplus _{n=0}^\infty \mathfrak {h}^{\otimes _{\mathrm s}n}, \end{aligned}$$

where \(\otimes _{\mathrm s}n\) denotes the n-fold symmetric tensor product of Hilbert spaces, and \(\mathfrak {h}^{\otimes _{\mathrm s}0} := {\mathbb {C}}\). For \(\psi \in \mathfrak {F}(\mathfrak {h})\), we write \(\psi _n\) for the n-th element in the direct sum and we use the notation \(\psi = (\psi _0, \psi _1, \psi _2, \ldots )\). The vacuum vector is defined as \(\varOmega := (1,0,0,\ldots )\). For a dense subspace \(\mathfrak {d} \subseteq \mathfrak {h}\) we define the dense space of finitely many particles in \(\mathfrak {F}(\mathfrak {h})\) by

$$\begin{aligned} \mathfrak {F}_{{\text {fin}}}(\mathfrak {d}) :=&\{ (\psi _0, \psi _1, \psi _2, \dots ) \in \mathfrak {F}(\mathfrak {h}) :\psi _n \in \mathfrak {d}^{\otimes _{\mathrm s}n} \text { for all } n \in {\mathbb {N}}_0 \\&\text { and there exists } N \in {\mathbb {N}}_0 :\psi _n = 0 \text { for } n \ge N \}, \end{aligned}$$

where here \(\otimes _{\mathrm s}n\) represents the n-fold symmetric tensor product of vector spaces. As in the case of the atomic degrees of freedom we will work on the space of density matrices and thus use the space

$$\begin{aligned} \mathfrak {F}:= \mathfrak {F}(L^2({\mathbb {R}}\times \mathbb {S}^2 )) \cong \mathfrak {F}(L^2({\mathbb {R}}^3)) \otimes \mathfrak {F}(L^2({\mathbb {R}}^3)) , \end{aligned}$$
(2.2)

where \(L^2({\mathbb {R}}\times \mathbb {S}^2 ) := L^2 ({\mathbb {R}}\times \mathbb {S}^2,{\mathrm d}u \times {\mathrm d}\varSigma )\) and \({\mathrm d}\varSigma \) denotes the standard spherical measure on \(\mathbb {S}^2\). We note that the canonical identification (2.2), outlined in Remark 2.5 below, is referred to as ‘gluing’ and was first introduced by Jakšić and Pillet in [13]. For \(\psi \in \mathfrak {F}\) notice that \(\psi _n\) can be understood as an \(L^2\) function in n symmetric variables \((u,\varSigma ) \in {\mathbb {R}}\times \mathbb {S}^2\).

Let \(\mathcal {H}\) be a Hilbert space. On \(\mathcal {H}\otimes \mathfrak {F}\) we define a so-called generalized annihilation operator for a function \(F \in L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}( \mathcal {H}))\) by

$$\begin{aligned}&(a(F) \psi )_n(u_1, \varSigma _1, \ldots , u_n, \varSigma _n ) \\&\quad := \sqrt{n+1} \int F(u,\varSigma )^* \psi _{n+1}(u,\varSigma , u_1, \varSigma _1, \ldots , u_n, \varSigma _n ) {\mathrm d}(u,\varSigma ) . \end{aligned}$$

Note that \(a(F) \varOmega = 0\). The generalized creation operator \(a^*(F)\) is defined as the adjoint of a(F). By definition \(F \mapsto a^*(F)\) is linear whereas \(F \mapsto a(F)\) is anti-linear. In the scalar case where \(\mathcal {H}= {\mathbb {C}}\), we have \(\mathcal {H}\otimes \mathfrak {F}= \mathfrak {F}\) and one obtains the usual creation and annihilation operators satisfying the canonical commutation relations, for \(f,g \in L^2({\mathbb {R}}\times \mathbb {S}^2)\),

$$\begin{aligned} {[a(f),a^*(g)]} = \left\langle f,g \right\rangle , \qquad [a(f),a(g)] = 0, \qquad [a^*(f),a^*(g)] = 0. \end{aligned}$$

We also define the field operators as \(\varPhi (F) = a(F) + a^*(F)\).
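As a simple illustration of these definitions in the scalar case, acting on the vacuum one finds

$$\begin{aligned} a^*(g) \varOmega = (0, g, 0, \ldots ), \qquad a(f) a^*(g) \varOmega = (\left\langle f,g \right\rangle , 0, 0, \ldots ) = \left\langle f,g \right\rangle \varOmega , \end{aligned}$$

so that \([a(f), a^*(g)] \varOmega = \left\langle f,g \right\rangle \varOmega \), since \(a(f) \varOmega = 0\).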

For a measurable function \(M :{\mathbb {R}}\times \mathbb {S}^2 \rightarrow {\mathbb {R}}\) we introduce the second quantization \(\mathsf {d} \varGamma (M)\), which is a self-adjoint operator on \(\mathfrak {F}\), given for \(\psi \in \mathfrak {F}\) by

$$\begin{aligned} (\mathsf {d} \varGamma (M) \psi )_0&:= 0, \\ (\mathsf {d} \varGamma (M) \psi )_n(u_1, \varSigma _1, \ldots , u_n,\varSigma _n)&:= \left( \sum _{i=1}^n M(u_i,\varSigma _i) \right) \psi _n(u_1, \varSigma _1, \ldots , u_n,\varSigma _n), \qquad n \in {\mathbb {N}}. \end{aligned}$$

In particular we will use the number operator,

$$\begin{aligned} N_{{\text {f}}}:= \mathsf {d} \varGamma (1). \end{aligned}$$

2.3 Liouvillian with Interaction

We assume an interaction term with a smooth spatial cutoff. For \((\omega ,\varSigma ) \in {\mathbb {R}}_+ \times \mathbb {S}^2\), where \({\mathbb {R}}_+ := (0,\infty )\), we define a bounded multiplication operator on \(\mathcal {H}_{\mathrm p}\) by

$$\begin{aligned} G(\omega ,\varSigma )(x) = \kappa (\omega ) \chi (x) {\tilde{G}}(\omega ,\varSigma )(x), \qquad x \in {\mathbb {R}}^3, \end{aligned}$$
(2.3)

where \(\kappa \) is a function on \({\mathbb {R}}_+\), \(\chi \in \mathcal {S}({\mathbb {R}}^3)\) (the space of Schwartz functions), and, for each \((\omega ,\varSigma )\), \({\tilde{G}}(\omega ,\varSigma )\) is a function on \({\mathbb {R}}^3\); these are assumed to satisfy the following conditions.

  1. (I1)

    Spatial cutoff: For all \(n \in \{0,1,2,3\}\) and \(\alpha \in {\mathbb {N}}_0^3\) the partial derivatives \( \partial _x^\alpha \partial ^n_\omega {\tilde{G}}\) exist and are continuous on \({\mathbb {R}}_+ \times \mathbb {S}^2 \times {\mathbb {R}}^3\), and there exists a polynomial P and an \(M \in {\mathbb {N}}_0\) such that for all \((\omega ,\varSigma ,x) \in {\mathbb {R}}_+ \times \mathbb {S}^2 \times {\mathbb {R}}^3\)

    $$\begin{aligned} \left| \partial _x^\alpha \partial ^n_\omega {\tilde{G}}(\omega ,\varSigma )(x) \right| \le P(\omega ) \langle x \rangle ^M , \end{aligned}$$
    (2.4)

    where \(\left\langle x \right\rangle := (1 + x^2)^{1/2}\).

  2. (I2)

    UV cutoff: \(\kappa \) decays faster than any polynomial, that is, for all \(n \in {\mathbb {N}}\),

    $$\begin{aligned} \sup _{\omega \ge 1} \omega ^n \left| \kappa (\omega ) \right| < \infty . \end{aligned}$$
  3. (I3)

    Regularity and infrared behavior: \(\kappa \in C^3({\mathbb {R}}_+)\) and one of the following two properties holds:

    1. (i)

      there exist \(k, C \in (0,\infty )\), \(p > 2\) such that

      $$\begin{aligned} \left| \partial ^j_\omega \kappa (\omega ) \right| \le C \omega ^{p-j}, \quad \omega \in (0, k) , \quad j = 0,\ldots ,3 , \end{aligned}$$
    2. (ii)

      there exist \(J \in {\mathbb {N}}_0\) and \(\kappa _0 \in C^s([0,\infty ))\), with \(s = \max \{0,3-J\}\), such that

      $$\begin{aligned} \kappa (\omega ) = \omega ^{-\frac{1}{2} + J } \kappa _0(\omega ), \quad \omega >0 , \end{aligned}$$

      and there exists an extension \(\tilde{G}_0\) of \(\tilde{G}\) to \([0,\infty ) \times \mathbb {S}^2 \times {\mathbb {R}}^3\) such that for all \(j \in \{0,\ldots ,s\}\) and all \(\alpha \in {\mathbb {N}}_0^3\) the partial derivatives \( \partial _x^\alpha \partial ^j_\omega {\tilde{G}}_0\) exist, are continuous, satisfy (2.4) for all \((\omega ,\varSigma ,x) \in [0,\infty ) \times \mathbb {S}^2 \times {\mathbb {R}}^3\), and

      $$\begin{aligned}&\partial _{\omega }^j (\kappa _0(\omega ) \chi {\tilde{G}}_0(\omega ,\varSigma ))(x) |_{\omega =0} = (-1)^{j+J+1} \partial _{\omega }^j \overline{ (\kappa _0(\omega ) \chi {\tilde{G}}_0(\omega ,\varSigma ))(x)}|_{\omega =0} . \end{aligned}$$

Remark 2.4

The condition (I1) implies that there is a constant C such that \(\left\| G(\omega ,\varSigma ) \right\| _{\mathcal {L}(\mathcal {H}_{\mathrm p})} \le C \left| \kappa (\omega ) \right| \) holds for all \((\omega ,\varSigma ) \in {\mathbb {R}}_+ \times \mathbb {S}^2\). In particular, G has the same decay behavior as \(\kappa \) for \(\omega \rightarrow 0\) and \(\omega \rightarrow \infty \), as described in (I2) and (I3).

Let \(\beta > 0\) be the inverse temperature and let

$$\begin{aligned} \rho _\beta (\omega ) := \frac{1}{e^{ \beta \omega } - 1 } \end{aligned}$$

be the density of black-body radiation given by Planck’s law. To describe the interaction in the positive temperature setting it is convenient to introduce a map \(\tau _\beta \) as follows. For a function \(F :{\mathbb {R}}_+ \times \mathbb {S}^2 \rightarrow \mathcal {L}(\mathcal {H}_{\mathrm p})\) we define a function \(\tau _\beta F :( \mathbb {R}\setminus \{ 0 \} ) \times \mathbb {S}^2 \rightarrow \mathcal {L}(\mathcal {H}_{\mathrm p})\) by

$$\begin{aligned} (\tau _\beta F)(u, \varSigma )&:= {\left\{ \begin{array}{ll} u \sqrt{1+\rho _\beta (u)} F(u, \varSigma ), &{}\quad u > 0, \\ u \sqrt{\rho _\beta (-u)} F(-u ,\varSigma )^* , &{}\quad u < 0 . \end{array}\right. } \end{aligned}$$
(2.5)

It is straightforward to verify that \(\tau _\beta \) maps the following spaces into each other

$$\begin{aligned} \tau _\beta :L^2({\mathbb {R}}_+&\times \mathbb {S}^2, (\omega ^2 + \omega ) {\mathrm d}\omega {\mathrm d}\varSigma , \mathcal {L}(\mathcal {H}_{\mathrm p}) ) \rightarrow L^2({\mathbb {R}}\times \mathbb {S}^2, {\mathrm d}\omega {\mathrm d}\varSigma , \mathcal {L}(\mathcal {H}_{\mathrm p})) , \end{aligned}$$

where we adopt the convention of denoting the restriction of \(\tau _\beta \) to this space again by the same symbol. Note that the measure \((\omega ^2 + \omega ) {\mathrm d}\omega {\mathrm d}\varSigma \) originates from the identification

$$\begin{aligned} L^2({\mathbb {R}}_+ \times \mathbb {S}^2, (\omega ^2 + \omega ) {\mathrm d}\omega {\mathrm d}\varSigma ) \cong L^2({\mathbb {R}}^3, (1+ \left| k \right| ^{-1}) {\mathrm d}^3 k ), \end{aligned}$$

where all elements f in the space on the right-hand side satisfy \(\sqrt{\rho _\beta ( \left| \cdot \right| ) } f, \sqrt{1+\rho _\beta ( \left| \cdot \right| )} f \in L^2({\mathbb {R}}^3)\).
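To verify the stated mapping property, substitute (2.5) and change variables \(u \mapsto -u\) on the negative half-line (a sketch of the computation):

$$\begin{aligned} \int _{{\mathbb {R}}\times \mathbb {S}^2} \left\| (\tau _\beta F)(u,\varSigma ) \right\| ^2 {\mathrm d}u {\mathrm d}\varSigma = \int _{{\mathbb {R}}_+ \times \mathbb {S}^2} \omega ^2 \left( 1 + 2 \rho _\beta (\omega ) \right) \left\| F(\omega ,\varSigma ) \right\| ^2 {\mathrm d}\omega {\mathrm d}\varSigma , \end{aligned}$$

and \(\omega ^2 (1 + 2\rho _\beta (\omega )) = \omega ^2 \coth (\beta \omega /2) \le \max \{1, 2/\beta \} (\omega ^2 + \omega )\), since \(\coth (x) \le 1 + 1/x\) for \(x > 0\).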

Let \(\mathcal {C}_{\mathrm p}\) be the complex conjugation on \(\mathcal {H}_{\mathrm p}\) given by \(\mathcal {C}_{\mathrm p}\psi (x) := \overline{\psi (x)}\), and for an operator \(T \in \mathcal {L}(\mathcal {H}_{\mathrm p})\) we define \(\overline{T} := \mathcal {C}_{\mathrm p}T \mathcal {C}_{\mathrm p}\).

Let

$$\begin{aligned} {\widetilde{\mathcal {D}}}:= C_{\mathrm c}^\infty ({\mathbb {R}}^3) \otimes C_{\mathrm c}^\infty ({\mathbb {R}}^3) \otimes \mathfrak {F}_{{\text {fin}}}(C_{\mathrm c}^\infty ({\mathbb {R}}\times \mathbb {S}^2)), \end{aligned}$$

which is a dense subspace of the composite Hilbert space

$$\begin{aligned} \mathcal {H}:= \mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}\otimes \mathfrak {F}. \end{aligned}$$

On \({\widetilde{\mathcal {D}}}\) we define for \(\lambda \in {\mathbb {R}}\) the Liouvillian by

$$\begin{aligned} L_\lambda := (H_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}- {\text {Id}}_{\mathrm p}\otimes H_{\mathrm p}) \otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes \mathsf {d} \varGamma ({u}) + \lambda W , \end{aligned}$$
(2.6)

where \(W := \varPhi (I)\), and

$$\begin{aligned} I(u,\varSigma )&:= I^{(\mathsf {l} )}(u,\varSigma ) \otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes I^{(\mathsf {r} )}(u,\varSigma ), \\ I^{(\mathsf {l} )}(u, \varSigma )&:= \tau _\beta (G)(u,\varSigma ), \qquad I^{(\mathsf {r} )}(u, \varSigma ) := - e^{-\beta u /2} \tau _\beta ({\overline{G}}^*)(u,\varSigma ). \end{aligned}$$

Note that by (I1)–(I3), \(G \in L^2({\mathbb {R}}_+ \times \mathbb {S}^2, (\omega ^2 + \omega ) {\mathrm d}\omega {\mathrm d}\varSigma , \mathcal {L}(\mathcal {H}_{\mathrm p}) )\) and therefore, \(I \in L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}))\) and the expression \(\varPhi (I)\) is well-defined. It will be shown below (Proposition 4.3) that \(L_\lambda \) is indeed essentially self-adjoint.
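For instance, by the standard relative bounds for creation and annihilation operators (stated here as a sketch, with \(\left\| I \right\| _2\) the norm of I in the space above and \(N_{{\text {f}}}\) acting on the field factor), one has on vectors with finitely many particles

$$\begin{aligned} \left\| a(I) \psi \right\| \le \left\| I \right\| _2 \left\| N_{{\text {f}}}^{1/2} \psi \right\| , \qquad \left\| a^*(I) \psi \right\| \le \left\| I \right\| _2 \left\| (N_{{\text {f}}}+1)^{1/2} \psi \right\| , \end{aligned}$$

so that \(\left\| \varPhi (I) \psi \right\| \le 2 \left\| I \right\| _2 \left\| (N_{{\text {f}}}+1)^{1/2} \psi \right\| \); in particular, W is well-defined on \({\widetilde{\mathcal {D}}}\).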

Remark 2.5

Let us give a unitarily equivalent definition of the Liouvillian (2.6), which uses the usual notations from physics and which does not involve the ‘gluing’ construction. Let \(c^*(k)\) and c(k), \(k \in {\mathbb {R}}^3\), denote the usual creation and annihilation operator-valued distributions of \(\mathfrak {F}(L^2({\mathbb {R}}^3))\). Let the free field energy operator on \(\mathfrak {F}(L^2({\mathbb {R}}^3))\) be given by \(H_{{\text {f}}}= \int |k| c^*(k) c(k) dk\). For the coupling function introduced in (2.3) we define \(\widehat{G}(\omega \varSigma ) = G(\omega ,\varSigma )\) for all \((\omega ,\varSigma ) \in \mathbb {R}_+ \times \mathbb {S}^2\). It follows from (I1)–(I3) that \(\widehat{G} \in L^2({\mathbb {R}}^3, (1+ \left| k \right| ^{-1}) {\mathrm d}k , \mathcal {L}( \mathcal {H}_{\mathrm p}))\). We consider the following operator in \(\mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}\otimes \mathfrak {F}(L^2({\mathbb {R}}^3)) \otimes \mathfrak {F}(L^2({\mathbb {R}}^3)) \),

$$\begin{aligned} \widehat{L}_\lambda&= (H_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}- {\text {Id}}_{\mathrm p}\otimes H_{\mathrm p}) \otimes {\text {Id}}_{{\text {f}}}\otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes (H_{{\text {f}}}\otimes {\text {Id}}_{{\text {f}}}- {\text {Id}}_{{\text {f}}}\otimes H_{{\text {f}}})\\&\quad + \lambda \int {\mathrm d}k \bigg \{ \left( \sqrt{ 1 + \rho _\beta (|k|)} \widehat{G}(k) \otimes {\text {Id}}_{\mathrm p}- \sqrt{\rho _\beta (|k|)} {\text {Id}}_{\mathrm p}\otimes \overline{\widehat{G}}^*(k) \right) \otimes c^*(k) \otimes {\text {Id}}_{{\text {f}}}\\&\quad + \left( \sqrt{ 1 + \rho _\beta (|k|) } \widehat{G}^*(k) \otimes {\text {Id}}_{\mathrm p}- \sqrt{\rho _\beta (|k|)} {\text {Id}}_{\mathrm p}\otimes \overline{\widehat{G}}(k) \right) \otimes c(k) \otimes {\text {Id}}_{{\text {f}}}\\&\quad + \left( \sqrt{ \rho _\beta (|k|)} \widehat{G}^*(k) \otimes {\text {Id}}_{\mathrm p}- \sqrt{ 1+ \rho _\beta (|k|)} {\text {Id}}_{\mathrm p}\otimes \overline{\widehat{G}}(k) \right) \otimes {\text {Id}}_{{\text {f}}}\otimes c^*(k) \\&\quad + \left( \sqrt{ \rho _\beta (|k|)} \widehat{G}(k) \otimes {\text {Id}}_{\mathrm p}- \sqrt{1 + \rho _\beta (|k|)} {\text {Id}}_{\mathrm p}\otimes \overline{\widehat{G}}^*(k) \right) \otimes {\text {Id}}_{{\text {f}}}\otimes c(k) \bigg \} , \end{aligned}$$

which is defined on the dense subspace \(\hat{ \mathcal {D} } := C_{\mathrm c}^\infty ({\mathbb {R}}^3) \otimes C_{\mathrm c}^\infty ({\mathbb {R}}^3) \otimes \mathfrak {F}_{{\text {fin}}}( \mathcal {D}(|\cdot |)) \otimes \mathfrak {F}_{{\text {fin}}}(\mathcal {D}(|\cdot |))\). One can show that \(\widehat{L}_\lambda \) is unitarily equivalent to (2.6). To this end, we define the unitary transformation

$$\begin{aligned} \psi :L^2(\mathbb {R}^3) \longrightarrow L^2(\mathbb {R}_+ \times \mathbb {S}^2 )\end{aligned}$$

with \(\psi (f)(\omega ,\varSigma ) := \omega f(\omega \varSigma )\), inducing the unitary transformation of Fock spaces \(\varGamma (\psi )\), see for example [26, Section X],

$$\begin{aligned}&\varGamma (\psi ) :\mathfrak {F}(L^2({\mathbb {R}}^3)) \longrightarrow \mathfrak {F}(L^2(\mathbb {R}_+ \times \mathbb {S}^2 ) ) , \end{aligned}$$

and the unitary transformation

$$\begin{aligned}&\tau :L^2(\mathbb {R}_+ \times \mathbb {S}^2 ) \oplus L^2(\mathbb {R}_+ \times \mathbb {S}^2) \longrightarrow L^2(\mathbb {R}\times \mathbb {S}^2), \\&\tau (f_1,f_2)(u, \varSigma ) := {\left\{ \begin{array}{ll} f_1(u, \varSigma ), &{}\quad u > 0,\\ - f_2(-u,\varSigma ), &{}\quad u < 0, \end{array}\right. } \end{aligned}$$

inducing the unitary transformation of Fock spaces \(\varGamma (\tau )\). Furthermore, let \(V_\mathfrak {h}\) denote the canonical unitary map

$$\begin{aligned} V_\mathfrak {h} :\mathfrak {F}(\mathfrak {h}) \otimes \mathfrak {F}(\mathfrak {h}) \longrightarrow \mathfrak {F}(\mathfrak {h} \oplus \mathfrak {h} ) , \end{aligned}$$

characterized by mapping the tensor product of the vacua to the vacuum and satisfying \(V_\mathfrak {h} (b(f)\otimes {\text {Id}}+ {\text {Id}}\otimes b(g) ) V_\mathfrak {h}^* = b(f,g)\), here \(b(\cdot )\) stands for the usual annihilation operator on the corresponding Fock spaces. As a consequence of the definition it follows that \(U := \varGamma (\tau ) \circ V_{ L^2(\mathbb {R}_+ \times \mathbb {S}^2 ) } \circ ( \varGamma (\psi ) \otimes \varGamma (\psi ) ) \) is a unitary transformation

$$\begin{aligned} U :\mathfrak {F}(L^2({\mathbb {R}}^3)) \otimes \mathfrak {F}(L^2({\mathbb {R}}^3)) \longrightarrow \mathfrak {F}(L^2({\mathbb {R}}\times \mathbb {S}^2 )) , \end{aligned}$$

which satisfies the property

$$\begin{aligned} a(u ,\varSigma ) = U \left( \mathbb {1}_{u > 0} u c (u \varSigma ) \otimes {\text {Id}}+ {\text {Id}}\otimes \mathbb {1}_{u < 0} u c(- u \varSigma ) \right) U^* , \quad (u,\varSigma ) \in {\mathbb {R}}\times \mathbb {S}^2 , \end{aligned}$$
(2.7)

where \(a(u,\varSigma )\) denotes the usual annihilation operator-valued distribution of \(\mathfrak {F}(L^2( \mathbb {R}\times \mathbb {S}^2 ) ) \). Using (2.7) it is straightforward to verify that \( L_\lambda = U \widehat{L}_\lambda U^*\) on \({\widetilde{\mathcal {D}}}\).
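For example, the unitarity of \(\psi \) is simply the polar-coordinate substitution \(k = \omega \varSigma \):

$$\begin{aligned} \int _{{\mathbb {R}}^3} \left| f(k) \right| ^2 {\mathrm d}k = \int _0^\infty \int _{\mathbb {S}^2} \omega ^2 \left| f(\omega \varSigma ) \right| ^2 {\mathrm d}\varSigma {\mathrm d}\omega = \int _0^\infty \int _{\mathbb {S}^2} \left| \psi (f)(\omega ,\varSigma ) \right| ^2 {\mathrm d}\varSigma {\mathrm d}\omega . \end{aligned}$$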

2.4 Main Result

For the proof of our main result we need an additional assumption: the instability of the eigenvalues should be visible at second order in perturbation theory with respect to the coupling constant. The corresponding second-order term is also called the level shift operator, and the associated positivity assumption the Fermi Golden Rule condition. In Sect. 2.5 an example is provided where this condition is satisfied.

Fermi Golden Rule Condition

For \(E \in \sigma _{\mathrm d}(H_{\mathrm p})\) let \(p_E := \mathbb {1}_{\{E\}}(H_{\mathrm p})\) be the spectral projection corresponding to the eigenvalue E. For \(\varepsilon > 0\) and \(E \in \sigma _{\mathrm d}(H_{\mathrm p})\) let \(\gamma _\beta (E,\varepsilon )\) be the largest number such that

$$\begin{aligned} p_E \left( F^{(1)}_\beta (E,\varepsilon ) + F^{(2)}_\beta (E,\varepsilon ) \right) p_E&\ge \gamma _\beta (E,\varepsilon ) p_E, \end{aligned}$$

where

$$\begin{aligned} F^{(1)}_\beta (E,\varepsilon )&:= \int _{0}^\infty \int _{\mathbb {S}^2} \frac{\omega ^2}{e^ {\beta \omega } - 1} G(\omega ,\varSigma ) \frac{ P_{\mathsf {ess} }}{(H_{\mathrm p}- E - \omega )^2 + \varepsilon ^2} G(\omega ,\varSigma )^* {\mathrm d}\varSigma {\mathrm d}\omega , \\ F^{(2)}_\beta (E,\varepsilon )&:= \int _0^{\infty } \int _{\mathbb {S}^2} \frac{\omega ^2}{1 - e^ {-\beta \omega } } G(\omega ,\varSigma )^* \frac{ P_{\mathsf {ess} }}{(H_{\mathrm p}- E + \omega )^2 + \varepsilon ^2} G(\omega ,\varSigma ) {\mathrm d}\varSigma {\mathrm d}\omega , \end{aligned}$$

and we recall that \(P_{\mathsf {ess} }\) denotes the spectral projection to the essential spectrum of \(H_{\mathrm p}\). Furthermore we set \(\gamma _\beta (\varepsilon ) := \inf _{E \in \sigma _{\mathrm d}(H_{\mathrm p})} \gamma _\beta (E,\varepsilon )\).

  1. (F)

    By the Fermi Golden Rule Condition for \(\beta \) we mean that there exists an \(\varepsilon > 0\) such that \(\gamma _\beta (\varepsilon ) > 0\).

Notice that \(\gamma _\beta \) might depend on \(\beta \). In particular, this is the case if, as in [7, 8], (F) is verified using only the term \(\varepsilon F^{(1)}_\beta (E,\varepsilon )\) (for \(\varepsilon \rightarrow 0\)), which decays exponentially to zero as \(\beta \rightarrow \infty \). However, one can also obtain results uniform in \(\beta \) for \(\beta \ge \beta _0 > 0\) (low temperature) by proving that \(F^{(2)}_\beta (E,\varepsilon )\) is positive for a fixed \(\varepsilon > 0\); this will be done in the next section, see Corollary 2.8.
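This difference is already visible in the thermal weights themselves: \(\rho _\beta (\omega ) \le 2 e^{-\beta \omega }\) once \(\beta \omega \ge \ln 2\), while \((1-e^{-\beta \omega })^{-1} \ge 1\) for all \(\beta \). The following small script is a purely numerical illustration of this point (not part of any proof; the sample values of \(\beta \) and \(\omega \) are arbitrary).

```python
import math

# Thermal weight entering F^(1): omega^2 / (exp(beta*omega) - 1);
# it decays exponentially as beta -> infinity.
def weight_f1(beta: float, omega: float) -> float:
    return omega**2 / (math.exp(beta * omega) - 1.0)

# Thermal weight entering F^(2): omega^2 / (1 - exp(-beta*omega));
# it is bounded below by omega^2 uniformly in beta.
def weight_f2(beta: float, omega: float) -> float:
    return omega**2 / (1.0 - math.exp(-beta * omega))

omega = 1.0  # sample photon energy (arbitrary units)
for beta in (0.5, 2.0, 10.0, 40.0):
    print(f"beta={beta:5.1f}  "
          f"F1 weight={weight_f1(beta, omega):.3e}  "
          f"F2 weight={weight_f2(beta, omega):.3e}")
```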

Let us now state the main result of this paper.

Theorem 2.6

Assume that (H1), (H2) and (I1)–(I3) hold. Let \(\beta _0 > 0\) and \(\varepsilon > 0\). Then there exists a constant \( C > 0\) such that the following holds. If \(\beta \ge \beta _0\) and \(0< \left| \lambda \right| < C \min \{ 1 , \gamma _\beta ^2(\varepsilon )\}\) the operator \(L_\lambda \) given in (2.6) does not have zero as an eigenvalue.

Remark 2.7

Our result implies the absence of a zero eigenstate in the Hilbert space realized by the class of Hilbert-Schmidt operators. However, it does not exclude the existence of resonances, which lead to so-called non-equilibrium steady states (NESS), as described in [4, 14, 21].

2.5 Application

In the following we present an example of a QED system with a linear coupling term (Nelson Model) where the conditions for the main theorem are satisfied. In particular, one can verify the Fermi Golden Rule condition (F) in this case. We summarize this in the following corollary to Theorem 2.6.

Corollary 2.8

Assume that (H1) and (H2) hold. Let

$$\begin{aligned} G(\omega ,\varSigma )(x) = \kappa (\omega ) e^{ \mathrm {i}\omega \varSigma x } \chi ( x), \qquad \text {for all } \quad x \in {\mathbb {R}}^3, \, (\omega , \varSigma ) \in \mathbb {R}_+ \times \mathbb {S}^2 , \end{aligned}$$
(2.8)

where \(\chi \in \mathcal {S}({\mathbb {R}}^3)\) is nonzero, and \(\kappa \) is a nonzero function on \({\mathbb {R}}_+\) satisfying (I2) and one of the following two conditions:

  1. (a)

    part (i) of (I3) holds,

  2. (b)

    \(\chi \) is real-valued and there exist \(J \in {\mathbb {N}}_0\) and a real-valued \(\kappa _0 \in C^s([0,\infty ))\), \(s = \max \{0,3-J\}\), satisfying

    $$\begin{aligned} \frac{{\mathrm d}^j}{{\mathrm d}\omega ^j}\bigg |_{\omega =0} \kappa _0(\omega ) = 0, \qquad 1 \le j \le 3-J,~j \text { odd}, \end{aligned}$$

    such that

    $$\begin{aligned} \kappa (\omega ) = \omega ^{-\frac{1}{2} + J} \kappa _0(\omega ) \end{aligned}$$
    (2.9)

    for all \(\omega \ge 0\).

Then for any \(\beta _0 > 0\) there exists a \(\lambda _0 > 0 \), such that whenever \(0< \left| \lambda \right| < \lambda _0\) and \(\beta \ge \beta _0\) the operator \(L_\lambda \), or equivalently \(\widehat{L}_\lambda \) defined in Remark 2.5, does not have zero as an eigenvalue.

Remark 2.9

A possible choice for \(\kappa _0\) in (2.9) would be \(\kappa _0(\omega ) = C e^{-c \omega ^2}\) for constants \(c, C > 0\).
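Indeed, \(\kappa _0(\omega ) = C e^{-c\omega ^2}\) is smooth, real-valued and even, so all of its odd derivatives vanish at \(\omega = 0\); for instance

$$\begin{aligned} \kappa _0'(\omega ) = -2 c C \omega e^{-c \omega ^2}, \qquad \kappa _0'''(\omega ) = \left( 12 c^2 C \omega - 8 c^3 C \omega ^3 \right) e^{-c \omega ^2}, \end{aligned}$$

both of which vanish at \(\omega = 0\).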

Proof

Derivatives with respect to \(\omega \) and x yield only polynomial growth in x and \(\omega \), respectively. Thus, (I1) is satisfied. The conditions (H1), (H2), (I2) and (I3) (i) (if (a) holds) are satisfied by assumption. In the case of (b) note that G in (2.8) can be multiplied by any phase \(e^{\mathrm {i}\varphi }\), \(\varphi \in {\mathbb {R}}\), which yields unitarily equivalent Liouvillians by means of the unitary transformation \({\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes \varGamma (e^{\mathrm {i}\varphi })\) (cf. [26, Section X] for the definition of \(\varGamma \)). Therefore, we can assume without loss of generality that instead of (2.9) we have

$$\begin{aligned} \kappa (\omega ) = \mathrm {i}^{J+1} \omega ^{-\frac{1}{2} + J} \kappa _0(\omega ). \end{aligned}$$

The condition (I3) (ii) is actually satisfied in this case, since

$$\begin{aligned} \partial _{\omega }^j ( \mathrm {i}^{J+1} \kappa _0(\omega ) \chi (x) e^{ \mathrm {i}\omega \varSigma x } ) |_{\omega =0} = (-1)^{j+J+1} \partial _{\omega }^j \overline{\mathrm {i}^{J+1} \kappa _0(\omega ) \chi (x) e^{ \mathrm {i}\omega \varSigma x } }|_{\omega =0} \end{aligned}$$

holds for all \(\varSigma \in \mathbb {S}^2\), \(x \in {\mathbb {R}}^3\) and \(j=0, \ldots ,3 - J\); this follows from the Leibniz rule, using that \(\chi \) and \(\kappa _0\) are real-valued and that the odd derivatives of \(\kappa _0\) up to order \(3-J\) vanish at \(\omega = 0\).

It remains to verify the Fermi Golden Rule condition. This follows from Proposition 2.10 below, since \(\sigma _{\mathrm d}(H_{\mathrm p})\) is finite. \(\square \)

Proposition 2.10

Suppose the assumptions of Corollary 2.8 hold, and let \(E \in \sigma _{\mathrm d}(H_{\mathrm p})\). Then for any \(\varepsilon > 0\) there exists a \(\gamma > 0\) (independent of \(\beta \)) such that

$$\begin{aligned} p_E F^{(2)}_\beta (E,\varepsilon ) p_E&\ge \gamma p_E. \end{aligned}$$

Proof

First note that \(p_E F^{(2)}_\beta (E,\varepsilon ) p_E\restriction _{{\text {ran}}p_E}\) is a non-negative matrix acting on the finite-dimensional space \({\text {ran}}p_E\). Let \(\varphi _E \in {\text {ran}}p_E\) denote a normalized eigenvector with respect to the lowest eigenvalue of \(p_E F^{(2)}_\beta (E,\varepsilon ) p_E\restriction _{{\text {ran}}p_E}\). We have to show that \(\left\langle \varphi _E, F^{(2)}_\beta (E,\varepsilon ) \varphi _E \right\rangle \ge \gamma \) for a \(\gamma > 0\) independent of \(\beta \).

First observe that

$$\begin{aligned}&\left\langle \varphi _E, F^{(2)}_\beta (E,\varepsilon ) \varphi _E \right\rangle \ge \int _0^{\infty } \int _{\mathbb {S}^2} \omega ^2 \left\langle \varphi _E, G(\omega ,\varSigma )^* \frac{ P_{\mathsf {ess} }}{(H_{\mathrm p}- E + \omega )^2 + \varepsilon ^2} G(\omega ,\varSigma ) \varphi _E \right\rangle {\mathrm d}\varSigma {\mathrm d}\omega . \end{aligned}$$

Here we used \((1 - e^{-\beta \omega })^{-1} \ge 1\). The integrand is continuous in \((\omega ,\varSigma )\) and non-negative. Thus, by the finite dimensionality of the eigenspace corresponding to the eigenvalue E and by continuity, it suffices to show that for some \((\omega ,\varSigma )\) we have

$$\begin{aligned} \left\langle G(\omega ,\varSigma ) \varphi _E, \frac{ P_{\mathsf {ess} }}{(H_{\mathrm p}- E + \omega )^2 + \varepsilon ^2} G(\omega ,\varSigma ) \varphi _E \right\rangle \not = 0, \end{aligned}$$

which follows if we can show

$$\begin{aligned} P_{\mathsf {ess} }G(\omega ,\varSigma ) \varphi _E \not = 0. \end{aligned}$$
(2.10)

We now claim that (2.10) holds for some \((\omega ,\varSigma )\). Otherwise, since the range of \({\text {Id}}-P_\text {ess}\) is finite-dimensional, the linear span of the set

$$\begin{aligned} \{ G(\omega , \varSigma ) \varphi _E :(\omega ,\varSigma ) \in (0,\infty ) \times \mathbb {S}^2 \} \end{aligned}$$
(2.11)

would be finite-dimensional as well. But this leads to a contradiction. By unique continuation of eigenfunctions ([27, Theorem XIII.57]), \(\varphi _E(x) \not = 0\) for a.e. \(x \in {\mathbb {R}}^3\), and thus \(\chi \varphi _E\) is nonzero. By assumption \(\kappa \) does not vanish on some nonempty open interval \(I \subset (0,\infty )\). It now follows that

$$\begin{aligned} \{ \tilde{G}(\omega , \varSigma ) \chi \varphi _E :(\omega ,\varSigma ) \in I \times \mathbb {S}^2 \} \end{aligned}$$

is contained in the linear span of (2.11), since \(\kappa \) has no zeros on I, and spans an infinite-dimensional space, as follows by well-known methods (e.g. by calculating Wronskians and Vandermonde determinants). \(\square \)
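To illustrate the last step (a sketch, for a fixed direction \(\varSigma \) and distinct frequencies \(\omega _1, \ldots , \omega _n \in I\)): the functions \(t \mapsto e^{\mathrm {i}\omega _j t}\) are linearly independent, since their Wronskian at \(t = 0\) is a Vandermonde determinant,

$$\begin{aligned} \det \left( (\mathrm {i}\omega _j)^{l-1} \right) _{j,l=1}^n = \prod _{1 \le j < l \le n} \left( \mathrm {i}\omega _l - \mathrm {i}\omega _j \right) \not = 0 , \end{aligned}$$

and this, combined with the fact that \(\chi \varphi _E\) is nonzero on a set of positive measure, is the mechanism behind the dimension count above.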

Instead of using \(F^{(2)}_\beta (E,\varepsilon )\) one could also verify (F) with the first term \(F^{(1)}_\beta (E,\varepsilon )\) in the limit \(\varepsilon \rightarrow 0\). This does not improve the qualitative statement of Corollary 2.8 and has the drawback that \(\gamma _\beta (\varepsilon ) \rightarrow 0\) as \(\beta \rightarrow \infty \) as in [7, 8]. However, in certain situations it might give the dominant contribution for the lower bound \(\gamma _\beta (\varepsilon )\) and therefore the maximal admissible coupling strength in Theorem 2.6. In this context we would like to mention the “zero temperature result” about the leading order contribution of the ionization probability in the photoelectric effect [11].

2.6 Overview of the Proof

The first step is to find a suitable conjugate operator A consisting of a part \({A_{\mathrm p}}\) on the particle space \(\mathcal {H}_{\mathrm p}\) and a part \(A_{{\text {f}}}\) on the field space \(\mathfrak {F}\).

For the latter we make the same choice as established for the first time in [15] and later also used by Merkli and co-authors in [7, 8, 19], namely the second quantization of the generator of translations,

$$\begin{aligned} A_{{\text {f}}}= \mathsf {d} \varGamma (\mathrm {i}\partial _u). \end{aligned}$$

Let \(P_\varOmega \) denote the orthogonal projection onto the one-dimensional subspace spanned by the vacuum \(\varOmega \) (“vacuum subspace”). Formally, since \(\mathrm {i}[{u}, \mathrm {i}\partial _u] = 1\) on the one-particle space, we obtain on \(\mathfrak {F}\) that

$$\begin{aligned} \mathrm {i}[\mathsf {d} \varGamma ({u}), A_{{\text {f}}}] = \mathsf {d} \varGamma (1) = N_{{\text {f}}}\ge { P_\varOmega }^\perp , \end{aligned}$$

which yields a positive contribution on the space orthogonal to the vacuum. The Hilbert space \(\mathcal {H}\) can be further decomposed by means of the projection

$$\begin{aligned} \varPi := \mathbb {1}_{L_0 = 0} = \mathbb {1}_{L_{\mathrm p}= 0}\otimes P_\varOmega \end{aligned}$$
(2.12)

as

$$\begin{aligned} \mathcal {H}= {\text {ran}}({\text {Id}}_{\mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}} \otimes P_\varOmega ^\perp ) \oplus {\text {ran}}\varPi \oplus {\text {ran}}( \mathbb {1}_{L_{\mathrm p}\not = 0} \otimes P_\varOmega ). \end{aligned}$$
(2.13)

To obtain a positive operator on \({\text {ran}}\varPi \), we proceed again as in [7, 8] and consider a bounded operator \(A_0\) on the whole space \(\mathcal {H}\). The Fermi Golden Rule Condition (F) then implies that

$$\begin{aligned} \mathrm {i}\varPi [L_\lambda , A_0] \varPi \restriction _{{\text {ran}}\varPi } > 0. \end{aligned}$$

The details can be found in Sect. 6.2.

Let \(P_{{\text {disc}}}\) denote the spectral projection to the discrete spectrum of \(H_{\mathrm p}\). Note that \( { P_{{\text {disc}}} }^\perp = P_{\mathsf {ess} }\). The third space in (2.13) can be decomposed further by use of

$$\begin{aligned} \mathbb {1}_{L_{\mathrm p}\not = 0}&= ( P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }) \oplus (P_{\mathsf {ess} }\otimes P_{{\text {disc}}}) \nonumber \\&\qquad \oplus (P_{{\text {disc}}}\otimes P_{\mathsf {ess} }) \oplus \mathbb {1}_{L_{\mathrm p}\not = 0}(P_{{\text {disc}}}\otimes P_{{\text {disc}}}) . \end{aligned}$$
(2.14)

We start with the space generated by the first projection \(P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }\). In contrast to [8], the conjugate operator on the particle space will be defined as follows. We first diagonalize the non-negative part of \(H_{\mathrm p}\) by means of generalized eigenfunctions associated to the positive (continuous) spectrum, the scattering functions, which we recall in Sect. 4.1. This will establish a unitary map \(V_{{\mathsf {c} }}\) between the essential spectral subspace of \(H_{\mathrm p}\) and \(L^2({\mathbb {R}}^3)\) with the property that \(V_{{\mathsf {c} }}H_{\mathrm p}V_{{\mathsf {c} }}^* = \hat{{\mathsf {k} }}^2\), where \(\hat{{\mathsf {k} }}= (\hat{{\mathsf {k} }}_1, \hat{{\mathsf {k} }}_2, \hat{{\mathsf {k} }}_3)\) denotes the vector of multiplication operators by the respective components. Let

$$\begin{aligned} A_{\mathsf {D} }:= \frac{1}{4} ({\hat{{\mathsf {q} }}}\hat{{\mathsf {k} }}+ \hat{{\mathsf {k} }}{\hat{{\mathsf {q} }}}) \end{aligned}$$

be the generator of dilations, where \({\hat{{\mathsf {q} }}}:= \mathrm {i}\nabla = (\mathrm {i}\partial _1,\mathrm {i}\partial _2,\mathrm {i}\partial _3) \) and where we used the notation \({\hat{{\mathsf {q} }}}\hat{{\mathsf {k} }}:= \sum _{j=1}^3 {\hat{{\mathsf {q} }}}_j \hat{{\mathsf {k} }}_j\) and \(\hat{{\mathsf {k} }}{\hat{{\mathsf {q} }}}:= \sum _{j=1}^3 \hat{{\mathsf {k} }}_j {\hat{{\mathsf {q} }}}_j\). Then

$$\begin{aligned} {A_{\mathrm p}}:= V_{{\mathsf {c} }}^* A_{\mathsf {D} }V_{{\mathsf {c} }}\end{aligned}$$

has the effect that

$$\begin{aligned} \mathrm {i}[H_{\mathrm p}, {A_{\mathrm p}}] = V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}= P_{\mathsf {ess} }H_{\mathrm p}, \end{aligned}$$

which is strictly positive on \({\text {ran}}P_{\mathsf {ess} }\). We combine \(A_{{\text {f}}}\) and \({A_{\mathrm p}}\) to an operator on \(\mathcal {H}\) by

$$\begin{aligned} {A}= ({A_{\mathrm p}}\otimes {\text {Id}}_{\mathrm p}- {\text {Id}}_{\mathrm p}\otimes {A_{\mathrm p}}) \otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes A_{{\text {f}}}, \end{aligned}$$

which yields

$$\begin{aligned} \mathrm {i}[L_0, {A}] = ( P_{\mathsf {ess} }H_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes P_{\mathsf {ess} }H_{\mathrm p}) \otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}. \end{aligned}$$
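The particle part of this computation comes from the canonical commutation relation \([\hat{{\mathsf {k} }}_j, {\hat{{\mathsf {q} }}}_j] = -\mathrm {i}\), which gives \([\hat{{\mathsf {k} }}_j^2, {\hat{{\mathsf {q} }}}_j] = -2\mathrm {i}\hat{{\mathsf {k} }}_j\); formally (a sketch),

$$\begin{aligned} \mathrm {i}[\hat{{\mathsf {k} }}^2, A_{\mathsf {D} }] = \frac{\mathrm {i}}{4} \sum _{j=1}^3 \left( [\hat{{\mathsf {k} }}_j^2, {\hat{{\mathsf {q} }}}_j] \hat{{\mathsf {k} }}_j + \hat{{\mathsf {k} }}_j [\hat{{\mathsf {k} }}_j^2, {\hat{{\mathsf {q} }}}_j] \right) = \frac{\mathrm {i}}{4} \sum _{j=1}^3 \left( -4 \mathrm {i}\hat{{\mathsf {k} }}_j^2 \right) = \hat{{\mathsf {k} }}^2 , \end{aligned}$$

and conjugation with \(V_{{\mathsf {c} }}\) turns this into \(\mathrm {i}[H_{\mathrm p}, {A_{\mathrm p}}] = P_{\mathsf {ess} }H_{\mathrm p}\) as above.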

As \({A}\) is unbounded, a virial theorem is needed for the positive commutator method to work. We will indeed use the same abstract versions developed in [7, 8], which are recalled in Sect. 3. In order to apply the virial theorem (Theorem 3.4) it is necessary that the commutators are bounded on the atomic space (see (3.7) and (3.8)). Thus one has to include a regularization in \({A_{\mathrm p}}\). The exact definition of \({A}\) and of a regularized version \({A^{(\epsilon )}}\), as well as the verification of the conditions for the virial theorems, can be found in Sect. 4.

For the space corresponding to the sum of the remaining three projections in (2.14) we choose an operator Q on \(\mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}\) given as a bounded continuous function of \(L_{\mathrm p}\) in such a way that Q is strictly positive on \({\text {ran}}\mathbb {1}_{L_{\mathrm p}\not = 0}\). We add a suitable operator T depending on the interaction and \(\lambda \) to accomplish

$$\begin{aligned} \left\langle \psi , (Q \otimes P_\varOmega + T) \psi \right\rangle = 0 \end{aligned}$$

for all \(\psi \in \ker L_\lambda \). Now the distance between the essential and the discrete spectrum is strictly positive and the distances between the distinct discrete eigenvalues of \(H_{\mathrm p}\) are bounded from below by a positive number. Therefore, Q as well is bounded from below by a positive constant on the space

$$\begin{aligned} {\text {ran}}(P_{\mathsf {ess} }\otimes P_{{\text {disc}}}+ P_{{\text {disc}}}\otimes P_{\mathsf {ess} }+ \mathbb {1}_{L_{\mathrm p}\not = 0}P_{{\text {disc}}}\otimes P_{{\text {disc}}}). \end{aligned}$$

The operator T will be viewed as an error term which will be estimated by \(N_{{\text {f}}}\), where we need in addition that Q satisfies \((L_{\mathrm p}^{-1} Q)^2 \le C Q\) for a constant C.

Finally, further error terms will arise from the commutators of the interaction with \({A}\) and \(A_0\), respectively. The general idea to control them is to estimate them in terms of \(N_{{\text {f}}}\) on \(\mathfrak {F}\) and in terms of bounded operators on \(\mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}\otimes {\text {ran}}P_\varOmega \), respectively. It is for the latter that we need the decompositions (2.13) and (2.14) as well as the corresponding positive operators mentioned above. On \({\text {ran}}P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }\) we estimate them by \({\hat{{\mathsf {q} }}}^{-2}\) and then use that, by the uncertainty principle,

$$\begin{aligned} \hat{{\mathsf {k} }}^2 - \lambda {\hat{{\mathsf {q} }}}^{-2} > 0 \end{aligned}$$

for \(\lambda > 0\) sufficiently small (cf. [26, X.2]).
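A formal way to see this (a sketch only; the constant \(\tfrac{1}{4}\) below is the three-dimensional Hardy constant from the uncertainty principle lemma, applied here in the k-variable, and is not claimed to be the constant used later) is to combine \({\hat{{\mathsf {q} }}}^2 \ge \tfrac{1}{4} \hat{{\mathsf {k} }}^{-2}\) with the fact that inversion reverses the order of positive operators:

$$\begin{aligned} {\hat{{\mathsf {q} }}}^{-2} \le 4 \hat{{\mathsf {k} }}^2 , \qquad \text {hence} \qquad \hat{{\mathsf {k} }}^2 - \lambda {\hat{{\mathsf {q} }}}^{-2} \ge (1 - 4\lambda ) \hat{{\mathsf {k} }}^2 , \qquad 0< \lambda < \tfrac{1}{4} , \end{aligned}$$

and the right-hand side is strictly positive on nonzero vectors.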

3 Abstract Virial Theorems

In this section we recall the abstract virial theorems of [7, 8]. They are based on Nelson’s commutator theorem, which can be used for proving self-adjointness of operators which are not bounded from below. An important notion will be that of a GJN triple.

Definition 3.1

(GJN triple) Let \(\mathcal {H}\) be a Hilbert space, \(\mathcal {D} \subset \mathcal {H}\) a core for a self-adjoint operator \(Y \ge {\text {Id}}\), and X a symmetric operator on \(\mathcal {D}\). We say the triple \((X,Y,\mathcal {D})\) satisfies the Glimm-Jaffe-Nelson (GJN) condition, or that \((X,Y,\mathcal {D})\) is a GJN-triple, if there is a constant \(C < \infty \), such that for all \(\psi \in \mathcal {D}\):

$$\begin{aligned} \Vert X \psi \Vert&\le C \Vert Y \psi \Vert , \end{aligned}$$
(3.1)
$$\begin{aligned} \pm \mathrm {i}\{ \left\langle X \psi , Y \psi \right\rangle - \left\langle Y \psi , X \psi \right\rangle \} &\le C \left\langle \psi , Y \psi \right\rangle . \end{aligned}$$
(3.2)

Theorem 3.2

(GJN commutator theorem, [26, Theorem X.37]) If \((X,Y,\mathcal {D})\) satisfies the GJN condition, then X determines a self-adjoint operator (again denoted by X), such that \(\mathcal {D}(X) \supset \mathcal {D}(Y)\). Moreover, X is essentially self-adjoint on any core for Y, and (3.1) is valid for all \(\psi \in \mathcal {D}(Y)\).

A consequence of the GJN commutator theorem is that the unitary group generated by X leaves the domain of Y invariant. The concrete formulation stated in the next theorem is taken from [7].

Theorem 3.3

(Invariance of domain, [9]) Suppose \((X,Y,\mathcal {D})\) satisfies the GJN condition. Then, for all \(t \in {\mathbb {R}}\), \(e^{\mathrm {i}tX}\) leaves \(\mathcal {D}(Y)\) invariant, and there is a constant \(\kappa \ge 0\) such that

$$\begin{aligned} \left\| Y e^{\mathrm {i}t X} \psi \right\| \le e^{\kappa \left| t \right| } \left\| Y \psi \right\| , \qquad \psi \in \mathcal {D}(Y). \end{aligned}$$

Based on the GJN commutator theorem, we can now describe the setting for a general virial theorem. Suppose one is given a self-adjoint operator \(\varLambda \ge {\text {Id}}\) with core \(\mathcal {D} \subset \mathcal {H}\), and let \(L, A, N, D, C_n\), \(n \in \{0,1,2,3\}\), be symmetric operators on \(\mathcal {D}\) satisfying the relations

$$\begin{aligned}&\left\langle \varphi , D \psi \right\rangle = \mathrm {i}( \left\langle L \varphi , N \psi \right\rangle - \left\langle N \varphi , L \psi \right\rangle ) \end{aligned}$$
(3.3)

and

$$\begin{aligned}&C_0 = L, \end{aligned}$$
(3.4)
$$\begin{aligned}&\left\langle \varphi , C_{n+1} \psi \right\rangle = \mathrm {i}( \left\langle C_{n} \varphi , A \psi \right\rangle - \left\langle A \varphi , C_{n} \psi \right\rangle ) , \quad n \in \{0,1,2\}, \end{aligned}$$
(3.5)

where \(\varphi , \psi \in \mathcal {D}\). Furthermore we shall assume:

  1. (V1)

    \((X,\varLambda ,\mathcal {D})\) satisfies the GJN condition for \(X=L,N,D,C_n\), \(n\in \{0,1,2,3\}\). Consequently all these operators determine self-adjoint operators, which we denote by the same letters.

  2. (V2)

    A is self-adjoint, \(\mathcal {D} \subset \mathcal {D}(A)\), and \(e^{\mathrm {i}t A}\) leaves \(\mathcal {D}(\varLambda )\) invariant.

Theorem 3.4

(Abstract virial theorem, [7, Theorem 3.2]) Let \(\varLambda \ge {\text {Id}}\) be a self-adjoint operator in \(\mathcal {H}\) with core \(\mathcal {D} \subset \mathcal {H}\), and let \(L, A, N, D, C_n\), \(n \in \{0,1,2,3\}\), be symmetric on \(\mathcal {D}\) satisfying relations (3.3)–(3.5). Assume (V1) and (V2). Furthermore, assume that N and \(e^{\mathrm {i}t A}\) commute, for all \(t \in \mathbb {R}\), in the strong sense on \(\mathcal {D}\), and that there exist \( 0 \le p < \infty \) and \(C < \infty \) such that

$$\begin{aligned} \left\| D \psi \right\|&\le C \left\| N^{1/2} \psi \right\| , \end{aligned}$$
(3.6)
$$\begin{aligned} \left\| C_1\psi \right\|&\le C\left\| N^p \psi \right\| , \end{aligned}$$
(3.7)
$$\begin{aligned} \left\| C_3\psi \right\|&\le C \left\| N^{1/2} \psi \right\| , \end{aligned}$$
(3.8)

for all \(\psi \in \mathcal {D}\). Then, if \(\psi \in \mathcal {D}(L)\) is an eigenvector of L, there is a sequence of approximating eigenvectors \(( \psi _n )_{n \in \mathbb {N}}\) in \(\mathcal {D}(L)\cap \mathcal {D}(C_1)\) such that \(\lim _{n \rightarrow \infty } \psi _n = \psi \) in \(\mathcal {H}\), and

$$\begin{aligned} \lim _{n \rightarrow \infty } \left\langle \psi _n, C_1 \psi _n \right\rangle = 0 . \end{aligned}$$

Remark 3.5

A positivity condition for \(C_1\) will be established in Proposition 6.3.

4 Definition of the Commutator and Verification of the Virial Theorems

In this section we introduce generalized eigenstates associated to scattering states of the atomic Hamiltonian \(H_{\mathrm p}\). Using these scattering states we will then define explicit realizations for the operators \(L, A, N, D, C_n\), \(n \in \{0,1,2,3\}\), of Sect. 3. We then verify the assumptions of the abstract virial theorem in order to obtain a concrete virial theorem Theorem 4.5. This theorem will be one of the two ingredients for the proof of the main result of this paper. In this section we shall always assume that the potential V satisfies (H1) and (H2) and that (I1)–(I3) hold.

4.1 Scattering States

In this part we recall the theory of generalized eigenstates, which are associated to scattering states, and their corresponding spectral decomposition. The scattering states \(\varphi (k,\cdot )\), \(k \in {\mathbb {R}}^3\), can be defined as generalized eigenvectors,

$$\begin{aligned} (-\varDelta + V) \varphi (k,\cdot ) = k^2 \varphi (k,\cdot ), \end{aligned}$$

or as solutions of the so-called Lippmann–Schwinger equation,

$$\begin{aligned} \varphi (k,x) = e^{\mathrm {i}k x} - \frac{1}{4\pi } \int _{{\mathbb {R}}^3} \frac{e^{\mathrm {i}\left| k \right| \left| x-y \right| }}{\left| x-y \right| } V(y) \varphi (k,y) {\mathrm d}y. \end{aligned}$$
(4.1)

We collect their properties in the following theorem, which is taken from [12]; see also [28, Theorem XI.41] and [25]. In particular, the scattering functions can be used for a spectral decomposition of the continuous spectrum of \(H_{\mathrm p}\).

Theorem 4.1

([28, Theorem XI.41], [12, 25]) Suppose (H1) and (H2) hold.

  1. (a)

    For all \(k \in {\mathbb {R}}^3\) there exists a unique solution \(\varphi (k,\cdot )\) of (4.1) which obeys \(|V|^{1/2} \varphi (k ,\cdot ) \in L^2(\mathbb {R}^3)\). Moreover for every \(k \in \mathbb {R}^3\) the function \(x \mapsto \varphi (k, x)\) is continuous.

  2. (b)

    For \(f \in L^2({\mathbb {R}}^3)\) the generalized Fourier transform

    $$\begin{aligned} (V_{{\mathsf {c} }}f)(k) := f^\#(k) := (2\pi )^{-3/2} \mathsf {l.i.m.} \int \overline{\varphi (k,x)} f(x) {\mathrm d}x, \end{aligned}$$

    where \(\mathsf {l.i.m.} \int g(x) {\mathrm d}x := L^2\)-\(\lim _{R \rightarrow \infty } \int _{\left| x \right| < R} g(x) {\mathrm d}x\), exists.

  3. (c)

    We have \({\text {ran}}V_{{\mathsf {c} }}= L^2({\mathbb {R}}^3)\) and for \(f \in L^2({\mathbb {R}}^3)\)

    $$\begin{aligned} \left\| V_{{\mathsf {c} }}f \right\| = \left\| P_{\mathsf {ess} }f \right\| . \end{aligned}$$

    In particular, \(V_{{\mathsf {c} }}\) is a partial isometry and \(V_{{\mathsf {c} }}\restriction _{{\text {ran}}P_{\mathsf {ess} }} :{\text {ran}}P_{\mathsf {ess} }\rightarrow L^2({\mathbb {R}}^3)\) is a unitary operator, and \(V_{{\mathsf {c} }}V_{{\mathsf {c} }}^* = {\text {Id}}\).

  4. (d)

    For \(f \in L^2({\mathbb {R}}^3)\) we have the spectral decomposition

    $$\begin{aligned} (P_{\mathsf {ess} }f)(x) = \mathsf {l.i.m.} (2\pi )^{-3/2} \int f^\#(k) \varphi (k,x) {\mathrm d}k . \end{aligned}$$
  5. (e)

    If \(f \in \mathcal {D}(H_{\mathrm p})\), then

    $$\begin{aligned} (H_{\mathrm p}f)^\#(k) = k^2 f^\#(k), \end{aligned}$$

    in other words, \(V_{{\mathsf {c} }}H_{\mathrm p}V_{{\mathsf {c} }}^* = \hat{{\mathsf {k} }}^2\).

The basic strategy of the proof of the theorem is to use the method of modified, square-integrable scattering functions, which can be found in [12, 28] and was originally developed by Rollnik. In particular, one introduces the so-called modified Lippmann–Schwinger equation

$$\begin{aligned} \widetilde{\varphi }(k,x) = \left| V(x) \right| ^{1/2} e^{\mathrm {i}k x} + (L_{\left| k \right| } \widetilde{\varphi }(k,\cdot ))(x), \quad k, x \in \mathbb {R}^3 , \end{aligned}$$
(4.2)

where

$$\begin{aligned} L_\kappa \psi (x) := - \frac{1}{4\pi } \int \frac{\left| V(x) \right| ^{1/2} e^{\mathrm {i}\kappa \left| x-y \right| } V(y)^{1/2} }{\left| x-y \right| } \psi (y) {\mathrm d}y, \quad \kappa \ge 0, \end{aligned}$$
(4.3)

and \(V(y)^{1/2} := \left| V(y) \right| ^{1/2} {{\text {sgn}}}V(y) \). We note that it is elementary to see that, by (H1), the operator \(L_\kappa \) is a Hilbert-Schmidt operator, see [30, Theorem 1.22]. If for fixed \(k \in {\mathbb {R}}^3\) the function \(\varphi (k,\cdot )\) obeys (4.1) and \({\widetilde{\varphi }}(k,\cdot ) := |V|^{1/2} \varphi (k,\cdot )\) is an \(L^2\)-function, then \({\widetilde{\varphi }}(k,\cdot )\) obeys (4.2), provided (H1) holds (in fact this holds for a larger class of potentials [28]). On the other hand, if for a fixed \(k \in \mathbb {R}^3\) the modified Lippmann–Schwinger equation (4.2) has a unique \(L^2\)-solution \({\widetilde{\varphi }}(k,\cdot )\), then, as outlined in [28], the original Lippmann–Schwinger equation (4.1) has a unique solution \(\varphi (k,\cdot )\) satisfying \(|V|^{1/2} \varphi (k,\cdot ) \in L^2(\mathbb {R}^3)\). It is given by

$$\begin{aligned} \varphi (k,x) = e^{\mathrm {i}k x} - \frac{1}{4\pi } \int \frac{e^{\mathrm {i}\left| k \right| \left| x-y \right| }}{\left| x-y \right| } V(y)^{1/2} \widetilde{\varphi }(k,y) {\mathrm d}y . \end{aligned}$$
(4.4)

Proof

By the assumptions on the potential the operator \(L_\kappa \) defined as in (4.3) is a Hilbert-Schmidt operator for all \(\kappa \ge 0\). Let \(\mathcal {E}\) denote the set of all \(\kappa \in (0,\infty )\) such that \(\psi = L_\kappa \psi \) has a nonzero solution in \(L^2\). We claim that \(\mathcal {E}\) is the empty set if (H1) holds. To this end, let \(\kappa > 0\) and assume \({\widetilde{\varphi }} = L_\kappa {\widetilde{\varphi }}\) for some \(L^2\)-function \({\widetilde{\varphi }}\). Now consider

$$\begin{aligned} \varphi (x) := - \frac{1}{4\pi } \int \frac{e^{\mathrm {i}\kappa \left| x-y \right| }}{\left| x-y \right| } V(y)^{1/2} \widetilde{\varphi }(y) {\mathrm d}y = - \frac{1}{4\pi } \int \frac{e^{\mathrm {i}\kappa \left| x-y \right| }}{\left| x-y \right| } V(y) \varphi (y) {\mathrm d}y . \end{aligned}$$

It follows that \(\varphi (x) = o(|x|^{-1})\) as \(|x| \rightarrow \infty \), and that \( - \varDelta \varphi + V \varphi = \kappa ^2 \varphi . \) According to [18] this implies that \(\varphi \) vanishes identically outside a sufficiently large sphere. Hence by the unique continuation theorem it follows that \(\varphi = 0\) and \({\widetilde{\varphi }} = \left| V \right| ^{1/2} \varphi = 0\). This is a contradiction, and we conclude that the set \(\mathcal {E}\) is empty for the potentials which we consider. Thus, by the Fredholm alternative, whenever \(k\ne 0\) there is a unique \(L^2\)-solution \({\widetilde{\varphi }}\) of the modified Lippmann–Schwinger equation \( {\widetilde{\varphi }}(x) = |V|^{1/2} e^{\mathrm {i}kx} + (L_{|k|}{\widetilde{\varphi }})(x) \). As mentioned above, it follows that the original Lippmann–Schwinger equation (4.1) has a unique solution \(\varphi \) satisfying \(|V|^{1/2}\varphi \in L^2\), given by (4.4). In the case \(k=0\) we argue analogously using (H2). This shows the first part of (a). The continuity follows, in view of (4.1), by dominated convergence. (b)–(e) now follow from [28, Theorem XI.41], where we have seen in Proposition 2.1 that the essential and the absolutely continuous spectrum of \(H_{\mathrm p}\) coincide. \(\square \)

Furthermore, we can extend \(V_{{\mathsf {c} }}\) to a unitary operator by taking the eigenfunctions into account as well. For this, we denote by \(\varphi _n\), \(n=1, \ldots , N\), the eigenvectors of \(H_{\mathrm p}\). We define

$$\begin{aligned} V_{\mathsf {d} }:L^2({\mathbb {R}}^3) \longrightarrow \ell ^2(N), \qquad (V_{\mathsf {d} }\psi )_i := \left\langle \varphi _i, \psi \right\rangle . \end{aligned}$$

Obviously \(V_{\mathsf {d} }\restriction _{{\text {ran}}P_{{\text {disc}}}} :{\text {ran}}P_{{\text {disc}}}\rightarrow \ell ^2(N) \) is a unitary operator and \(V_{\mathsf {d} }\restriction _{{\text {ran}}P_{\mathsf {ess} }} = 0\). Thus,

$$\begin{aligned} \mathcal {V}:= V_{\mathsf {d} }\oplus V_{{\mathsf {c} }}:L^2({\mathbb {R}}^3) \longrightarrow \ell ^2(N) \oplus L^2({\mathbb {R}}^3) \end{aligned}$$
(4.5)

is unitary.
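
In this representation the particle Hamiltonian becomes block diagonal. Schematically, writing \(E_n\) for the eigenvalue of \(H_{\mathrm p}\) corresponding to \(\varphi _n\), and using the fact, employed in the proofs of Proposition 4.3 (3) and (9) below, that \(V_{{\mathsf {c} }}\) intertwines \(H_{\mathrm p}\restriction _{{\text {ran}}P_{\mathsf {ess} }}\) with multiplication by \(\hat{{\mathsf {k} }}^2\), one has

$$\begin{aligned} \mathcal {V}H_{\mathrm p}\mathcal {V}^* = {\text {diag}}(E_1, \ldots , E_N) \oplus \hat{{\mathsf {k} }}^2, \qquad H_{\mathrm p}= V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}+ H_{\mathrm p}P_{{\text {disc}}}. \end{aligned}$$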

4.2 Setup for the Virial Theorems

First, we describe the setting on the particle space \(\mathcal {H}_{\mathrm p}\). We consider a dense subspace given by

$$\begin{aligned} \mathcal {D}_{\mathrm p}:= V_{{\mathsf {c} }}^* C_{\mathrm c}^\infty ({\mathbb {R}}^3)\oplus {\text {ran}}P_{{\text {disc}}}. \end{aligned}$$
(4.6)

Note that \(\mathcal {D}_{\mathrm p}\) is dense since \(V_{{\mathsf {c} }}^* C_{\mathrm c}^\infty ({\mathbb {R}}^3)\subseteq {\text {ran}}P_{\mathsf {ess} }\) is dense in \({\text {ran}}P_{\mathsf {ess} }\). Now, based on the definition of the generator of dilations,

$$\begin{aligned} A_{\mathsf {D} }= \frac{1}{4} (\hat{{\mathsf {k} }}{\hat{{\mathsf {q} }}}+ {\hat{{\mathsf {q} }}}\hat{{\mathsf {k} }}), \end{aligned}$$

we define on \(\mathcal {D}_{\mathrm p}\) the conjugate operator \({A_{\mathrm p}}\) and a regularized version \({A_{\mathrm p}^{(\epsilon )}}\),

$$\begin{aligned} {A_{\mathrm p}}:= V_{{\mathsf {c} }}^* A_{\mathsf {D} }V_{{\mathsf {c} }}, \qquad {A_{\mathrm p}^{(\epsilon )}}:= V_{{\mathsf {c} }}^* \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon V_{{\mathsf {c} }}, \end{aligned}$$

where

$$\begin{aligned} \eta _\epsilon (k) := e^{-\epsilon k^2}. \end{aligned}$$

Note that \(\eta _0 \equiv 1\) and \({A_{\mathrm p}}= A_{\mathrm p}^{(0)}\). It is clear that both \(H_{\mathrm p}\) and \({A_{\mathrm p}^{(\epsilon )}}\), \(\epsilon \ge 0\), leave \(\mathcal {D}_{\mathrm p}\) invariant. Thus we can define \({\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p})\) on \(\mathcal {D}_{\mathrm p}\) for all \(n \in {\mathbb {N}}\). Furthermore, the bounding operator is chosen as

$$\begin{aligned} \varLambda _{\mathrm p}&:= V_{{\mathsf {c} }}^* (\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2) V_{{\mathsf {c} }}+ {\text {Id}}_{\mathrm p}. \end{aligned}$$
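
Since the prefactor \(\tfrac{1}{4}\) in \(A_{\mathsf {D} }\) fixes the normalization of the commutators computed below (see (4.21) and the proof of Proposition 4.3 (9)), we record the elementary identity, valid for instance on \(C_{\mathrm c}^\infty ({\mathbb {R}}^3)\) and using \([\hat{{\mathsf {k} }}_j,{\hat{{\mathsf {q} }}}_j] = -\mathrm {i}\),

$$\begin{aligned} \mathrm {i}[\hat{{\mathsf {k} }}^2, A_{\mathsf {D} }] = \frac{\mathrm {i}}{4} \sum _{j=1}^{3} \big ( \hat{{\mathsf {k} }}_j [\hat{{\mathsf {k} }}_j^2, {\hat{{\mathsf {q} }}}_j] + [\hat{{\mathsf {k} }}_j^2, {\hat{{\mathsf {q} }}}_j] \hat{{\mathsf {k} }}_j \big ) = \frac{\mathrm {i}}{4} \sum _{j=1}^{3} (-4\mathrm {i}) \hat{{\mathsf {k} }}_j^2 = \hat{{\mathsf {k} }}^2 . \end{aligned}$$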

Next, on the field space we set

$$\begin{aligned} A_{{\text {f}}}&:= \mathsf {d} \varGamma (\mathrm {i}\partial _u), \end{aligned}$$
(4.7)
$$\begin{aligned} \varLambda _{{\text {f}}}&:= \mathsf {d} \varGamma ({u} ^2 + 1). \end{aligned}$$
(4.8)
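
The conjugate operator \(A_{{\text {f}}}\) acts by translation in the radial variable u; formally, on the one-particle level,

$$\begin{aligned} e^{-\mathrm {i}t (\mathrm {i}\partial _u)} \, {u} \, e^{\mathrm {i}t (\mathrm {i}\partial _u)} = {u} + t, \qquad t \in {\mathbb {R}}, \end{aligned}$$

which is the mechanism behind the covariance relation for \(\varLambda _{{\text {f}}}\) used in Step 1 of the proof of Theorem 4.5.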

Now, we can define on the dense subspace of the composite space \(\mathcal {H}\),

$$\begin{aligned} {\mathcal {D}}&= \mathcal {D}_{\mathrm p}\otimes \mathcal {D}_{\mathrm p}\otimes \mathfrak {F}_{{\text {fin}}}(C_{\mathrm c}^\infty ({\mathbb {R}}^3)), \end{aligned}$$
(4.9)

the operators

$$\begin{aligned} \varLambda&= \varLambda _{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes \varLambda _{\mathrm p}\otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes \varLambda _{{\text {f}}}, \end{aligned}$$
(4.10)
$$\begin{aligned} {A^{(\epsilon )}}&= ({A_{\mathrm p}^{(\epsilon )}}\otimes {\text {Id}}_{\mathrm p}- {\text {Id}}_{\mathrm p}\otimes {A_{\mathrm p}^{(\epsilon )}}) \otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes A_{{\text {f}}}, \qquad \epsilon \ge 0, \nonumber \\ D&= \mathrm {i}[L_\lambda ,\widehat{N_{{\text {f}}}}], \end{aligned}$$
(4.11)

where \(\widehat{N_{{\text {f}}}}:= {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}\).

For operators X, Y with a dense domain \(\mathcal {D}_0\) we define multiple commutators by \({\text {ad}}^{(0)}_Y(X) = X\) and \({\text {ad}}^{(n+1)}_Y(X) = \mathrm {i}[{\text {ad}}^{(n)}_Y(X),Y]\) in the form sense on \(\mathcal {D}_0 \times \mathcal {D}_0\), provided the right-hand side is determined by a densely defined bounded or an essentially self-adjoint operator, in which case we denote the corresponding extension by the same symbol. Furthermore, we set \({\text {ad}}_Y(X) := {\text {ad}}^{(1)}_Y(X)\).

Now we define for \(n \in \{1,2,3\}\), \(\epsilon \ge 0\),

$$\begin{aligned} C_n^{(\epsilon )}&:= {\text {ad}}^{(n)}_{{A^{(\epsilon )}}}(L_\lambda ) \nonumber \\&= \delta _{n,1} {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}+ {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}^{(n)}(H_{\mathrm p}) \otimes {\text {Id}}_{\mathrm p}+ (-1)^{n+1} {\text {Id}}_{\mathrm p}\otimes {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}^{(n)}(H_{\mathrm p}) + \lambda W_n^{(\epsilon )}, \end{aligned}$$
(4.12)
$$\begin{aligned} C_{{{\text {f}}},n}&:= {\text {ad}}^{(n)}_{{\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes A_{{\text {f}}}}(L_\lambda ) = \delta _{n,1} {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}+ \lambda W_n^{({{\text {f}}})} , \end{aligned}$$
(4.13)

where

$$\begin{aligned} W_n^{(\epsilon )}&:= {\text {ad}}^{(n)}_{{A^{(\epsilon )}}}(\varPhi (I)) = \varPhi ( I_n^{(\epsilon )} (u,\varSigma )), \end{aligned}$$
(4.14)
$$\begin{aligned} W_n^{({{\text {f}}})}&:= {\text {ad}}^{(n)}_{{\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes A_{{\text {f}}}}(\varPhi (I)) = \varPhi ( I_n^{({{\text {f}}})} (u,\varSigma )), \end{aligned}$$
(4.15)

with

$$\begin{aligned} I_n^{(\epsilon )} (u,\varSigma )&:= \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) \big ( (-\mathrm {i}\partial _u)^k \tau _\beta ( {\text {ad}}^{(n-k)}_{ {A_{\mathrm p}^{(\epsilon )}}}(G) ) \otimes {\text {Id}}_{\mathrm p}\\&\qquad - (-\mathrm {i}\partial _u)^k e^{-\beta u /2} {\text {Id}}_{\mathrm p}\otimes \tau _\beta ( {\text {ad}}^{(n-k)}_{{A_{\mathrm p}^{(\epsilon )}}} ({\overline{G}}^*) ) \big ), \\ I_n^{({{\text {f}}})} (u,\varSigma )&:= (-\mathrm {i}\partial _u)^n I(u,\varSigma ), \end{aligned}$$

and we use the shorthand notation \(I_n:= I_n^{(0)}\), \(W_n := W_n^{(0)}\), \(C_1 := C_1^{(0)}\). We note that the above identities follow from a straightforward calculation. We will see in Proposition 4.2 that the expressions in the field operators in (4.14) and (4.15) are indeed well-defined and belong to \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p})\otimes \mathcal {L}(\mathcal {H}_{\mathrm p}))\). Furthermore, it will be proven in Proposition 4.3 that \(C_n^{(\epsilon )}\) and \(C_{{{\text {f}}},n}\), \(n \in \{1,2,3\}\), are in fact essentially self-adjoint on \({\mathcal {D}}\) and we denote their self-adjoint extensions by the same symbols. Moreover, it will be shown below in Proposition 6.3 that \(C_1\) is actually bounded from below. Thus, we can assign to \(C_1\) a quadratic form \(q_{C_1}\).
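
For orientation, combining (4.12) for \(n=1\), \(\epsilon = 0\) with the identity \(\mathrm {i}[H_{\mathrm p}, {A_{\mathrm p}}] = V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\) from the proof of Proposition 4.3 (9) below, and writing out the tensor factors as in (4.23), the commutator \(C_1\) takes the explicit form

$$\begin{aligned} C_1 = {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}+ ( V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}) \otimes {\text {Id}}_{{\text {f}}}+ \lambda W_1 . \end{aligned}$$

In particular, up to the interaction term \(\lambda W_1\), the operator \(C_1\) is manifestly non-negative.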

4.3 Verification of the Assumptions of the Virial Theorems

In the setting just described we can now start to verify the assumptions of the virial theorem Theorem 3.4. Above all, we have to check the GJN condition for the different commutators. The most difficult part will be the discussion of the interaction terms \(W_n^{({{\text {f}}})}\) and \(W_n^{(\epsilon )}\), \(n \in \{1,2,3\}\), \(\epsilon > 0\), and \(W_1\). Here, the expressions in the field operators need to be sufficiently bounded. These bounds will be collected in the following proposition, which will be proven in Sect. 5.

Proposition 4.2

Let \(\partial _u\) denote the weak derivative of a \(\mathcal {L}(\mathcal {H}_{\mathrm p})\)-valued function in the sense of the strong operator topology. For all \(m \in \{0,1,2,3\}\), \(n \in {\mathbb {N}}_0\), \(j \in \{1,2,3\}\), and for all \((u,\varSigma )\), \(\epsilon \ge 0\), the operators

  1. (1)

    \(\partial _u^m {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \tau _\beta (G)(u,\varSigma ) )\),

  2. (2)

    \(\partial _u^m {\text {ad}}_{V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}}( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \tau _\beta (G)(u,\varSigma ) ) )\),

  3. (3)

    \( \partial _u^m {\text {ad}}_{V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}}( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \tau _\beta (G)(u,\varSigma ) ) )\),

  4. (4)

    \( \partial _u^m V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}{\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta (G)(u,\varSigma ) )\),

are well-defined, and the corresponding functions \({\mathbb {R}}\times \mathbb {S}^2 \rightarrow \mathcal {L}(\mathcal {H}_{\mathrm p})\) of \((u,\varSigma )\) belong to \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\). Moreover, there exists a constant C independent of \(\beta \) such that for \(n,m,s \in \{0,1\}\),

$$\begin{aligned} \left\| \partial _u^m (V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }})^s {\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta (G) ) \right\| _{L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(H_{\mathrm p}))} \le C(1+ \beta ^{-\frac{1}{2}}). \end{aligned}$$
(4.16)

The result also holds true if we replace G by \(G^*\).

With the help of Proposition 4.2 we can now verify the necessary GJN conditions.
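
For the reader's convenience we recall how the GJN property is checked in the proofs below: for a triple \((X, \varLambda _0, \mathcal {D}_0)\) with X symmetric on the dense domain \(\mathcal {D}_0\), one establishes, for some constant C and all \(\psi \in \mathcal {D}_0\), the two bounds

$$\begin{aligned} \left\| X \psi \right\| \le C \left\| \varLambda _0 \psi \right\| , \qquad \pm \mathrm {i}\left( \left\langle X \psi , \varLambda _0 \psi \right\rangle - \left\langle \varLambda _0 \psi , X \psi \right\rangle \right) \le C \left\langle \psi , \varLambda _0 \psi \right\rangle , \end{aligned}$$

referred to as the first and the second GJN condition, respectively.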

Proposition 4.3

The following triples are GJN:

  1. (1)

    \(({A_{\mathrm p}}, \varLambda _{\mathrm p}, \mathcal {D}_{\mathrm p})\),

  2. (2)

    \(({A_{\mathrm p}^{(\epsilon )}}, \varLambda _{\mathrm p}, \mathcal {D}_{\mathrm p})\), \(\epsilon > 0\),

  3. (3)

    \((H_{\mathrm p}, \varLambda _{\mathrm p}, \mathcal {D}_{\mathrm p})\),

  4. (4)

    \((L_\lambda , \varLambda , {\mathcal {D}})\), \(\lambda \in {\mathbb {R}}\),

  5. (5)

    \((\widehat{N_{{\text {f}}}}, \varLambda , {\mathcal {D}})\),

  6. (6)

    \((D, \varLambda , {\mathcal {D}})\),

  7. (7)

    \((C_i^{(\epsilon )}, \varLambda , {\mathcal {D}})\), \(\epsilon > 0\), \(i \in \{1,2,3\}\),

  8. (8)

    \((C_{{{\text {f}}},i}, \varLambda , {\mathcal {D}})\), \(i \in \{1,2,3\}\),

  9. (9)

    \((C_1, \varLambda , {\mathcal {D}})\).

In particular, \(L_\lambda \) is essentially self-adjoint on \({\mathcal {D}}\) for any \(\lambda \in {\mathbb {R}}\) due to (4). Moreover, D, \(C_1^{(\epsilon )}\), \(C_3^{(\epsilon )}\), \(\epsilon > 0\), and \(C_{{{\text {f}}},1}\), \(C_{{{\text {f}}},3}\) are relatively bounded by \(\widehat{N_{{\text {f}}}}^{1/2}\).

Proof

  1. (1)

    Using that \((A_{\mathsf {D} }, \hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2, C_{\mathrm c}^\infty ({\mathbb {R}}^3))\) is a GJN triple (cf. [8]) and \(V_{{\mathsf {c} }}^*\) an isometry, we have, for \(\psi \in \mathcal {D}_{\mathrm p}\),

    $$\begin{aligned} \left\| {A_{\mathrm p}}\psi \right\| = \left\| V_{{\mathsf {c} }}^* A_{\mathsf {D} }V_{{\mathsf {c} }}\psi \right\| = \left\| A_{\mathsf {D} }V_{{\mathsf {c} }}\psi \right\| \le C \left\| (\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2) V_{{\mathsf {c} }}\psi \right\| \le C' \left\| \varLambda _{\mathrm p}\psi \right\| , \end{aligned}$$

    for some constants \(C,C'\), and

    $$\begin{aligned} \pm \mathrm {i}&(\left\langle {A_{\mathrm p}}\psi , \varLambda _{\mathrm p}\psi \right\rangle - \left\langle \varLambda _{\mathrm p}\psi , {A_{\mathrm p}}\psi \right\rangle )\\&= \pm \mathrm {i}\left( \left\langle A_{\mathsf {D} }V_{{\mathsf {c} }}\psi , (\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2) V_{{\mathsf {c} }}\psi \right\rangle - \left\langle (\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2)V_{{\mathsf {c} }}\psi , A_{\mathsf {D} }V_{{\mathsf {c} }}\psi \right\rangle \right) \\&\le C \left\langle V_{{\mathsf {c} }}\psi , (\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2) V_{{\mathsf {c} }}\psi \right\rangle \\&\le C \left\langle \psi , \varLambda _{\mathrm p}\psi \right\rangle . \end{aligned}$$
  2. (2)

    On \(\mathcal {D}_{\mathrm p}\) we have

    $$\begin{aligned} {A_{\mathrm p}^{(\epsilon )}}= V_{{\mathsf {c} }}^* \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon V_{{\mathsf {c} }}= V_{{\mathsf {c} }}^* \eta _\epsilon ^2 A_{\mathsf {D} }V_{{\mathsf {c} }}+ V_{{\mathsf {c} }}^* \eta _\epsilon [A_{\mathsf {D} },\eta _\epsilon ] V_{{\mathsf {c} }}. \end{aligned}$$

    The operator \(V_{{\mathsf {c} }}^* \eta _\epsilon ^2 A_{\mathsf {D} }V_{{\mathsf {c} }}\) is relatively bounded by \(V_{{\mathsf {c} }}^* A_{\mathsf {D} }V_{{\mathsf {c} }}\), and thus also by \((\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2 )V_{{\mathsf {c} }}\) as we have already seen in the proof of (1). Furthermore, as derivatives of \(\eta _\epsilon \) are also bounded, the operator

    $$\begin{aligned}{}[A_{\mathsf {D} },\eta _\epsilon ] = \frac{1}{2} \sum _{j} [ {\hat{{\mathsf {q} }}}_j, \eta _\epsilon ] \hat{{\mathsf {k} }}_j \end{aligned}$$

    is relatively bounded by \(| \hat{{\mathsf {k} }}|\), and thus \(\eta _\epsilon [A_{\mathsf {D} },\eta _\epsilon ] V_{{\mathsf {c} }}\) is also relatively bounded by \((\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2 )V_{{\mathsf {c} }}\) . This shows the first GJN condition.

    For the second one, we compute

    $$\begin{aligned} \pm \mathrm {i}\left( \left\langle {A_{\mathrm p}^{(\epsilon )}}\psi , \varLambda _{\mathrm p}\psi \right\rangle - \left\langle \varLambda _{\mathrm p}\psi , {A_{\mathrm p}^{(\epsilon )}}\psi \right\rangle \right) = \pm \left\langle V_{{\mathsf {c} }}\psi , \mathrm {i}[\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ] V_{{\mathsf {c} }}\psi \right\rangle . \end{aligned}$$

    Thus, it suffices to show \( \pm \mathrm {i}[\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ] \le C(\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2)\) for some constant C on \(C_{\mathrm c}^\infty ({\mathbb {R}}^3)\). First,

    $$\begin{aligned}{}[\hat{{\mathsf {k} }}^2, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ] = \eta _\epsilon [\hat{{\mathsf {k} }}^2, A_{\mathsf {D} }] \eta _\epsilon = \sum _{j} \eta _\epsilon \hat{{\mathsf {k} }}_j [\hat{{\mathsf {k} }}_j, {\hat{{\mathsf {q} }}}_j] \hat{{\mathsf {k} }}_j \eta _\epsilon = -\mathrm {i}\eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon . \end{aligned}$$

    Second, in order to obtain \(\pm \mathrm {i}[{\hat{{\mathsf {q} }}}^2, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ] \le C(\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2)\), we use the basic operator inequality (a consequence of expanding \(0 \le (A-B)^*(A-B)\))

    $$\begin{aligned} A^*B + B^*A \le A^* A + B^* B \end{aligned}$$
    (4.17)

    to show that

    $$\begin{aligned} {[{\hat{{\mathsf {q} }}}_j, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ]} = \eta _\epsilon [{\hat{{\mathsf {q} }}}_j, A_{\mathsf {D} }] \eta _\epsilon + [{\hat{{\mathsf {q} }}}_j, \eta _\epsilon ] A_{\mathsf {D} }\eta _\epsilon + \eta _\epsilon A_{\mathsf {D} }[{\hat{{\mathsf {q} }}}_j, \eta _\epsilon ], \qquad j \in \{1,2,3\}, \end{aligned}$$
    (4.18)

    is relatively bounded by \(|\hat{{\mathsf {k} }}|\). This is clear for the first term in (4.18) as \(\eta _\epsilon [{\hat{{\mathsf {q} }}}_j, A_{\mathsf {D} }] \eta _\epsilon = \frac{\mathrm {i}}{2} \eta _\epsilon ^2 \hat{{\mathsf {k} }}_j\), and for the two remaining ones it follows analogously to the proof of the first GJN condition, since second derivatives of \(\eta _\epsilon \) are bounded.

  3. (3)

    We have, with regard to the first GJN condition,

    $$\begin{aligned} \left\| H_{\mathrm p}\psi \right\| = \left\| V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\psi + H_{\mathrm p}P_{{\text {disc}}}\psi \right\| \le C \left\| V_{{\mathsf {c} }}^* (\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2) V_{{\mathsf {c} }}\psi \right\| + \sup _{\lambda \in \sigma _{\mathrm d}(H_{\mathrm p})} \left| \lambda \right| \left\| \psi \right\| , \end{aligned}$$
    (4.19)

    as \(\hat{{\mathsf {k} }}^2\) is relatively bounded by \({\hat{{\mathsf {q} }}}^2 + \hat{{\mathsf {k} }}^2\). Furthermore,

    $$\begin{aligned}&\left\langle H_{\mathrm p}\psi , \varLambda _{\mathrm p}\psi \right\rangle - \left\langle \varLambda _{\mathrm p}\psi , H_{\mathrm p}\psi \right\rangle \\&\quad = \left\langle V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\psi , V_{{\mathsf {c} }}^* ({\hat{{\mathsf {q} }}}^2 + \hat{{\mathsf {k} }}^2) V_{{\mathsf {c} }}\psi \right\rangle - \left\langle V_{{\mathsf {c} }}^* ({\hat{{\mathsf {q} }}}^2 + \hat{{\mathsf {k} }}^2) V_{{\mathsf {c} }}\psi , V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\psi \right\rangle \\&\quad = \left\langle \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\psi , {\hat{{\mathsf {q} }}}^2 V_{{\mathsf {c} }}\psi \right\rangle - \left\langle {\hat{{\mathsf {q} }}}^2 V_{{\mathsf {c} }}\psi , \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\psi \right\rangle . \end{aligned}$$

    Using now that \(\pm \mathrm {i}[{\hat{{\mathsf {q} }}}^2,\hat{{\mathsf {k} }}^2] \le C(\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2)\) for some constant C, we get also the second GJN condition.

  4. (4)

    As \(H_{\mathrm p}\) is relatively bounded by \(\varLambda _{\mathrm p}\) in view of (4.19), \(L_0 = (H_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}- {\text {Id}}_{\mathrm p}\otimes H_{\mathrm p}) \otimes {\text {Id}}_{{\text {f}}}\) is relatively bounded by \(\varLambda \). Next, by (I2), (I3), and Lemma A.3 we know that the interaction terms \(I^{(\mathsf {l} )}, I^{(\mathsf {r} )} \in L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\). Hence W is bounded by \(\widehat{N_{{\text {f}}}}^{1/2}\) and thus bounded by \({\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes \varLambda _{{\text {f}}}^{1/2}\). Therefore, the first GJN condition is satisfied.

    Next, as \((H_{\mathrm p}, \varLambda _{\mathrm p}, \mathcal {D}_{\mathrm p})\) is GJN, we get a constant C, such that for all \(\psi \in {\mathcal {D}}\),

    $$\begin{aligned} \pm \mathrm {i}( \left\langle L_0 \psi , \varLambda \psi \right\rangle - \left\langle \varLambda \psi , L_0 \psi \right\rangle ) \le C\left\langle \psi , (\varLambda _{\mathrm p}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes \varLambda _{\mathrm p}) \otimes {\text {Id}}_{{\text {f}}}\psi \right\rangle , \end{aligned}$$

    which yields the second GJN condition for \(L_0\).

    Again by Lemma A.3 and (I1)–(I3) we know that \((u,\varSigma ) \mapsto (u^2 + 1) I^{(\mathsf {l} )}(u,\varSigma )\) and \((u,\varSigma ) \mapsto (u^2 + 1) I^{(\mathsf {r} )}(u,\varSigma )\) are in \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}( \mathcal {H}_{\mathrm p}))\). We have

    $$\begin{aligned}&\left| \left\langle \varPhi ( I^{(\mathsf {l} )}) \psi , {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes \varLambda _{{\text {f}}}\psi \right\rangle - \left\langle {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes \varLambda _{{\text {f}}}\psi , \varPhi ( I^{(\mathsf {l} )}) \psi \right\rangle \right| \\&\qquad = \left| \left\langle \psi ,( a\left( ( {u} ^2 + 1) I^{(\mathsf {l} )}) - a^*(( {u} ^2 + 1) I^{(\mathsf {l} )}) \right) \psi \right\rangle \right| \\&\qquad \le C \left\| {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}^{1/2} \psi \right\| \left\| \psi \right\| \\&\qquad \le C' \left\langle \psi , \varLambda \psi \right\rangle \end{aligned}$$

    for some constants \(C,C'\). The same thing can be shown for the commutator with \(\varPhi (I^{(\mathsf {r} )})\).

    It remains to consider the commutator of W with the \(\varLambda _{\mathrm p}\) terms. One has to show that the commutators

    $$\begin{aligned}{}[\varPhi (I^{(\mathsf {l} )} ), V_{{\mathsf {c} }}^* ({\hat{{\mathsf {q} }}}^2 + \hat{{\mathsf {k} }}^2) V_{{\mathsf {c} }}], \qquad [\varPhi (I^{(\mathsf {r} )} ), V_{{\mathsf {c} }}^* ({\hat{{\mathsf {q} }}}^2 + \hat{{\mathsf {k} }}^2) V_{{\mathsf {c} }}] \end{aligned}$$

    are form-bounded by \(\varLambda _{\mathrm p}\), that is, in the sense of (3.2). This follows from Proposition 4.2 since we can write in the weak sense on \({\mathcal {D}}\),

    $$\begin{aligned} {[\varPhi (I^{(\mathsf {l} )} ), V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}^2 V_{{\mathsf {c} }}]} = \sum _{j} [\varPhi (I^{(\mathsf {l} )} ), V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}] V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}+ V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}[\varPhi (I^{(\mathsf {l} )} ), V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}] \end{aligned}$$

    and analogously for terms involving \(\varPhi (I^{(\mathsf {r} )})\) as well as \(V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\).

  5. (5)

    Clearly, \(\left\| N_{{\text {f}}}\psi \right\| = \left\| \mathsf {d} \varGamma (1) \psi \right\| \le \left\| \mathsf {d} \varGamma (u^2 + 1) \psi \right\| \) for any \(\psi \in \mathfrak {F}_{{\text {fin}}}(C_{\mathrm c}^\infty ({\mathbb {R}}^3))\), which shows that \(\widehat{N_{{\text {f}}}}\) is relatively bounded by \(\varLambda \) on \({\mathcal {D}}\). Furthermore, \([\widehat{N_{{\text {f}}}}, \varLambda ] = 0\) on \({\mathcal {D}}\).

  6. (6)

    We have

    $$\begin{aligned} D = \mathrm {i}\lambda ( a(I) - a^*(I) ). \end{aligned}$$

    Thus, the proof works as the one for \(L_\lambda \).

  7. (7)

    We first consider the atomic part. One can show by induction that for all \(n \in {\mathbb {N}}\) there exists \(f \in \mathcal {S}({\mathbb {R}}^3)\) such that

    $$\begin{aligned} {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p}) = V_{{\mathsf {c} }}^* f(\hat{{\mathsf {k} }}) V_{{\mathsf {c} }}. \end{aligned}$$
    (4.20)

    Clearly, for \(n=1\),

    $$\begin{aligned} {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p}) = \mathrm {i}V_{{\mathsf {c} }}^* [\hat{{\mathsf {k} }}^2, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ] V_{{\mathsf {c} }}= \mathrm {i}V_{{\mathsf {c} }}^* \eta _\epsilon [\hat{{\mathsf {k} }}^2, A_{\mathsf {D} }] \eta _\epsilon V_{{\mathsf {c} }}= V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon V_{{\mathsf {c} }}, \end{aligned}$$
    (4.21)

    which has the form (4.20). Next, as \([f(\hat{{\mathsf {k} }}), {\hat{{\mathsf {q} }}}_j] = -\mathrm {i}\partial _j f(\hat{{\mathsf {k} }})\),

    $$\begin{aligned} {[ V_{{\mathsf {c} }}^* f(\hat{{\mathsf {k} }}) V_{{\mathsf {c} }}, {A_{\mathrm p}^{(\epsilon )}}]}&= V_{{\mathsf {c} }}^*\eta _\epsilon [ f(\hat{{\mathsf {k} }}) , A_{\mathsf {D} }] \eta _\epsilon V_{{\mathsf {c} }}= \frac{1}{2} \sum _{j} V_{{\mathsf {c} }}^*\eta _\epsilon [ f(\hat{{\mathsf {k} }}), {\hat{{\mathsf {q} }}}_j ] \hat{{\mathsf {k} }}_j \eta _\epsilon V_{{\mathsf {c} }}\end{aligned}$$

    yields again the form (4.20).

    In particular, (4.20) implies that \({\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p})\), \(n \in {\mathbb {N}}\), are bounded and

    $$\begin{aligned} \pm \mathrm {i}[{\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p}), \varLambda _{\mathrm p}] \le C \varLambda _{\mathrm p}\end{aligned}$$

    on \(\mathcal {D}_{\mathrm p}\) for some constant C, since

    $$\begin{aligned} \pm \mathrm {i}[ f(\hat{{\mathsf {k} }}),\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2] = \pm \mathrm {i}[ f(\hat{{\mathsf {k} }}),{\hat{{\mathsf {q} }}}^2] \le C(\hat{{\mathsf {k} }}^2 + {\hat{{\mathsf {q} }}}^2), \end{aligned}$$

    because \([ f(\hat{{\mathsf {k} }}),{\hat{{\mathsf {q} }}}_j]\), \(j \in \{1,2,3\}\), is bounded.

    This together with (5) implies that \(({\text {ad}}^{(n)}_{A^{(\epsilon )}}(L_0), \varLambda , {\mathcal {D}})\), \(n\in \{1,2,3\}\), are GJN triples.

    It remains to verify the GJN conditions for \((W_n^{(\epsilon )}, \varLambda , {\mathcal {D}})\), \(n\in \{1,2,3\}\). Analogously to the proof of (4) for the GJN condition of \(L_\lambda \), we have to show that the expressions in the field operators and their commutators with \(V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}\) and \(V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}\), \(j \in \{1,2,3\}\), are square integrable. That is, we have to show that

    $$\begin{aligned} (u,\varSigma )&\mapsto \partial _u^m \tau _\beta ({\text {ad}}^{(n-m)}_{{A_{\mathrm p}^{(\epsilon )}}} (G))(u,\varSigma ), \\ (u,\varSigma )&\mapsto \partial _u^m [\tau _\beta ({\text {ad}}^{(n-m)}_{{A_{\mathrm p}^{(\epsilon )}}} (G))(u,\varSigma ), X], \end{aligned}$$

    \(X \in \{ V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}, V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}:j \in \{1,2,3\}\}\), \(n \in \{ 1,2,3 \}\), \(m \in \{ 0, \ldots ,n \}\), are in \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\). This follows from Proposition 4.2.

  8. (8)

    Analogously to the proof of (7) it suffices to show that

    $$\begin{aligned} (u,\varSigma )&\mapsto \partial _u^n \tau _\beta (G)(u,\varSigma ), \\ (u,\varSigma )&\mapsto \partial _u^n [\tau _\beta (G)(u,\varSigma ), X], \end{aligned}$$

    \(X \in \{ V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}, V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}:j \in \{1,2,3\}\}\), \(n \in \{ 1,2,3 \}\), are in \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\). This follows again from Proposition 4.2.

  9. (9)

    We first consider again the free part. We have on \(\mathcal {D}_{\mathrm p}\),

    $$\begin{aligned} \mathrm {i}[H_{\mathrm p}, {A_{\mathrm p}}] = \mathrm {i}[V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}, {A_{\mathrm p}}] = \mathrm {i}V_{{\mathsf {c} }}^* [\hat{{\mathsf {k} }}^2, A_{\mathsf {D} }] V_{{\mathsf {c} }}= V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}. \end{aligned}$$

    Then we can show as in the proof of (3) that also \(({\text {ad}}_{{A_{\mathrm p}}}(H_{\mathrm p}), \varLambda _{\mathrm p}, \mathcal {D}_{\mathrm p})\), is GJN and so is \(({\text {ad}}_{A}(L_0), \varLambda , {\mathcal {D}})\).

    It remains to verify the GJN conditions for \((W_1, \varLambda , {\mathcal {D}})\). As in (7) and (8) one has to show that

    $$\begin{aligned} (u,\varSigma )&\mapsto \partial _u^n \tau _\beta ({\text {ad}}^{(1-n)}_{{A_{\mathrm p}}} (G))(u,\varSigma ), \\ (u,\varSigma )&\mapsto \partial _u^n [\tau _\beta ({\text {ad}}^{(1-n)}_{{A_{\mathrm p}}} (G))(u,\varSigma ), X], \end{aligned}$$

    \(X \in \{ V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}, V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}:j \in \{1,2,3\}\}\), \(n \in \{0,1\}\), are in \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\), which follows again from Proposition 4.2. \(\square \)

The statements of Proposition 4.3 allow the application of the virial theorem Theorem 3.4 for the regularized conjugate operator \({A^{(\epsilon )}}\), \(\epsilon >0\). In order to remove the regularization and transfer the result to \(C_1\), one has to consider the limit \(\epsilon \rightarrow 0\) for the corresponding quadratic forms \(q_{C^{(\epsilon )}_1}\) of \(C_1^{(\epsilon )}\). This is the content of the following lemma.

Lemma 4.4

Assume that \(\psi \in \mathcal {D}(\widehat{N_{{\text {f}}}}^{1/2})\) and \(q_{C^{(\epsilon )}_1}(\psi ) \le 0\) for all \( \epsilon \in (0,1)\). Then \(\psi \in \mathcal {D}(q_{C_1})\), and

$$\begin{aligned} q_{C_1}(\psi ) = \lim _{\epsilon \rightarrow 0} q_{C^{(\epsilon )}_1}(\psi ). \end{aligned}$$
(4.22)

Proof

Let us first recall that by definition

$$\begin{aligned} C_1^{(\epsilon )}&= {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}}+ ({\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p}) \otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}(H_{\mathrm p}))\otimes {\text {Id}}_{{\text {f}}}+ \lambda W_1^{(\epsilon )} . \end{aligned}$$
(4.23)

Step 1: We show that

$$\begin{aligned} \psi \in \mathcal {D}\left( ( V_{{\mathsf {c} }}^* | \hat{{\mathsf {k} }}| V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* | \hat{{\mathsf {k} }}| V_{{\mathsf {c} }}) \otimes {\text {Id}}_{{\text {f}}}+ \widehat{N_{{\text {f}}}}^{1/2} \right) \subseteq \mathcal {D}(q_{C_1}). \end{aligned}$$
(4.24)

Step 1 will follow once we have established that

$$\begin{aligned} \left\langle \psi , ( V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2\eta _\epsilon V_{{\mathsf {c} }}) \otimes {\text {Id}}_{{\text {f}}}\psi \right\rangle \le C \end{aligned}$$
(4.25)

for a constant C independent of \(\epsilon \in (0,1)\). To this end, we use that by standard estimates for creation and annihilation operators, we obtain for any \(\delta > 0\),

$$\begin{aligned} \pm W_1^{(\epsilon )} \le \frac{1}{\delta } \widehat{N_{{\text {f}}}}+ \delta w_{1}^{(\epsilon )} \otimes {\text {Id}}_{{\text {f}}}\end{aligned}$$
(4.26)

in the form sense on \(\mathcal {D}(\widehat{N_{{\text {f}}}}^{1/2})\), where we introduced the following bounded operators on \(\mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}\),

$$\begin{aligned} w_{1}^{(\epsilon )}&:= \int I_1^{(\epsilon )} (u,\varSigma )^* I_1^{(\epsilon )}(u,\varSigma ) {\mathrm d}(u, \varSigma ), \qquad \epsilon \ge 0 . \end{aligned}$$

To estimate this expression we use that

$$\begin{aligned} \int {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}( \tau _\beta ( G) (u,\varSigma ))^* {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}( \tau _\beta ( G) (u,\varSigma )) {\mathrm d}(u,\varSigma ) \le C ( V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon V_{{\mathsf {c} }}+ {\text {Id}}) , \end{aligned}$$

in the form sense for some constant C independent of \(\epsilon \), where we multiplied out the commutators, used (4.17) and the fact that the functions \((u,\varSigma ) \mapsto \partial _u \tau _\beta ( G) (u,\varSigma )\) and \((u,\varSigma ) \mapsto V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}\tau _\beta ( G)\) belong to \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\) due to Proposition 4.2. This yields

$$\begin{aligned} w_{1}^{(\epsilon )} \le C(V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2\eta _\epsilon V_{{\mathsf {c} }}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}). \end{aligned}$$
(4.27)

Now using (4.21) and (4.26) to estimate (4.23) we obtain in the form sense on \(\mathcal {D}(\widehat{N_{{\text {f}}}}^{1/2})\) for any \(\epsilon > 0\)

$$\begin{aligned} C_1^{(\epsilon )}&\ge (1 - C \left| \lambda \right| \delta ) (V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}^2 \eta _\epsilon V_{{\mathsf {c} }}+ {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}) \otimes {\text {Id}}_{{\text {f}}}\\&\qquad + \left( 1 - \frac{\left| \lambda \right| }{\delta } \right) \widehat{N_{{\text {f}}}}. \end{aligned}$$

Choosing \(\delta > 0\) sufficiently small such that \(C \left| \lambda \right| \delta < 1\) and using that by assumption \(q_{C^{(\epsilon )}_1}(\psi ) \le 0\) and \(\psi \in \mathcal {D}(\widehat{N_{{\text {f}}}}^{1/2})\), we arrive at (4.25).

Step 2: We have (4.22).

First observe, that by dominated convergence, we have for all \(\varphi \in \mathcal {D}( V_{{\mathsf {c} }}^*| \hat{{\mathsf {k} }}| V_{{\mathsf {c} }})\),

$$\begin{aligned} \left\| | \hat{{\mathsf {k} }}| \eta _\epsilon V_{{\mathsf {c} }}\varphi \right\| \longrightarrow \left\| | \hat{{\mathsf {k} }}| V_{{\mathsf {c} }}\varphi \right\| , ~\epsilon \rightarrow 0 . \end{aligned}$$

Thus Step 2 will follow once we have shown that

$$\begin{aligned} \left\langle \psi , W_1^{(\epsilon )} \psi \right\rangle \rightarrow \left\langle \psi , W_1 \psi \right\rangle . \end{aligned}$$
(4.28)

Thus we have to show convergence of a field operator whose argument is a commutator. To this end, we note that from Proposition 4.2 we know that

$$\begin{aligned} (u,\varSigma ) \mapsto V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}H(u,\varSigma ), \end{aligned}$$

for \(H \in \{ \tau _\beta ( G), \partial _u \tau _\beta ( G) \}\), are in \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\). Now,

$$\begin{aligned} {A_{\mathrm p}^{(\epsilon )}}H(u,\varSigma )&= V_{{\mathsf {c} }}^* \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon V_{{\mathsf {c} }}H(u,\varSigma ) \nonumber \\&= V_{{\mathsf {c} }}^* \eta _\epsilon \frac{1}{4}(3 \mathrm {i}+ 2 \hat{{\mathsf {k} }}{\hat{{\mathsf {q} }}}) \eta _\epsilon V_{{\mathsf {c} }}H(u,\varSigma ) \nonumber \\&= V_{{\mathsf {c} }}^* \eta _\epsilon \frac{1}{4}\left( 3 \mathrm {i}\eta _\epsilon + 2 \sum _{j=1}^3 \hat{{\mathsf {k} }}_j ( \mathrm {i}( \partial _{k_j} \eta _\epsilon ) + \eta _\epsilon {\hat{{\mathsf {q} }}}_j) \right) V_{{\mathsf {c} }}H(u,\varSigma ). \end{aligned}$$
(4.29)

Observe that \(\eta _\epsilon \) and \( \partial _{k_j} \eta _\epsilon \) are bounded uniformly in \(\epsilon \in (0,1)\). From (4.29) we see for \(\varphi \in \mathcal {D}( {\text {Id}}_{\mathrm p}\otimes N_{{\text {f}}})\) that

$$\begin{aligned}&\left\langle \varphi , a^*( {A_{\mathrm p}^{(\epsilon )}}H ) \varphi \right\rangle \\&\quad = \frac{3}{4} \left\langle V_{{\mathsf {c} }}^* \eta _\epsilon ^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*( \mathrm {i}H ) \varphi \right\rangle + \frac{1}{2} \sum _{j=1}^3 \left\langle V_{{\mathsf {c} }}^* ( \partial _{k_j} \eta _\epsilon ) \hat{{\mathsf {k} }}_j \eta _\epsilon V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*( \mathrm {i}H) \varphi \right\rangle \\&\qquad + \frac{1}{2} \sum _{j=1}^3 \left\langle V_{{\mathsf {c} }}^* \eta _\epsilon \hat{{\mathsf {k} }}_j \eta _\epsilon V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*(V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}H) \varphi \right\rangle \\&\qquad \rightarrow \frac{3}{4} \left\langle V_{{\mathsf {c} }}^* V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*( \mathrm {i}H ) \varphi \right\rangle + \frac{1}{2} \sum _{j=1}^3 \left\langle V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*(V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}H) \varphi \right\rangle , \end{aligned}$$

where for the limit we used that \( \partial _{k_j} \eta _\epsilon \) tends to zero as \(\epsilon \downarrow 0\), (4.24), and dominated convergence. Similarly, we find

$$\begin{aligned}&\left\langle \varphi , a^*( H {A_{\mathrm p}^{(\epsilon )}}) \varphi \right\rangle \\&\quad \rightarrow \frac{3}{4} \left\langle a( \mathrm {i}H ) \varphi , V_{{\mathsf {c} }}^* V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi \right\rangle + \frac{1}{2} \sum _{j=1}^3 \left\langle a( H V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}) \varphi , V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi \right\rangle , \end{aligned}$$

Thus we find

$$\begin{aligned}&\mathrm {i}\left\langle \varphi , a^*( {\text {ad}}_{{A_{\mathrm p}^{(\epsilon )}}}(H) ) \varphi \right\rangle \\&\quad \rightarrow \frac{3}{4} \left\langle V_{{\mathsf {c} }}^* V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*( \mathrm {i}H ) \varphi \right\rangle + \frac{1}{2} \sum _{j=1}^3 \left\langle V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi , a^*(V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}H) \varphi \right\rangle \\&\qquad - \frac{3}{4} \left\langle a( \mathrm {i}H ) \varphi , V_{{\mathsf {c} }}^* V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi \right\rangle - \frac{1}{2} \sum _{j=1}^3 \left\langle a ( H V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}) \varphi , V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\varphi \right\rangle \\&\quad = \mathrm {i}\left\langle \varphi , a^*( {\text {ad}}_{{A_{\mathrm p}}}(H(u,\varSigma )) ) \varphi \right\rangle , \end{aligned}$$

where the last equality follows by verifying the identity on the dense space \(\mathcal {D}_{\mathrm p}\), defined in (4.6), using a straightforward calculation, and then extending it to \(\varphi \) using Proposition 4.2. This shows (4.28) in view of the definition of \(W_1^{(\epsilon )}\) and \(W_1\), see (4.14). \(\square \)

Now we can prove the main result of this section, the concrete virial theorem for our setting.

Theorem 4.5

(Concrete virial theorem) Assume there exists \(\psi \in \mathcal {D}(L_\lambda )\) with \(L_\lambda \psi = 0\). Then \(\psi \in \mathcal {D}(q_{C_1})\) and \(q_{C_1}(\psi ) \le 0\).

Proof

Step 1: We have \(\psi \in \mathcal {D}(\widehat{N_{{\text {f}}}}^{1/2})\).

To show Step 1, we apply the virial theorem for \(\varLambda \) as in (4.10), \(\mathcal {D}\) as in (4.9), as well as the operators \(L=L_\lambda \), \(A = {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes A_{{\text {f}}}\), \(N=\widehat{N_{{\text {f}}}}+1\), D as in (4.11), \(C_n = C_{{{\text {f}}},n}\), \(n \in \{0,1,2,3\}\) as in (4.13), which are symmetric on \(\mathcal {D}\) and satisfy (3.3)–(3.5) as a consequence of the definition. Next we verify the additional assumptions of the virial theorem. We have shown in Proposition 4.3 that (V1) is satisfied. Furthermore, on \(\mathfrak {F}_{{\text {fin}}}(C_{\mathrm c}^\infty ({\mathbb {R}}^3))\),

$$\begin{aligned} \varLambda _{{\text {f}}}e^{\mathrm {i}t A_{{\text {f}}}} = e^{\mathrm {i}t A_{{\text {f}}}} (\varLambda _{{\text {f}}}+ \mathsf {d} \varGamma (2{u}t + t^2)), \qquad t \in {\mathbb {R}}. \end{aligned}$$

Thus, for each fixed t there is a constant C such that \(\left\| \varLambda _{{\text {f}}}e^{\mathrm {i}t A_{{\text {f}}}} \psi \right\| \le C\left\| \varLambda _{{\text {f}}}\psi \right\| \) for all \(\psi \in \mathfrak {F}_{{\text {fin}}}(C_{\mathrm c}^\infty ({\mathbb {R}}^3))\). As \(\mathfrak {F}_{{\text {fin}}}(C_{\mathrm c}^\infty ({\mathbb {R}}^3))\) is a core for \(\varLambda _{{\text {f}}}\), we find \(e^{\mathrm {i}t A_{{\text {f}}}} \mathcal {D}(\varLambda _{{\text {f}}}) \subseteq \mathcal {D}(\varLambda _{{\text {f}}})\). Hence, we conclude that \(\mathcal {D}(\varLambda )\) is invariant under the unitary group associated to A, so that (V2) holds. Furthermore, (3.6)–(3.8) are satisfied since D, \(C_{{{\text {f}}},1}\) and \(C_{{{\text {f}}},3}\) are relatively bounded by \(N_{{\text {f}}}^{1/2}\) (by Proposition 4.3). Thus we can apply the virial theorem, and obtain a sequence \((\psi _n)\) in \(\mathcal {D}(L_\lambda ) \cap \mathcal {D}(C_{{{\text {f}}},1})\) such that \(\psi _n \rightarrow \psi \) and

$$\begin{aligned} \lim _{n \rightarrow \infty } \left\langle \psi _n, C_{{{\text {f}}},1} \psi _n \right\rangle = 0. \end{aligned}$$

Now, it follows from lower semi-continuity of closed quadratic forms that \(\psi \) is in the form domain of \(C_{{{\text {f}}},1}\). As \(W_1^{({{\text {f}}})}\) is relatively bounded by \(\widehat{N_{{\text {f}}}}^{1/2}\) (as shown in the proof of Proposition 4.3 (8)), we conclude that \(\psi \in \mathcal {D}(\widehat{N_{{\text {f}}}}^{1/2})\).

Step 2: \(\psi \in \mathcal {D}(q_{C^{(\epsilon )}_1})\) and \(q_{C^{(\epsilon )}_1}(\psi ) \le 0 \).

To show Step 2, we apply the virial theorem once more with \(\varLambda \), D, L, and N as in Step 1. But now with \(A = {A^{(\epsilon )}}\), \(\epsilon > 0\) and \(C_n = C_n^{(\epsilon )}\), \(n \in \{0,1,2,3\}\) as in (4.12). Again, the operators are symmetric on \(\mathcal {D}\) and satisfy (3.3)–(3.5) as a consequence of the definition. Let us now verify the additional assumptions of the virial theorem. Again, by Proposition 4.3, (V1) and (3.6)–(3.8) are satisfied. Furthermore, by Proposition 4.3 we know that \(({A_{\mathrm p}^{(\epsilon )}}, \varLambda _{\mathrm p}, \mathcal {D}_{\mathrm p})\) is a GJN triple. Hence Theorem 3.2 shows that \(\exp ( \mathrm {i}t {A_{\mathrm p}^{(\epsilon )}})\), \(t \in {\mathbb {R}}\), leaves \(\mathcal {D}(\varLambda _{\mathrm p})\) invariant. This and the invariance for the unitary group generated by \(A_{{\text {f}}}\), as already shown in Step 1, imply (V2). Thus we can apply the virial theorem and obtain a sequence \((\psi _n)_{n\in {\mathbb {N}}}\) in \(\mathcal {D}(C_1^{(\epsilon )}) \cap \mathcal {D}(L_\lambda )\) such that \(\psi _n \rightarrow \psi \), \(n \rightarrow \infty \), and \(\lim _{n\rightarrow \infty } \left\langle \psi _n, C_1^{(\epsilon )} \psi _n \right\rangle = 0\). Now, since \(q_{C^{(\epsilon )}_1}\) is closed and thus lower semi-continuous, we obtain \(\psi \in \mathcal {D}(q_{C^{(\epsilon )}_1})\) and

$$\begin{aligned} q_{C^{(\epsilon )}_1}(\psi ) \le \lim _{n \rightarrow \infty } q_{C^{(\epsilon )}_1}(\psi _n) = 0. \end{aligned}$$

Step 3: The theorem follows from Step 1, Step 2, and Lemma 4.4. \(\square \)

5 Estimates on the Scattering Functions

The aim of this section is to prove that the commutators of the interaction with the dilation operator in scattering space are sufficiently bounded. To achieve this, we use the Born series expansion of the scattering functions, that is, we expand them using the recursion formula of the Lippmann–Schwinger equation (4.1). This yields the Born series terms together with a remainder term, since we perform only finitely many recursion steps. The idea is that, after sufficiently many iteration steps, the remainder term decays fast enough as the momentum \(\left| k \right| \rightarrow \infty \).

5.1 Born Series Expansion and Technical Preparations

First we show that the scattering functions as well as their derivatives with respect to the wave vector k are bounded. For that we use the method of modified square integrable scattering functions as described in Sect. 4.1. Recall that \(\varphi (k,\cdot )\), \(k \in {\mathbb {R}}^3\), denote the continuous scattering functions on \({\mathbb {R}}^3\) and that V is a potential satisfying the assumptions (H1), (H2). As V is compactly supported we may assume that \({\text {supp}}V\) is contained in a ball around the origin of radius R.

We now set \(\widetilde{\varphi }(k,x) := \left| V(x) \right| ^{1/2} \varphi (k,x)\). Then \(\widetilde{\varphi }(k,\cdot ) \in L^2({\mathbb {R}}^3)\) for all k, and the function satisfies the modified Lippmann–Schwinger equation

$$\begin{aligned} \widetilde{\varphi }(k,x) = \left| V(x) \right| ^{1/2} e^{\mathrm {i}k x} + (L_{\left| k \right| } \widetilde{\varphi }(k,\cdot ))(x), \end{aligned}$$

where \(L_\kappa \) is defined in (4.3). We recall from Sect. 4.1 that we can recover the original scattering function from the modified one by

$$\begin{aligned} \varphi (k,x) = e^{\mathrm {i}k x} - \frac{1}{4\pi } \int \frac{e^{\mathrm {i}\left| k \right| \left| x-y \right| }}{\left| x-y \right| } V(y)^{1/2} \widetilde{\varphi }(k,y) {\mathrm d}y . \end{aligned}$$
(5.1)

Now we extend the boundedness results for the first derivative of the scattering functions from [25, Lemma 1.1.3] to derivatives of arbitrary order.

Proposition 5.1

Assume (H1) and (H2). Let \({\hat{D}}_k= \frac{k}{|k|} \nabla _k\) and let \(\partial _{k_j}\) be the derivative with respect to the j-th component of k. For all \(n \in {\mathbb {N}}_0\), \(m \in \{0,1\}\) there is a polynomial P such that for all x and \(k \ne 0\),

$$\begin{aligned} \left| \partial ^m_{k_j} {\hat{D}}_k^n \varphi (k,x) \right| \le P(\left| x \right| ) . \end{aligned}$$

Proof

We get by the modified Lippmann–Schwinger equation

$$\begin{aligned} \widetilde{\varphi }(k,\cdot ) = ({\text {Id}}- L_{\left| k \right| })^{-1} ( \left| V \right| ^{1/2} e_k ), \end{aligned}$$

where \(e_k(x) := e^{\mathrm {i}k x}\). First we claim that \(({\text {Id}}- L_{\left| k \right| })^{-1}\) is uniformly bounded in \(k \in {\mathbb {R}}^3\). For this we note that \(\kappa \mapsto L_{\kappa }\) is continuous on \([0,\infty )\), which is easy to see, cf. [30, Theorem 1.22]. Moreover, \(\Vert L_\kappa \Vert \rightarrow 0\) as \(\kappa \rightarrow \infty \) follows from a result of Zemach and Klein, see [30, Theorem 1.23]. Since \(L_\kappa \) is Hilbert-Schmidt and hence compact, it follows from the Fredholm alternative that \({\text {Id}}- L_\kappa \) is invertible, provided \(\psi = L_{\kappa } \psi \) has no non-trivial solutions in \(L^2\). But as in the proof of Theorem 4.1, such non-trivial solutions are ruled out for all \(\kappa \ge 0\). Since inversion is continuous on the set of boundedly invertible operators, \(\kappa \mapsto ({\text {Id}}- L_\kappa )^{-1}\) is continuous and hence bounded on compact subsets of \([0,\infty )\); together with a Neumann series argument for large \(\kappa \), the claimed uniform bound follows.
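
To make the differentiation step in the next paragraph explicit, we note that (as a sketch, using that \(\kappa \mapsto L_\kappa \) is differentiable in the Hilbert-Schmidt norm under (H1), the derivative replacing the kernel factor \(\left| x-y \right| ^{-1}\) in (4.3) by \(\mathrm {i}\))

$$\begin{aligned} {\hat{D}}_k({\text {Id}}- L_{\left| k \right| })^{-1} = ({\text {Id}}- L_{\left| k \right| })^{-1} \big ( {\hat{D}}_kL_{\left| k \right| } \big ) ({\text {Id}}- L_{\left| k \right| })^{-1}, \quad ({\hat{D}}_kL_{\left| k \right| } \psi )(x) = - \frac{\mathrm {i}}{4\pi } \int \left| V(x) \right| ^{1/2} e^{\mathrm {i}\left| k \right| \left| x-y \right| } V(y)^{1/2} \psi (y) {\mathrm d}y , \end{aligned}$$

and the kernel on the right-hand side is again Hilbert-Schmidt since V is bounded and compactly supported.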

Observe that \({\hat{D}}_k|k| = 1\) and \({\hat{D}}_k(k/|k|) = 0\). Thus, for any \(n \in {\mathbb {N}}_0\), \({\hat{D}}_k^n ({\text {Id}}- L_{\left| k \right| })^{-1} \) is again uniformly bounded in \(k \not = 0\), since differentiation with \({\hat{D}}_k\) yields just higher powers of \(({\text {Id}}- L_{\left| k \right| })^{-1}\) and radial derivatives of \(L_{\left| k \right| }\), which are again bounded operators since V decays fast enough. Similarly one sees that \(\partial _{k_j} {\hat{D}}_k^n ({\text {Id}}- L_{\left| k \right| })^{-1} \) is uniformly bounded in \(k \not = 0\). Note that the expression is not differentiable at the origin. Furthermore, for any \(n \in {\mathbb {N}}\),

$$\begin{aligned} \sup _{k \not = 0} \left\| {\hat{D}}_k^n (\left| V \right| ^{1/2} e_k ) \right\| _2 < \infty \end{aligned}$$

as V is compactly supported. Thus, we have shown, for all \(n \in {\mathbb {N}}_0\),

$$\begin{aligned} \sup _{k \not = 0} \left\| {\hat{D}}_k^n \widetilde{\varphi }(k,\cdot ) \right\| _2 < \infty . \end{aligned}$$

Now we can differentiate (5.1), estimate the integral with Cauchy–Schwarz, and use that

$$\begin{aligned} \int \frac{\left| V(y) \right| }{\left| x-y \right| ^2} {\mathrm d}y \le \Vert V \Vert _\infty \int _{B_R(x)} \frac{{\mathrm d}y}{y^2} \le \Vert V \Vert _\infty \left( \int _{B_{3R}(0)} \frac{{\mathrm d}y}{y^2} + \frac{\left| B_R(0) \right| }{R^2} \right) < \infty \end{aligned}$$
(5.2)

is bounded uniformly in x, where the second inequality can be seen by considering the cases \(|x| < 2R\) and \(|x| \ge 2R\). \(\square \)

Next, we perform the Born series expansion. Similarly to [12], it is convenient to introduce a symbol for the integral operator in the Lippmann–Schwinger equation. We consider a slightly larger class of operators in order to also cover derivatives with respect to k. Let \(C_{\mathsf {b} }({\mathbb {R}}^3)\) denote the bounded continuous functions on \({\mathbb {R}}^3\) and \( C_{\mathsf {poly} }({\mathbb {R}}^3)\) the polynomially bounded continuous functions, that is,

$$\begin{aligned} C_{\mathsf {poly} }({\mathbb {R}}^3) = \{ \psi \in C({\mathbb {R}}^3) :\, \exists n \in {\mathbb {N}}_0 , \, \exists C > 0 , \, \forall x \in {\mathbb {R}}^3 :\left| \psi (x) \right| \le C (1+\left| x \right| )^n \}. \end{aligned}$$

Definition 5.2

For \(\kappa \ge 0\), \(\psi \in C_{\mathsf {poly} }({\mathbb {R}}^3)\) and integers \(n \ge -1\), we define

$$\begin{aligned} T^{(n)}_{V,\kappa } \psi (x) := \int \frac{e^{\mathrm {i}\kappa \left| x-y \right| } }{\left| x-y \right| ^{1-n}} V(y) \psi (y) {\mathrm d}y = \int \frac{e^{\mathrm {i}\kappa \left| v \right| }}{\left| v \right| ^{1-n}} V(v+x) \psi (v+x) {\mathrm d}v \end{aligned}$$

(where the second equality follows by a simple change of variables). Furthermore, we set \(T_{V,\kappa } := T^{(0)}_{V,\kappa }\).
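
In this notation, the Lippmann–Schwinger equation (4.1) reads

$$\begin{aligned} \varphi (k,\cdot ) = e_k - \frac{1}{4\pi } T_{V,\left| k \right| } \varphi (k,\cdot ), \qquad e_k(x) := e^{\mathrm {i}k x}, \end{aligned}$$

which is exactly the identity iterated in Proposition 5.4 below.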

In the next proposition we collect a few elementary properties of these operators.

Proposition 5.3

For all \(\kappa \ge 0\),

  1. (a)

    \(T_{V,\kappa }\), \(T^{(-1)}_{V,\kappa }\) are bounded operators from \(C_{\mathsf {b} }({\mathbb {R}}^3)\) to \(C_{\mathsf {b} }({\mathbb {R}}^3)\) whose operator norms are bounded uniformly in \(\kappa \),

  2. (b)

    for all \(n \in {\mathbb {N}}_0\), \(T^{(n)}_{V,\kappa }\) maps \( C_{\mathsf {poly} }({\mathbb {R}}^3)\) to \( C_{\mathsf {poly} }({\mathbb {R}}^3)\).

Proof

  1. (a)

    For \(\psi \in C_{\mathsf {b} }({\mathbb {R}}^3)\), \(x\in {\mathbb {R}}^3\), \(\kappa \ge 0\), we have

    $$\begin{aligned} \left| T^{(-1)}_{V,\kappa }\psi (x) \right|&\le \left\| \psi \right\| _\infty \int \frac{ \left| V(y) \right| }{\left| x-y \right| ^2} {\mathrm d}y \le \left\| V \right\| _\infty \left\| \psi \right\| _\infty \int _{B_R(0)} \frac{1}{\left| x-y \right| ^2} {\mathrm d}y, \\ \left| T_{V,\kappa }\psi (x) \right|&\le \left\| \psi \right\| _\infty \int \frac{ \left| V(y) \right| }{\left| x-y \right| } {\mathrm d}y \le \left\| V \right\| _2 \left\| \psi \right\| _\infty \left( \int _{B_R(0)} \frac{1}{\left| x-y \right| ^2} {\mathrm d}y \right) ^{1/2} . \end{aligned}$$

    The integral is bounded independent of x, see (5.2).

  2. (b)

    Let \(\psi \in C_{\mathsf {poly} }({\mathbb {R}}^3)\) and choose \(m \in {\mathbb {N}}_0\) and \(C > 0\) such that \(\left| \psi (y) \right| \le C(1+ \left| y \right| ^m)\) for all y. Then for all \(x\in {\mathbb {R}}^3\), \(n \in {\mathbb {N}}_0\), \(\kappa \ge 0\),

    $$\begin{aligned} \left| T^{(n)}_{V,\kappa } \psi (x) \right|&\le C \int _{B_R(0)} \frac{ \left| V(y) \right| }{\left| x-y \right| ^{1-n}} (1+ \left| y \right| ^m) {\mathrm d}y \nonumber \\&\le C (1+ \left| R \right| ^m) \left\| V \right\| _2 \left( \int _{B_R(x)} \frac{1}{\left| y \right| ^{2-2n}} {\mathrm d}y \right) ^{1/2}. \end{aligned}$$
    (5.3)

    The last integral can be estimated, up to a constant from the angular integration, by

    $$\begin{aligned} \int _{r=0}^{R+\left| x \right| } r^{2n} {\mathrm d}r = \frac{(R+\left| x \right| )^{2n+1}}{2n+1}, \end{aligned}$$

    which is bounded by a polynomial in \(\left| x \right| \).\(\square \)

Using the notation introduced above and iterating the Lippmann–Schwinger equation (4.1) we arrive at the following proposition.

Proposition 5.4

For all \(N \in {\mathbb {N}}_0\), \(k,x \in {\mathbb {R}}^3\), we have

$$\begin{aligned} \varphi (k,x) = \sum _{n=0}^N \varphi _0^{(n)}(k,x) + \varphi _{\mathsf {R} }^{(N+1)}(k,x), \end{aligned}$$

where we defined for \(n \in {\mathbb {N}}_0\)

$$\begin{aligned} \varphi _0^{(n)}(k,x)&:= (-4\pi )^{-n} T_{V,\left| k \right| }^n e_k(x), \\ \varphi _{\mathsf {R} }^{(n)}(k,x)&:= (-4\pi )^{-n} T_{V,\left| k \right| }^n \varphi (k,\cdot ) (x), \end{aligned}$$

where \(e_k(x) := e^{\mathrm {i}k x}\).

As an immediate consequence of an iterated application of the first and second identity in Definition 5.2 we find the following lemma, which we will use.

Lemma 5.5

Let \(V_1,\ldots , V_p \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), \(n_1,\ldots ,n_p \in {\mathbb {N}}_0\), and \(\psi \in C_{\mathsf {poly} }({\mathbb {R}}^3)\). Then for all \(k, x_0, x \in \mathbb {R}^3\), we have

$$\begin{aligned} (T_{V_1,\left| k \right| }^{(n_1)}&\cdots T_{V_p,\left| k \right| }^{(n_p)} \psi )(x_0) \nonumber \\&= \int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}\left| k \right| \left| x_{l-1}-x_l \right| }}{\left| x_{l-1}-x_l \right| ^{1-n_l}} V_l(x_l) \right\} \psi ( x_p ) {\mathrm d}(x_1, \ldots , x_p) , \end{aligned}$$
(5.4)
$$\begin{aligned}&= \int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}\left| k \right| \left| u_l \right| }}{\left| u_l \right| ^{1-n_l}} V_l\left( x_0 + \sum _{s=1}^l u_s \right) \right\} \psi \left( x_0 + \sum _{s=1}^p u_s \right) {\mathrm d}(u_1, \ldots , u_p) , \end{aligned}$$
(5.5)

and the special case

$$\begin{aligned} (T_{V_1,\left| k \right| }^{(n_1)}&\cdots T_{V_p,\left| k \right| }^{(n_p)} e_k )(x) = e^{\mathrm {i}k x} \int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}( \left| k \right| \left| u_l \right| + k u_l)}}{\left| u_l \right| ^{1-n_l}} V_l\left( x + \sum _{s=1}^l u_s \right) \right\} {\mathrm d}(u_1, \ldots , u_p). \end{aligned}$$
(5.6)
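
Note that (5.6) follows from (5.5) (with \(x_0\) replaced by x) simply by inserting

$$\begin{aligned} e_k\left( x + \sum _{s=1}^p u_s \right) = e^{\mathrm {i}k x} \prod _{l=1}^p e^{\mathrm {i}k u_l} \end{aligned}$$

and combining each factor \(e^{\mathrm {i}k u_l}\) with the corresponding phase \(e^{\mathrm {i}\left| k \right| \left| u_l \right| }\).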

5.2 Estimates of the Terms of the Born Series

In this subsection we prove decay estimates for the inner products of an abstract coupling function \(\chi \in \mathcal {S}({\mathbb {R}}^3)\) with (derivatives with respect to k of) the functions \(\varphi _0^{(n)}(k,\cdot )\), \(k \in {\mathbb {R}}^3\), \(n \in {\mathbb {N}}_0\). These estimates will be collected in Proposition 5.9 and Proposition 5.10. In fact, one can show arbitrarily fast decay for any \(n \in {\mathbb {N}}\). The main tool will be a standard stationary phase argument as given in the next lemma. For this recall the notation \(\left\langle k \right\rangle = (1 + |k|^2)^{1/2}\).

Lemma 5.6

(Stationary phase) For any \(n \in {\mathbb {N}}\) there exists a constant C such that for all \(g \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\) and \(k \in {\mathbb {R}}^3\), we have

$$\begin{aligned} \left| \int e^{\mathrm {i}x k} g(x) {\mathrm d}x \right| \le \frac{C}{\left\langle k \right\rangle ^n} \sup _{\left| \alpha \right| \le n} \left\| D^\alpha g \right\| _1. \end{aligned}$$

Proof

For all \(k \in {\mathbb {R}}^3\), \(j=1, \ldots , n\),

$$\begin{aligned} \mathrm {i}k_j\int e^{\mathrm {i}xk} g(x) {\mathrm d}x&= \int \partial _j e^{\mathrm {i}xk} g(x) {\mathrm d}x \\&= \lim _{R \rightarrow \infty } \int _{S_R(0)} e^{\mathrm {i}xk} g(x) \nu _j(x) {\mathrm d}\sigma (x) - \int e^{\mathrm {i}xk} \partial _j g(x) {\mathrm d}x, \end{aligned}$$

where \(\nu \) denotes the outward unit normal of the sphere \(S_R(0)\) and \(\sigma \) its surface measure. The boundary term vanishes for large R since g has compact support. Iterating this procedure n times yields the claim. \(\square \)

We proceed by computing the derivatives as well as the effect of multiple applications of the dilation operator in the variable k acting on the terms of the Born series. The idea is that the application of \(k \nabla _k\) or \(\nabla _k\) on terms of the form

$$\begin{aligned} T_{V_1,\left| k \right| } \cdots T_{V_p,\left| k \right| } e_k, \qquad V_1, \ldots , V_p \in C_{\mathrm c}^\infty ({\mathbb {R}}^3), ~ p \in {\mathbb {N}}, \end{aligned}$$
(5.7)

yields again a linear combination of such terms multiplied by polynomials in x and k (see Lemma 5.7 and Lemma 5.8). Recall that the Born series terms can be written in the form (5.7). This procedure can be repeated multiple times, and the resulting expressions can then be estimated with the stationary phase argument.

Lemma 5.7

Assume \(V_1,\ldots , V_p \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\). Then we can write for all \(k \in {\mathbb {R}}^3\),

$$\begin{aligned} k \nabla _k \left( T_{V_1,\left| k \right| } \cdots T_{V_p,\left| k \right| } e_k \right) \end{aligned}$$

as a sum of

$$\begin{aligned} \mathrm {i}(k \hat{{\mathsf {x} }} ) T_{V_1,\left| k \right| } \cdots T_{V_p,\left| k \right| } e_k, \end{aligned}$$
(5.8)

where \(\hat{{\mathsf {x} }}\) denotes the multiplication in x, and terms of the form

$$\begin{aligned} \sum _{l=1}^p Q T_{W_1,\left| k \right| } \cdots T_{W_l,\left| k \right| } e_k, \end{aligned}$$
(5.9)

where Q denotes the multiplication in x with a polynomial of maximal degree one, and \(W_l \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), \(l=1, \ldots , p\).

Proof

Recall the formula (5.6) in Lemma 5.5,

$$\begin{aligned} (T_{V_1,\left| k \right| } \cdots T_{V_p,\left| k \right| } e_k )(x) = e^{\mathrm {i}k x} \int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}(\left| k \right| \left| u_l \right| + k u_l)}}{\left| u_l \right| } V_l\left( x + \sum _{s=1}^l u_s \right) \right\} {\mathrm d}(u_1, \ldots , u_p). \end{aligned}$$

Applying \(k \nabla _k\) to the first factor \(e^{\mathrm {i}k x}\) on the right-hand side yields (5.8). Under the integral we use

$$\begin{aligned} k \nabla _k \prod _{l=1}^p \frac{e^{\mathrm {i}( k u_l + \left| k \right| \left| u_l \right| )}}{\left| u_l \right| }&= \mathrm {i}\sum _{l'=1}^p ( k u_{l'} + \left| k \right| \left| u_{l'} \right| ) \prod _{l=1}^p \frac{e^{\mathrm {i}( k u_l + \left| k \right| \left| u_l \right| )}}{\left| u_l \right| } \nonumber \\&= \sum _{l'=1}^p (u_{l'} \nabla _{u_{l'}} +1) \prod _{l=1}^p \frac{e^{\mathrm {i}( k u_l + \left| k \right| \left| u_l \right| )}}{\left| u_l \right| }. \end{aligned}$$
(5.10)

We can now integrate by parts in (5.6) to shift the derivatives onto the \(V_l\) terms; the boundary terms vanish as we consider compactly supported functions. Thus, we arrive at

$$\begin{aligned} k \nabla _k&\int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}(\left| k \right| \left| u_l \right| + k u_l)}}{\left| u_l \right| } V_l\left( x + \sum _{s=1}^l u_s \right) \right\} {\mathrm d}(u_1, \ldots , u_p) \\&= - \int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}(\left| k \right| \left| u_l \right| + k u_l)}}{\left| u_l \right| } \right\} \sum _{l'=1}^p (u_{l'} \nabla _{u_{l'}} +2) \prod _{l=1}^p V_l\left( x + \sum _{s=1}^l u_s \right) {\mathrm d}(u_1, \ldots , u_p)\\&= - \int \left\{ \prod _{l=1}^p \frac{e^{\mathrm {i}(\left| k \right| \left| u_l \right| + k u_l)}}{\left| u_l \right| } \right\} \sum _{l'=1}^p \left\{ \prod _{\begin{array}{c} l=1 \\ l\not = l' \end{array}}^p V_l\left( x + \sum _{s=1}^l u_s \right) \right\} \\&\qquad \left( \sum _{s=1}^{l'} u_{s} \nabla + 2 \right) V_{l'}\left( x + \sum _{s=1}^{l'} u_s \right) {\mathrm d}(u_1, \ldots , u_p) , \end{aligned}$$

where the last equality follows by calculating the derivatives by means of the product and chain rule and by reordering the summation.

We can now write, with \(W_{l'}(y) := y \nabla V_{l'}(y)\),

$$\begin{aligned} \left( \sum _{s=1}^{l'} u_{s} \nabla \right) V_{l'}\left( x + \sum _{s=1}^{l'} u_s \right) = W_{l'}\left( x + \sum _{s=1}^{l'} u_s \right) - x \nabla V_{l'}\left( x + \sum _{s=1}^{l'} u_s \right) . \end{aligned}$$

Since \(W_{l'}\) and the derivatives of \(V_{l'}\) are again in \(C_{\mathrm c}^\infty ({\mathbb {R}}^3)\) for all \(l'\), we obtain expressions of the form (5.9). \(\square \)
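For instance, in the simplest case \(p=1\) the computation in the proof gives explicitly, with \(W_1(y) := y \nabla V_1(y)\),

$$\begin{aligned} k \nabla _k \left( T_{V_1,\left| k \right| } e_k \right) = \mathrm {i}(k \hat{{\mathsf {x} }}) T_{V_1,\left| k \right| } e_k - T_{W_1 + 2 V_1,\left| k \right| } e_k + \sum _{j=1}^3 \hat{{\mathsf {x} }}_j T_{\partial _j V_1,\left| k \right| } e_k , \end{aligned}$$

where \(\partial _j V_1\) denotes the j-th partial derivative of \(V_1\). This is precisely of the form (5.8) plus terms of the form (5.9), since \(W_1 + 2V_1\) and \(\partial _j V_1\) again belong to \(C_{\mathrm c}^\infty ({\mathbb {R}}^3)\).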

Lemma 5.8

Let \(p \in {\mathbb {N}}\). Assume that \(V_1,\ldots , V_p \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), \(n_1, \ldots , n_p \in {\mathbb {N}}_0\). Then for \(j \in \{1,2,3\}\), \(k \not = 0\),

$$\begin{aligned} \partial _{k_j} \left( T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{V_p,\left| k \right| } e_k \right)&= \mathrm {i}T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{ \hat{{\mathsf {x} }}_j V_p,\left| k \right| }e_k + \mathrm {i}\frac{k_j}{\left| k \right| } \sum _{i=1}^p X_1^{(i)} \cdots X_p^{(i)} e_k,\\ {\hat{D}}_k\left( T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{V_p,\left| k \right| } e_k \right)&= \mathrm {i}T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{ \frac{k \hat{{\mathsf {x} }} }{\left| k \right| } V_p,\left| k \right| } e_k + \mathrm {i}\sum _{i=1}^p X_1^{(i)} \cdots X_p^{(i)} e_k, \end{aligned}$$

where \(\hat{{\mathsf {x} }}_j\) stands for multiplication by \(x_j\) and

$$\begin{aligned} X_l^{(i)} := {\left\{ \begin{array}{ll} T^{(n_l)}_{V_l,\left| k \right| }, &{}\quad i \not = l, \\ T^{(n_l+1)}_{V_l,\left| k \right| }, &{}\quad i=l. \end{array}\right. } \end{aligned}$$

Proof

This follows by a direct computation, applying the product rule to the expression (5.6). \(\square \)
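For orientation, in the case \(p=1\), \(n_1 = 0\), the first formula reads, for \(k \ne 0\),

$$\begin{aligned} \partial _{k_j} \left( T_{V_1,\left| k \right| } e_k \right) = \mathrm {i}T_{\hat{{\mathsf {x} }}_j V_1,\left| k \right| } e_k + \mathrm {i}\frac{k_j}{\left| k \right| } T^{(1)}_{V_1,\left| k \right| } e_k , \end{aligned}$$

where the first term collects the derivatives of \(e^{\mathrm {i}k x}\) and of the phase \(k u_1\) in (5.6), and the second term comes from differentiating \(\left| k \right| \left| u_1 \right| \).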

Finally, we use the previous estimates for the following two propositions.

Proposition 5.9

For all \(s \in {\mathbb {N}}_0\), \(p,m,n \in {\mathbb {N}}\), \(X \in \{ {\text {Id}}, \nabla _k, \nabla _{k'} \}\), \(Y \in \{ k \nabla _k + k' \nabla _{k'}, \eta (k) k \nabla _k \}\), where \(\eta \in \mathcal {S}({\mathbb {R}}^3)\), there are constants \(n_1, n_2 \in {\mathbb {N}}\), C, such that for all \(k,k' \not = 0\), \(\chi \in \mathcal {S}({\mathbb {R}}^3)\),

$$\begin{aligned} \left| X Y^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \varphi _0^{(m)}(k',\cdot ) \right\rangle \right| \le \frac{C}{1+ \left| k-k' \right| ^n} \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial _x^\alpha \chi \right\| _1. \end{aligned}$$

Proof

First, let \(Y = k \nabla _k + k' \nabla _{k'}\). By an induction argument in s we obtain from Lemma 5.7 that we can write

$$\begin{aligned} (k \nabla _k + k' \nabla _{k'})^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \varphi _0^{(m)}(k',\cdot ) \right\rangle \end{aligned}$$

as a linear combination of terms of the form

$$\begin{aligned} (k-k')^{\alpha } \left\langle T_{{V}_1,\left| k \right| } \cdots T_{{V}_p,\left| k \right| } e_k , P \chi T_{{W}_1,\left| k' \right| } \cdots T_{{W}_m,\left| k' \right| } e_{k'} \right\rangle , \end{aligned}$$
(5.11)

for some polynomial P, multi-index \(\alpha \), and \({V}_1, \ldots , {V}_p\), \({W}_1, \ldots , {W}_m \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\). Then we obtain the desired estimate for \(X = {\text {Id}}\) by the stationary phase argument of Lemma 5.6, which applies in view of (5.6) since \(\chi \) is a Schwartz function and the potentials have compact support. For \(X \in \{ \nabla _k , \nabla _{k'}\}\) we apply Lemma 5.8 to the expressions in (5.11), with the result that we can write

$$\begin{aligned} X (k \nabla _k + k' \nabla _{k'})^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \varphi _0^{(m)}(k',\cdot ) \right\rangle \end{aligned}$$

for \(k,k' \not = 0\) as linear combinations of terms

$$\begin{aligned} (k-k')^{\alpha } f(k,k') \left\langle T_{{V}_1,\left| k \right| }^{(n_1)} \cdots T_{{V}_p,\left| k \right| }^{(n_p)} e_k , P \chi T_{{W}_1,\left| k' \right| }^{(n'_1)} \cdots T_{{W}_m,\left| k' \right| }^{(n'_m)} e_{k'} \right\rangle , \end{aligned}$$

where now f is a bounded function on \({\mathbb {R}}^3 \times {\mathbb {R}}^3\) and \(n_1 + \ldots + n_p\), \(n'_1 + \ldots + n'_m \in \{0,1\}\). The desired estimate in this case follows from the same stationary phase argument using (5.6) as before. Finally, for \(Y = \eta (k) k \nabla _k\) one proceeds similarly, but now using only Lemma 5.8. \(\square \)

Proposition 5.10

For all \(s \in {\mathbb {N}}_0\), \(p,n \in {\mathbb {N}}\), there exist constants \(n_1, n_2 \in {\mathbb {N}}\), C, such that for all k, \(X \in \{ {\text {Id}}, \nabla _k \}\), and \(\chi \in \mathcal {S}({\mathbb {R}}^3)\) we have

$$\begin{aligned} \left| X (k \nabla _k)^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \right\rangle \right| \le \frac{C}{1+ \left| k \right| ^n} \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial _x^\alpha \chi \right\| _1. \end{aligned}$$

Proof

Analogously to the proof of Proposition 5.9 we inductively apply Lemma 5.7 to find that

$$\begin{aligned} (k \nabla _k)^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \right\rangle \end{aligned}$$

can be written as a linear combination of terms of the form

$$\begin{aligned} k^{\alpha } \left\langle T_{{V}_1,\left| k \right| } \cdots T_{{V}_p,\left| k \right| } e_k, P \chi \right\rangle , \end{aligned}$$
(5.12)

for some polynomial P, multi-index \(\alpha \), and \({V}_1, \ldots , {V}_p \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\). Then, after inserting (5.6) and using Lemma 5.8 in the case \(X= \nabla _k\), the stationary phase argument again yields the desired estimate. \(\square \)

5.3 Estimates of the Remainder Terms

Now we prove arbitrarily fast polynomial decay for the remainder terms of sufficiently high order. We obtain results for remainder terms in Proposition 5.13 and scalar products of Born series terms with remainder terms in Lemma 5.14. The main tool will be the following lemma, where the basic idea is due to Klein and Zemach (cf. [31]). It essentially follows from a stationary phase argument together with a suitable coordinate transformation.

Lemma 5.11

For \(V \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), \(n_1, n_2 \in {\mathbb {N}}_0\), \(R > 0\) such that \({\text {supp}}V \subseteq B_R(0)\), there exists a constant C such that for all \(\kappa \ge 0\),

$$\begin{aligned} \sup _{\left| x \right| , \left| x' \right| \le R} \left| \int \frac{e^{ \mathrm {i}\kappa \left| x-y \right| }}{\left| x-y \right| ^{1-n_1}} V(y) \frac{ e^{ \mathrm {i}\kappa \left| x'-y \right| }}{\left| x'-y \right| ^{1-n_2}} {\mathrm d}y \right| \le \frac{C}{1 + \kappa }. \end{aligned}$$
(5.13)

Proof

For the proof we use prolate spheroidal coordinates, see [31, appendix] and [22, p. 661]. Let \(D = \frac{1}{2} \left| x-x' \right| \). For

$$\begin{aligned} \xi \in [D,\infty ), \qquad \eta \in [-1,1], \qquad \varphi \in [0,2\pi ), \end{aligned}$$

we set

$$\begin{aligned} \varPhi (\xi ,\eta , \varphi ) = \frac{1}{2}(x + x') + {\mathcal {R}} \begin{pmatrix} \sqrt{(\xi ^2 - D^2)(1-\eta ^2)} \cos \varphi \\ \sqrt{(\xi ^2 - D^2)(1-\eta ^2)} \sin \varphi \\ \xi \eta \end{pmatrix}, \end{aligned}$$

where \({\mathcal {R}}\) is the rotation matrix transforming \(e_3\) into \(\frac{x' - x}{\left| x - x' \right| }\). A straightforward computation then shows that

$$\begin{aligned}&\xi = \frac{1}{2}\left( \left| x - \varPhi (\xi ,\eta , \varphi ) \right| + \left| x' - \varPhi (\xi ,\eta , \varphi ) \right| \right) ,\\&\eta = \frac{1}{2D} \left( \left| x - \varPhi (\xi ,\eta , \varphi ) \right| - \left| x' - \varPhi (\xi ,\eta , \varphi ) \right| \right) , \\&\left| \det D\varPhi (\xi , \eta , \varphi ) \right| = (\xi + D \eta ) (\xi - D \eta ). \end{aligned}$$
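Indeed, writing \(n := \frac{x' - x}{\left| x - x' \right| }\), we have \(x - \frac{1}{2}(x+x') = -Dn\), and therefore

$$\begin{aligned} \left| x - \varPhi (\xi ,\eta , \varphi ) \right| ^2 = (\xi ^2 - D^2)(1-\eta ^2) + (D + \xi \eta )^2 = (\xi + D \eta )^2 , \end{aligned}$$

and analogously \(\left| x' - \varPhi (\xi ,\eta , \varphi ) \right| ^2 = (\xi - D \eta )^2\), which gives the first two identities; the third is the standard Jacobian of prolate spheroidal coordinates.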

Thus, by change of coordinates,

$$\begin{aligned} \int \frac{e^{ \mathrm {i}\kappa \left| x-y \right| } V(y) e^{ \mathrm {i}\kappa \left| x'-y \right| } }{\left| x-y \right| ^{1-n_1} \left| x'-y \right| ^{1-n_2} } {\mathrm d}y&= \int e^{2\mathrm {i}\kappa \xi } V( \varPhi (\xi , \eta , \varphi )) (\xi + D \eta )^{n_1} (\xi - D \eta )^{n_2} {\mathrm d}(\xi ,\eta ,\varphi ) \\&= \int _D^{\infty } e^{2\mathrm {i}\kappa \xi } h(\xi ) {\mathrm d}\xi , \end{aligned}$$

where \(h(\xi ) := \int V( \varPhi (\xi , \eta , \varphi )) (\xi + D \eta )^{n_1} (\xi - D \eta )^{n_2} {\mathrm d}(\eta , \varphi )\). Let \(E := \frac{1}{2} \left| x+x' \right| \). Notice that by direct computation, for \(\xi \ge D + E\),

$$\begin{aligned} \left| \varPhi (\xi , \eta , \varphi ) \right| \ge \xi - D - E. \end{aligned}$$

Thus, since \({\text {supp}}V \subseteq B_R(0)\), we get that \(h(\xi ) = 0\) for \(\xi \ge R + D + E\). Then, by integration by parts,

$$\begin{aligned} \int _D^{\infty } e^{2\mathrm {i}\kappa \xi } h(\xi ) {\mathrm d}\xi&= \frac{1}{2 \mathrm {i}\kappa } \int _D^{R + D + E} \partial _\xi \left( e^{2\mathrm {i}\kappa \xi }\right) h(\xi ) {\mathrm d}\xi \\&= \frac{1}{2 \mathrm {i}\kappa } \bigg ( - h(D) e^{2\mathrm {i}\kappa D} - \int _D^{R + D + E} e^{2\mathrm {i}\kappa \xi } \partial _\xi h(\xi ) {\mathrm d}\xi \bigg ). \end{aligned}$$

Since \(D, E \le R\), the first term is bounded by a constant depending only on R. For the second one notice that

$$\begin{aligned} \partial _\xi h(\xi )&= \int \left\langle \nabla V( \varPhi (\xi , \eta , \varphi )), \partial _\xi \varPhi (\xi ,\eta ,\varphi ) \right\rangle (\xi + D \eta )^{n_1} (\xi - D \eta )^{n_2} {\mathrm d}(\eta , \varphi ) \end{aligned}$$
(5.14)
$$\begin{aligned}&\quad + \int V( \varPhi (\xi , \eta , \varphi )) \partial _\xi ( (\xi + D \eta )^{n_1} (\xi - D \eta )^{n_2} ) {\mathrm d}(\eta , \varphi ). \end{aligned}$$
(5.15)

The term (5.15) is clearly bounded by a constant depending only on R. The term (5.14) is bounded up to a constant by

$$\begin{aligned} \sup _{\eta ,\varphi } \left| \partial _\xi \varPhi (\xi ,\eta , \varphi ) \right| \le C\left( 1 + \frac{\xi }{\sqrt{\xi ^2 - D^2}} \right) , \end{aligned}$$

for some constant C. This is integrable and the integral is also bounded by a constant only depending on R:

$$\begin{aligned} \int _D^{R+D+E} \frac{\xi }{\sqrt{\xi ^2 - D^2}} {\mathrm d}\xi = \sqrt{(R+D+E)^2 - D^2}. \end{aligned}$$

Altogether this yields a bound of the form \(C/\kappa \). Since the left-hand side of (5.13) is in addition bounded uniformly in \(\kappa \) (for instance by \(\int _D^{R+D+E} \left| h(\xi ) \right| {\mathrm d}\xi \le C\)), the two estimates combined give (5.13). \(\square \)

Lemma 5.12

Let \(p \in {\mathbb {N}}\), \(V_1, \ldots , V_{p} \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\) and \(n_1, \ldots , n_p \in {\mathbb {N}}_0\). Then there exists a constant C such that for all \(k,x \in {\mathbb {R}}^3\), continuous bounded functions \(\psi \) on \({\mathbb {R}}^3\),

$$\begin{aligned} \left| (T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{V_p,\left| k \right| } \psi )(x) \right| \le \frac{C (1+ \left\langle x \right\rangle ^{n_1-1}) \left\| \psi \right\| _\infty }{1+ \left| k \right| ^{\lfloor \frac{p-1}{2} \rfloor }}. \end{aligned}$$

Proof

First we assume that p is odd, i.e., \(p = 2p^*+1\) for some \(p^* \in {\mathbb {N}}_0\). Then by Lemma 5.5

$$\begin{aligned} (T^{(n_1)}_{V_1,\left| k \right| }&\cdots T^{(n_p)}_{V_p,\left| k \right| } \psi )(x) \nonumber \\&= \int \frac{e^{\mathrm {i}\left| k \right| \left| x- y_1 \right| }}{\left| x - y_{1} \right| ^{1-n_{1}}} V_{1}(y_1) \end{aligned}$$
(5.16)
$$\begin{aligned}&\qquad \left\{ \prod _{l=1}^{p^*} \frac{e^{\mathrm {i}\left| k \right| \left| y_{2l-1} - y_{2l} \right| }}{\left| y_{2l-1} - y_{2l} \right| ^{1-n_{2l}}} V_{2l}(y_{2l}) \frac{e^{\mathrm {i}\left| k \right| \left| y_{2l} - y_{2l+1} \right| }}{\left| y_{2l} - y_{2l+1} \right| ^{1-n_{2l+1}}} V_{2l+1}(y_{2l+1}) \right\} \nonumber \\&\qquad \psi ( y_p) {\mathrm d}(y_1, y_2, y_3, \ldots , y_p). \end{aligned}$$
(5.17)

In the following let C denote different constants depending only on \(V_l\) and \(n_l\), \(l=1,\ldots ,p\). We estimate the terms in (5.17) for \(l=1,\ldots , p^*\) by

$$\begin{aligned} \left| \int \frac{e^{\mathrm {i}\left| k \right| \left| y_{2l-1} - y_{2l} \right| }}{\left| y_{2l-1} - y_{2l} \right| ^{1-n_{2l}}} V_{2l}(y_{2l}) \frac{e^{\mathrm {i}\left| k \right| \left| y_{2l} - y_{2l+1} \right| }}{\left| y_{2l} - y_{2l+1} \right| ^{1-n_{2l+1}}} {\mathrm d}y_{2l} \right| \le \frac{C}{1 + \left| k \right| } \end{aligned}$$

using Lemma 5.11, the term (5.16) by

$$\begin{aligned} \left| \int \frac{e^{\mathrm {i}\left| k \right| \left| x- y_1 \right| }}{\left| x - y_{1} \right| ^{1-n_{1}}} V_{1}(y_1) {\mathrm d}y_1 \right| \le C (1+ \left\langle x \right\rangle ^{n_1-1}) , \end{aligned}$$

and thus we find

$$\begin{aligned}&\left| (T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{V_p,\left| k \right| } \psi )(x) \right| \\&\qquad \le \frac{C (1+ \left\langle x \right\rangle ^{n_1-1}) }{(1+\left| k \right| )^{p^*}} \int \left\{ \prod _{l=1}^{p^*} \left| V_{2l+1}(y_{2l+1}) \right| \right\} \left| \psi ( y_p) \right| {\mathrm d}(y_3, y_5, \ldots , y_p) \\&\qquad \le \frac{ C (1+ \left\langle x \right\rangle ^{n_1-1}) \left\| \psi \right\| _\infty }{(1+\left| k \right| )^{p^*}}. \end{aligned}$$

In case p is even, we estimate the first \(p-1\) factors as in the odd case and estimate the remaining expression using inequality (5.3), which implies that there is a constant C independent of k such that \(\Vert \mathbb {1}_{B_R(0)} T^{(n_p)}_{V_p,\left| k \right| } \psi \Vert _\infty \le C \Vert \psi \Vert _\infty \). \(\square \)

Proposition 5.13

Let \(n \in {\mathbb {N}}_0\), \(m \in \{0,1\}\), \(j \in \{1,2,3\}\).

  1. (a)

    For all \(k \not = 0\), the expression

    $$\begin{aligned} \partial ^{m}_{k_j} {\hat{D}}_k^n T_{V,\left| k \right| } \cdots T_{V,\left| k \right| } \varphi (k,\cdot ) \end{aligned}$$

    can be written as linear combination of terms

    $$\begin{aligned} f(k) T^{(n_1)}_{V,\left| k \right| } \cdots T^{(n_p)}_{V,\left| k \right| } \partial ^{m'}_{k_j} {\hat{D}}_k^{n'} \varphi (k,\cdot ), \end{aligned}$$
    (5.18)

    where f is a bounded function on \({\mathbb {R}}^3 \setminus \{ 0 \}\), \(0 \le n' \le n\), \(0 \le m' \le m\), and \(n_1+ \cdots + n_p + m' + n' = m+n\).

  2. (b)

    For any \(p \in {\mathbb {N}}\), there exists a constant C such that we have for all \(k \not = 0\), \(x \in {\mathbb {R}}^3\),

    $$\begin{aligned} \left| \partial ^{m}_{k_j} {\hat{D}}_k^n \varphi _{\mathsf {R} }^{(p)}(k,x) \right| \le \frac{C (1+\left\langle x \right\rangle ^{n+m-1}) }{1+ \left| k \right| ^{\lfloor \frac{p-1}{2} \rfloor }}. \end{aligned}$$

Proof

Part (a) follows by the product rule from Lemma 5.5. Part (b) follows from (a), Lemma 5.12, Proposition 5.1, and the fact that V has compact support. \(\square \)

Lemma 5.14

Let \(p,m,r \in {\mathbb {N}}\) with \(m \ge 2r+4\), \(V_1, \ldots , V_p\), \(W_1, \ldots , W_m \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), and \(n_1, \ldots , n_p\), \(n'_1, \ldots , n'_m \in {\mathbb {N}}_0\). Then there exist a constant C and \(n_0 \in {\mathbb {N}}_0\) such that for all k, \(k'\), continuous bounded functions \(\psi \), and \(\chi \in \mathcal {S}({\mathbb {R}}^3)\),

$$\begin{aligned}&\left| \left\langle T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{V_p,\left| k \right| } e_k ,\chi T^{(n'_1)}_{W_1,\left| k' \right| } \cdots T^{(n'_m)}_{W_m,\left| k' \right| }\psi \right\rangle \right| \nonumber \\&\quad \le \frac{C \sup _{|\alpha |\le r} \left\| (1+ \left\langle \cdot \right\rangle ^{n_0}) \partial ^\alpha \chi \right\| _1 \left\| \psi \right\| _\infty }{(1+ \left| k \right| ^{\lfloor \frac{p-1}{2} \rfloor + r} )(1+ \left| k' \right| ^{\frac{m}{2} -2 -r }) }. \end{aligned}$$
(5.19)

Proof

To shorten the notation we assume that \(n_1 = \cdots = n_p = n'_1 = \cdots = n'_m = 0\). The proof works for the other cases with the obvious change of notation (in fact the bounds in the proof given below only improve). The bound for \(r=0\) follows from Lemma 5.12. Now let us consider \(r \ge 1\). For this we show the following identity. Fix \(j=1,2,3\). We claim that for each \(n \in {\mathbb {N}}_0\) there exist coefficients \(c(\cdots )\) such that for all \({V}_1, \ldots , {V}_{p}\), \({W}_1, \ldots , {W}_{m} \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), \(\psi \in C_{\mathsf {b} }({\mathbb {R}}^3)\), and \(\chi \in \mathcal {S}({\mathbb {R}}^3)\) we have

$$\begin{aligned}&k_j^n \left\langle T_{{V}_1,\left| k \right| } \cdots T_{{V}_{{p}},\left| k \right| } e_k ,\chi T_{{W}_1,\left| k' \right| } \cdots T_{{W}_{{m}},\left| k' \right| } {\psi } \right\rangle \nonumber \\&\quad = \sum _{l=1}^{m+1} \sum _{\begin{array}{c} {\underline{\mu }} \in {\mathbb {N}}_0^m \end{array}} \sum _{\begin{array}{c} {\underline{\nu }} \in \{0,1\}^m \end{array}} \sum _{\underline{s} \in {\mathbb {N}}_0^p , \underline{t} \in {\mathbb {N}}_0^m , u \in {\mathbb {N}}_0 } \end{aligned}$$
(5.20)
$$\begin{aligned}&\qquad 1_{ \nu _i = \mu _i = 0 , \forall i < l } \ 1_{ \mu _l + \nu _l \ge 1 } \ 1_{ \nu _i = 1 , \forall i > l } \ 1_{|{\underline{\mu }}| + |{\underline{\nu }}| + |\underline{s}| + |\underline{t}| + u = n } \nonumber \\&\qquad c(n, l,{\underline{\nu }},{\underline{\mu }},\underline{s},\underline{t},u) \nonumber \\&\qquad \left\langle \left\{ \prod _{a=1}^p T_{{V}_a^{[s_a]},\left| k \right| } \right\} e_k ,\chi ^{[u]} \left\{ \prod _{b=1}^{m} S_{W_{b}^{[t_{b}]},\left| k' \right| }^{(\mu _{b},\nu _b)} \right\} \psi \right\rangle \end{aligned}$$
(5.21)

where we used the multi-index notation \(\underline{s} = (s_1,\ldots ,s_p)\), denoted by \(f^{[n]}\) the n-th partial derivative in the j-th coordinate direction, and defined \( S^{(\mu ,\nu )}_{W,\kappa }\) as the operator with integral kernel

$$\begin{aligned} S^{(\mu ,\nu )}_{W,\kappa }(x,y) := ( \mathrm {i}\kappa )^\mu e^{\mathrm {i}\kappa \left| x -y \right| } \partial _{x_j}^\nu \left( \frac{1}{\left| x-y \right| } \left( \frac{x_j-y_j}{|x-y|} \right) ^\mu \right) W(y) , \quad x, y \in \mathbb {R}^3 . \end{aligned}$$

Observe that \(S^{(0,0)}_{W,\kappa } = T_{W,\kappa }\). We prove (5.20) by induction in n. If \(n=0\), we only need to consider \(l=m+1\) and \({\underline{\mu }}, {\underline{\nu }}, \underline{s}, \underline{t}, u\) equal to zero. Suppose (5.20) holds for n. We want to show that it then also holds for \(n+1\). For this we fix an l and assume that the corresponding summand (5.21) is nonzero. This implies in particular that \((\mu _1,\nu _1), \ldots , (\mu _{l-1},\nu _{l-1})\) all equal (0, 0) and \(\mu _l + \nu _l \ge 1 \). Using (5.5) we find

$$\begin{aligned}&k_j \left\langle \left\{ \prod _{a=1}^p T_{{V}_a^{[s_a]},\left| k \right| } \right\} e_k ,\chi ^{[u]} \left\{ \prod _{b=1}^{m} S_{W_{b}^{[t_{b}]},\left| k' \right| }^{(\mu _{b},\nu _b)} \right\} \psi \right\rangle \nonumber \\&\quad = \int \frac{e^{-\mathrm {i}\left| k \right| \left| v_1 \right| }}{\left| v_1 \right| } V_1^{[s_1]}(v_1 + x) \frac{e^{-\mathrm {i}\left| k \right| \left| v_2 \right| }}{\left| v_2 \right| } V_2^{[s_2]}(v_2 + v_1 + x) \ldots \frac{e^{-\mathrm {i}\left| k \right| \left| v_p \right| }}{\left| v_p \right| } V_p^{[s_p]}\left( \sum _{l=1}^p v_l + x \right) \nonumber \\&\qquad e^{-\mathrm {i}k \sum _{l=1}^{p} v_l} ( \mathrm {i}\partial _{x_j} e^{- \mathrm {i}k x} ) \chi ^{[u]}(x) \frac{e^{\mathrm {i}\left| k' \right| \left| x-x_1 \right| }}{\left| x-x_1 \right| } W_1^{[t_1]}(x_1) \frac{e^{\mathrm {i}\left| k' \right| \left| x_1-x_2 \right| }}{\left| x_1-x_2 \right| } W_2^{[t_2]}(x_2) \nonumber \\&\qquad \ldots \frac{e^{\mathrm {i}\left| k' \right| \left| x_{l-2}-x_{l-1} \right| }}{\left| x_{l-2}-x_{l-1} \right| } W_{l-1}^{[t_{l-1}]}(x_{l-1}) S_{W_{l}^{[t_{l}]},\left| k' \right| }^{(\mu _{l},\nu _l)}(x_{l-1},x_l) \left( \left\{ \prod _{b=l+1}^m S_{W_{b}^{[t_{b}]},\left| k' \right| }^{(\mu _{b},\nu _b)} \right\} \psi \right) (x_l) \nonumber \\&\qquad {\mathrm d}(v_1, \ldots , v_p , x, x_1, \ldots , x_l) . \end{aligned}$$
(5.22)

We now use integration by parts with respect to x. By the product rule, we obtain a linear combination of several different terms. The ones with derivatives of the potentials \(V_1^{[s_1]}, \ldots , V_p^{[s_p]}\) and of \(\chi ^{[u]} \) simply increase exactly one of the indices \(s_1,\ldots ,s_p,u\) by one. For the remaining term containing x we use

$$\begin{aligned} \nabla _x \frac{e^{\mathrm {i}\left| k' \right| \left| x-x_1 \right| }}{\left| x-x_1 \right| } = \nabla _{x_1} \frac{e^{\mathrm {i}\left| k' \right| \left| x-x_1 \right| }}{\left| x-x_1 \right| }. \end{aligned}$$

Then we use again integration by parts and obtain a term involving \(\nabla W_1\), which can be treated as before, and a term involving

$$\begin{aligned} \nabla _{x_1} \frac{e^{\mathrm {i}\left| k' \right| \left| x_1-x_2 \right| }}{\left| x_1-x_2 \right| }. \end{aligned}$$

We repeat this procedure until we arrive at the term

$$\begin{aligned} \nabla _{x_{l-2}} \frac{e^{\mathrm {i}\left| k' \right| \left| x_{l-2}-x_{l-1} \right| }}{\left| x_{l-2}-x_{l-1} \right| }. \end{aligned}$$

If \(\nu _l=1\), we use

$$\begin{aligned} \partial _{x_j} \frac{e^{\mathrm {i}\left| k' \right| \left| x -y \right| }}{\left| x - y \right| } W(y) = S_{W,\left| k' \right| }^{(1,0)}(x,y) + S_{W,\left| k' \right| }^{(0,1)}(x,y) , \end{aligned}$$

which follows by the product rule. If \(\nu _l = 0\), we integrate by parts once more and use the relation, which is straightforward to verify,

$$\begin{aligned} \partial _{x_j} S_{W,\left| k' \right| }^{(\mu _l,0)}(x,y) = S_{W,\left| k' \right| }^{(\mu _l+1,0)}(x,y) + S_{W,\left| k' \right| }^{(\mu _l,1)}(x,y) . \end{aligned}$$

We conclude that there exist coefficients \(d(\cdots )\) such that

$$\begin{aligned}&\text {l.h.s. of } (5.22) \\&\quad = \sum _{\tilde{l}=l-1}^{l} \sum _{\begin{array}{c} \tilde{{\underline{\mu }}} \in {\mathbb {N}}_0^m \end{array}} \sum _{\begin{array}{c} \tilde{{\underline{\nu }}} \in \{0,1\}^m \end{array}} \sum _{\tilde{\underline{s}} \in {\mathbb {N}}_0^p , \tilde{\underline{t}} \in {\mathbb {N}}_0^m , \tilde{u} \in {\mathbb {N}}_0 } \\&\qquad 1_{ {\tilde{\nu }}_i = {\tilde{\mu }}_i = 0 , \forall i < \tilde{ l} } \ 1_{ {\tilde{\mu }}_{\tilde{l}} + {\tilde{\nu }}_{\tilde{l}} \ge 1 } \ 1_{ {\tilde{\nu }}_i = 1 , \forall i > \tilde{ l} } \ 1_{|\tilde{{\underline{\mu }}}| + |\tilde{{\underline{\nu }}}| + |\tilde{\underline{s}}| + |\tilde{\underline{t}}| + \tilde{ u} = n+1 } \\&\qquad d(n, l,{\underline{\nu }},{\underline{\mu }},\underline{s},\underline{t},u; \tilde{l},\tilde{{\underline{\nu }}},\tilde{{\underline{\mu }}},\tilde{\underline{s}},\tilde{\underline{t}},\tilde{u} ) \\&\qquad \left\langle \left\{ \prod _{a=1}^p T_{{V}_a^{[\tilde{s_a}]},\left| k \right| } \right\} e_k ,\chi ^{[\tilde{u}]} \left\{ \prod _{b=1}^{m} S_{W_{b}^{[\tilde{t_{b}}]},\left| k' \right| }^{(\tilde{\mu _{b}},\tilde{\nu _b})} \right\} \psi \right\rangle . \end{aligned}$$

Using the above relation for every summand, we find that (5.20) holds for \(n+1\).

Now using (5.20), we show (5.19). For all W, \(\mu \in {\mathbb {N}}_0\), and \(\nu \in \{0,1\}\) there exists a constant C such that

$$\begin{aligned} \Vert S_{W,\kappa }^{(\mu ,\nu )} \psi \Vert _\infty \le C \kappa ^\mu \Vert \psi \Vert _\infty \end{aligned}$$
(5.23)

for all \(\kappa \ge 0\) and \(\psi \in C_{\mathsf {b} }({\mathbb {R}}^3)\). This follows from Proposition 5.3 (a) by observing that for each \(\mu \in {\mathbb {N}}_0\) there exists a constant \(C_\mu \) such that for all \(y \ne x\) we have

$$\begin{aligned} \left| \partial _{x_j} \left( \frac{1}{\left| x-y \right| } \left( \frac{x_j-y_j}{|x-y|} \right) ^\mu \right) \right| \le C_\mu |x-y|^{-2} . \end{aligned}$$

Now using Lemma 5.12 together with (5.23), we find for fixed l a constant C such that, whenever the product of indicator functions in (5.21) equals one, we have

$$\begin{aligned}&\left| \left\langle \left\{ \prod _{a=1}^p T_{{V}_a^{[s_a]},\left| k \right| } \right\} e_k ,\chi ^{[u]} \left\{ \prod _{b=1}^{m} S_{W_{b}^{[t_{b}]},\left| k' \right| }^{(\mu _{b},\nu _b)} \right\} \psi \right\rangle \right| \\&\quad \le \frac{C \sup _{|\alpha | \le n} \Vert ( 1+ \left\langle x \right\rangle ^{n_0} ) \partial ^\alpha \chi \Vert _1 \left\| \psi \right\| _\infty }{(1+ \left| k \right| ^{\lfloor \frac{p-1}{2} \rfloor } )( 1+ \left| k' \right| ^{\lfloor \frac{l-2}{2} \rfloor }) } |k'|^{|{\underline{\mu }}|} . \end{aligned}$$

Now, since the product of indicator functions in (5.21) being equal to one implies \(|{\underline{\nu }}| \ge m-l\) and \( |{\underline{\mu }}| + |{\underline{\nu }}| \le n \), we find that

$$\begin{aligned} |{\underline{\mu }}| \le n - m + l . \end{aligned}$$

Thus we find for all \(l=1,\ldots ,m+1\),

$$\begin{aligned} \left\lfloor \frac{l-2}{2} \right\rfloor - |{\underline{\mu }}| \ge \frac{l-3}{2} - (n - m + l) = m - \frac{l}{2} - \frac{3}{2} - n \ge \frac{m}{2} - 2 - n , \end{aligned}$$

where we used \(l \le m+1\) in the last step.

Since \(j \in \{1,2,3\}\) was arbitrary and \(\left| k \right| ^r \le 3^{r/2} \max _{j} \left| k_j \right| ^r\), combining the resulting bound for \(n = r\) with the one for \(n = 0\) now yields the claimed inequality. \(\square \)

Proposition 5.15

Let \(s \in {\mathbb {N}}_0\), \(n \in {\mathbb {N}}\), \(\eta \in \mathcal {S}({\mathbb {R}}^3)\), \(X \in \{ {\text {Id}}, \nabla _k, \nabla _{k'} \}\), \(Y \in \{ k \nabla _k + k' \nabla _{k'}, \eta (k) k \nabla _k \}\). Then there exists a constant \(m_0 \in {\mathbb {N}}\) such that for all \(m \ge m_0\) and \(p \in {\mathbb {N}}\) there are \(n_1, n_2 \in {\mathbb {N}}\), C, such that for all \(\chi \in \mathcal {S}({\mathbb {R}}^3)\), \(k,k' \not = 0\),

$$\begin{aligned}&\left| X Y^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \varphi _{\mathsf {R} }^{(m)}(k',\cdot ) \right\rangle \right| \le \frac{C \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial ^\alpha \chi \right\| _1 }{(1+\left| k \right| ^n)(1+\left| k' \right| ^n)} . \end{aligned}$$

Proof

By applying Lemma 5.8 and (5.18) for the left and right part of the inner product, respectively, we can write

$$\begin{aligned} X Y^s \left\langle \varphi _0^{(p)}(k,\cdot ), \chi \varphi _{\mathsf {R} }^{(m)}(k',\cdot ) \right\rangle \end{aligned}$$

for all given X and s as a linear combination of expressions

$$\begin{aligned} k^\alpha (k')^\beta f(k,k') \left\langle T^{(n_1)}_{V_1,\left| k \right| } \cdots T^{(n_p)}_{V_p,\left| k \right| } e_k ,\chi T^{(n'_1)}_{W_1,\left| k' \right| } \cdots T^{(n'_m)}_{W_m,\left| k' \right| } \varphi (k',\cdot ) \right\rangle , \end{aligned}$$

where \(\alpha ,\beta \) are multi-indices with \(\left| \alpha \right| , \left| \beta \right| \le s\), f is a bounded function on \({\mathbb {R}}^3 \times {\mathbb {R}}^3\), \(V_1, \ldots , V_p\), \(W_1, \ldots , W_m \in C_{\mathrm c}^\infty ({\mathbb {R}}^3)\), and \(n_1, \ldots , n_p\), \(n'_1, \ldots , n'_m \in {\mathbb {N}}_0\). Now we can estimate these expressions with Lemma 5.14. \(\square \)

5.4 Commutator with the Interaction

This part provides the key for the proof of Proposition 4.2. In the following we omit for the moment the regularity function \(\kappa \) of the coupling and work with multiplication operators \(H(\omega ,\varSigma )\), \((\omega ,\varSigma ) \in {\mathbb {R}}_+ \times \mathbb {S}^2\). Throughout this section we shall always assume

$$\begin{aligned} H(\omega ,\varSigma )(x) = \chi (x) {\tilde{H}}(\omega ,\varSigma )(x) \end{aligned}$$
(5.24)

where \(\chi \in \mathcal {S}({\mathbb {R}}^3)\) and \(\tilde{H}\) is a function on \(I \times \mathbb {S}^2 \times {\mathbb {R}}^3\), where \(I = (0,\infty )\) or \(I = [0,\infty )\), such that for some \(s \in {\mathbb {N}}_0\) the following holds.

(\(\text {J}_s\)):

For all \(n \in \{0,\ldots , s\}\) and \(\alpha \in {\mathbb {N}}_0^3\) the partial derivatives \( \partial _x^\alpha \partial ^n_\omega \tilde{H}\) exist and are continuous on \(I \times \mathbb {S}^2 \times {\mathbb {R}}^3\), and there exists a polynomial P and \(M \in {\mathbb {N}}_0\) such that

$$\begin{aligned} \left| \partial _x^\alpha \partial ^n_\omega \tilde{H}(\omega ,\varSigma )(x) \right| \le P(\omega ) \langle x \rangle ^M , \qquad (\omega ,\varSigma ,x) \in I \times \mathbb {S}^2 \times {\mathbb {R}}^3. \end{aligned}$$

To show that a commutator \([T,{A_{\mathrm p}^{(\epsilon )}}]\), for a bounded operator T on \(L^2({\mathbb {R}}^3)\), is bounded, we shall make use of the following decomposition on \(\ell ^2(N) \oplus L^2({\mathbb {R}}^3)\),

$$\begin{aligned} \mathcal {V}[T,{A_{\mathrm p}^{(\epsilon )}}] \mathcal {V}^* = \begin{pmatrix} 0 &{}\quad V_{\mathsf {d} }T V_{{\mathsf {c} }}^* \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon \\ -\eta _\epsilon A_{\mathsf {D} }\eta _\epsilon V_{{\mathsf {c} }}T V_{\mathsf {d} }^* &{}\quad [V_{{\mathsf {c} }}T V_{{\mathsf {c} }}^*, \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon ] \end{pmatrix}, \end{aligned}$$

where \(\mathcal {V}\) is the unitary operator defined in (4.5) and \(N \in {\mathbb {N}}\) is the number of linearly independent eigenfunctions of \(H_{\mathrm p}\). We treat the off-diagonal terms in Proposition 5.16 and the term on the diagonal in Lemma 5.19.

Proposition 5.16

Suppose \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)). Then for all \(n \in {\mathbb {N}}_0\), \(j \in \{1,2,3\}\), \((\omega ,\varSigma )\), the operators

  1. (1)

    \( A_{\mathsf {D} }^n V_{{\mathsf {c} }}H(\omega ,\varSigma )P_{{\text {disc}}}\),

  2. (2)

    \(\hat{{\mathsf {k} }}_j A_{\mathsf {D} }^n V_{{\mathsf {c} }}H(\omega ,\varSigma )P_{{\text {disc}}}\),

  3. (3)

    \( {\hat{{\mathsf {q} }}}_j A_{\mathsf {D} }^n V_{{\mathsf {c} }}H(\omega ,\varSigma )P_{{\text {disc}}}\),

are well-defined, their norms can be estimated uniformly in \(\varSigma \) by a polynomial in \(\omega \), and they are continuous in \((\omega ,\varSigma )\). Furthermore, if \(\tilde{H}\) satisfies (\(\mathrm{J}_{s}\)), then (1)–(3) are s times continuously differentiable with respect to \(\omega \) in the operator norm topology and

$$\begin{aligned} \partial _\omega ^s A_{\mathsf {D} }^n V_{{\mathsf {c} }}H(\omega ,\varSigma )P_{{\text {disc}}}&= A_{\mathsf {D} }^n V_{{\mathsf {c} }}\partial _\omega ^s H(\omega ,\varSigma )P_{{\text {disc}}}, \\ \partial _\omega ^s \hat{{\mathsf {k} }}_j A_{\mathsf {D} }^n V_{{\mathsf {c} }}H(\omega ,\varSigma )P_{{\text {disc}}}&= \hat{{\mathsf {k} }}_j A_{\mathsf {D} }^n V_{{\mathsf {c} }}\partial _\omega ^s H(\omega ,\varSigma )P_{{\text {disc}}}, \\ \partial _\omega ^s {\hat{{\mathsf {q} }}}_j A_{\mathsf {D} }^n V_{{\mathsf {c} }}H(\omega ,\varSigma )P_{{\text {disc}}}&= {\hat{{\mathsf {q} }}}_j A_{\mathsf {D} }^n V_{{\mathsf {c} }}\partial _\omega ^s H(\omega ,\varSigma )P_{{\text {disc}}}. \end{aligned}$$

Proof

Let \(m \in \{0,1\}\) and \(n \in {\mathbb {N}}_0\). Choose N large enough such that, by Proposition 5.13, there exist a constant C and an \(n_0 \in {\mathbb {N}}_0\) with

$$\begin{aligned} \left| \int \partial ^{m}_{k_j} {\hat{D}}_k^n \varphi _{\mathsf {R} }^{(N)}(k,x) f(x) \psi _{\mathrm d}(x) {\mathrm d}x \right| \le \frac{C \Vert \langle \cdot \rangle ^{n_0} f \Vert \left\| \psi _{\mathrm d} \right\| }{1+ \left| k \right| ^6}, \end{aligned}$$
(5.25)

for all \(f \in \mathcal {S}({\mathbb {R}}^3)\) , \(\psi _{\mathrm d}\in {\text {ran}}P_{{\text {disc}}}\) and \(k \not = 0\). Expanding \(\varphi (k,x)\) using Proposition 5.4 we obtain for \(\psi _{\mathrm d}\in {\text {ran}}P_{{\text {disc}}}\), \(k \not = 0\),

$$\begin{aligned} V_{{\mathsf {c} }}H(\omega ,\varSigma )\psi _{\mathrm d}(k)&= (2\pi )^{-3/2} \int \overline{\varphi (k,x)} H(\omega ,\varSigma )(x) \psi _{\mathrm d}(x) {\mathrm d}x \nonumber \\&= T_0(\omega ,\varSigma , k) + T_{\mathsf {R} }(\omega ,\varSigma ,k) , \end{aligned}$$
(5.26)

where

$$\begin{aligned} T_0(\omega ,\varSigma ; k)&:= (2\pi )^{-3/2} \sum _{l=0}^{N-1} \int \overline{\varphi _0^{(l)}(k,x)} H(\omega ,\varSigma )(x) \psi _{\mathrm d}(x) {\mathrm d}x , \end{aligned}$$
(5.27)
$$\begin{aligned} T_{\mathsf {R} }(\omega ,\varSigma ; k)&:= (2\pi )^{-3/2} \int \overline{\varphi _{\mathsf {R} }^{(N)}(k,x)}H(\omega ,\varSigma )(x) \psi _{\mathrm d}(x) {\mathrm d}x . \end{aligned}$$
(5.28)

The terms which appear if we apply \(A_{\mathsf {D} }^n\), \(\hat{{\mathsf {k} }}_j\), \({\hat{{\mathsf {q} }}}_j\), \(n \in {\mathbb {N}}_0\), \(j \in \{1,2,3\}\) to (5.27) can be estimated by means of Proposition 5.10 with the result that for some constant C and \(n_1, n_2 \in {\mathbb {N}}_0\),

$$\begin{aligned}&| A_{\mathsf {D} }^n T_0(\omega ,\varSigma ; k) | , \ | \hat{{\mathsf {k} }}_j A_{\mathsf {D} }^n T_0(\omega ,\varSigma ; k) | , \ | {\hat{{\mathsf {q} }}}_j A_{\mathsf {D} }^n T_0(\omega ,\varSigma ; k) | \ \nonumber \\&\quad \le \frac{ C \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial _x^\alpha ( H(\omega ,\varSigma ) \psi _{\mathrm d}) \right\| _1 }{1 + \left| k \right| ^2} \end{aligned}$$
(5.29)

for all \((\omega , \varSigma )\), \(k \ne 0\), and \(\psi _{\mathrm d}\in {\text {ran}}P_{{\text {disc}}}\). The terms coming from (5.28) can be estimated using (5.25) such that for some constant C and \(n_1 \in {\mathbb {N}}_0\),

$$\begin{aligned}&| A_{\mathsf {D} }^n T_{\mathsf {R} }(\omega ,\varSigma ; k) | , \ | \hat{{\mathsf {k} }}_j A_{\mathsf {D} }^n T_{\mathsf {R} }(\omega ,\varSigma ; k) | , \ | {\hat{{\mathsf {q} }}}_j A_{\mathsf {D} }^n T_{\mathsf {R} }(\omega ,\varSigma ; k) | \ \nonumber \\&\quad \le \frac{C \Vert \left\langle \cdot \right\rangle ^{n_1} H(\omega ,\varSigma ) \Vert \left\| \psi _{\mathrm d} \right\| }{1+ \left| k \right| ^2} \end{aligned}$$
(5.30)

for all \((\omega , \varSigma )\), \(k \ne 0\), and \(\psi _{\mathrm d}\in {\text {ran}}P_{{\text {disc}}}\). Now observe that by elliptic regularity (cf. [26, IX.6]) we have \(\psi _{\mathrm d}\in C^\infty ({\mathbb {R}}^3)\) and \(\partial ^\alpha \psi _{\mathrm d}\in L^2({\mathbb {R}}^3)\) for all \(\alpha \in {\mathbb {N}}_0^3\). Thus, as \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)), it follows that for fixed \(\psi _{\mathrm d}\) and H there exists a polynomial P and \(n_1, n_2 \in {\mathbb {N}}_0\) such that for all \((\omega ,\varSigma )\),

$$\begin{aligned} \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial _x^\alpha ( H(\omega ,\varSigma ) \psi _{\mathrm d}) \right\| _1 , \ \Vert \langle \cdot \rangle ^{n_1} H(\omega ,\varSigma ) \Vert \le P(\omega ) , \end{aligned}$$

using Cauchy–Schwarz and standard estimates involving Schwartz functions. Collecting estimates and using that the discrete spectrum is finite, we see that the operators (1), (2) and (3) are well-defined and their norms can be estimated by a polynomial in \(\omega \). Continuity in \((\omega ,\varSigma )\) with respect to the operator norm topology now follows from linearity, the bounds (5.29) and (5.30), and the fact that \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)) (and again standard estimates involving Schwartz functions). If \(s=1\), an analogous argument implies differentiability in \(\omega \) with the derivative given by replacing H by \(\partial _\omega H\). Now the claim for arbitrary s follows by induction. \(\square \)

Lemma 5.17

Suppose \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)). For all \((\omega ,\varSigma )\) and \(k,k' \in {\mathbb {R}}^3\) define

$$\begin{aligned} K_{\omega ,\varSigma }[H](k,k') := \int \overline{\varphi (k,x)} H(\omega ,\varSigma )(x) \varphi (k',x) {\mathrm d}x . \end{aligned}$$
(5.31)

Then for all \(Z \in \{ k \nabla _k + k' \nabla _{k'}, \eta _1(k) k \nabla _k + \eta _2(k) + \eta _1(k') k' \nabla _{k'} + \eta _2(k') \}\), where \(\eta _1, \eta _2 \in \mathcal {S}({\mathbb {R}}^3)\), \(j \in \{1,2,3\}\), and \(s \in {\mathbb {N}}_0\) there exists a polynomial P such that the absolute values of

  1. (1)

    \(Z^s K_{\omega ,\varSigma }[H](k,k')\),

  2. (2)

    \(\partial _{k_j} Z^s K_{\omega ,\varSigma }[H](k,k')\), \(\partial _{k'_j} Z^s K_{\omega ,\varSigma }[H](k,k')\),

  3. (3)

    \( (k_j - k_j') Z^s K_{\omega ,\varSigma }[H](k,k')\),

are bounded from above by

$$\begin{aligned} P(\omega ) \left( \frac{1}{(1+\left| k \right| ^2)(1+ \left| k' \right| ^2)} + \frac{1}{1+\left| k-k' \right| ^4} \right) \end{aligned}$$
(5.32)

for all \((\omega ,\varSigma )\) and \(k,k' \not = 0\). Furthermore, the following holds.

  1. (a)

For fixed \(k,k' \not = 0\), the functions \({\mathbb {R}}_+ \times \mathbb {S}^2 \rightarrow {\mathbb {C}}\) mapping \((\omega ,\varSigma )\) to the expressions (1)–(3) are continuous. If \(\tilde{H}\) satisfies (\(\mathrm{J}_{s}\)), these functions are s times continuously differentiable in \(\omega \) and the s-th partial derivative with respect to \(\omega \) is obtained by replacing H by \( \partial _\omega ^s H\).

  2. (b)

The integral kernels (1)–(3) define bounded operators in \(L^2(\mathbb {R}^3)\) whose norms are uniformly bounded in \(\varSigma \) by a polynomial in \(\omega \). With respect to the operator norm topology the following holds. These operators depend continuously on \((\omega ,\varSigma )\). If \(\tilde{H}\) satisfies (\(\mathrm{J}_{s}\)), these operators are s times continuously differentiable in \(\omega \) and the s-th partial derivative with respect to \(\omega \) is obtained by replacing H by \( \partial _\omega ^s H\).

Proof

Let \(X \in \{ {\text {Id}}, \partial _{k_j}, \partial _{k'_j}, k_j - k_j'\}\). Assume first that

$$\begin{aligned} Y \in \{ k \nabla _k + k' \nabla _{k'}, \eta _1(k) k \nabla _k \}. \end{aligned}$$

Fix \(s \in {\mathbb {N}}_0\). Using Proposition 5.4 we write

$$\begin{aligned} K_{\omega ,\varSigma }[H](k,k')&= \int \overline{\varphi (k,x)} H(\omega ,\varSigma )(x) \varphi (k',x) {\mathrm d}x \end{aligned}$$
(5.33)
$$\begin{aligned}&= \sum _{l,l'=0}^{N-1} \int \overline{ \varphi _0^{(l')}(k,x) } H(\omega ,\varSigma )(x) \varphi _0^{(l)}(k',x) {\mathrm d}x \end{aligned}$$
(5.34)
$$\begin{aligned}&\qquad + \sum _{l=0}^{N-1} \int \overline{ \varphi _{\mathsf {R} }^{(N)}(k,x)} H(\omega ,\varSigma )(x) \varphi _0^{(l)}(k',x) {\mathrm d}x \end{aligned}$$
(5.35)
$$\begin{aligned}&\qquad +\sum _{l=0}^{N-1} \int \overline{ \varphi _0^{(l)}(k,x)} H(\omega ,\varSigma )(x) \varphi _{\mathsf {R} }^{(N)}(k',x) {\mathrm d}x \end{aligned}$$
(5.36)
$$\begin{aligned}&\qquad + \int \overline{ \varphi _{\mathsf {R} }^{(N)}(k,x) } H(\omega ,\varSigma )(x) \varphi _{\mathsf {R} }^{(N)}(k',x) {\mathrm d}x. \end{aligned}$$
(5.37)

By Proposition 5.15 we can choose N large enough such that there exist constants \( n_1, n_2 \in {\mathbb {N}}\) and C with the property that for all \(f \in \mathcal {S}({\mathbb {R}}^3)\), \(k,k' \not = 0\), and \(p=1,\ldots ,N\),

$$\begin{aligned}&\left| X Y^s \left\langle \varphi _0^{(p)}(k,\cdot ), f \varphi _{\mathsf {R} }^{(N)}(k',\cdot ) \right\rangle \right| \le \frac{C \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial ^\alpha f \right\| _1 }{(1+\left| k \right| ^2)(1+\left| k' \right| ^2)} , \end{aligned}$$
(5.38)

which implies that for all \((\omega ,\varSigma )\),

$$\begin{aligned} | X Y^s (5.35) | , \ | X Y^s (5.36) | \le \frac{C \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial ^\alpha H(\omega ,\varSigma ) \right\| _1 }{(1+|k|^2)(1+|k'|^2)} , \end{aligned}$$
(5.39)

Moreover, by Proposition 5.9 there are constants \(n_1, n_2 \in {\mathbb {N}}\), C, such that for all \(k,k' \not = 0\),

$$\begin{aligned} | X Y^s (5.34) | \le \frac{ C \sup _{\left| \alpha \right| \le n_1} \left\| \left\langle \cdot \right\rangle ^{n_2} \partial _x^\alpha H(\omega ,\varSigma ) \right\| _1}{1+\left| k-k' \right| ^4} . \end{aligned}$$
(5.40)

Finally, using Proposition 5.13 we see, by possibly making N larger, that there exist constants \( n_1 \in {\mathbb {N}}\) and C such that

$$\begin{aligned} | X Y^s (5.37) | \ \le \frac{C \Vert \langle \cdot \rangle ^{n_1} H(\omega ,\varSigma ) \Vert _1 }{(1+|k|^2)(1+|k'|^2)} . \end{aligned}$$
(5.41)

On the other hand since \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)) and \(H = \chi \tilde{H}\), there exists for each \(n_1 \in {\mathbb {N}}_0\) and \(\alpha \in {\mathbb {N}}_0^3\) a polynomial P such that

$$\begin{aligned} \left\| \langle \cdot \rangle ^{n_1} \partial _x^{\alpha } H(\omega ,\varSigma ) \right\| _1 \le P(\omega ), \qquad (\omega ,\varSigma ) \in {\mathbb {R}}_+ \times \mathbb {S}^2 . \end{aligned}$$
(5.42)

It follows as a consequence of (5.38)–(5.42) that

$$\begin{aligned} | X Y^s K_{\omega ,\varSigma }[H](k,k') | \le \text { r.h.s. of}\, (5.32) . \end{aligned}$$
(5.43)

This shows (1)–(3) in case \(Z = k \nabla _k + k' \nabla _{k'} \). We note that \(Y = \eta _1(k) k \nabla _k \) will be used below.

Let us now assume

$$\begin{aligned} Z = \eta _1(k) k \nabla _k + \eta _2(k) + \eta _1(k') k' \nabla _{k'} + \eta _2(k') . \end{aligned}$$
(5.44)

To estimate derivatives acting on both sides of (5.33) we use Proposition 5.1, with the result that for all \(r,r' \in \{0,1\}\) and \(s,s' \in {\mathbb {N}}_0\) there exist \(n_1 \in {\mathbb {N}}_0\) and C such that for all nonzero \(k,k'\),

$$\begin{aligned} |\partial _{k_j}^r ( k \nabla _k)^s \partial _{k'_j}^{r'} (k' \nabla _{k'})^{s'}K_{\omega ,\varSigma }[H](k,k') | \le C \Vert \langle \cdot \rangle ^{n_1} H(\omega ,\varSigma ) \Vert _1 \langle k \rangle ^{s} \langle k' \rangle ^{s'} . \end{aligned}$$
(5.45)

To estimate the norm occurring on the right-hand side we shall use that for \(\tilde{H}\) satisfying (\(\mathrm{J}_0\)) and \(n_1 \in {\mathbb {N}}_0\) there exists a polynomial P such that

$$\begin{aligned} \Vert \langle \cdot \rangle ^{n_1} H(\omega ,\varSigma ) \Vert _1 \le P(\omega ) . \end{aligned}$$
(5.46)

Let \(W(k) = \eta _1(k) k \nabla _k + \eta _2(k) \). Then by the binomial theorem

$$\begin{aligned} Z^n K_{\omega ,\varSigma }[H](k,k')&= \sum _{l=0}^n \left( {\begin{array}{c}n\\ l\end{array}}\right) W(k)^{l} W(k')^{n-l} K_{\omega ,\varSigma }[H](k,k') . \end{aligned}$$
(5.47)

We see, after commuting Schwartz functions to the left, that for each \(l \ge 1\) there exist functions \(\eta ^{(l,s)}, {\tilde{\eta }}^{(l,s)} \in \mathcal {S}({\mathbb {R}}^3)\), \(0 \le s \le l\), such that

$$\begin{aligned} W(k)^{l} = \sum _{s=0}^l \eta ^{(l,s)} (\eta _1(k) k \nabla _k)^s = \sum _{s=0}^l {\tilde{\eta }}^{(l,s)} (k \nabla _k)^s . \end{aligned}$$
(5.48)

Let us first consider the terms in (5.47) for \(l=0\) and \(l=n\). Using the first equality in (5.48) and (5.43) for \(Y = \eta _1(k) k \nabla _k \) (as well as the analogous estimate in the variable \(k'\)), we find

$$\begin{aligned} | X W^n(k) K_{\omega ,\varSigma }[H](k,k') | , \ | X W^n(k') K_{\omega ,\varSigma }[H](k,k') | \le \text { r.h.s. of} \ (5.32) . \end{aligned}$$
(5.49)

The terms in (5.47) for \(l \in \{1,\ldots ,n-1\}\) are estimated using (5.45), the second equality in (5.48) controlling the growth in k and \(k'\), and finally (5.46). Thus we find with (5.49)

$$\begin{aligned} | X Z^n K_{\omega ,\varSigma }[H](k,k') | \le \text { r.h.s. of} \ (5.32) . \end{aligned}$$

This shows (1)–(3) in the case (5.44). It remains to prove (a) and (b).

  1. (a)

The continuity property in \((\omega ,\varSigma )\) for fixed nonzero \(k, k'\) can be seen from the integral (5.33), using dominated convergence together with the fact that \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)). To this end, we note that the integrand contains a Schwartz function and that the derivatives of the scattering functions are bounded by polynomials, as shown in Proposition 5.1. If \(s=1\), we analogously conclude differentiability in \(\omega \) and, furthermore, that the derivative is given by replacing H with \(\partial _\omega H\). For arbitrary s the claim then follows by induction.

  2. (b)

We first note that an operator whose integral kernel satisfies a bound of the form (5.32) is bounded with norm at most a constant times \(P(\omega )\). To this end, observe that an integral operator T with integral kernel

    $$\begin{aligned} t :\quad (k,k') \mapsto \frac{1}{(1+\left| k \right| ^2)(1+ \left| k' \right| ^2)} \end{aligned}$$

    is Hilbert-Schmidt and its norm is estimated by \(\Vert T \Vert \le \Vert t \Vert _2\), and that an operator S with integral kernel

    $$\begin{aligned} \quad (k,k') \mapsto \frac{1}{1+\left| k-k' \right| ^4} =: s(k-k') \end{aligned}$$

    is bounded by Young’s inequality for convolutions: \(\Vert S \psi \Vert _2 = \left\| s * \psi \right\| _2 \le \left\| s \right\| _1 \left\| \psi \right\| _2\), \(s \in L^1({\mathbb {R}}^3)\), \(\psi \in L^2({\mathbb {R}}^3)\). In view of this, continuity in \((\omega ,\varSigma )\) with respect to the operator norm topology now follows from linearity, the bounds (5.39)–(5.41) as well as (5.46), and the fact that \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)) (and a standard estimate involving Schwartz functions). If \(s=1\), we conclude analogously differentiability in \(\omega \), and that the derivative is given by replacing H with \(\partial _\omega H\). For arbitrary s the claim then follows by induction. \(\square \)
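For concreteness, the two norms appearing in the proof of part (b) can be computed explicitly (this is not needed for the argument, but it makes the constant in front of \(P(\omega )\) concrete): using \(\int _0^\infty r^2 (1+r^2)^{-2} \, {\mathrm d}r = \pi /4\) and \(\int _0^\infty r^2 (1+r^4)^{-1} \, {\mathrm d}r = \pi /(2\sqrt{2})\), one finds

$$\begin{aligned} \left\| t \right\| _{L^2({\mathbb {R}}^3 \times {\mathbb {R}}^3)} = \left\| (1+\left| \cdot \right| ^2)^{-1} \right\| _{L^2({\mathbb {R}}^3)}^2 = \int \frac{{\mathrm d}k}{(1+\left| k \right| ^2)^{2}} = \pi ^2 , \qquad \left\| s \right\| _{L^1({\mathbb {R}}^3)} = \int \frac{{\mathrm d}k}{1+\left| k \right| ^{4}} = \sqrt{2}\, \pi ^2 . \end{aligned}$$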

Remark 5.18

We note that for the proof of the main theorem we will only use Part (b) of Lemma 5.17 and Part (a) will not be needed. We nevertheless included Part (a) in Lemma 5.17, since in principle we could work with a weaker topology.

In the following lemma we first estimate the coupling functions in “scattering space”.

Lemma 5.19

Suppose \(\tilde{H}\) satisfies (\(\mathrm{J}_0\)). Then for all \(\epsilon \ge 0\), \(n \in {\mathbb {N}}_0\), \(j \in \{1,2,3\}\), \((\omega ,\varSigma )\),

  1. (1)

    \( {\text {ad}}^{(n)}_{\eta _\epsilon A_{\mathsf {D} }\eta _\epsilon }( V_{{\mathsf {c} }}H(\omega ,\varSigma )V_{{\mathsf {c} }}^* ) \),

  2. (2)

    \( {\text {ad}}_{\hat{{\mathsf {k} }}_j} \left( {\text {ad}}^{(n)}_{\eta _\epsilon A_{\mathsf {D} }\eta _\epsilon }( V_{{\mathsf {c} }}H(\omega ,\varSigma )V_{{\mathsf {c} }}^* ) \right) \),

  3. (3)

    \( {\text {ad}}_{{\hat{{\mathsf {q} }}}_j} \left( {\text {ad}}^{(n)}_{ \eta _\epsilon A_{\mathsf {D} }\eta _\epsilon }( V_{{\mathsf {c} }}H(\omega ,\varSigma )V_{{\mathsf {c} }}^* )\right) \),

  4. (4)

    \( {\hat{{\mathsf {q} }}}_j {\text {ad}}^{(n)}_{A_{\mathsf {D} }}( V_{{\mathsf {c} }}H(\omega ,\varSigma )V_{{\mathsf {c} }}^* ) \),

are well-defined bounded operators in \(L^2(\mathbb {R}^3)\) and we can estimate their norms uniformly in \(\varSigma \) by a polynomial in \(\omega \). With respect to the operator norm topology the following holds. (1)–(4) are continuous \(\mathcal {L}(L^2({\mathbb {R}}^3))\)-valued functions of \((\omega ,\varSigma )\). Moreover, if \(\tilde{H}\) satisfies (\(\mathrm{J}_{s}\)), then the functions (1)–(4) are s times continuously differentiable with respect to \(\omega \) and the s-th partial derivative of (1)–(4) with respect to \(\omega \) is obtained by replacing H by \( \partial _\omega ^s H\).

Proof

From Theorem 4.1 we see that for all \((\omega ,\varSigma )\),

$$\begin{aligned} V_{{\mathsf {c} }}H(\omega ,\varSigma )V_{{\mathsf {c} }}^* \psi (k) = (2\pi )^{-3} \int K_{\omega ,\varSigma }[H](k,k') \psi (k') {\mathrm d}k', \end{aligned}$$

with \(K_{\omega ,\varSigma }\) defined in (5.31). Thus the lemma follows directly from Lemma 5.17, observing that \(\eta _\epsilon A_D \eta _\epsilon = \frac{\mathrm {i}\eta _\epsilon ^2(k)}{2} k \nabla _k + \frac{ \mathrm {i}\eta _\epsilon (k)}{4} ( 2 k \nabla _{k} \eta _\epsilon (k) + 3 \eta _\epsilon (k) )\). \(\square \)

Let us now prove the central proposition of this section, which can be thought of as a preliminary version of Proposition 4.2 but without the cutoff function \(\kappa \). For the proof we need the following auxiliary lemma.

Lemma 5.20

Let \(\mathcal {H}_0\) and \(\mathcal {H}_1\) be Hilbert spaces. Let B be a bounded operator in \(\mathcal {H}_0\) and let \(V :\mathcal {H}_0 \rightarrow \mathcal {H}_1\) be a partial isometry with \({\text {ran}}V = \mathcal {H}_1\). Let P be the orthogonal projection onto the kernel of V. Suppose A is a self-adjoint operator in \(\mathcal {H}_1\) such that for all \(j=1,\ldots ,n\) the set \({\text {ran}}V (V^* A V)^{j-1} B P\) is contained in the domain of A and the operators \({\text {ad}}_A^{(j)}(V B V^*)\) and \( (V^* A V)^j B P\) are bounded. Then \({\text {ad}}_{V^* A V}^{(n)}(B)\) is a bounded operator on \(\mathcal {H}_0\) and

$$\begin{aligned} {\text {ad}}_{V^* A V}^{(n)}(B) = V^* {\text {ad}}_{ A }^{(n)}(V B V^*) V + ( \mathrm {i}V^* A V )^n B P + P B ( -\mathrm {i}V^* A V )^n . \end{aligned}$$

Proof

This follows by induction in n and a straightforward calculation. \(\square \)
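For the reader's convenience, the case \(n = 1\) reads as follows, assuming the convention \({\text {ad}}_A(B) = \mathrm {i}[A,B]\): using \(V V^* = {\text {Id}}_{\mathcal {H}_1}\), \(V^* V = {\text {Id}}_{\mathcal {H}_0} - P\) and \(V P = 0\) (hence \(P V^* = 0\)), one computes

$$\begin{aligned} V^* \mathrm {i}[A, V B V^*] V = \mathrm {i}\left( V^* A V B - B V^* A V \right) - \mathrm {i}\, V^* A V B P + \mathrm {i}\, P B V^* A V , \end{aligned}$$

and the last two terms are cancelled by \((\mathrm {i}V^* A V) B P + P B (-\mathrm {i}V^* A V)\), which yields \({\text {ad}}_{V^* A V}(B) = \mathrm {i}[V^* A V, B]\).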

Proposition 5.21

Suppose \(\tilde{H}\) satisfies (\(\mathrm{J}_{s}\)). Then for all \(\epsilon \ge 0\), \(n \in {\mathbb {N}}_0\), \(j \in \{1,2,3\}\), \((\omega ,\varSigma )\), \(r=0,\ldots ,s\) the operators

  1. (1)

    \( \ \partial _\omega ^r {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( H(\omega ,\varSigma ))\),

  2. (2)

    \( \ \partial _\omega ^r {\text {ad}}_{V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}}\left( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( H(\omega ,\varSigma )) \right) \),

  3. (3)

    \( \ \partial _\omega ^r {\text {ad}}_{V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}}\left( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( H(\omega ,\varSigma )) \right) \),

  4. (4)

    \( \ \partial _\omega ^r V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}{\text {ad}}^{(n)}_{{A_{\mathrm p}}}( H(\omega ,\varSigma )) \) and \( \partial _\omega ^r {\text {ad}}^{(n)}_{{A_{\mathrm p}}}( H(\omega ,\varSigma )) V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}\),

where the derivative \(\partial _\omega \) is understood with respect to the operator norm topology, are well-defined, bounded, and we can estimate their norms uniformly in \(\varSigma \) by a polynomial in \(\omega \). The operators (1)–(4) depend continuously on \((\omega ,\varSigma )\) with respect to the operator norm topology.

Proof

Follows directly from Proposition 5.16, Lemma 5.19 and an application of Lemma 5.20, with \(V = V_{\mathrm c}\), \(P = P_{\mathrm { disc}}\), \(A = A_{\mathsf {D} }\), and \(B = H(\omega ,\varSigma )\). \(\square \)

Proof of Proposition 4.2

To show the proposition we will use Proposition 5.21 and Lemma A.3. First we consider the case where (i) of (I3) holds. Let \(F, \tilde{F} :{\mathbb {R}}_+ \times \mathbb {S}^2 \rightarrow \mathcal {L}(\mathcal {H}_{\mathrm p})\), where \(F(\omega ,\varSigma ) = \kappa (\omega ) {\tilde{F}}(\omega ,\varSigma )\) and \(\tilde{F} \) is one of the functions

$$\begin{aligned} \begin{aligned}&{\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \chi \tilde{G}(\cdot ) ), ~{\text {ad}}_{V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}}\left( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \chi \tilde{G}(\cdot ) )\right) ,~ {\text {ad}}_{V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}}\left( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \chi \tilde{ G}(\cdot ))\right) ,\\&V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}{\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \chi \tilde{G}(\cdot ) ) , \end{aligned} \end{aligned}$$
(5.50)

for \(j \in \{1,2,3\}\). Since (I1) implies that \(\tilde{G}\) satisfies (\(\mathrm{J}_{3}\)), Proposition 5.21 shows that the \( \mathcal {L}(\mathcal {H}_{\mathrm p})\)-valued functions (5.50), as well as their first three partial derivatives with respect to \(\omega \in {\mathbb {R}}_+\), are well-defined. From Leibniz’ rule we find for \(m \le 3\),

$$\begin{aligned} \partial _\omega ^m F(\omega ,\varSigma ) = \sum _{l=0}^m \left( {\begin{array}{c}m\\ l\end{array}}\right) \partial _\omega ^{l} \kappa (\omega ) \partial _\omega ^{m-l} {\tilde{F}}(\omega ,\varSigma ) . \end{aligned}$$
(5.51)

By Proposition 5.21 there exists a polynomial P such that for all \(l = 0, \ldots , m\), \((\omega ,\varSigma )\),

$$\begin{aligned} \left\| \partial _\omega ^{m-l} {\tilde{F}}(\omega ,\varSigma ) \right\| \le P(\omega ). \end{aligned}$$
(5.52)

Now Condition (2) of Lemma A.3 holds for F by (5.52), (5.51) and (I2). Condition (1) of Lemma A.3 is seen to hold by (5.52), (5.51) and (i) of (I3). Thus, by Lemma A.3, \((u,\varSigma ) \mapsto \partial ^m_u \tau _\beta (F)(u,\varSigma )\) belongs to \(L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}))\) for all \(0 \le m \le 3\), and moreover (4.16) holds. Hence we have shown Proposition 4.2 in the case that (i) of (I3) holds.

Let us now turn to the case where (ii) of (I3) holds. To this end, let \(F_0, \tilde{F}_0 :[0,\infty ) \times \mathbb {S}^2 \rightarrow \mathcal {L}(\mathcal {H}_{\mathrm p})\), where \(F_0(\omega ,\varSigma ) = \kappa _0(\omega ) {\tilde{F}}_0(\omega ,\varSigma )\) and \(\tilde{F}_0 \) is one of the functions

$$\begin{aligned} \begin{aligned}&{\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \chi \tilde{G}_0(\cdot ) ), ~{\text {ad}}_{V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}_j V_{{\mathsf {c} }}}\left( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \chi \tilde{ G}_0(\cdot ) )\right) ,~ {\text {ad}}_{V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}}\left( {\text {ad}}^{(n)}_{{A_{\mathrm p}^{(\epsilon )}}}( \chi \tilde{G}_0(\cdot ))\right) ,\\&V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}{\text {ad}}^{(n)}_{{A_{\mathrm p}}}(\chi \tilde{ G}_0 ) + {\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \chi \tilde{G}_0 ) V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}\end{aligned} \end{aligned}$$
(5.53)

for \(j \in \{1,2,3\}\). The verification of Assumption (2) of Lemma A.3 for \(F_0\) is analogous to the first case. Since (ii) of (I3) implies that \(\tilde{G}_0\) satisfies (\(\mathrm{J}_{s}\)) with \(s = \max \{0,3-J\}\), we find by Proposition 5.21 that the \( \mathcal {L}(\mathcal {H}_{\mathrm p})\)-valued functions (5.53), as well as their first s partial derivatives with respect to \(\omega \in [0,\infty )\), are well-defined and continuous. By assumption (ii) of (I3) it is straightforward to verify that \(F_0\) satisfies Assumption (1’) of Lemma A.3, noting that the adjoints are obtained by replacing \(\kappa _0 \chi \tilde{G}_0\) by \(\overline{ (\kappa _0 \chi \tilde{G}_0)}\). Thus Proposition 4.2 now follows from Lemma A.3, observing that for (4) we use the identity

$$\begin{aligned}&V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}{\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta (G )) \\&\quad = \frac{1}{2} \left( V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}{\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta (G ) ) + {\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta (G )) V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}- \mathrm {i}{\text {ad}}_{V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}}({\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta ( G) )) \right) . \end{aligned}$$
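Here the identity is elementary: writing \(X := V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}\), \(Y := {\text {ad}}^{(n)}_{{A_{\mathrm p}}}( \tau _\beta (G ))\), and assuming again the convention \({\text {ad}}_{X}(Y) = \mathrm {i}[X,Y]\), we have \(- \mathrm {i}\, {\text {ad}}_{X}(Y) = XY - YX\) and hence

$$\begin{aligned} \frac{1}{2} \left( X Y + Y X - \mathrm {i}\, {\text {ad}}_{X}(Y) \right) = \frac{1}{2} \left( X Y + Y X + X Y - Y X \right) = X Y . \end{aligned}$$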

Finally, note that the proposition still holds true if we replace G by \(G^*\) since the conditions (I1)–(I3) obviously follow for \(G^*\). \(\square \)

6 Proof of Positivity and of the Main Theorem

In this section we discuss the main estimates and the positivity proof of the commutator. First we introduce the two terms \(A_0\) and \(C_{ Q }\) which, as already mentioned in the overview of the proof, we add to \(C_1\). Then we show how the main theorem is proved, given that the sum of all three terms is positive (Proposition 6.2). Subsequently, in Sect. 6.2, we show how to estimate these three terms separately and which error terms occur. Finally, we conclude by proving Proposition 6.2.

In this section we assume that (H1), (H2), (I1)–(I3) hold, i.e., the assumptions of Theorem 2.6 are satisfied.

6.1 Putting Things Together, Proof of the Main Theorem

Recall that on \({\mathcal {D}}\) we have

$$\begin{aligned} C_1 = V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{{\text {f}}}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}+ \widehat{N_{{\text {f}}}}+ \lambda W_1 , \end{aligned}$$
(6.1)

where \(W_1\) is the commutator with the interaction, cf. (4.14). For \(\lambda = 0\), \(C_1\) is strictly positive on the orthogonal complement of the vacuum subspace, and its first two terms are positive on \(({\text {ran}}(P_{{\text {disc}}}\otimes P_{{\text {disc}}}))^\perp \otimes \mathfrak {F}\).

On the space \({\text {ran}}\varPi \) we use the Fermi Golden Rule and introduce the corresponding conjugate operator \(A_0\) in order to obtain a positive expression in Proposition B.2 as a commutator with \(L_\lambda \). Such an operator was introduced for zero temperature systems in [1] and later adapted to the positive temperature setting in [19]. It is a bounded self-adjoint operator on \(\mathcal {H}\), given by

$$\begin{aligned} A_0 := \mathrm {i}\lambda (\varPi W R_\varepsilon ^2 { \varPi }^\perp - { \varPi }^\perp R_\varepsilon ^2 W \varPi ), \end{aligned}$$
(6.2)

where \(R_\varepsilon ^2 := (L_0^2 + \varepsilon ^2)^{-1}\) and \(\varepsilon > 0\). One can show that \({\text {ran}}A_0 \subseteq \mathcal {D}(L_\lambda )\), since \({\text {ran}}A_0 \subseteq \mathcal {H}_{\mathrm p}\otimes \mathcal {H}_{\mathrm p}\otimes \mathfrak {F}_{{\text {fin}}}\) (cf. Lemma B.1). Furthermore,

$$\begin{aligned} \mathrm {i}[L_\lambda , A_0] = - \lambda [L_\lambda , \varPi W R_\varepsilon ^2 { \varPi }^\perp - { \varPi }^\perp R_\varepsilon ^2 W \varPi ] \end{aligned}$$
(6.3)

is bounded and extends to a self-adjoint operator as well, and we have

$$\begin{aligned} \varPi \mathrm {i}[ L_\lambda , A_0] \varPi \restriction _{{\text {ran}}\varPi } > 0 \end{aligned}$$

for suitable \(\varepsilon > 0\), see Proposition B.2.

To obtain positivity on the remaining space \((\ker L_{\mathrm p})^\perp \otimes {\text {ran}}P_\varOmega \) we introduce the bounded self-adjoint operator

$$\begin{aligned} C_{ Q } := Q \otimes P_\varOmega + \frac{\lambda }{2} W ( L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) + \frac{\lambda }{2} ( W ( L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) )^* \end{aligned}$$
(6.4)

on \(\mathcal {H}\), where \(L_{\mathrm p}^{-1}\) is to be understood in the sense of functional calculus as an unbounded operator, and

$$\begin{aligned} Q := L_{\mathrm p}^2 \mathbb {1}_{[-1,1]}(L_{\mathrm p}) + \mathbb {1}_{(-\infty ,-1) \cup (1,\infty )}(L_{\mathrm p}). \end{aligned}$$
(6.5)

Notice that, by construction, \({\text {ran}}Q \subseteq \mathcal {D}(L_{\mathrm p}^{-1})\) and \(L_{\mathrm p}^{-1} Q\) is bounded and self-adjoint, so the definition of \(C_{ Q }\) makes sense. Furthermore, the first summand \( Q \otimes P_\varOmega \) is indeed positive on \((\ker L_{\mathrm p})^\perp \otimes {\text {ran}}P_\varOmega \).
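Indeed, in terms of the functional calculus,

$$\begin{aligned} L_{\mathrm p}^{-1} Q = L_{\mathrm p}\mathbb {1}_{[-1,1]}(L_{\mathrm p}) + L_{\mathrm p}^{-1} \mathbb {1}_{(-\infty ,-1) \cup (1,\infty )}(L_{\mathrm p}) , \end{aligned}$$

so that \(L_{\mathrm p}^{-1} Q\) is self-adjoint with \(\left\| L_{\mathrm p}^{-1} Q \right\| \le 1\).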

The goal is to show that the sum of the three operators (6.1), (6.3) and (6.4) is positive and has vanishing expectation value in any element of the kernel of \(L_\lambda \). To this end we define, for \(\psi \in \mathcal {D}(q_{C_1}) \cap \mathcal {D}(L_\lambda )\) and some \(\theta > 0\),

$$\begin{aligned} q_{{\text {tot}}}(\psi ) := q_{C_1}(\psi ) + \theta \left\langle \psi ,\mathrm {i}[L_\lambda , A_0] \psi \right\rangle + \left\langle \psi ,C_{ Q } \psi \right\rangle , \end{aligned}$$
(6.6)

where \(q_{C_1}\) denotes the quadratic form corresponding to \(C_1\). By the virial theorem for \(C_1\) and by construction of the other two terms, this form is in fact non-positive for any \(\psi \in \ker L_\lambda \). This is the content of the following proposition.

Proposition 6.1

For arbitrary \(\lambda \), \(\varepsilon \), \(\theta \), and \(\psi \in \ker L_\lambda \) we have \(\psi \in \mathcal {D}(q_{C_1})\) and \(q_{{\text {tot}}}(\psi ) \le 0\).

Proof

Let \(\psi \in \ker L_\lambda \). By Theorem 4.5 we know that \(\psi \in \mathcal {D}(q_{C_1})\), and

$$\begin{aligned} q_{C_1}(\psi ) \le 0. \end{aligned}$$

Since \(\psi \in \ker L_\lambda \) and \({\text {ran}}A_0 \subseteq \mathcal {D}(L_\lambda )\), we have \(\left\langle \psi ,\mathrm {i}[L_\lambda , A_0] \psi \right\rangle = \mathrm {i}\left\langle L_\lambda \psi , A_0 \psi \right\rangle - \mathrm {i}\left\langle \psi , A_0 L_\lambda \psi \right\rangle = 0\). Furthermore, by construction, the last term vanishes:

$$\begin{aligned} 0&= \left\langle (L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) L_\lambda \psi , \psi \right\rangle \\&= \left\langle \psi , L_\lambda (L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) \psi \right\rangle \\&= \left\langle \psi , (L_{\mathrm p}\otimes {\text {Id}}_{{\text {f}}}+ \lambda W) (L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) \psi \right\rangle \\&= \left\langle \psi , (Q \otimes P_\varOmega ) \psi + \lambda W (L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) \psi \right\rangle . \end{aligned}$$

Then \(\left\langle \psi ,C_{ Q } \psi \right\rangle = 0\) follows by adding the complex conjugate term. Thus,

$$\begin{aligned} q_{{\text {tot}}}(\psi ) = q_{C_1}(\psi ) \le 0. \end{aligned}$$

\(\square \)

In the next subsection we prove the following proposition, which states that \(q_{{\text {tot}}}\) is in fact positive. Recall that \(\gamma _\beta (\varepsilon )\) is the constant appearing in (F).

Proposition 6.2

Let \(\beta _0 > 0\) and \(\varepsilon > 0\). Then there exists a constant \(C > 0\) such that for all \(\beta \ge \beta _0\) and \(0< \left| \lambda \right| < C \min \{ 1 , \gamma _\beta ^2(\varepsilon )\}\), we have \(q_{{\text {tot}}}> 0\). That is, \(q_{{\text {tot}}}\ge 0\) and \(q_{{\text {tot}}}(\psi ) = 0\) for some \(\psi \in \mathcal {D}(q_{C_1}) \cap \mathcal {D}(L_\lambda )\) implies \(\psi = 0\).

Let us now give the proof of the main theorem.

Proof of Theorem 2.6

Let \(\psi \in \ker L_\lambda \). By Proposition 6.1 we have \(\psi \in \mathcal {D}(q_{C_1})\) and \(q_{{\text {tot}}}(\psi ) \le 0\), while Proposition 6.2 yields \(q_{{\text {tot}}}(\psi ) \ge 0\). Hence \(q_{{\text {tot}}}(\psi ) = 0\), and Proposition 6.2 implies \(\psi = 0\). Thus \(\ker L_\lambda = \{0\}\), and the theorem follows. \(\square \)

6.2 Error Estimates

In the following proposition we prove separate lower bounds for the three operators (6.1), (6.3) and (6.4). We use the short-hand notation

$$\begin{aligned} \widehat{P_\varOmega }:= {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{\mathrm p}\otimes P_\varOmega . \end{aligned}$$

Proposition 6.3

The following holds.

  1. (a)

    There exist constants \(c_1\), \(c_2 > 0\) such that, for all \(\lambda \in {\mathbb {R}}\), we have in the sense of quadratic forms on \(\mathcal {D}(q_{C_1})\),

    $$\begin{aligned} C_1&\ge \big [V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\nonumber \\&\qquad \quad - c_1 (1+ \beta ^{-1}) \lambda ^2 \big ( V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}+ { (P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }) }^\perp \big ) \big ] \nonumber \\&\qquad \quad \otimes {\text {Id}}_{{\text {f}}}+ c_2 { \widehat{P_\varOmega } }^\perp . \end{aligned}$$
    (6.7)
  2. (b)

    For all \(\varepsilon >0\) there exist constants \(c_1, c_2, c_3 > 0\) (depending on \(\varepsilon \)) such that for \(\left| \lambda \right| < 1\),

    $$\begin{aligned} \mathrm {i}[L_\lambda , A_0]&\ge (1- c_1 \left| \lambda \right| ) 2 \lambda ^2 \varPi W R_\varepsilon ^2 W \varPi - c_2 (1+\beta ^{-1}) \left| \lambda \right| { \widehat{P_\varOmega } }^\perp \\&\qquad - c_3 (1+\beta ^{-1}) \lambda ^2 \mathbb {1}_{L_{\mathrm p}\not = 0} \big ( V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}\\&\qquad + { (P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }) }^\perp \big ) \mathbb {1}_{L_{\mathrm p}\not = 0} \otimes P_\varOmega . \end{aligned}$$
  3. (c)

    There exists a constant \(c_1 > 0\) such that, for all \(\lambda \in {\mathbb {R}}\),

    $$\begin{aligned} C_{ Q } \ge (1-c_1 \left| \lambda \right| (1+\beta ^{-1}) ) Q \otimes P_\varOmega - \left| \lambda \right| { \widehat{P_\varOmega } }^\perp . \end{aligned}$$

Before we can give the proof of Proposition 6.3 we need some preparatory lemmas. It is convenient to introduce some further notation for the interaction and the commuted interaction. We separate them into parts which act on the left and right factor of the particle space tensor product, respectively,

$$\begin{aligned} I^{(\mathsf {l} )}_1(u,\varSigma )&:= (-\mathrm {i}\partial _u) \tau _\beta (G)(u,\varSigma ) + \tau _\beta ( {\text {ad}}_{{A_{\mathrm p}}}(G))(u,\varSigma ) , \\ I^{(\mathsf {r} )}_1(u,\varSigma )&:= (-\mathrm {i}\partial _u) e^{-\beta u /2} \tau _\beta ({\overline{G}}^*)(u,\varSigma ) - e^{-\beta u /2} \tau _\beta ( {\text {ad}}_{{A_{\mathrm p}}} ({\overline{G}}^*))(u,\varSigma ). \end{aligned}$$

We also introduce integrated versions, which will be used in the subsequent estimates,

$$\begin{aligned} w&:= \int I(u,\varSigma )^* I(u,\varSigma ) {\mathrm d}(u, \varSigma ), \\ w_1&:= \int I_1 (u,\varSigma )^* I_1(u,\varSigma ) {\mathrm d}(u, \varSigma ), \end{aligned}$$

and the left and right parts, for \(\alpha = \mathsf {l} ,\mathsf {r} \),

$$\begin{aligned} w^{(\alpha )}&:= \int I^ {(\alpha )}(u,\varSigma )^* I^ {(\alpha )}(u,\varSigma ) {\mathrm d}(u, \varSigma ), \\ w_1^{(\alpha )}&:= \int I_1^ {(\alpha )}(u,\varSigma )^* I_1^ {(\alpha )}(u,\varSigma ) {\mathrm d}(u, \varSigma ). \end{aligned}$$

By construction \(W_1 = \varPhi (I_1)\) and we have the decomposition

$$\begin{aligned} I_1(u,\varSigma )&= I^{(\mathsf {l} )}_1(u,\varSigma ) \otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes I^{(\mathsf {r} )}_1(u,\varSigma ). \end{aligned}$$
(6.8)

First, we estimate the commuted interaction term \(W_1\) from (6.1), as needed for the bound (6.7). For this, we prove bounds for \(w\) and \(w_1\).

Lemma 6.4

There exists a constant \(C > 0\), independent of \(\beta \), such that, for \(\alpha = \mathsf {l} ,\mathsf {r} \),

  1. (a)

    \(\left\| I^{(\alpha )} \right\| ^2_{ L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}) )} \le C ( 1+ \beta ^{-1})\),

  2. (b)

    \(\left\| I_1^{(\alpha )} \right\| ^2_{ L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}) )} \le C ( 1+ \beta ^{-1})\),

  3. (c)

    \(P_{\mathsf {ess} }w^{(\alpha )} P_{\mathsf {ess} }\le C ( 1+ \beta ^{-1}) V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}\),

  4. (d)

    \( P_{\mathsf {ess} }w_1^{(\alpha )}P_{\mathsf {ess} }\le C ( 1+ \beta ^{-1}) V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}\).

Proof

By Proposition 4.2 we have that, for all \(j \in \{1,2,3\}\), \(\alpha = \mathsf {l} , \mathsf {r} \),

$$\begin{aligned} I^{(\alpha )},~ I^{(\alpha )}_1,~ I^{(\alpha )} V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}, ~ I^{(\alpha )}_1 V_{{\mathsf {c} }}^* {\hat{{\mathsf {q} }}}_j V_{{\mathsf {c} }}\in L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p})), \end{aligned}$$

and there is a constant C independent of \(\beta \) such that we can estimate the norm of these expressions by

$$\begin{aligned} C (1+\beta ^{-1}). \end{aligned}$$

Thus, the same applies for \( I^{(\alpha )} V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle V_{{\mathsf {c} }}, ~ I^{(\alpha )}_1 V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle V_{{\mathsf {c} }}\in L^2({\mathbb {R}}\times \mathbb {S}^2, \mathcal {L}(\mathcal {H}_{\mathrm p}) )\). Consequently, we obtain, for \(\alpha = \mathsf {l} , \mathsf {r} \), and some constant \(C >0\) not depending on \(\beta \),

$$\begin{aligned} P_{\mathsf {ess} }&w^{(\alpha )} P_{\mathsf {ess} }\\&= V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^ {-1} V_{{\mathsf {c} }}\int (I^{(\alpha )}(u,\varSigma ) V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle V_{{\mathsf {c} }})^* I^{(\alpha )}(u,\varSigma ) V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle V_{{\mathsf {c} }}{\mathrm d}( u, \varSigma ) V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^ {-1} V_{{\mathsf {c} }}\\&\le C ( 1+ \beta ^{-1}) V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2 } V_{{\mathsf {c} }}. \end{aligned}$$

The proof for \(w^{(\alpha )}_{1}\) is analogous. \(\square \)

Lemma 6.5

For any \(\delta > 0\) we have

$$\begin{aligned} \pm \lambda W_1&\le \delta \widehat{N_{{\text {f}}}}+ \frac{1}{\delta } \lambda ^2 w_1 \otimes {\text {Id}}_{{\text {f}}}\end{aligned}$$
(6.9)

in the sense of forms.

Proof

The standard estimates of the creation and annihilation operators lead to the inequality above; we sketch the argument.
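Assuming the convention \(W_1 = \varPhi (I_1) = a(I_1) + a^*(I_1)\) (an additional normalization factor \(1/\sqrt{2}\) would only improve the constants), the standard bound on the annihilation operator and the Cauchy–Schwarz inequality give, for \(\psi \) in the form domain of \(\widehat{N_{{\text {f}}}}\),

$$\begin{aligned} \left| \lambda \left\langle \psi , W_1 \psi \right\rangle \right| \le 2 \left| \lambda \right| \left\langle \psi ,\widehat{N_{{\text {f}}}}\psi \right\rangle ^{1/2} \left\langle \psi , (w_1 \otimes {\text {Id}}_{{\text {f}}}) \psi \right\rangle ^{1/2} \le \delta \left\langle \psi ,\widehat{N_{{\text {f}}}}\psi \right\rangle + \frac{\lambda ^2}{\delta } \left\langle \psi , (w_1 \otimes {\text {Id}}_{{\text {f}}}) \psi \right\rangle , \end{aligned}$$

where the last step follows from \(2xy \le \delta x^2 + \delta ^{-1} y^2\), applied with \(x = \left\langle \psi ,\widehat{N_{{\text {f}}}}\psi \right\rangle ^{1/2}\) and \(y = \left| \lambda \right| \left\langle \psi , (w_1 \otimes {\text {Id}}_{{\text {f}}}) \psi \right\rangle ^{1/2}\). \(\square \)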

Now we estimate the second term, which involves \(A_0\). As an immediate consequence of the definition (6.2), we have

$$\begin{aligned} \mathrm {i}[L_\lambda , A_0] = - \lambda [L_\lambda , \varPi W R_\varepsilon ^2 { \varPi }^\perp - { \varPi }^\perp R_\varepsilon ^2 W \varPi ]. \end{aligned}$$

With respect to the different subspaces we obtain

$$\begin{aligned} \varPi \mathrm {i}[ L_\lambda , A_0] \varPi&= 2 \lambda ^2 \varPi W R_\varepsilon ^2 W \varPi , \end{aligned}$$
(6.10)
$$\begin{aligned} { \varPi }^\perp \mathrm {i}[ L_\lambda , A_0] { \varPi }^\perp&= - \lambda ^2 ( { \varPi }^\perp W \varPi W R_\varepsilon ^2 { \varPi }^\perp + { \varPi }^\perp R_\varepsilon ^2 W \varPi W { \varPi }^\perp ), \end{aligned}$$
(6.11)
$$\begin{aligned} \varPi \mathrm {i}[ L_\lambda , A_0] { \varPi }^\perp&= \lambda \varPi W R_\varepsilon ^2 { \varPi }^\perp L_\lambda { \varPi }^\perp . \end{aligned}$$
(6.12)
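
For instance, the identity (6.10) can be checked directly; we give a brief sketch, using that \(L_0 \varPi = 0\) and \([\varPi , R_\varepsilon ^2] = 0\) (both by the construction of \(\varPi \)), and that \(\varPi W \varPi = 0\) (as \(W\) is linear in creation and annihilation operators):

$$\begin{aligned} \varPi \mathrm {i}[ L_\lambda , A_0] \varPi&= \lambda \left( \varPi W R_\varepsilon ^2 { \varPi }^\perp L_\lambda \varPi + \varPi L_\lambda { \varPi }^\perp R_\varepsilon ^2 W \varPi \right) \\&= \lambda ^2 \left( \varPi W R_\varepsilon ^2 { \varPi }^\perp W \varPi + \varPi W { \varPi }^\perp R_\varepsilon ^2 W \varPi \right) = 2 \lambda ^2 \varPi W R_\varepsilon ^2 W \varPi , \end{aligned}$$

where in the second step we inserted \(L_\lambda = L_0 + \lambda W\) and used \(L_0 \varPi = 0\), and in the last step we inserted \({ \varPi }^\perp = 1 - \varPi \) and used \(\varPi W \varPi = 0\) together with \([\varPi , R_\varepsilon ^2] = 0\).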

The Fermi Golden Rule condition implies strict positivity of \(\varPi \mathrm {i}[ L_\lambda , A_0] \varPi \), see Proposition B.2. The other expressions are potentially negative and are estimated in the following lemma, which we use later in a Birman–Schwinger argument. It contains slightly sharper estimates than the corresponding lemma in [8].

Lemma 6.6

For all \(\varepsilon > 0\) and all \(\lambda \in {\mathbb {R}}\) the following holds.

  1. (a)

    We have \( { \varPi }^\perp \mathrm {i}[L_\lambda , A_0] { \varPi }^\perp = \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} { \varPi }^\perp \mathrm {i}[L_\lambda , A_0] { \varPi }^\perp \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} \). Moreover,

    $$\begin{aligned} \left\| { \varPi }^\perp \mathrm {i}[L_\lambda , A_0] { \varPi }^\perp \right\| \le 2 \frac{\lambda ^2}{\varepsilon ^2} \left\| I \right\| ^2 . \end{aligned}$$
  2. (b)

    For arbitrary \(\delta _1, \delta _2 > 0\),

    $$\begin{aligned} { \varPi }^\perp&\mathrm {i}[L_\lambda , A_0] \varPi + \varPi \mathrm {i}[L_\lambda , A_0] { \varPi }^\perp \le (\left| \lambda \right| \delta _1 + \delta _2 \lambda ^2) \varPi W R_\varepsilon ^2 W \varPi + \frac{ \left| \lambda \right| }{ \delta _1} \mathbb {1}_{\widehat{N_{{\text {f}}}}= 1} \\&\quad + 2 \frac{ \lambda ^2}{\delta _2} \bigg ( \mathbb {1}_{\widehat{N_{{\text {f}}}}=2} a^*(I) R_\varepsilon ^2 a(I) \mathbb {1}_{\widehat{N_{{\text {f}}}}=2} \\&\quad + { \varPi }^\perp \int \frac{ I(u,\varSigma )^* I(u,\varSigma )}{u^2 + \varepsilon ^2} {\mathrm d}(u, \varSigma ) \otimes P_\varOmega { \varPi }^\perp \bigg ). \end{aligned}$$

Proof

  1. (a)

    Consider the first term in (6.11),

    $$\begin{aligned} { \varPi }^\perp W \varPi W R_\varepsilon ^2 { \varPi }^\perp = a^*(I) \varPi a(I) R_\varepsilon ^2 { \varPi }^\perp . \end{aligned}$$

    Clearly, this operator vanishes everywhere except on \({\text {ran}}\mathbb {1}_{\widehat{N_{{\text {f}}}}=1}\). By standard estimates of creation and annihilation operators we obtain

    $$\begin{aligned} \left\| a^*(I) \varPi a(I) \right\| \le \left\| I \right\| ^2 , \end{aligned}$$

    thus

    $$\begin{aligned} { \left\| { \varPi }^\perp W \varPi W R_\varepsilon ^2 { \varPi }^\perp \right\| \le \frac{ \left\| I \right\| ^2 }{\varepsilon ^2}, } \end{aligned}$$

    which proves the claim, as the second term in (6.11) is just the adjoint of the first one.

  2. (b)

    Using (6.12), its adjoint, and the operator inequality (4.17), we get

    $$\begin{aligned} { \varPi }^\perp&\mathrm {i}[L_\lambda , A_0] \varPi + \varPi \mathrm {i}[L_\lambda , A_0] { \varPi }^\perp \nonumber \\&= \lambda ( \varPi W R_\varepsilon ^2 L_0 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} + \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} L_0 R_\varepsilon ^2 W \varPi ) + \nonumber \\&\qquad \lambda ^2 ( \varPi W R_\varepsilon ^2 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} W { \varPi }^\perp + { \varPi }^\perp W \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} R_\varepsilon ^2 W \varPi ) \nonumber \\&\le \left| \lambda \right| ( \delta _1 \varPi W R_\varepsilon ^2 W \varPi + \delta _1^{-1} \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} L_0 R_\varepsilon ^2 L_0 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} ) \end{aligned}$$
    (6.13)
    $$\begin{aligned}&\qquad + \lambda ^2( \delta _2 \varPi W R_\varepsilon ^2 W \varPi + \delta _2^{-1} { \varPi }^\perp W \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} R_\varepsilon ^2 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} W { \varPi }^\perp ). \end{aligned}$$
    (6.14)

    We have

    $$\begin{aligned} \left\| \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} L_0 R_\varepsilon ^2 L_0 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} \right\| \le 1, \end{aligned}$$

    Indeed, by the spectral theorem \(L_0 R_\varepsilon ^2 L_0 = L_0^2 (L_0^2 + \varepsilon ^2)^{-1} \le 1\), which yields a bound for the second operator in (6.13). The second operator in (6.14) acts only on the space \({\text {ran}}( \mathbb {1}_{\widehat{N_{{\text {f}}}}= 0} + \mathbb {1}_{\widehat{N_{{\text {f}}}}= 2})\), so we can write

    $$\begin{aligned} { \varPi }^\perp W \mathbb {1}_{\widehat{N_{{\text {f}}}}=1}&R_\varepsilon ^2 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} W { \varPi }^\perp \nonumber \\&= ( \mathbb {1}_{\widehat{N_{{\text {f}}}}= 0} + \mathbb {1}_{\widehat{N_{{\text {f}}}}= 2} ) { \varPi }^\perp W \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} R_\varepsilon ^2 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} W { \varPi }^\perp ( \mathbb {1}_{\widehat{N_{{\text {f}}}}= 0} + \mathbb {1}_{\widehat{N_{{\text {f}}}}= 2} ). \end{aligned}$$
    (6.15)

    Now, we can use again the operator inequality (4.17) and then the pull-through formula to bound (6.15) by

    $$\begin{aligned}&2 a^*(I) R_\varepsilon ^2 a(I) \mathbb {1}_{\widehat{N_{{\text {f}}}}=2} + 2 { \varPi }^\perp a(I) R_\varepsilon ^2 a^*(I) { \varPi }^\perp \mathbb {1}_{\widehat{N_{{\text {f}}}}=0} \\&\quad \le 2 \left( a^*(I) R_\varepsilon ^2 a(I) \mathbb {1}_{\widehat{N_{{\text {f}}}}=2} + { \varPi }^\perp \int \frac{ I(u,\varSigma )^* I(u,\varSigma )}{u^2 + \varepsilon ^2} {\mathrm d}(u,\varSigma ) \otimes P_\varOmega { \varPi }^\perp \right) . \end{aligned}$$

\(\square \)

Using the lemmas just proven, we can now prove the concrete error estimates stated in Proposition 6.3.

Proof of Proposition 6.3

  1. (a)

    First, we use Lemma 6.5, the bound \(\widehat{N_{{\text {f}}}}\ge { \widehat{P_\varOmega } }^\perp \), and the explicit form of \(C_1\) to obtain on \(\mathcal {D}\), for any \(0 < \delta _1 \le 1\),

    $$\begin{aligned} C_1&= (V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}) \otimes {\text {Id}}_{{\text {f}}}+ \widehat{N_{{\text {f}}}}+ \lambda W_1 \\&\ge ( V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \hat{{\mathsf {k} }}^2 V_{{\mathsf {c} }}) \otimes {\text {Id}}_{{\text {f}}}+ (1 - \delta _1) { \widehat{P_\varOmega } }^\perp - \frac{\lambda ^2}{\delta _1} w_1 . \end{aligned}$$

    Next, note that the operator inequality (4.17) yields

    $$\begin{aligned} w_1 \le 2 ( w^{(\mathsf {l} )}_1 \otimes {\text {Id}}_{\mathrm p}+ {\text {Id}}_{\mathrm p}\otimes w^{(\mathsf {r} )}_1 ). \end{aligned}$$

    Then, using (4.17) again, and subsequently Lemma 6.4, a decomposition into \({\text {ran}}P_{\mathsf {ess} }\) and \({\text {ran}}P_{{\text {disc}}}\) gives, for \(\alpha = \mathsf {l} , \mathsf {r} \),

    $$\begin{aligned} w_1^{(\alpha )}&\le 2(P_{\mathsf {ess} }w_1^{(\alpha )} P_{\mathsf {ess} }+ P_{{\text {disc}}}w_1^{(\alpha )} P_{{\text {disc}}}) \\&\le C (1+\beta ^{-1}) ( V_{{\mathsf {c} }}^* \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} V_{{\mathsf {c} }}+ P_{{\text {disc}}}), \end{aligned}$$

    where \(C > 0\) is a constant not depending on \(\beta \). Choosing any \(0< \delta _1 < 1\) yields (6.7) on \(\mathcal {D}\). As \(\mathcal {D}\) is a core for \(C_1\), it is also a form core for \(q_{C_1}\), so the operator inequality extends to the corresponding quadratic forms on \(\mathcal {D}(q_{C_1})\).

  2. (b)

    We have, for all \(\delta _1, \delta _2 > 0\), by Lemma 6.6,

    $$\begin{aligned} \mathrm {i}[L_\lambda , A_0]&= \varPi \mathrm {i}[ L_\lambda , A_0 ] \varPi + { \varPi }^\perp \mathrm {i}[ L_\lambda , A_0 ] { \varPi }^\perp + { \varPi }^\perp \mathrm {i}[ L_\lambda , A_0 ] \varPi + \varPi \mathrm {i}[ L_\lambda , A_0 ] { \varPi }^\perp \\&\ge (1- (\left| \lambda \right| \delta _1 + \delta _2 \lambda ^2)) \varPi \mathrm {i}[ L_\lambda , A_0 ] \varPi - 2 \frac{{\lambda }^2}{\varepsilon ^2} \left\| I \right\| ^2 \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} \\&\quad - \frac{ \left| \lambda \right| }{\delta _1} \mathbb {1}_{\widehat{N_{{\text {f}}}}=1} - 2 \frac{ \lambda ^2}{\delta _2} \left( a^*(I) R_\varepsilon ^2 a(I) \mathbb {1}_{\widehat{N_{{\text {f}}}}=2} + \frac{ 1}{\varepsilon ^2} { \varPi }^\perp w \otimes P_\varOmega { \varPi }^\perp \right) . \end{aligned}$$

    Notice that the operator \( a^*(I) R_\varepsilon ^2 a(I) \mathbb {1}_{\widehat{N_{{\text {f}}}}=2}\) is in fact bounded by Lemma 6.4, and that \(\widehat{P_\varOmega }{ \varPi }^\perp = \mathbb {1}_{L_{\mathrm p}\not = 0} \otimes P_\varOmega \). The last term can be decomposed and estimated as in the proof of (a) above.

  3. (c)

    We have

    $$\begin{aligned} { W (L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) = a^*(I) (L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) = a^*(I L_{\mathrm p}^{-1} Q ) \widehat{P_\varOmega }. } \end{aligned}$$

    Thus, by the standard estimates for creation and annihilation operators, we obtain for all \(\delta > 0\) on \(\mathcal {D}(\widehat{N_{{\text {f}}}})\),

    $$\begin{aligned}&\pm \left( W ( L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) + ( W ( L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) )^* \right) \\&\quad \le \delta \widehat{N_{{\text {f}}}}+ \delta ^{-1} \left( \int L_{\mathrm p}^{-1} Q I(u,\varSigma )^* I(u,\varSigma ) L_{\mathrm p}^{-1} Q {\mathrm d}(u,\varSigma ) \right) \otimes P_\varOmega . \end{aligned}$$

    Hence, we get

    $$\begin{aligned} C_{ Q }&\ge Q \otimes P_\varOmega - \left| \lambda \right| \delta \widehat{N_{{\text {f}}}}- \left| \lambda \right| \left\| w \right\| \delta ^{-1} ( L_{\mathrm p}^{-1} Q )^2 \otimes P_\varOmega \nonumber \\&\ge (1 - 2 \left| \lambda \right| (1+\beta ^{-1}) \delta ^{-1}) Q \otimes P_\varOmega - \left| \lambda \right| \delta \widehat{N_{{\text {f}}}}, \end{aligned}$$
    (6.16)

    where we used Lemma 6.4 and the fact that the concrete choice of Q as in (6.5) yields

    $$\begin{aligned} ( L_{\mathrm p}^{-1} Q )^2 = ( L_{\mathrm p}^2 \mathbb {1}_{[-1,1]}(L_{\mathrm p}) + L_{\mathrm p}^{-2} \mathbb {1}_{(-\infty ,-1) \cup (1, \infty )}(L_{\mathrm p})) \le 2 Q. \end{aligned}$$

    Finally, since \(W ( L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) + ( W ( L_{\mathrm p}^{-1} Q \otimes P_\varOmega ) )^*\) vanishes on \({\text {ran}}\mathbb {1}_{\widehat{N_{{\text {f}}}}\ge 2}\), the operator \(\widehat{N_{{\text {f}}}}\) in (6.16) may be replaced by \(\mathbb {1}_{\widehat{N_{{\text {f}}}}=1} \le { \widehat{P_\varOmega } }^\perp \); choosing, for instance, \(\delta = 1\) then yields the claimed estimate.

\(\square \)

After these preparations we are now able to put the estimates of this section together in order to prove the positivity of \(q_{{\text {tot}}}\).

Proof of Proposition 6.2

First choose \(\varepsilon > 0\) such that Proposition B.2 holds. Using Proposition 6.3, we obtain, for the quadratic form defined in (6.6) and in the sense of forms on \(\mathcal {D}(q_{C_1})\), for all \(\theta > 0\),

$$\begin{aligned} q_{{\text {tot}}}&\ge V_{{\mathsf {c} }}^* \left( \hat{{\mathsf {k} }}^2 - c_1 \lambda ^2( 1+\theta ) (1+\beta ^{-1})\left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2}\right) V_{{\mathsf {c} }}\otimes {\text {Id}}_{\mathrm p}\otimes {\text {Id}}_{{\text {f}}}\nonumber \\&\qquad + {\text {Id}}_{\mathrm p}\otimes V_{{\mathsf {c} }}^* \left( \hat{{\mathsf {k} }}^2 - c_1 \lambda ^2( 1+\theta )(1+\beta ^{-1}) \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} \right) V_{{\mathsf {c} }}\otimes {\text {Id}}_{{\text {f}}}\nonumber \\&\qquad + \left( c_2 - c_3 (1+\theta ) \left| \lambda \right| (1+\beta ^{-1}) \right) { \widehat{P_\varOmega } }^\perp \end{aligned}$$
(6.17)
$$\begin{aligned}&\qquad + 2 \theta \lambda ^2 (1- c_4 \left| \lambda \right| ) \gamma _\beta (\varepsilon ) \varPi - c_5 \lambda ^2(1+\beta ^{-1}) \varPi \end{aligned}$$
(6.18)
$$\begin{aligned}&\qquad + \big [ \left( 1-c_6 \left| \lambda \right| (1+\beta ^{-1}) \right) Q\nonumber \\&\qquad - c_7 \lambda ^2 (1 + \theta )(1+\beta ^{-1}) { ( P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }) }^\perp \mathbb {1}_{L_{\mathrm p}\not = 0} \big ] \otimes P_\varOmega , \end{aligned}$$
(6.19)

for constants \(c_i > 0\), \(i \in \{1,\ldots ,7\}\), independent of \(\lambda \) and \(\beta \), where we made use of the identity \((P_{\mathsf {ess} }\otimes P_{\mathsf {ess} })^\perp \otimes P_\varOmega = (P_{\mathsf {ess} }\otimes P_{\mathsf {ess} })^\perp \mathbb {1}_{L_{\mathrm p}\ne 0 } \otimes P_\varOmega + \varPi \). Now we set \(\theta = \left| \lambda \right| ^{-1/2}\), so that for small \(\left| \lambda \right| \) the positive term in (6.18) dominates the negative one. Next, we choose \(\left| \lambda \right| > 0\) sufficiently small in the following sense: first, we take it so small that, by the uncertainty principle lemma (cf. [26, X.2]),

$$\begin{aligned} \hat{{\mathsf {k} }}^2 - c_1 \lambda ^2( 1+ \left| \lambda \right| ^{-\frac{1}{2}} )(1+\beta ^{-1}) \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} > 0. \end{aligned}$$
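
Indeed, assuming the usual three-dimensional form of the uncertainty principle lemma, \(\hat{{\mathsf {k} }}^2 \ge \frac{1}{4} |{\hat{{\mathsf {q} }}}|^{-2} \ge \frac{1}{4} \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2}\), the left-hand side above is bounded below by

$$\begin{aligned} \left( \tfrac{1}{4} - c_1 \lambda ^2( 1+ \left| \lambda \right| ^{-\frac{1}{2}} )(1+\beta ^{-1}) \right) \left\langle {\hat{{\mathsf {q} }}} \right\rangle ^{-2} , \end{aligned}$$

which is positive as soon as \(c_1 \lambda ^2( 1+ \left| \lambda \right| ^{-\frac{1}{2}} )(1+\beta ^{-1}) < \frac{1}{4}\).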

Furthermore, we can make it small enough that the operators in (6.17) and (6.18) are strictly positive on \({\text {ran}}{ \widehat{P_\varOmega } }^\perp \) and \({\text {ran}}\varPi \), respectively. Note that for (6.18) to be positive we have to choose \(\left| \lambda \right| \) small compared to \(\gamma _\beta (\varepsilon )^2\). For the last term, (6.19), we first observe that

$$\begin{aligned} { ( P_{\mathsf {ess} }\otimes P_{\mathsf {ess} }) }^\perp \mathbb {1}_{L_{\mathrm p}\not = 0} \le \mathbb {1}_{[\varXi ,\infty )}(L_{\mathrm p}^2), \end{aligned}$$
(6.20)

where

$$\begin{aligned} \varXi := \inf _{\begin{array}{c} \mu \in \sigma _{\mathrm d}(H_{\mathrm p}) , \nu \in \sigma (H_{\mathrm p}) \\ \mu \ne \nu \end{array}} (\mu - \nu )^2> 0. \end{aligned}$$

Furthermore, using the definition (6.5), \(Q = L_{\mathrm p}^2 \mathbb {1}_{[0,1]}(L_{\mathrm p}^2) + \mathbb {1}_{(1,\infty )}(L_{\mathrm p}^2)\), we see that

$$\begin{aligned} Q - \delta \mathbb {1}_{[\varXi ,\infty )}(L_{\mathrm p}^2)&= L_{\mathrm p}^2 \mathbb {1}_{[0,\varXi )} (L_{\mathrm p}^2) + (L_{\mathrm p}^2 - \delta ) \mathbb {1}_{[\varXi ,1]}(L_{\mathrm p}^2) + (1- \delta ) \mathbb {1}_{(1,\infty )}(L_{\mathrm p}^2) \nonumber \\&> 0 \end{aligned}$$
(6.21)

on \({\text {ran}}\mathbb {1}_{L_{\mathrm p}\not = 0}\) whenever \(\delta < {\min \{\varXi ,1\}}\). Now using (6.20) and (6.21) we can achieve that (6.19) is positive on

$$\begin{aligned} {\text {ran}}(\mathbb {1}_{L_{\mathrm p}\not = 0} \otimes P_\varOmega ) \end{aligned}$$

for \(\left| \lambda \right| \) small enough. The claim now follows in view of the decomposition (2.13). \(\square \)