1 Introduction

In this paper, we study the spin boson model with external magnetic field. This model describes the interaction of a two-level quantum mechanical system with a boson field in the presence of a constant external magnetic field. We derive a Feynman–Kac–Nelson (FKN) formula for this model, which relates expectation values of the semigroup generated by the Hamilton operator to expectation values of a Poisson-driven jump process and of a Gaussian random process indexed by a real Hilbert space obtained from a Euclidean extension of the dispersion relation of the bosons. In particular, when calculating expectation values with respect to the ground state of the free Hamiltonian, one can explicitly integrate out the boson field and obtain expectations only with respect to the jump process. This allows us to express the ground state energy and its derivatives in terms of correlation functions of a continuous Ising model, provided a gap assumption is satisfied. As an application, we show the existence of ground states for the spin boson model in the case of massless bosons with infrared-singular interactions, using a recent correlation bound and a regularization procedure.

The history of FKN-type theorems dates back to the work of Feynman and Kac [14, 30]. Such functional integral representations were used by Nelson [37] to study the spectral properties of models in quantum field theory. Since then, many authors have used this approach to study models of non-relativistic quantum field theory; see, for example, [6, 7, 11, 15, 19, 20, 27, 28, 47] and the references therein. The spin boson model without an external magnetic field has been investigated using this approach in [1, 16, 44] and recently in [24]. In [48], path measures for the spin boson model with magnetic field were studied by means of Gibbs measures. In this paper, we extend the FKN formula for the spin boson model to external magnetic fields.

This paper is structured as follows. Section 2 is devoted to the definition of the spin boson model and the statement of our main results. We start out with a rigorous definition of the spin boson Hamiltonian with external magnetic field as a self-adjoint lower-semibounded operator in Sect. 2.1. In Sect. 2.2, we then give its probabilistic description via the FKN formula stated in Theorem 2.4 and reduce the degrees of freedom, expressing expectation values with respect to the ground state of the free operator as expectation values of a continuous Ising model in Corollary 2.6. In Sect. 2.3, we then use the well-known connection between expectation values of the semigroup and the ground state energy to express the derivatives of the ground state energy with respect to the magnetic field strength as correlation functions of this continuous Ising model, under the assumption of massive bosons. The proofs of the results presented in Sect. 2 are given in Sect. 3.

In Sect. 4, we then apply our results and prove Theorem 4.1. Explicitly, we use the recent result from [26] to prove the existence of ground states of the spin boson Hamiltonian with vanishing external magnetic field. In particular, our proof covers the case of massless bosons with infrared-singular coupling.

The article is accompanied by a series of appendices. In Appendices A to C, we present some essential technical requirements for our proofs, including standard Fock space properties in Appendix C.1 and a construction of the so-called \({\mathcal Q}\)-space in Appendix C.2. In Appendix D, we give a proof for the existence of ground states at arbitrary external magnetic field in the case of massive bosons, a case which to our knowledge is not covered in the literature.

1.1 General Notation

\(L^2\)-spaces: For a measure space \(({\mathcal M},\mathrm{d}\mu )\) and a real or complex Hilbert space \({\mathfrak h}\), we denote by \(L^2({\mathcal M},\mathrm{d}\mu ;{\mathfrak h})\) the real or complex Hilbert space, respectively, of square-integrable \({\mathfrak h}\)-valued measurable functions on \({\mathcal M}\). If \({\mathfrak h}={\mathbb C}\), we write for simplicity \(L^2({\mathcal M},\mathrm{d}\mu )=L^2({\mathcal M},\mathrm{d}\mu ;{\mathbb C})\). Further, for any \(d\in {\mathbb N}\), we equip \({\mathbb R}^d\) with the Lebesgue measure without further mention.

Characteristic functions: For a subset \(A\subset X\) of a set X, we define the function \(\mathbf {1}_{A}:X\rightarrow {\mathbb R}\) by \(\mathbf {1}_{A}(x)=1\), if \(x\in A\), and \(\mathbf {1}_{A}(x)=0\), if \(x\notin A\).

2 Model and Results

2.1 Spin Boson Model with External Magnetic Field

In this section, we give a precise definition of the spin boson Hamiltonian with external magnetic field and prove that it defines a self-adjoint lower-semibounded operator.

Let us recall the standard Fock space construction from the Hilbert space perspective. Textbook expositions on the topic can, for example, be found in [4, 10, 38, 42].

Throughout, we assume \({\mathfrak h}\) to be a complex Hilbert space. Then, we define the bosonic Fock space over \({\mathfrak h}\) as

$$\begin{aligned} {\mathcal F}({\mathfrak h}) = {\mathbb C}\oplus \bigoplus _{n\in {\mathbb N}} \otimes ^n_{\mathrm {s}}{\mathfrak h}, \end{aligned}$$
(2.1)

where \(\otimes ^n_{\mathrm {s}}{\mathfrak h}\) denotes the n-fold symmetric tensor product of Hilbert spaces. We write Fock space vectors as sequences \(\psi =\left( \psi ^{(n)}\right) _{n\in {\mathbb N}_0}\) with \(\psi ^{(0)}\in {\mathbb C}\) and \(\psi ^{(n)}\in \otimes ^n_{\mathrm {s}}{\mathfrak h}\). In particular, we define the Fock space vacuum \(\Omega =(1,0,0,\ldots )\).

For a self-adjoint operator A on \({\mathfrak h}\), let the (differential) second quantization operator \({\mathsf d}\Gamma (A)\) on \({\mathcal F}({\mathfrak h})\) be the operator

$$\begin{aligned} {\mathsf d}\Gamma (A) = \overline{0\oplus \bigoplus _{n\in {\mathbb N}}\sum _{k=1}^{n}(\otimes ^{k-1}{\mathbb {1}})\otimes A \otimes (\otimes ^{n-k}{\mathbb {1}})}, \end{aligned}$$
(2.2)

where \(\bar{(\cdot )}\) denotes the operator closure. Next, if \({\mathfrak v}\) is another complex Hilbert space and \(B:{\mathfrak h}\rightarrow {\mathfrak v}\) is a contraction operator (i.e., \(\Vert B\Vert \le 1\)), the second quantization operator \(\Gamma (B):{\mathcal F}({\mathfrak h})\rightarrow {\mathcal F}({\mathfrak v})\) is given as

$$\begin{aligned} \Gamma (B) = 1\oplus \bigoplus _{n\in {\mathbb N}}\otimes ^nB. \end{aligned}$$
(2.3)

Furthermore, for \(f\in {\mathfrak h}\), we define the creation and annihilation operators \(a^\dag (f)\) and a(f) as the closed linear operators acting on pure tensors as

(2.4)

where \(\otimes _{{\mathsf {s}}}\) denotes the symmetric tensor product and \(\ {\widehat{\cdot }}\ \) in the first line means that the corresponding entry is omitted. Note that \(a^\dagger (f)\) is the adjoint of a(f). We introduce the field operator as

$$\begin{aligned} \varphi (f) = \frac{1}{\sqrt{2}} \overline{a(f)+a^\dag (f)}. \end{aligned}$$
(2.5)
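For orientation (and not as part of the mathematical development), the operators above can be made concrete for a single mode, where \({\mathcal F}({\mathbb C})\cong \ell ^2({\mathbb N}_0)\) and, in the standard convention we assume here, the annihilation operator acts as \(a|n\rangle =\sqrt{n}\,|n-1\rangle \). A truncated-matrix sketch:

```python
import numpy as np

N = 8  # truncation level of the single-mode Fock space F(C) ≅ ℓ²(N₀)
# annihilation operator a with a|n⟩ = √n |n-1⟩ (assumed convention)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T                      # creation operator a†, the adjoint of a
phi = (a + adag) / np.sqrt(2)          # field operator φ = (a + a†)/√2, cf. (2.5)

assert np.allclose(phi, phi.conj().T)  # φ is symmetric on the truncated space
# canonical commutation relation [a, a†] = 1 holds away from the cutoff
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
```

The last diagonal entry of the commutator is distorted by the truncation, which is why it is excluded from the check.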

In Appendix C.1, we provide a variety of well-known properties of the operators defined above, which will be used throughout this article. From now on, we also write \({\mathcal F}={\mathcal F}(L^2({\mathbb R}^d))\).

To define the spin boson Hamiltonian with external magnetic field, let \(\sigma _x,\sigma _y,\sigma _z\) denote the \(2\times 2\)-Pauli matrices

$$\begin{aligned} \sigma _x = \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}, \qquad \sigma _y = \begin{pmatrix} 0 & -{\mathsf {i}}\\ {\mathsf {i}} & 0\end{pmatrix}, \qquad \sigma _z = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}. \end{aligned}$$
(2.6)

We consider the Hamilton operator

$$\begin{aligned} H(\lambda ,\mu ) = \sigma _z\otimes {\mathbb {1}}+ {\mathbb {1}}\otimes {\mathsf d}\Gamma (\omega ) + \sigma _x\otimes (\lambda \varphi (v)+\mu {\mathbb {1}}) \quad \text{ acting } \text{ on } \quad {\mathcal H}={\mathbb C}^2\otimes {\mathcal F}. \end{aligned}$$
(2.7)

To prove that the expression (2.7) defines a self-adjoint lower-semibounded operator, we need the following assumptions.

Hypothesis A

  1. (i)

    \(\omega :{\mathbb R}^d\rightarrow [0,\infty )\) is measurable and has positive values almost everywhere.

  2. (ii)

    \(v\in L^2({\mathbb R}^d)\) satisfies \(\omega ^{-1/2}v\in L^2({\mathbb R}^d)\).

Lemma 2.1

Assume Hypothesis A holds. Then, the operator \(H(\lambda ,\mu )\) given by (2.7) is self-adjoint and lower-semibounded on the domain \({\mathcal D}({\mathbb {1}}\otimes {\mathsf d}\Gamma (\omega ))\) for all values of \(\lambda ,\mu \in {\mathbb R}\).

Proof

As a sum of strongly commuting self-adjoint and lower-semibounded operators, H(0, 0) is self-adjoint and lower-semibounded, cf. Lemma C.1 (ii). Further, by Lemma C.1 (vii), with \(A=\omega \), and the boundedness of \(\sigma _x\), the operator \(\sigma _x\otimes (\lambda \varphi (v)+\mu {\mathbb {1}})\) is infinitesimally bounded with respect to H(0, 0). Hence, the statement follows from the Kato–Rellich theorem ([42, Theorem X.12]). \(\square \)

2.2 Feynman–Kac–Nelson Formula

In this section, we move to a probabilistic description of the spin boson model. Except for Lemma 2.2, all statements are proved in Sect. 3.1.

The spin part can be described by a jump process, which we construct here explicitly. To that end, let \((N_t)_{t\ge 0}\) be a Poisson process with unit intensity, i.e., a stochastic process with state space \({\mathbb N}_0\), stationary independent increments, and satisfying

$$\begin{aligned} {\mathbb {P}}[N_t=k] = e^{-t}\frac{t^k}{k!} \qquad \text{ for }\ k\in {\mathbb N}_0,\ t\ge 0, \end{aligned}$$
(2.8)

realized on some measurable space \(\Omega \). We refer the reader to [8] for a concrete realization of \(\Omega \). Moreover, we can choose \(\Omega \) such that \(N_t(\omega )\) is right-continuous for all \(\omega \in \Omega \), see for example [9, Section 23]. Further, let B be a Bernoulli random variable with \({\mathbb {P}}[B=1]={\mathbb {P}}[B=-1]=\frac{1}{2}\), which we realize on the space \(\{-1, 1\}\). Then, we define the jump process \(( {\widetilde{X}}_t)_{t\ge 0}\) on the product space \(\Omega \times \{-1 , 1\}\) (equipped with the product measure) by

$$\begin{aligned} {\widetilde{X}}_t(\omega ,b) = B(b)(-1)^{N_t(\omega )} , \quad (\omega , b) \in \Omega \times \{ - 1 , 1 \} . \end{aligned}$$
(2.9)

To fix a suitable measure space to work with, we use the law of the process \(({\widetilde{X}}_t)_{t \ge 0}\). That is, we realize the stochastic process on the space

$$\begin{aligned} {\mathscr {D}}= \{x:[0,\infty )\rightarrow \{\pm 1\} : x\ \text{ is } \text{ right-continuous }\} , \end{aligned}$$
(2.10)

where we equip \({\mathscr {D}}\) with the \(\sigma \)-algebra generated by the projections \(\pi _t(x)=x_t\), \(t \ge 0\). The measure \(\mu _X\) on \({\mathscr {D}}\) is then defined as the pushforward of the product measure under the map

$$\begin{aligned} {\widetilde{X}} : \ \Omega \times \{ -1 , 1 \} \rightarrow {\mathscr {D}},\qquad (\omega , b) \mapsto ( t \mapsto {\widetilde{X}}_t(\omega , b) ) , \end{aligned}$$

which is easily seen to be measurable. We define the process \(X_t(x) = x_t\) for \(x \in {\mathscr {D}}\), \(t \ge 0\). It follows by construction that the stochastic processes \(X_t\) and \({\widetilde{X}}_t\) are equivalent, in the sense that they have the same finite-dimensional distributions. For random variables Y on the measure space \(({\mathscr {D}},\mu _X)\), we define

$$\begin{aligned} {\mathbb {E}}_X[Y] = \int _{{\mathscr {D}}} Y \mathrm{d}\mu _X. \end{aligned}$$

We note that by the construction (2.8), the paths of X \(\mu _X\)-almost surely have only finitely many jumps in any compact interval. We denote the set of all such paths by \({\mathscr {D}}_{\mathsf f}\). The property \(\mu _X({\mathscr {D}}_{\mathsf f})=1\) can alternatively be inferred from the theory of continuous-time Markov processes, cf. [33, 40].
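The two-point function \({\mathbb {E}}_X[X_sX_t]=e^{-2|t-s|}\), derived in Sect. 3.1, can be checked against a direct simulation of (2.9); a minimal Monte Carlo sketch (sample size and tolerance are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def jump_times(T, rng):
    """Jump times of a unit-intensity Poisson process on [0, T], cf. (2.8)."""
    jumps, t = [], rng.exponential()
    while t <= T:
        jumps.append(t)
        t += rng.exponential()
    return np.array(jumps)

def X(t, jumps, b):
    """X_t = B(-1)^{N_t} as in (2.9); N_t counts the jumps up to time t."""
    return b * (-1.0) ** np.searchsorted(jumps, t, side="right")

s, t, n_samples = 0.5, 1.5, 100_000
acc = 0.0
for _ in range(n_samples):
    jumps = jump_times(2.0, rng)
    b = rng.choice([-1.0, 1.0])   # the Bernoulli variable B
    acc += X(s, jumps, b) * X(t, jumps, b)
est = acc / n_samples
# two-point function E_X[X_s X_t] = e^{-2|t-s|}, derived in Sect. 3.1
assert abs(est - np.exp(-2 * (t - s))) < 0.02
```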

We now want to give a probabilistic description of the bosonic field. To that end, we define the Euclidean dispersion relation \({\omega _{{\mathsf {E}}}}:{\mathbb R}^{d+1}\rightarrow [0,\infty )\) as \({\omega _{{\mathsf {E}}}}(k,t)=\omega ^2(k)+t^2\) and the Hilbert space of the Euclidean field as

$$\begin{aligned} {{\mathcal {E}}}= L^2({\mathbb R}^{d+1},{\omega _{{\mathsf {E}}}}^{-1}(k,t)\mathrm{d}(k,t)). \end{aligned}$$
(2.11)

Let \({\phi _{{\mathsf {E}}}}\) be the Gaussian random variable indexed by the real Hilbert space

$$\begin{aligned} {\mathcal R}=\{f\in {{\mathcal {E}}}: f(k,t)=\overline{f(-k,-t)}\} \end{aligned}$$
(2.12)

on the (up to isomorphism unique) probability space \(({{\mathcal Q}_{{\mathsf {E}}}},{\Sigma _{{\mathsf {E}}}},{\mu _{{\mathsf {E}}}})\) and denote expectation values w.r.t. \({\mu _{{\mathsf {E}}}}\) by \({\mathbb {E}}_{{\mathsf {E}}}\). For the convenience of the reader, we describe a possible explicit construction in Appendix C.2. We note that the complexification \({\mathcal R}_{\mathbb C}\) is unitarily equivalent to \({\mathcal E}\), by the map \((f,g)\mapsto f+{\mathsf {i}}g\), and hence \({\mathcal F}({\mathcal E})\) and \(L^2({{\mathcal Q}_{{\mathsf {E}}}})\) are unitarily equivalent, by Proposition C.3.

For \(t\in {\mathbb R}\), we define

$$\begin{aligned} j_tf(k,s) = \frac{e^{-{\mathsf {i}}ts}}{\sqrt{\pi }}\omega ^{1/2}(k)f(k). \end{aligned}$$
(2.13)

Lemma 2.2

  1. (i)

    (2.13) defines an isometry \(j_t:L^2({\mathbb R}^d)\rightarrow {{\mathcal {E}}}\) for any \(t\in {\mathbb R}\).

  2. (ii)

If \(f\in L^2({\mathbb R}^d)\) satisfies \(f(k)=\overline{f(-k)}\) and \(\omega (k)=\omega (-k)\) for almost all \(k\in {\mathbb R}^d\), then \(j_tf\in {\mathcal R}\).

  3. (iii)

    \(j_s^*j_t = e^{-|t-s|\omega }\) for all \(s,t\in {\mathbb R}\).

Proof

The statements follow by the direct calculation

\(\square \)
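The direct calculation is standard and can be sketched as follows (a reconstruction from (2.11)–(2.13), not the authors' verbatim computation), using the Poisson-kernel integral \(\int _{\mathbb R} e^{{\mathsf {i}}au}(\omega ^2+u^2)^{-1}\mathrm{d}u = \frac{\pi }{\omega }e^{-|a|\omega }\):

```latex
\langle j_sf, j_tg\rangle_{{\mathcal E}}
  = \frac{1}{\pi}\int_{{\mathbb R}^{d+1}}
      \overline{f(k)}\,g(k)\,
      \frac{\omega(k)\,e^{{\mathsf i}(s-t)u}}{\omega^2(k)+u^2}\,\mathrm{d}(k,u)
  = \int_{{\mathbb R}^d}\overline{f(k)}\,e^{-|t-s|\omega(k)}\,g(k)\,\mathrm{d}k
  = \big\langle f, e^{-|t-s|\omega}g\big\rangle_{L^2({\mathbb R}^d)}.
```

This yields (iii); the case \(s=t\) gives the isometry (i), and (ii) follows since \(\overline{j_tf(-k,-u)}=j_tf(k,u)\) whenever \(f(k)=\overline{f(-k)}\) and \(\omega \) is even.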

Remark 2.3

In the literature, (2.13) is often defined via the Fourier transform.

We set

$$\begin{aligned} \widetilde{I}_t:{\mathcal F}\rightarrow L^2({{\mathcal Q}_{{\mathsf {E}}}}), \qquad \psi \mapsto \Theta _{{\mathcal R}}\Gamma (j_t)\psi , \end{aligned}$$
(2.14)

where \(\Theta _{{\mathcal R}}\) denotes the Wiener–Itô–Segal isomorphism introduced in Proposition C.3 and \(\Gamma \) is the second quantization of the contraction operator \(j_t\), as defined in (2.3). Further, we define the isometry \(\iota : {\mathbb C}^2 \rightarrow L^2( \{ \pm 1 \} , \mathrm{d}\mu _{1/2} )\), with \(\mu _{1/2}(\{ s \} ) = \frac{1}{2}\) for \(s \in \{\pm 1\}\), by

$$\begin{aligned} (\iota \alpha )(+1)=\sqrt{2} \alpha _1 \text { and } (\iota \alpha )(-1)=\sqrt{2} \alpha _2 , \end{aligned}$$
(2.15)

where \(\alpha _i\) denotes the i-th entry of the vector \(\alpha \in {\mathbb C}^2\). We define the map \(I_t := \iota \otimes \widetilde{ I}_t\), where

$$\begin{aligned} I_t : {\mathcal H}= {\mathbb C}^2 \otimes {\mathcal F}\rightarrow L^2( \{ \pm 1 \} , \mathrm{d}\mu _{1/2} ) \otimes L^2({{\mathcal Q}_{{\mathsf {E}}}}) \cong L^2( \{ \pm 1 \} , \mathrm{d}\mu _{1/2} ; L^2({{\mathcal Q}_{{\mathsf {E}}}}) ) . \end{aligned}$$

To formulate the Feynman–Kac–Nelson (FKN) formula, it will be convenient to work with the following transformed Hamilton operator, which is unitarily equivalent to \(H(\lambda ,\mu )\) up to a constant multiple of the identity. Explicitly, we apply the unitary

$$\begin{aligned} U = e^{{\mathsf {i}}\frac{\pi }{4}\sigma _y} = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\ -1 & 1\end{pmatrix} \end{aligned}$$
(2.16)

and define the transformed Hamilton operator

$$\begin{aligned} {\widetilde{H}}(\lambda ,\mu ) &= {\mathbb {1}}+ (U\otimes {\mathbb {1}})H(\lambda ,\mu )(U\otimes {\mathbb {1}})^* \\ &= ({\mathbb {1}}-\sigma _x)\otimes {\mathbb {1}}+ {\mathbb {1}}\otimes {\mathsf d}\Gamma (\omega )+\sigma _z\otimes (\lambda \varphi (v)+\mu {\mathbb {1}}), \end{aligned}$$
(2.17)

where we used \(U\sigma _zU^*=-\sigma _x\) and \(U\sigma _x U^*=\sigma _z\).
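The unitary (2.16) and the conjugation identities \(U\sigma _zU^*=-\sigma _x\) and \(U\sigma _x U^*=\sigma _z\) entering (2.17) can be verified numerically; a small sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# U = exp(i π/4 σ_y), computed via the eigendecomposition of σ_y
w, V = np.linalg.eigh(sy)
U = V @ np.diag(np.exp(1j * np.pi / 4 * w)) @ V.conj().T
assert np.allclose(U, np.array([[1, 1], [-1, 1]]) / np.sqrt(2))  # cf. (2.16)

# the conjugation identities used in (2.17)
assert np.allclose(U @ sz @ U.conj().T, -sx)
assert np.allclose(U @ sx @ U.conj().T, sz)
```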

Our result holds under the following assumptions.

Hypothesis B

Assume Hypothesis A and the following:

  1. (i)

    \(\omega (k)=\omega (-k)\) for almost all \(k\in {\mathbb R}^d\).

  2. (ii)

    v has real Fourier transform, i.e., \(v(k)=\overline{v(-k)}\) for almost all \(k\in {\mathbb R}^d\).

We are now ready to state the FKN formula for the spin boson model with external magnetic field.

Theorem 2.4

(FKN Formula) Assume Hypothesis B holds. Then, for all \(\Phi ,\Psi \in {\mathcal H}\) and \(\lambda ,\mu \in {\mathbb R}\), we have

We note that the integrability of the right hand side in the above theorem follows from the identity

$$\begin{aligned} {\mathbb {E}}\left[ \exp (Z)\right] = \exp \left( \frac{1}{2}{\mathbb {E}}[Z^2] \right) , \end{aligned}$$
(2.18)

which holds for any Gaussian random variable Z (see, for example, [45, (I.17)]). We outline the argument in the remark below.
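Identity (2.18) can be checked by direct quadrature for a mean-zero Gaussian (the variance below is an arbitrary illustrative choice):

```python
import numpy as np

sigma = 0.7  # Z ~ N(0, σ²), an illustrative mean-zero Gaussian
z = np.linspace(-20.0, 20.0, 400_001)
h = z[1] - z[0]
density = np.exp(-z**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
lhs = np.sum(np.exp(z) * density) * h   # E[exp(Z)] by quadrature
rhs = np.exp(0.5 * sigma**2)            # exp(E[Z²]/2), cf. (2.18)
assert abs(lhs - rhs) < 1e-6
```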

Remark 2.5

By (2.13), the map \([0,T]\rightarrow {\mathcal E},t\mapsto j_tv\) is strongly continuous. Hence, by (C.1), the map \({\mathbb R}\rightarrow L^2({{\mathcal Q}_{{\mathsf {E}}}}),t\mapsto {\phi _{{\mathsf {E}}}}(j_tv)\) is continuous. Thus, for \((x_t)_{t\ge 0}\in {\mathscr {D}}_{\mathsf f}\), the function

\(t \mapsto {\phi _{{\mathsf {E}}}}(j_t v) x_t\) is a piecewise continuous \(L^2({{\mathcal Q}_{{\mathsf {E}}}})\)-valued function on compact intervals of \([0,\infty )\). Thus, the integral over t exists as an \(L^2({{\mathcal Q}_{{\mathsf {E}}}})\)-valued Riemann integral \(\mu _X\)-almost surely. Since Riemann integrals are given as limits of sums, measurability with respect to the product measure \(\mu _X \otimes {\mu _{{\mathsf {E}}}}\) follows. In fact, again fixing \(x\in {\mathscr {D}}_{\mathsf f}\) and using Fubini’s theorem as well as Hölder’s inequality, one can prove that the integral \(\int _0^{ T} {\phi _{{\mathsf {E}}}}(j_tv)x_t\mathrm{d}t\) can also be computed as a Lebesgue integral, evaluated pointwise \({\mu _{{\mathsf {E}}}}\)-almost everywhere on \({{\mathcal Q}_{{\mathsf {E}}}}\), with the same result. This is outlined in Appendix A. Furthermore, \(\int _0^{ T} {\phi _{{\mathsf {E}}}}(j_tv)x_t\mathrm{d}t\) is a Gaussian random variable, since \(L^2\)-limits of linear combinations of Gaussians are Gaussian. We conclude that the right hand side of the FKN formula is finite, since exponentials of Gaussian random variables are integrable, cf. (2.18).

We now want to describe the expectation value of the semigroup associated with \(H(\lambda ,\mu )\) (cf. (2.7)) with respect to the ground state of the free operator H(0, 0), by integrating out the field contribution in the expectation value. To that end, let

$$\begin{aligned} {\Omega _\downarrow }= \begin{pmatrix}0\\ 1\end{pmatrix}\otimes \Omega \end{aligned}$$
(2.19)

and define

$$\begin{aligned} W(t) = \frac{1}{4}\int _{{\mathbb R}^d}|v(k)|^2e^{-|t|\omega (k)}dk. \end{aligned}$$
(2.20)
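Since \(\int _{\mathbb R}e^{-|t|\omega }\mathrm{d}t=2/\omega \), definition (2.20) gives \(\Vert W\Vert _{L^1({\mathbb R})}=\frac{1}{2}\Vert \omega ^{-1/2}v\Vert _2^2\), an identity used again in Sect. 2.3. A numerical sketch (the choices of d, \(\omega \) and v are purely illustrative):

```python
import numpy as np

# illustrative (not from the paper) choices with d = 1
k = np.linspace(-10.0, 10.0, 4001)
dk = k[1] - k[0]
omega = np.sqrt(k**2 + 1.0)   # massive dispersion, m = 1
v = np.exp(-k**2)             # smooth, even, real coupling

def W(t):
    # W(t) = (1/4) ∫ |v(k)|² e^{-|t| ω(k)} dk, cf. (2.20)
    return 0.25 * np.sum(np.abs(v)**2 * np.exp(-abs(t) * omega)) * dk

t = np.linspace(-60.0, 60.0, 12001)
dt = t[1] - t[0]
l1 = np.sum([W(ti) for ti in t]) * dt
# ∫_R e^{-|t|ω} dt = 2/ω  ⇒  ‖W‖_{L¹} = ½ ‖ω^{-1/2} v‖²
target = 0.5 * np.sum(np.abs(v)**2 / omega) * dk
assert abs(l1 - target) < 1e-3
```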

Corollary 2.6

Assume Hypothesis B holds. Then, for all \(\lambda ,\mu \in {\mathbb R}\), we have

Remark 2.7

For \((x_t)_{t\ge 0}\in {\mathscr {D}}_{\mathsf f}\), the functions \((s,t)\mapsto W(t-s)x_tx_s\) and \(t\mapsto x_t\) are Riemann-integrable, since W is continuous. Further, the continuity also implies that the expression on the right hand side is uniformly bounded in the paths x and hence the expectation value exists and is finite by the dominated convergence theorem.

Remark 2.8

The expectation value on the right hand side can be interpreted as the partition function of a long-range continuous Ising model on \({\mathbb R}\) with coupling functions W. This model can be obtained as a limit of a discrete Ising model with long-range interactions, see [25, 44, 48].

2.3 Ground State Energy

We are especially interested in studying the ground state energy of the spin boson model

$$\begin{aligned} E(\lambda ,\mu ) = \inf \sigma (H(\lambda ,\mu )) . \end{aligned}$$
(2.21)

In this section, we want to use the FKN formula from the previous section to express derivatives of the ground state energy.

The starting point of this investigation is the following well-known formula, sometimes referred to as Bloch’s formula, which expresses the ground state energy as an expectation value of the semigroup; see, for example, [46]. We verify it in Sect. 3.2 using a positivity argument.

Lemma 2.9

Assume Hypothesis A holds. Then, for all \(\lambda ,\mu \in {\mathbb R}\),

The central statement of this section is that the above equation carries over to the derivatives with respect to \(\mu \), provided that the ground state energy of \(H(\lambda ,\mu )\) is in the discrete spectrum, i.e., \(E(\lambda ,\mu ) \in \sigma _\mathsf{disc}(H(\lambda ,\mu ))\). We note that this spectral assumption has been shown in [2, Theorem 1.2] for \(\mu =0\) if \({{\,\mathrm{ess\,inf}\,}}_{k\in {\mathbb R}^d}\omega (k)>0\) and we extend the result to arbitrary choices of \(\mu \) in Appendix D.
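Bloch’s formula rests on the elementary fact that \(-\frac{1}{T}\ln \langle \psi ,e^{-TH}\psi \rangle \rightarrow \inf \sigma (H)\) as \(T\rightarrow \infty \) for any lower-semibounded self-adjoint H and any \(\psi \) with nonzero ground state overlap. A toy finite-dimensional check (the matrix and state are illustrative choices):

```python
import numpy as np

H = np.array([[1.0, 0.4],
              [0.4, -0.3]])            # toy Hermitian "Hamiltonian"
w, V = np.linalg.eigh(H)               # eigenvalues in ascending order
E0 = w[0]                              # ground state energy inf σ(H)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
c2 = (V.T @ psi) ** 2                  # overlaps |⟨v_i, ψ⟩|²
assert c2[0] > 1e-8                    # nonzero ground state overlap

def rate(T):
    # stable evaluation of -(1/T) log ⟨ψ, e^{-TH} ψ⟩
    return E0 - np.log(np.sum(c2 * np.exp(-T * (w - E0)))) / T

assert abs(rate(1e4) - E0) < 1e-3      # Bloch's formula in finite dimensions
```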

Theorem 2.10

Assume Hypothesis A holds. Let \(\lambda ,\mu _0 \in {\mathbb R}\) and suppose \(E(\lambda ,\mu _0) \in \sigma _\mathsf{disc}(H(\lambda ,\mu _0))\). Then, for all \(n \in {\mathbb N}\), the following derivatives exist and satisfy

We now want to combine this observation with the FKN formula from Theorem 2.4. To that end, we define

$$\begin{aligned} Z_T(\lambda ,\mu ) = {\mathbb {E}}_X\left[ \exp \left( \lambda ^2\int _0^{ T}\int _0^{ T} W(t-s)X_tX_s\mathrm{d}s\mathrm{d}t - \mu \int _0^{ T}X_t\mathrm{d}t\right) \right] , \end{aligned}$$
(2.22)

with W as defined in (2.20) and note that

(2.23)

by Corollary 2.6. Thus, Lemma 2.9 gives

$$\begin{aligned} E(\lambda ,\mu )= - \lim _{ T \rightarrow \infty }\left( \frac{1}{T}\ln Z_{T}(\lambda ,\mu ) +1 \right) . \end{aligned}$$
(2.24)
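A Monte Carlo sketch of (2.22) (purely illustrative: we take \(W(t)=\frac{1}{4}e^{-|t|}\), corresponding, e.g., to \(\omega \equiv 1\) and \(\Vert v\Vert _2=1\) in (2.20), and discretize the time integrals on a grid):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_signs(T, grid, rng):
    """Evaluate X_t = B(-1)^{N_t} on a time grid, cf. (2.9)."""
    jumps, t = [], rng.exponential()
    while t <= T:
        jumps.append(t)
        t += rng.exponential()
    b = rng.choice([-1.0, 1.0])
    return b * (-1.0) ** np.searchsorted(np.array(jumps), grid, side="right")

def Z_estimate(lam, mu, T, n_grid=200, n_samples=5_000):
    """Monte Carlo sketch of Z_T(λ, μ) from (2.22), with W(t) = ¼ e^{-|t|}."""
    dt = T / n_grid
    grid = (np.arange(n_grid) + 0.5) * dt          # midpoint rule
    Wmat = 0.25 * np.exp(-np.abs(grid[:, None] - grid[None, :]))
    acc = 0.0
    for _ in range(n_samples):
        x = sample_signs(T, grid, rng)
        double = lam**2 * (x @ Wmat @ x) * dt**2
        single = mu * np.sum(x) * dt
        # average the sample with its spin flip -X (antithetic pair)
        acc += 0.5 * (np.exp(double - single) + np.exp(double + single))
    return acc / n_samples

assert abs(Z_estimate(0.0, 0.0, T=2.0) - 1.0) < 1e-12  # Z_T(0,0) = 1 exactly
assert Z_estimate(0.3, 0.5, T=2.0) >= 1.0              # Jensen + spin flip
```

By (2.24), \(-\frac{1}{T}\ln Z_T - 1\) then yields a (crude) estimator for the ground state energy.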

We note that the stochastic integral in (2.24) was used in [1] to show analyticity of \( \lambda \mapsto E(\lambda ,0)\) in a neighborhood of zero. The next two statements express the derivatives of the ground state energy in terms of a stochastic integral. To that end, for a random variable Y on \(({\mathscr {D}},\mu _X)\), we define the expectation

(2.25)

Further, we denote by \({\mathcal P}_n\) the set of all partitions of the set \(\{1,\ldots ,n\}\) and by |M| the cardinality of a finite set M.

Theorem 2.11

Assume Hypothesis B holds. Let \(\lambda , \mu \in {\mathbb R}\) and suppose \(E(\lambda ,\mu ) \in \sigma _\mathsf{disc}(H(\lambda ,\mu ))\). Then, for all \(n \in {\mathbb N}\), the following derivatives exist and satisfy

In addition, we can express derivatives of the ground state energy in terms of the so-called Ursell functions [39] or cumulants. This allows us to use correlation inequalities to prove bounds on derivatives. In fact, we will use this in Corollary 2.14 below to estimate the second derivative with respect to the magnetic field at zero. Given random variables \(Y_1,\ldots ,Y_n\) on \(({\mathscr {D}},\mu _X)\), we define the Ursell function

(2.26)
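The Ursell function is given by the standard partition (Möbius) formula \(u_n(Y_1,\ldots ,Y_n)=\sum _{P\in {\mathcal P}_n}(-1)^{|P|-1}(|P|-1)!\prod _{B\in P}{\mathbb {E}}[\prod _{i\in B}Y_i]\), which we assume agrees with (2.26). A sketch with expectations replaced by empirical means:

```python
import math
import numpy as np

def partitions(items):
    """Enumerate all partitions of a list (the set P_n for items = [1,…,n])."""
    if len(items) <= 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def ursell(samples):
    """Partition formula for joint cumulants, with E replaced by empirical
    means over the given sample arrays."""
    total = 0.0
    for part in partitions(list(range(len(samples)))):
        prod = 1.0
        for block in part:
            prod *= float(np.mean(np.prod([samples[i] for i in block], axis=0)))
        total += (-1.0) ** (len(part) - 1) * math.factorial(len(part) - 1) * prod
    return total

y1 = np.array([1.0, 2.0, 3.0, 4.0])
y2 = np.array([1.0, 0.0, 2.0, 1.0])
# u_1 is the mean and u_2 the covariance
assert abs(ursell([y1]) - np.mean(y1)) < 1e-12
assert abs(ursell([y1, y2]) - (np.mean(y1 * y2) - np.mean(y1) * np.mean(y2))) < 1e-12
```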

Corollary 2.12

Assume Hypothesis B holds. Let \(\lambda , \mu \in {\mathbb R}\) and suppose \(E(\lambda ,\mu ) \in \sigma _\mathsf{disc}(H(\lambda ,\mu ))\). Then, for all \(n \in {\mathbb N}\), the following derivatives exist and satisfy

Next, we show how the formulas in Theorem 2.11 and Corollary 2.12, respectively, can be used to obtain bounds on derivatives of the ground state energy. For this, we will use the following correlation bound for a continuous long-range Ising model, cf. Remark 2.8. Approximating this model by a discrete Ising model, we proved a bound on these correlation functions in [25].

Theorem 2.13

([25]) There exist \(\varepsilon >0\) and \(C>0\) such that for all \(h\in L^1({\mathbb R})\) which are even, continuous and satisfy \(\Vert h\Vert _{L^1({\mathbb R})}\le \varepsilon \), we have

$$\begin{aligned} 0\le \limsup _{T\rightarrow \infty } \frac{\displaystyle \frac{1}{T}{\mathbb {E}}_X \left[ \left( \int _0^{ T} X_t\mathrm{d}t\right) ^2\exp \left( \int _0^{ T}\int _0^{ T}h(t-s)X_tX_s\mathrm{d}t\mathrm{d}s\right) \right] }{\displaystyle {\mathbb {E}}_X\left[ \exp \left( \int _0^{ T}\int _0^{ T}h(t-s)X_tX_s\mathrm{d}t\mathrm{d}s\right) \right] } \le C . \end{aligned}$$

As an application of Theorems 2.11 and 2.13, we obtain the following result, giving a bound on the second derivative of the ground state energy which is uniform in the size of the spectral gap. Since the proof merely demonstrates the application of these theorems, we give it here directly.

Corollary 2.14

Let \(\nu \) be a measurable function on \({\mathbb R}^d\) satisfying \(\nu > 0\) a.e. and \(\nu (-k) = \nu (k) \). Let \(v \in L^2({\mathbb R}^d)\) have real Fourier transform and \(\nu ^{-1/2}v\in L^2({\mathbb R}^d)\). Let \(m > 0\) and \(\omega = \sqrt{ \nu ^2 + m^2}\). Then, for every \(\lambda \in {\mathbb R}\), the function \( \mu \mapsto E(\lambda ,\mu )\) is twice differentiable in a neighborhood of zero and, choosing W as defined in (2.20),

$$\begin{aligned} \partial _\mu ^2 E(\lambda , 0 )&= - \lim _{ T \rightarrow \infty }\frac{\displaystyle \frac{1}{T} {\mathbb {E}}_X\left[ \left( \int _0^{ T} X_t\mathrm{d}t\right) ^2\exp \left( \lambda ^2\int _0^{ T}\int _0^{ T}W(t-s)X_tX_s\mathrm{d}t\mathrm{d}s\right) \right] }{\displaystyle {\mathbb {E}}_X\left[ \exp \left( \lambda ^2\int _0^{ T}\int _0^{ T}W(t-s)X_tX_s\mathrm{d}t\mathrm{d}s\right) \right] } . \end{aligned}$$

Further, there exists a \(\lambda _{\mathsf c}> 0\) such that for all \(\lambda \in (- \lambda _{\mathsf c}, \lambda _{\mathsf c})\) the second derivative satisfies

$$\begin{aligned} \limsup _{ m \downarrow 0}| \partial _\mu ^2 E(\lambda , 0 )| < \infty . \end{aligned}$$

Proof

Due to the definition, we have \({{\,\mathrm{ess\,inf}\,}}_{k\in {\mathbb R}^d}\omega (k)\ge m>0\) and hence \(E(\lambda ,0)\in \sigma _\mathsf{disc}(H(\lambda ,0))\), by Theorem D.1. Thus, Theorem 2.11 is applicable.

Due to the so-called spin-flip symmetry of the model, i.e., X and \(-X\) being equivalent stochastic processes in the sense of their finite-dimensional distributions by the choice of the Bernoulli random variable in (2.9), we have

and hence

Thus, by Theorem 2.11

By the definition in (2.20), the interaction function \(W\in L^1({\mathbb R})\) satisfies

$$\begin{aligned} \Vert W \Vert _{L^1({\mathbb R})} = \frac{1}{2} \Vert \omega ^{-1/2} v \Vert _2^2 \le \frac{1}{2} \Vert \nu ^{-1/2} v \Vert _2^2. \end{aligned}$$

Setting \(\lambda _{\mathsf c}= (\frac{1}{2} \varepsilon )^{1/2} /\Vert \nu ^{-1/2} v \Vert _2\) with \(\varepsilon \) given as in Theorem 2.13, we can apply Theorem 2.13 with \(h=\lambda ^2 W\) for all \(\lambda \in (-\lambda _{\mathsf c},\lambda _{\mathsf c})\), which proves \(|\partial _\mu ^2 E(\lambda ,0)|\le C\) for all \(m>0\). This concludes the proof. \(\square \)

3 Proofs

In this section, we prove the results presented in Sects. 2.2 and 2.3.

3.1 The FKN Formula

We start with the proof of Theorem 2.4. To that end, we first derive an FKN formula for the spin part, which is described by the jump process. For the statement, we recall the definition of \(\iota :{\mathbb C}^2\rightarrow L^2(\{\pm 1\},\mu _{1/2})\) in (2.15).

Lemma 3.1

Let \(n\in {\mathbb N}\) and \(t_1,\ldots ,t_n\ge 0\). We set \(s_k = \sum \limits _{i=1}^{k}t_i\) for \(k=1,\ldots ,n\). Then, for all \(\alpha ,\beta \in {\mathbb C}^2\) and \(f_0,f_1,\ldots ,f_n:\{\pm 1\}\rightarrow {\mathbb C}\), we have

Proof

Since any function \(f:\{\pm 1\}\rightarrow {\mathbb C}\) is a linear combination of the identity and the constant function 1, it suffices to consider the case \(f_0=f_1=\cdots = f_n= {\text {id}}\). Further, due to bilinearity, it suffices to choose \(\alpha \) and \(\beta \) to be arbitrary basis vectors. We hereby use the basis consisting of eigenvectors of \(\sigma _x\), i.e., \(e_1=\frac{1}{\sqrt{2}}(1,1)\) and \(e_2=\frac{1}{\sqrt{2}}(1,-1)\). Then,

$$\begin{aligned} \sigma _x e_1 = e_1,\quad \sigma _x e_2 = -e_2, \quad \sigma _z e_1 = e_2, \quad \text{ and }\quad \sigma _z e_2 = e_1 \end{aligned}$$

and hence

(3.1)

Now, observe that \(X_0\), \(X_tX_s\) and \(X_uX_v\) are independent random variables if \(0\le t\le s\le u\le v\), by construction. For \(i,k\in {\mathbb N}\), this yields the identities

$$\begin{aligned}&{\mathbb {E}}_X[X_{s_i}\cdots X_{s_{i+k}}] = {\mathbb {E}}_X[ (X_0)^k ]{\mathbb {E}}_X[(X_0X_{s_i})^k]\prod _{j=1}^{k}{\mathbb {E}}_X[(X_{s_{i+j-1}}X_{s_{i+j}})^{k-j+1}],\\&{\mathbb {E}}_X\left[ X_{s_i}\cdots X_{s_{i+2k}} \right] = {\mathbb {E}}_X[ X_{s_i}X_{s_{i+1}}]{\mathbb {E}}_X[ X_{s_{i+2}}X_{s_{i+3}}] \cdots {\mathbb {E}}_X[X_{s_{i+2k-1}}X_{s_{i+2k}}]. \end{aligned}$$

If k is odd, then \({\mathbb {E}}_X[ (X_0)^k ]=0\), by (2.9) and the definition of B. Further, by (2.8), we find

$$\begin{aligned} {\mathbb {E}}_X[X_tX_s] = \sum _{k=0}^{\infty }({\mathbb {P}}[N_{t-s}=2k]-{\mathbb {P}}[N_{t-s}=2k+1]) = e^{-2(t-s)} \qquad \text{ for }\ 0\le s\le t. \end{aligned}$$
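The alternating series indeed sums to \(e^{-2(t-s)}\), since \(\sum _j(-1)^j{\mathbb {P}}[N_\tau =j]=e^{-\tau }\sum _j(-\tau )^j/j!=e^{-2\tau }\); a quick numerical sketch:

```python
import math

tau = 0.8  # any τ = t - s ≥ 0 (illustrative value)
# Σ_k (P[N_τ = 2k] - P[N_τ = 2k+1]) = e^{-τ} Σ_j (-τ)^j / j! = e^{-2τ}
series = sum((-1) ** j * math.exp(-tau) * tau**j / math.factorial(j)
             for j in range(60))
assert abs(series - math.exp(-2 * tau)) < 1e-12
```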

Hence, setting \(s_0=0\), we arrive at

$$\begin{aligned} {\mathbb {E}}_X\left[ X_{s_j}\cdots X_{s_k} \right] = {\left\{ \begin{array}{ll} 0 & \text { if }\ k-j\ \text { is } \text { even, }\\ e^{-2(t_{j+1}+t_{j+3}+\cdots +t_{k-2}+t_{k})} & \text { if }\ k-j\ \text { is } \text { odd } \end{array}\right. } \quad \text { for }\ 0\le j\le k\le n. \end{aligned}$$
(3.2)

Combining (3.2) and (3.1), we have

Observing that \(\iota e_1(x)=1\) and \( \iota e_2(x)=x\) for \(x=\pm 1\) finishes the proof. \(\square \)

We now move to proving the FKN formula for the field part. We recall the definition of the isometry \(j_t:L^2({\mathbb R}^d)\rightarrow {\mathcal E}\) in (2.13). For \(I\subset {\mathbb R}\), let \(e_I\) denote the projection onto the closed subspace \(\overline{\mathrm{lin}\{f\in {{\mathcal {E}}}:f\in {\text {Ran}}(j_t)\ \text{ for } \text{ some }\ t\in I\}}\). Further, set \(e_t=e_{\{t \}} \) for any \(t \in {\mathbb R}\).

Lemma 3.2

Assume \(a\le b\le t\le c\le d\). Then

  1. (i)

    \(e_t = j_t j_t^*\),

  2. (ii)

    \(e_ae_be_c = e_ae_c\),

  3. (iii)

    \(e_{[a,b]}e_te_{[c,d]}=e_{[a,b]}e_{[c,d]}\).

Proof

Lemma 2.2 (i) and the definition of \(e_{\{t\}}\) directly imply (i). Further, (ii) follows from Lemma 2.2 (iii) by

$$\begin{aligned} e_ae_be_c=j_aj_a^*j_bj_b^*j_cj_c^* = j_ae^{-(b-a)\omega }e^{-(c-b)\omega }j_c^* = j_ae^{-(c-a)\omega }j_c^* = j_aj_a^*j_cj_c^*=e_ae_c. \end{aligned}$$

To prove (iii), let \(f,g\in {{\mathcal {E}}}\). By the definition, there exist sequences of times \((t_k)_{k\in {\mathbb N}}\subset [a,b]\) and \((s_m)_{m\in {\mathbb N}}\subset [c,d]\) and functions \(f_k\in {\text {Ran}}(j_{t_k})\), \(g_m\in {\text {Ran}}(j_{s_m})\) such that

$$\begin{aligned} e_{[a,b]}f = \sum _{k=1}^{\infty }f_k \qquad \text{ and }\qquad e_{[c,d]}g=\sum _{m=1}^{\infty }g_m. \end{aligned}$$

Furthermore, again by definition, \( {\text {Ran}}(j_{t}) = {\text {Ran}}(e_{t})\) for any \(t \in {\mathbb R}\). Hence, we can apply (ii) and obtain

Since f and g were arbitrary, this proves the statement. \(\square \)

Now, for \(t\in {\mathbb R}\) and \(I\subset {\mathbb R}\), let

$$\begin{aligned} J_t = \Gamma (j_t), \qquad E_t=\Gamma (e_t) \qquad \text{ and }\qquad E_I = \Gamma (e_I). \end{aligned}$$
(3.3)

The next statement then follows in large parts directly from Lemmas 2.2, 3.2 and C.1.

Lemma 3.3

Assume \(a\le b\le t\le c\le d\) and \(I\subset {\mathbb R}\). Then,

  1. (i)

    \(E_I\) is the orthogonal projection onto \(\overline{\mathrm{lin}\{f\in {\mathcal F}({{\mathcal {E}}}):f\in {\text {Ran}}(J_t)\ \text{ for } \text{ some }\ t\in I\}}\).

  2. (ii)

    \(E_t=J_tJ_t^*\),

  3. (iii)

    \(E_aE_bE_c=E_aE_c\),

  4. (iv)

    \(E_{[a,b]}E_tE_{[c,d]}=E_{[a,b]}E_{[c,d]}\).

  5. (v)

    For all \(F\in {\text {Ran}}(E_{[a,b]}) \) and \(G\in {\text {Ran}}(E_{[c,d]})\), we have \(\langle F, E_t G\rangle = \langle F, G\rangle \).

  6. (vi)

    \(J_s^*J_t = e^{-|t-s|{\mathsf d}\Gamma (\omega )}\) for all \(s,t\in {\mathbb R}\),

  7. (vii)

    \(J_t\varphi (f)J_t^*=E_t\varphi (j_tf)E_t = \varphi (j_tf)E_t\) for all \(f\in L^2({\mathbb R}^d)\).

  8. (viii)

    \(J_tG(\varphi (f))J_t^*=E_t G(\varphi (j_tf))E_t = G(\varphi (j_tf))E_t\) for all \(f\in L^2({\mathbb R}^d)\) and bounded measurable functions G on \({\mathbb R}\).

Proof

All statements except for (v)–(viii) follow trivially from Lemmas 3.2 and C.1 and the definitions. (v) follows from (iv), by the simple calculation

Finally, (vi) and (vii) follow by combining Lemmas 2.2 (iii) and C.1. Repeated application of (vii) shows that (viii) holds for G a polynomial. That it holds for arbitrary bounded measurable G follows from the measurable functional calculus [41]. \(\square \)

We can now prove the full FKN formula.

Proof of Theorem 2.4

Throughout this proof, we drop tensor products with the identity in our notation. Further, for the convenience of the reader, we explicitly state in which Hilbert space the inner product is taken.

Let \(\chi _K(x) = \min \{ x , K \} \) if \(x \ge 0\), and \(\chi _K(x) = \max \{x , -K\}\) if \(x < 0\). Further, let \(\varphi _K(v)=\chi _{K}(\varphi (v))\), \({\phi _{{\mathsf {E}},K}}(j_tv)=\chi _{K}({\phi _{{\mathsf {E}}}}(j_tv))\) and \({\widetilde{H}}_K(\lambda ,\mu )\) as in (2.17) with \(\varphi \) replaced by \(\varphi _K\). Since \({\widetilde{H}}_K(\lambda ,\mu )\) is lower-semibounded and \(\varphi _K\) is bounded, we can use the Trotter product formula (cf. [41, Theorem VIII.31]) and Lemma 3.3 Parts (vi) and (viii) (where the exponential is considered on the eigenspaces of \(\sigma _x\)) to obtain

Now we make iterated use of Lemma 3.3 (v). Explicitly, by Lemma 3.3 (viii), the vector to the left of any \(E_{k\frac{T}{N}}\), i.e.,

$$\begin{aligned}&\prod _{j=0}^{k-1} \left( E_{j\frac{T}{N}}e^{-\frac{T}{N}\sigma _z}e^{-\frac{T}{N} \sigma _x\otimes (\lambda \varphi _K(j_{k\frac{T}{N}}v)+\mu )}E_{j\frac{T}{N}}\right) J_0 \Phi \in {\text {Ran}}(E_{(k-1)\frac{T}{N}}), \\&\quad e^{-\frac{T}{N}\sigma _z}e^{-\frac{T}{N} \sigma _x\otimes (\lambda \varphi _K(j_{k\frac{T}{N}}v)+\mu )}\prod _{j=0}^{k-1} \left( E_{j\frac{T}{N}}e^{-\frac{T}{N}\sigma _z}e^{-\frac{T}{N} \sigma _x\otimes (\lambda \varphi _K(j_{k\frac{T}{N}}v)+\mu )}E_{j\frac{T}{N}}\right) \\&\qquad J_0 \Phi \in {\text {Ran}}(E_{k\frac{T}{N}}) \end{aligned}$$

is an element of \({\text {Ran}}(E_{[0,k\frac{T}{N}]})\). Analogously, the vector to the right is an element of \({\text {Ran}}(E_{[k\frac{T}{N},T]})\). Hence, we can drop all the factors \(E_{k\frac{T}{N}}\). Then, using Proposition C.3 and (2.14), we derive

Hence, we can apply Lemma 3.1 to obtain

(3.4)

Since \(\chi _K\) is Lipschitz continuous, it follows that \( t \mapsto {\phi _{{\mathsf {E}},K}}(j_{t} v)\) is an \(L^2({{\mathcal Q}_{{\mathsf {E}}}})\)-valued continuous function. Thus, the sum in the exponential in (3.4) converges to an \(L^2({{\mathcal Q}_{{\mathsf {E}}}})\)-valued Riemann integral. By possibly passing to a subsequence, the Riemann sums converge \(\mu _X \otimes {\mu _{{\mathsf {E}}}}\)-almost everywhere. Thus, it follows by dominated convergence that

(3.5)

(Alternatively, the convergence could also be deduced by estimating the expectation.) Since \(\varphi (v)\) is bounded with respect to \({\mathsf d}\Gamma (\omega )\) (cf. Lemma C.1), the spectral theorem implies that \({\widetilde{H}}_K(\lambda ,\mu )\) converges to \({\widetilde{H}}(\lambda ,\mu )\) in the strong resolvent sense, and hence the left hand side of the above equation converges as \(K\rightarrow \infty \). On the other hand, using that for \(\mu _X \otimes {\mu _{{\mathsf {E}}}}\)-almost every \((x,q) \in {\mathscr {D}}\times {{\mathcal Q}_{{\mathsf {E}}}}\) the function \(t \mapsto ({\phi _{{\mathsf {E}},K}}(j_tv))(q)x_t\) is Lebesgue integrable, see Remark 2.5, it follows that \(\int _0^{ T} {\phi _{{\mathsf {E}},K}}(j_tv) x_t \mathrm{d}t \) converges to \(\int _0^{ T} {\phi _{{\mathsf {E}}}}(j_tv) x_t \mathrm{d}t \) almost everywhere. Hence, the right hand side of (3.5) converges to

$$\begin{aligned} {\mathbb {E}}_X{\mathbb {E}}_{{\mathsf {E}}}\left[ \overline{I_0\Phi (X_0)}e^{{-}\lambda \int _0^{ T}{\phi _{{\mathsf {E}}}}\left( j_t v\right) X_t\mathrm{d}t {-} \mu \int _0^{ T} X_t\mathrm{d}t}I_T\Psi (X_T)\right] \end{aligned}$$

as \(K\rightarrow \infty \), by the dominated convergence theorem. For the majorant, we use that by Jensen’s inequality

$$\begin{aligned} \exp ( {-}\lambda \int _0^{ T}{\phi _{{\mathsf {E}},K}}\left( j_t v\right) X_t\mathrm{d}t )&\le \frac{1}{T} \int _0^{ T} \exp ( {-}\lambda T {\phi _{{\mathsf {E}},K}}\left( j_t v\right) X_t ) \mathrm{d}t \\&\le \frac{1}{T} \int _0^{ T} [ \exp ( {-}\lambda T {\phi _{{\mathsf {E}}}}\left( j_t v\right) ) + \exp ( \lambda T {\phi _{{\mathsf {E}}}}\left( j_t v\right) ) ] \mathrm{d}t , \end{aligned}$$

where in the second line we used \(X_t \in \{-1,1\}\) and \(\max \{ e^x , e^{-x} \} \le e^x + e^{-x}\). Now the right hand side is integrable over \({{\mathcal Q}_{{\mathsf {E}}}}\) by (2.18). This proves the statement. \(\square \)

We end this section with the following:

Proof of Corollary 2.6

First, observe that with U as in (2.16), we have \((I_t(U^*\otimes {\mathbb {1}}){\Omega _\downarrow })(x) = 1\) for \(x=\pm 1\) and \(t\in {\mathbb R}\) (cf. (2.3) and (C.3)). Hence, Theorem 2.4 implies

Now, let \(x\in {\mathscr {D}}_{\mathsf f}\). Then, Fubini’s theorem and the identity (2.18) yield

$$\begin{aligned} {\mathbb {E}}_{{\mathsf {E}}}\left[ \left( \int _0^{ T}{\phi _{{\mathsf {E}}}}\left( j_t v\right) x_t\mathrm{d}t\right) ^2\right] = \int _0^{ T}\int _0^{ T} {\mathbb {E}}_{{\mathsf {E}}}\left[ {\phi _{{\mathsf {E}}}}(j_tv){\phi _{{\mathsf {E}}}}(j_sv)\right] x_tx_s\mathrm{d}t\mathrm{d}s \end{aligned}$$

Now, the definition of the \({\mathcal R}\)-indexed Gaussian process (cf. (C.1)) and that of W in (2.20) imply

where we used \(j_s^*j_t = e^{-|t-s|\omega }\) (cf. Lemma 2.2). This proves the statement. \(\square \)
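The Fubini step above has a simple finite-dimensional analogue: replacing the field \({\phi _{{\mathsf {E}}}}\) by a centered Gaussian vector and the path by fixed weights \(x_i\in \{-1,1\}\), the second moment of the weighted sum equals the double sum over the covariances. A quick Monte Carlo sanity check (all matrices, seeds, and sample sizes below are illustrative choices of ours, not from the paper), assuming NumPy:

```python
import numpy as np

# Discrete analogue of the Fubini step: for a centered Gaussian vector
# (phi_1, ..., phi_n) with covariance C and fixed weights x_i = +/-1,
# E[(sum_i phi_i x_i)^2] = sum_{i,j} C_{ij} x_i x_j.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
C = A @ A.T                                  # a valid covariance matrix
x = rng.choice([-1.0, 1.0], size=n)          # spin-path-like weights

samples = rng.multivariate_normal(np.zeros(n), C, size=200_000)
mc = ((samples @ x) ** 2).mean()             # Monte Carlo second moment
exact = x @ C @ x                            # double sum of covariances
assert abs(mc - exact) / exact < 0.05
```

The continuous-time statement in the proof is obtained from this identity in the limit of Riemann sums.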

3.2 Derivatives of the Ground State Energy

To prove our results on derivatives of the ground state energy, we first state our version of Bloch's formula, from which Lemma 2.9 will follow. For the convenience of the reader, we provide the simple proof. Similar arguments are used, for example, in [3, 34].

Since the notion of positivity is essential therein, let us briefly recall it. For an arbitrary measure space \(({\mathcal M},\mu )\), we call a function \(f\in L^2({\mathcal M},\mathrm{d}\mu )\) (strictly) positive if it satisfies \(f(x)\ge 0\) (respectively, \(f(x)>0\)) for almost all \(x\in {\mathcal M}\). If A is a bounded operator on \(L^2({\mathcal M},\mathrm{d}\mu )\), we say A is positivity preserving (improving) if Af is (strictly) positive for all nonzero positive \(f\in L^2({\mathcal M},\mathrm{d}\mu )\).

Lemma 3.4

Let \(({\mathcal M},\mu )\) be a probability space and let H be a self-adjoint and lower-semibounded operator on \(L^2({\mathcal M},\mathrm{d}\mu )\). If \(e^{-TH}\) is positivity preserving for all \(T\ge 0\) and \(f\in L^2({\mathcal M},\mathrm{d}\mu )\) is strictly positive, then

$$\begin{aligned} \inf \sigma (H) = \lim _{T\rightarrow \infty }\left( -\frac{1}{T}\log \langle f,e^{-TH}f\rangle \right) . \end{aligned}$$

Proof

First, we note that by the spectral theorem

$$\begin{aligned} E_g := \lim _{T\rightarrow \infty }\left( -\frac{1}{T}\log \langle g,e^{-TH}g\rangle \right) = \inf {\text {supp}}\,\nu _g \qquad \text{ for } \text{ all } \text{ nonzero }\ g\in L^2({\mathcal M},\mathrm{d}\mu ), \end{aligned}$$
(3.6)

where \(\nu _g\) denotes the spectral measure of H associated with g. This easily follows from the inequality

Now, for \(h\in L^2({\mathcal M},\mathrm{d}\mu )\) satisfying

$$\begin{aligned} c_1 f \le h \le c_2 f \quad \text{ for } \text{ some } \ c_1 , c_2 > 0 , \end{aligned}$$
(3.7)

it follows from \(e^{-TH}\) being positivity preserving that

$$\begin{aligned} c_1^2\langle f,e^{-TH}f\rangle \le \langle h,e^{-TH}h\rangle \le c_2^2\langle f,e^{-TH}f\rangle . \end{aligned}$$

Combined with (3.6), it follows that \(\inf \sigma (H)\le E_f=E_h\). Since f is strictly positive, the linear span of the set \({\mathcal X}\) of functions satisfying (3.7) is dense in \(L^2({\mathcal M},\mathrm{d}\mu )\). It follows that \(E_f = \inf \sigma (H)\), since otherwise \(E_f > \inf \sigma (H)\) and \(\chi _{(-\infty , E_f)}(H) L^2({\mathcal M}, \mathrm{d}\mu )\) would contain a nonzero vector orthogonal to \({\mathcal X}\). Thus, the statement follows from (3.6). \(\square \)
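Bloch's formula in Lemma 3.4 can be illustrated in finite dimensions, where \(e^{-TH}\) is positivity preserving whenever the off-diagonal entries of H are nonpositive. A minimal numerical sketch, assuming NumPy/SciPy (the concrete matrix and vector are illustrative choices of ours, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional sketch of Bloch's formula: H has nonpositive
# off-diagonal entries, so exp(-T*H) has strictly positive entries,
# i.e., the semigroup is positivity improving.
H = np.array([[1.0, -0.5, 0.0],
              [-0.5, 2.0, -0.5],
              [0.0, -0.5, 3.0]])
E0 = np.linalg.eigvalsh(H)[0]          # ground state energy inf sigma(H)

def E_f(f, T):
    """Finite-time Bloch energy -(1/T) log <f, exp(-T H) f>."""
    return -np.log(f @ expm(-T * H) @ f) / T

assert (expm(-H) > 0).all()            # positivity improving semigroup
f = np.array([1.0, 1.0, 1.0])          # strictly positive vector
assert abs(E_f(f, 200.0) - E0) < 1e-2  # converges to inf sigma(H)
```

Any strictly positive f works here, since positivity of the ground state vector (Perron–Frobenius) guarantees a nonzero overlap.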

We now prove that the transformed spin boson Hamiltonian from (2.17) is positivity improving in an appropriate \(L^2\)-representation.

Lemma 3.5

Let \(\vartheta \) be the natural isomorphism \({\mathcal H}={\mathbb C}^2\otimes {\mathcal F}\rightarrow L^2(\{1,2\}\times {\mathcal Q}_{L^2 ({\mathbb R}^d;{\mathbb R})})\) which is determined by \( \alpha \otimes \psi \mapsto \left( (i,x)\mapsto \alpha _i (\Theta _{L^2 ({\mathbb R}^d;{\mathbb R})} \psi ) (x) \right) , \) where \(\Theta _{L^2 ({\mathbb R}^d;{\mathbb R})}:{\mathcal F}\rightarrow L^2({\mathcal Q}_{L^2 ({\mathbb R}^d;{\mathbb R})})\) is the unitary from Proposition C.3. Then, the operator \( \vartheta e^{-T{\widetilde{H}}(\lambda ,\mu )} \vartheta ^* \) is positivity improving for all \(T>0\).

Proof

First observe that

$$\begin{aligned} \vartheta e^{-T{\widetilde{H}}(\lambda ,\mu )} \vartheta ^* = e^{ - T \vartheta {\widetilde{H}}(\lambda ,\mu )\vartheta ^*} = e^{ - T ( \vartheta {\widetilde{H}}(0,\mu )\vartheta ^* + \lambda \vartheta (\sigma _x\otimes \varphi (v))\vartheta ^* ) } \end{aligned}$$
(3.8)

To prove that (3.8) is positivity improving, we use a perturbative argument which can be found in [43, Theorem XIII.45]. Explicitly, by the definition of \(\vartheta \) and Proposition C.3,

$$\begin{aligned}V_n := \vartheta (\sigma _x\otimes \varphi (v)\mathbf {1}_{(-n,n)}(\varphi (v)))\vartheta ^*\end{aligned}$$

is a bounded multiplication operator on \(L^2(\{1,2\}\times {\mathcal Q}_{L^2 ({\mathbb R}^d;{\mathbb R})})\) for all \(n \in {\mathbb N}\). Furthermore, by the boundedness of \(\varphi (v)\) with respect to \({\mathsf d}\Gamma (\omega )\) (cf. Lemma C.1), we find that \(\vartheta {\widetilde{H}}(0,\mu ) \vartheta ^*+\lambda V_n\) converges to \(\vartheta {\widetilde{H}}(\lambda ,\mu ) \vartheta ^*\) and \(\vartheta {\widetilde{H}}(\lambda ,\mu ) \vartheta ^* - \lambda V_n\) converges to \(\vartheta {\widetilde{H}}(0,\mu )\vartheta ^*\), both in the strong resolvent sense. Hence, by [43, Theorems XIII.43 and XIII.45], it follows that (3.8) is positivity improving if and only if

$$\begin{aligned} e^{-T \vartheta {\widetilde{H}}(0,\mu )\vartheta ^*} = e^{ -T( ( 1- \sigma _x)+ \mu \sigma _z) } e^{-T ( \Theta _{L^2 ({\mathbb R}^d;{\mathbb R})} {\mathsf d}\Gamma (\omega )\Theta _{L^2 ({\mathbb R}^d;{\mathbb R})}^* ) } \end{aligned}$$
(3.9)

is. Note that in (3.9) the first factor only acts on the variables \(\{1,2\}\), and the second factor only acts on the variables in \({\mathcal Q}_{L^2({\mathbb R}^d;{\mathbb R})}\). It is well known (cf. [45, Theorem I.16]) that the second factor on the right hand side of (3.9) is positivity improving on \(L^2({\mathcal Q}_{L^2({\mathbb R}^d;{\mathbb R})})\). Further, by explicit computation, we have that the first factor on the right hand side of (3.9)

$$\begin{aligned} \exp \left( -T( ( 1- \sigma _x)+ \mu \sigma _z) \right) = e^{-T} \begin{pmatrix} \cosh (Tr)-\frac{\mu }{r}\sinh (Tr) &{} \frac{1}{r}\sinh (Tr) \\ \frac{1}{r}\sinh (Tr) &{} \cosh (Tr)+\frac{\mu }{r}\sinh (Tr) \end{pmatrix}, \qquad r=\sqrt{1+\mu ^2}, \end{aligned}$$

is positivity improving on \(L^2(\{1,2\})\), since all matrix elements are strictly positive. This finishes the proof. \(\square \)
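Since \(B:=\mu \sigma _z-\sigma _x\) satisfies \(B^2=(1+\mu ^2){\mathbb {1}}\), the exponential above can be evaluated in closed form as \(e^{-T}(\cosh (Tr){\mathbb {1}}-r^{-1}\sinh (Tr)B)\) with \(r=\sqrt{1+\mu ^2}\), and the strict positivity of all entries can be checked numerically. A small sketch, assuming NumPy/SciPy (the helper names are ours):

```python
import numpy as np
from scipy.linalg import expm

def semigroup_matrix(T, mu):
    """exp(-T((1 - sigma_x) + mu*sigma_z)) on C^2, via scipy.linalg.expm."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    return expm(-T * ((np.eye(2) - sx) + mu * sz))

def closed_form(T, mu):
    """e^{-T}(cosh(Tr) I - sinh(Tr)/r * (mu*sigma_z - sigma_x)), r=sqrt(1+mu^2)."""
    r = np.sqrt(1.0 + mu**2)
    ch, sh = np.cosh(T * r), np.sinh(T * r)
    return np.exp(-T) * np.array([[ch - mu / r * sh, sh / r],
                                  [sh / r, ch + mu / r * sh]])

for T in (0.1, 1.0, 5.0):
    for mu in (-2.0, 0.0, 1.5):
        M = semigroup_matrix(T, mu)
        assert np.allclose(M, closed_form(T, mu))
        assert (M > 0).all()           # all entries strictly positive
```

The diagonal entries stay positive because \(|\mu |/r<1\), so \(\cosh (Tr)\) dominates the \(\sinh \) term.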

We now obtain Lemma 2.9 as an easy corollary of Lemma 3.5.

Proof of Lemma 2.9

Let \(\vartheta \) be defined as in Lemma 3.5. By the definitions (2.7) and (2.17), we have

By (2.16), (2.19), and (C.3), we see that \(\vartheta (U\otimes {\mathbb {1}})^*{\Omega _\downarrow }= 2^{-1/2}\), which is a constant strictly positive function. Hence, the statement follows from Lemmas 3.5 and 3.4, since \(\inf \sigma (H(\lambda ,\mu )) = \inf \sigma ({\widetilde{H}}(\lambda ,\mu )- {\mathbb {1}})\). \(\square \)

Further, the following statement is also a direct consequence of Lemma 3.5. It will be a useful ingredient in our proof of Proposition 2.10.

Proposition 3.6

If \(E(\lambda ,\mu )\) is an eigenvalue of \(H(\lambda ,\mu )\), then the corresponding eigenspace is non-degenerate. In this case, if \(\psi _{\lambda ,\mu }\) is a ground state of \(H(\lambda ,\mu )\), then \(\langle {\Omega _\downarrow },\psi _{\lambda ,\mu }\rangle \ne 0\).

Proof

By the Perron–Frobenius–Faris theorem [43, Theorem XIII.44] and Lemma 3.5, if \(E(\lambda ,\mu )\) is an eigenvalue of \(H(\lambda ,\mu )\), then there exists a strictly positive \(\phi _{\lambda ,\mu }\in L^2(\{1,2\}\times {\mathcal Q}_{L^2 ({\mathbb R}^d;{\mathbb R})})\) such that the eigenspace corresponding to \(E(\lambda ,\mu )\) is spanned by \(\vartheta (U\otimes {\mathbb {1}})^*\phi _{\lambda ,\mu }\), where \(\vartheta \) is again defined as in Lemma 3.5. Since \(\vartheta (U\otimes {\mathbb {1}})^* {\Omega _\downarrow }\) is strictly positive, this proves the statement. \(\square \)

We can now prove the Bloch formula for the derivatives.

Proof of Proposition 2.10

Throughout this proof, we fix \(\lambda ,\mu _0\) as in the statement of the theorem. Further, for compact notation, we write

Hence, we want to prove

$$\begin{aligned} \mathbf {e}^{(n)}(\mu _0) = \lim _{ T \rightarrow \infty }\mathbf {e}_T^{(n)}(\mu _0) \qquad \text{ for } \text{ all }\ n\in {\mathbb N}, \end{aligned}$$

where \((\cdot )^{(n)}\) as usual denotes the n-th derivative.

We observe that the ground state energy \(\mathbf {e}(\mu _0)\) is a simple eigenvalue of \(\mathbf {h}(\mu _0)\), by Proposition 3.6. Further, in view of (2.7), the operator-valued family \(\mu \mapsto \mathbf {h}(\mu )\) is an analytic family of type (A), cf. [31, 43]. Then, by the Kato–Rellich theorem [43, Theorem XII.8], it follows that \(\mu \mapsto \mathbf {e}(\mu )\) is analytic and \(\mathbf {e}(\mu )\) is an isolated simple eigenvalue of \(\mathbf {h}(\mu )\) in a neighborhood of \(\mu _0\).

We introduce the distance of \(\mathbf {e}(\mu _0)\) to the rest of the spectrum by

$$\begin{aligned} \delta = {\text {dist}}(\mathbf {e}(\mu _0),\sigma (\mathbf {h}(\mu _0))\setminus \{\mathbf {e}(\mu _0)\}). \end{aligned}$$

By the Kato–Rellich theorem [43, Theorem XII.8], we can choose an \(\varepsilon >0\) such that

$$\begin{aligned}&\left| \mathbf {e}(\mu )-\mathbf {e}(\mu _0)\right| \le \frac{\delta }{4} \qquad \text{ and } \nonumber \\&\quad \inf (\sigma (\mathbf {h}(\mu ))\setminus \{\mathbf {e}(\mu )\}) \ge \mathbf {e}(\mu _0)+ \frac{3}{4} \delta \qquad \text{ for }\ \mu \in (\mu _0-\varepsilon ,\mu _0+\varepsilon ), \end{aligned}$$
(3.10)

where the second inequality can be obtained using a Neumann series, cf. (3.12); alternatively, it follows from the lower bound of Lemma 2.1 and a compactness argument using that the set of \((\mu ,z)\) for which \(\mathbf {h}(\mu )-z\) is invertible is open, see [43, Theorem XII.7]. Henceforth, we assume \(\mu \in (\mu _0-\varepsilon ,\mu _0+\varepsilon )\). Then, by (3.10), we can write the ground state projection \(P(\mu )\) of \(\mathbf {h}(\mu )\) as

$$\begin{aligned} P(\mu ) = \frac{1}{2\pi {\mathsf {i}}} \int _{\Gamma _0} \frac{1}{z- \mathbf {h}(\mu )} dz , \end{aligned}$$

where \(\Gamma _0\) is a curve counterclockwise encircling the point \(\mathbf {e}(\mu _0)\) at a distance \(\delta /2\). Further, let

$$\begin{aligned}&\gamma _0 : [-1,+1] \rightarrow {\mathbb C}, \quad t \mapsto \mathbf {e}(\mu _0) + \frac{\delta }{2} - {\mathsf {i}}t, \\&\gamma _\pm : [0,\infty ) \rightarrow {\mathbb C}, \quad t \mapsto \mathbf {e}(\mu _0) + \frac{\delta }{2} \pm {\mathsf {i}}+ t \end{aligned}$$

and define the curve \(\Gamma _1 = - \gamma _+ + \gamma _0 + \gamma _- \) surrounding the set \(\sigma (\mathbf {h}(\mu _0)) \setminus \{ \mathbf {e}(\mu _0) \}\) (see Fig. 1).

Fig. 1: Illustration of the curves \(\Gamma _0\) and \(\Gamma _1\) together with the spectrum of \(\mathbf {h}(\mu _0)\).

In view of (3.10), we can define

$$\begin{aligned} Q_T(\mu ) := \frac{1}{2\pi {\mathsf {i}}} \int _{\Gamma _1} \frac{e^{ T ( \mathbf {e}(\mu ) - z )}}{z- \mathbf {h}(\mu )} dz , \end{aligned}$$

where the integral is understood as a Riemann integral with respect to the operator topology. The spectral theorem for the self-adjoint operator \(\mathbf {h}(\mu )\) and Cauchy’s integral formula yield

$$\begin{aligned} e^{-T(\mathbf {h}(\mu )-\mathbf {e}(\mu ))} = P(\mu ) + Q_T(\mu ). \end{aligned}$$
(3.11)
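The identity (3.11) can be tested in finite dimensions: for a hermitian matrix, the Riesz projection computed as a discretized contour integral around the lowest eigenvalue agrees with the spectral projection, and the remainder \(Q_T\) decays exponentially in T. A sketch assuming NumPy/SciPy (the matrix and contour parameters are illustrative choices of ours):

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional check of e^{-T(h - e0)} = P + Q_T with Q_T -> 0:
# P is the Riesz projection onto the lowest eigenvalue, computed as a
# contour integral over a small circle around it.
h = np.array([[0.0, -1.0], [-1.0, 3.0]])
evals, evecs = np.linalg.eigh(h)
e0 = evals[0]                                # lowest eigenvalue

n = 2000
theta = 2 * np.pi * np.arange(n) / n
z = e0 + 0.5 * np.exp(1j * theta)            # circle of radius 1/2 around e0
dz = 0.5 * 1j * np.exp(1j * theta) * (2 * np.pi / n)
P = sum(np.linalg.inv(zk * np.eye(2) - h) * dzk for zk, dzk in zip(z, dz))
P = (P / (2j * np.pi)).real                  # (1/2 pi i) contour integral

P_exact = np.outer(evecs[:, 0], evecs[:, 0])
assert np.allclose(P, P_exact, atol=1e-6)

# Q_T = e^{-T(h - e0)} - P decays exponentially in T (rate: spectral gap)
Q = expm(-10.0 * (h - e0 * np.eye(2))) - P
assert np.linalg.norm(Q) < 1e-6
```

For periodic integrands such as the discretized circle above, the Riemann sum converges exponentially fast in the number of nodes, which is why a moderate n already reproduces the projection to high accuracy.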

For \(z \in \rho (\mathbf {h}(\mu _0))\) and \(\mu \) in a neighborhood of \(\mu _0\) we have

(3.12)

Using this expansion and the following bounds obtained from (3.10)

$$\begin{aligned} \begin{aligned}&\Vert (z-\mathbf {h}(\mu _0))^{-1}\Vert \le \frac{2}{\delta }\quad \text{ for }\ z\in {\text {ran}}\Gamma _0 \cup {\text {ran}}\Gamma _1 , \\&| e^{T(\mathbf {e}(\mu )-z)}| \le {\left\{ \begin{array}{ll} e^{-\frac{\delta }{4} T} &{} \text{ for }\ z\in {\text {ran}}\gamma _0, \\ e^{-\frac{\delta }{4} T} e^{-Tt} &{} \text{ for }\ z=\gamma _\pm (t), \ t \in [0, \infty ) , \end{array}\right. } \end{aligned} \end{aligned}$$
(3.13)

we see that \(P(\mu )\) and \(Q_T(\mu )\) are real analytic in \(\mu \) in a neighborhood of \(\mu _0\) and, moreover, that the integrals and the derivatives with respect to \(\mu \) can be interchanged, due to the uniform convergence of the integrand on the curves \(\Gamma _0\) and \(\Gamma _1\). Hence, by virtue of (3.11), we see that the function \(\mu \mapsto \mathbf {e}_T(\mu )\) is real analytic on \((\mu _0-\widetilde{\varepsilon },\mu _0+\widetilde{\varepsilon })\) for \(\widetilde{\varepsilon }\in (0,\varepsilon )\) small enough.

Let \(\psi _\mu \) be a normalized ground state of \(\mathbf {h}(\mu )\). Then, by Proposition 3.6, we find

(3.14)

Further, by the spectral theorem and (3.10)

(3.15)

where \(\nu _{{\Omega _\downarrow }}\) denotes the spectral measure of \(\mathbf{h} (\mu )\) associated with \({\Omega _\downarrow }\), cf. [41, Section VII.2].

By (3.11) and the definition of \(\mathbf {e}_T(\mu )\), for \(\mu \in (\mu _0-\widetilde{\varepsilon },\mu _0+\widetilde{\varepsilon })\) we have

Hence, we can calculate the n-th derivative of the expression on the left hand side at \(\mu =\mu _0\), by taking the n-th derivative on the right hand side. Using the Faà di Bruno formula (Lemma B.1) and recalling the notation from Theorem 2.11, we find

By (3.14) and (3.15), the first factor is uniformly bounded in T. Hence, it remains to prove that \(\Vert Q_T^{(k)}(\mu _0)\Vert \) is uniformly bounded in T for all \(k=1,\ldots ,n\). To this end, we explicitly calculate the derivatives of \(Q_T(\mu )\) at \(\mu =\mu _0\). This is done by interchanging the integral with the derivative, which we justified above.

Note that, by the series expansion (3.12), we have

$$\begin{aligned} \partial _\mu ^k(z-\mathbf {h}(\mu ))^{-1}= \frac{k!}{z-\mathbf {h}(\mu )}\left( \sigma _x \frac{1}{z-\mathbf {h}(\mu )}\right) ^k \qquad \text{ for }\ k\in {\mathbb N}_0. \end{aligned}$$

Again using Faà di Bruno’s formula (Lemma B.1) and the Leibniz rule, this yields

$$\begin{aligned}&Q_T^{(k)}(\mu _0)\\&\quad = \frac{1}{2\pi {\mathsf {i}}} \sum _{\ell =0}^k \left( {\begin{array}{c}k\\ \ell \end{array}}\right) \int _{\Gamma _1} \partial _\mu ^\ell ( e^{ T ( \mathbf {e}( \mu ) - z )} ) \partial _\mu ^{k-\ell } (z- \mathbf {h}( \mu ))^{-1} dz {\bigg |_{\mu = \mu _0}} \nonumber \\&\quad = \frac{1}{2\pi {\mathsf {i}}} \sum _{\ell =0}^k \left( {\begin{array}{c}k\\ \ell \end{array}}\right) (k-\ell )! \underbrace{\left( \sum _{{\mathfrak P}\in \mathcal {P}_\ell } \prod _{B \in {\mathfrak P}} ( T \mathbf {e}^{(|B|)}(\mu _0) )\right) }_{=: P_{k,\ell }(T)}\nonumber \\&\qquad \times \underbrace{\int _{\Gamma _1} e^{ T ( \mathbf {e}(\mu _0) - z )} { \frac{1}{z- \mathbf {h}(\mu _0)} } \left( \sigma _x \frac{1}{z- \mathbf {h}(\mu _0)} \right) ^{k-\ell } dz}_{=: I_{k,\ell }(T)}. \end{aligned}$$

Applying the bounds (3.13), we find

$$\begin{aligned} \Vert I_{k,\ell }(T)\Vert \le \left( \frac{2}{\delta }\right) ^{k-\ell +1}e^{-\frac{\delta }{4}T} \left[ \int _{-1}^11\mathrm{d}t + 2\int _0^\infty e^{-Tt}\mathrm{d}t \right] . \end{aligned}$$

Since \(P_{k,\ell }(T)\) grows only polynomially in T, this implies \(\Vert Q_T^{(k)}(\mu _0)\Vert \xrightarrow {T\rightarrow \infty }0\) and in particular proves that \(\Vert Q_T^{(k)}(\mu _0)\Vert \) is uniformly bounded in T. \(\square \)
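Proposition 2.10 can be illustrated numerically in finite dimensions. Assuming \(\mathbf {e}_T(\mu )\) is the finite-time Bloch energy \(-\frac{1}{T}\log \langle f,e^{-T\mathbf {h}(\mu )}f\rangle \) for a strictly positive vector f (a stand-in for \({\Omega _\downarrow }\); the concrete matrices below are illustrative choices of ours), its \(\mu \)-derivatives converge to those of the ground state energy as \(T\rightarrow \infty \):

```python
import numpy as np
from scipy.special import logsumexp

# Toy stand-in for the family h(mu) = h0 + mu*b on C^2.
h0 = np.array([[0.0, -1.0], [-1.0, 2.0]])
b = np.diag([1.0, -1.0])
f = np.array([1.0, 1.0]) / np.sqrt(2.0)      # strictly positive vector

def e_T(mu, T):
    """Finite-time Bloch energy -(1/T) log <f, exp(-T h(mu)) f>."""
    evals, evecs = np.linalg.eigh(h0 + mu * b)
    c = evecs.T @ f                           # expansion coefficients of f
    # log <f, e^{-T h} f> = log sum_j c_j^2 e^{-T lambda_j}, in log-space
    return -logsumexp(-T * evals, b=c**2) / T

def e(mu):
    """Exact ground state energy of h(mu)."""
    return np.linalg.eigvalsh(h0 + mu * b)[0]

# first derivative at mu0 = 0 via central finite differences
d, T = 1e-4, 1e5
exact = (e(d) - e(-d)) / (2 * d)              # analytic value: 1/sqrt(2)
finite_time = (e_T(d, T) - e_T(-d, T)) / (2 * d)
assert abs(exact - 1 / np.sqrt(2)) < 1e-6
assert abs(finite_time - exact) < 1e-4
```

The error of the finite-time derivative is of order 1/T (from the \(\mu \)-dependence of the ground state overlap) plus an exponentially small remainder, matching the decay of \(Q_T^{(k)}\) above.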

We now combine Bloch’s formula for derivatives of the ground state energy with the FKN formula.

Proof of Theorem 2.11

First, we recall the definition of \(Z_T(\lambda ,\mu )\) in (2.22) and the notation from (2.25). By the dominated convergence theorem, one sees that \(Z_T\) is infinitely differentiable in \(\mu \), with derivatives

(3.16)

Further, using Proposition 2.10 and then the Faà di Bruno formula (Lemma B.1) to calculate the derivatives of the logarithm, we obtain

(3.17)

where in the last line we inserted the identity (2.23) (which in turn follows from Corollary 2.6). Combining (3.16) and (3.17) proves Theorem 2.11. \(\square \)

Proof of Corollary 2.12

By the definition of the Ursell functions (2.26) and the Faà di Bruno formula (Lemma B.1), we find

Now, the Ursell functions are multilinear, cf. [39, Section 11], and hence, by the dominated convergence theorem, we can interchange the integrals with the expectation values, i.e.,

$$\begin{aligned} u_n\left( \int _0^{T} X_{s_1}\mathrm{d}s_1,\ldots , \int _0^{ T} X_{s_n}\mathrm{d}s_n\right) = \int _0^{ T} \cdots \int _0^{ T} u_n\left( X_{s_1},\ldots , X_{s_n}\right) \mathrm{d}s_1 \cdots \mathrm{d}s_n . \end{aligned}$$

Inserting this into Theorem 2.11 finishes the proof of Corollary 2.12. \(\square \)
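The multilinearity of the Ursell (cumulant) functions used above is easy to check empirically for low orders, where \(u_2(X,Y)={\mathbb {E}}[XY]-{\mathbb {E}}[X]{\mathbb {E}}[Y]\) and \(u_3\) is the third joint cumulant. A sketch with empirical expectations over \(\pm 1\)-valued samples (all choices below are illustrative, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
E = lambda x: x.mean()                      # empirical expectation

def u2(x, y):
    """Second Ursell function (covariance)."""
    return E(x * y) - E(x) * E(y)

def u3(x, y, z):
    """Third Ursell function (third joint cumulant)."""
    return (E(x * y * z) - E(x * y) * E(z) - E(x * z) * E(y)
            - E(y * z) * E(x) + 2 * E(x) * E(y) * E(z))

x1, x2, y, z = rng.choice([-1.0, 1.0], size=(4, 1000))
a, b = 0.3, -1.7
# linearity in the first slot holds exactly for the empirical measure
assert np.isclose(u2(a * x1 + b * x2, y), a * u2(x1, y) + b * u2(x2, y))
assert np.isclose(u3(a * x1 + b * x2, y, z),
                  a * u3(x1, y, z) + b * u3(x2, y, z))
```

Since multilinearity is an algebraic identity in the underlying measure, it holds exactly (up to floating point) for empirical expectations; this is precisely what allows pulling the time integrals out of \(u_n\) in the proof.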

4 Existence of Ground States

In this section, we use the bound on the second derivative of the ground state energy as a function of the magnetic field strength from Corollary 2.14 to show that the spin boson Hamiltonian with massless bosons has a ground state for couplings exhibiting strong infrared singularities. This result is non-trivial, since for massless bosons the Hamiltonian has no spectral gap.

Our main result needs the following assumptions.

Hypothesis C

(i) \(\nu :{\mathbb R}^d\rightarrow [0,\infty )\) is locally Hölder continuous, positive a.e., and satisfies \(\nu (k)=\nu (-k)\).

(ii) \(\lim \limits _{|k|\rightarrow \infty }\nu (k)=\infty \).

(iii) \(v\in L^2({\mathbb R}^d)\) has real Fourier transform and there exists \(\varepsilon >0\) such that \(\nu ^{-1/2}v\in L^{2+\varepsilon }({\mathbb R}^d) \cap L^2({\mathbb R}^d)\).

(iv) \(\displaystyle \sup _{|p|\le 1}\int _{{\mathbb R}^d}\frac{|v(k)|}{\sqrt{\nu (k)}\nu (k+p)}\mathrm{d}k<\infty \) and \(\displaystyle \sup _{|p|\le 1}\int _{{\mathbb R}^d}\frac{|v(k+p)-v(k)|}{\sqrt{\nu (k)}|p|^\alpha }\mathrm{d}k<\infty \) for some \(\alpha > 0\).

We can now state the main result of this section.

Theorem 4.1

Assume Hypothesis C holds. Then there exists \(\lambda _{\mathsf c}>0\) such that for all \(\lambda \in (-\lambda _{\mathsf c},\lambda _{\mathsf c})\) the spin boson Hamiltonian

$$\begin{aligned} H_\lambda = \sigma _z\otimes {\mathbb {1}}+ {\mathbb {1}}\otimes {\mathsf d}\Gamma (\nu ) +\lambda \sigma _x\otimes \varphi (v) \end{aligned}$$
(4.1)

acting on \({\mathbb C}^2 \otimes {\mathcal F}\) has a ground state, i.e., the infimum of the spectrum is an eigenvalue.

Remark 4.2

This improves the previously known results on ground state existence [5, 23]. We also remark that, due to its non-perturbative nature, our method of proof in principle not only gives the existence of some small \(\lambda _{\mathsf c}\), but could in fact be used to estimate the critical coupling constant.

Example 4.3

Let us consider the case

$$\begin{aligned} d=3, \qquad \nu (k)= |k| \qquad \text{ and }\qquad v(k)=\chi (k)|k|^\delta , \end{aligned}$$
(4.2)

where \(\chi :{\mathbb R}^d\rightarrow {\mathbb R}\) is the characteristic function of an arbitrary ball around \(k=0\). Obviously, the assumptions on \(\nu \) in Hypothesis C are satisfied. Further, (iii) holds for any \(\delta >-1\), as is easily verified by integration in polar coordinates. The finiteness conditions in (iv) of Hypothesis C also hold in this case by simple estimates. We remark that the previous results [5, 23] covered the situation (4.2) with \(\delta =-\frac{1}{2}\).
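The polar-coordinate verification of (iii) reduces to a condition on the radial exponent: in \(d=3\), \(|\nu ^{-1/2}v|^{2+\varepsilon }\sim r^{(2+\varepsilon )(\delta -1/2)}\) near \(k=0\), so the radial integrand scales like \(r^{(2+\varepsilon )(\delta -1/2)+2}\), which is integrable at the origin iff this exponent exceeds \(-1\). A small check that for every \(\delta >-1\) a suitable \(\varepsilon >0\) exists (the particular choice of \(\varepsilon \) below is ours, sufficient but not optimal):

```python
def radial_exponent(delta, eps):
    """Exponent of r in the radial integrand of |nu^{-1/2} v|^(2+eps) in d=3."""
    return (2 + eps) * (delta - 0.5) + 2

# for every delta > -1 one can choose eps > 0 small enough
for delta in (-0.99, -0.5, 0.0, 1.0):
    eps = min(1.0, delta + 1.0)
    assert eps > 0
    assert radial_exponent(delta, eps) > -1   # integrable at the origin
```

As \(\delta \downarrow -1\), the admissible \(\varepsilon \) shrinks to zero, consistent with \(\delta =-1\) being the borderline case.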

The method of proof relies on the approximation of the boson dispersion relation \(\nu \) by the infrared-regularized versions \(\nu _m= \sqrt{\nu ^2+m^2}\) with \(m>0\). We denote by \(H_m\) and \(E_m\) the objects defined in (2.7) and (2.21) with \(\omega \) replaced by \(\nu _m\). Since \(\inf _{k\in {\mathbb R}^d}\nu _m(k)\ge m >0\), the operator \(H_m(\lambda ,0)\) has a spectral gap for any \(m>0\) and hence also a ground state, cf. Theorem D.1. In the recent paper [26], we showed the following result, which together with Corollary 2.14 gives a proof of Theorem 4.1.

Theorem 4.4

( [26]) Assume Hypothesis C holds and let \(\lambda \in {\mathbb R}\). If for small \(m > 0\) the function \(\mu \mapsto E_m(\lambda ,\mu )\) is twice differentiable at zero and \(\limsup _{m\downarrow 0} | \partial _\mu ^2 E_m(\lambda ,0)|<\infty \), then \(H_\lambda \) as defined in (4.1) has a ground state.

We conclude with the proof of the existence of ground states.

Proof of Theorem 4.1

Applying Corollary 2.14 to the function \(\nu \), we see that the assumptions of Theorem 4.4 are satisfied. This proves the theorem. \(\square \)