1 Introduction

Adiabatic quantum dynamics. The adiabatic theorem is a fundamental result in quantum mechanics, dating back to the work of Born and Fock [14] and Kato [37]. Let us review its basic statement. Let H(s) be a family of time-dependent Hamiltonians, depending smoothly on the time parameter s for \(s \in [-1,0]\). We shall suppose that H(s) has a unique ground state \(\varphi _{s}\), and that the energy of the ground state is separated from the rest of the spectrum by a positive gap. We are interested in the adiabatic regime, defined as follows. Let \(\eta > 0\) and consider the time-dependent Schrödinger equation:

$$\begin{aligned} i\partial _{t} \psi (t) = H(\eta t) \psi (t),\qquad t\in [-1/\eta , 0]. \end{aligned}$$
(1.1)

Suppose that at the initial time the system is prepared in the ground state of the Hamiltonian, \(\psi (-1/\eta ) = \varphi _{-1}\). We are interested in the evolution of such an initial datum under (1.1), in the adiabatic limit \(\eta \rightarrow 0^{+}\). The adiabatic theorem states that:

$$\begin{aligned} \Big \Vert \psi (t) - \langle \psi (t), \varphi _{\eta t} \rangle \varphi _{\eta t} \Big \Vert \le C\eta \qquad \text {for all } t\in [-1/\eta , 0]. \end{aligned}$$
(1.2)

This implies that, at all times t and for \(\eta \) small enough, the solution of the time-dependent Schrödinger Eq. (1.1) is approximated by the instantaneous ground state, possibly up to a phase. This important result has been applied to study a wide class of physical systems, see [47] for a monograph on the topic. It has been generalized to a class of contracting evolutions that includes the Schrödinger Eq. (1.1) as a special case, see [6] and references therein.

The constant C in Eq. (1.2) depends on the details of the model, in particular on the regularity of H(s). The regularity of H(s) is typically quantified via an estimate for \(\Vert {\dot{H}}(s) \Vert \). This quantity is badly behaved in situations in which the Hamiltonian describes a many-body system, say an interacting Fermi gas on a lattice \(\Lambda _{L} = [0,L]^{d} \cap {\mathbb {Z}}^{d}\), due to the fact that the norm of the Hamiltonian and of its derivatives grows linearly with the size of the system. Thus, the standard adiabatic theorem fails to describe the evolution of many-body quantum systems for \(\eta \) small uniformly in L.
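
To make the obstruction concrete, consider the following schematic example (ours, not taken from the cited references): an on-site driving of spinless lattice fermions,

$$\begin{aligned} H(s) = f(s) \sum _{x \in \Lambda _{L}} n_{x}, \qquad n_{x} \text { the occupation number at site } x, \end{aligned}$$

with f smooth. Then \(\Vert \dot{H}(s) \Vert = |\dot{f}(s)|\, |\Lambda _{L}|\), since the fully occupied state saturates the norm of \(\sum _{x} n_{x}\); any error bound proportional to \(\sup _{s} \Vert \dot{H}(s) \Vert \) therefore degenerates as \(L \rightarrow \infty \).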

This is not a technical point. In fact, it turns out that the notion of convergence in Eq. (1.2) is not the natural one for many-body systems: one cannot expect norm convergence for extensive many-body quantum systems uniformly in their size, see for instance the discussion in [8]. Instead, a much more natural notion of convergence involves the expectation value of local observables, which only probe a finite region in space. In this setting, a many-body adiabatic theorem for quantum spin systems has recently been proved in [7]. The result was then extended to Fermi gases in [43]. Specifically, let \({\mathscr {H}}(s)\) be a time-dependent Hamiltonian for a quantum spin system, or for lattice fermions, on a large but finite lattice \(\Lambda _{L} \subset {\mathbb {Z}}^{d}\). Suppose that \({\mathscr {H}}(s)\) has a spectral gap for all times \(s \in [-1, 0]\), and let \(\Pi _{L}(s)\) be the projector associated with the ground state of \({\mathscr {H}}(s)\) on \(\Lambda _{L}\). Let \(P_{L}(t)\) be the solution of the evolution equation:

$$\begin{aligned} i\partial _{t} P_{L}(t) = [ {\mathscr {H}}(\eta t), P_{L}(t)],\qquad P_{L}(-1/\eta ) = \Pi _{L}(-1). \end{aligned}$$
(1.3)

Consider the expectation value of a local operator on the time-dependent state, \({\text {Tr}}{\mathscr {O}}_{X} P_{L}(t)\). Then, under reasonable regularity and locality assumptions on the Hamiltonian, the many-body adiabatic theorem states that [7, 43]:

$$\begin{aligned} \Big | {\text {Tr}}{\mathscr {O}}_{X} P_{L}(t) - {\text {Tr}}{\mathscr {O}}_{X} \Pi _{L}(\eta t)\Big | \le C\eta \qquad \text {for all } t\in [-1/\eta , 0], \end{aligned}$$
(1.4)

where the constant C depends on the observable \({\mathscr {O}}_{X}\), but is independent of L. An important application of this result is the proof of validity of linear response for extended, many-body quantum systems. To introduce the notion of linear response, let us further assume that the many-body Hamiltonian has the form:

$$\begin{aligned} {\mathscr {H}}(\eta t) = {\mathscr {H}} + \varepsilon g(\eta t) {\mathscr {P}} \end{aligned}$$
(1.5)

where \({\mathscr {H}}\) and \({\mathscr {P}}\) are given by sums of local operators, and g(t) is a switch function, that is, a bounded function that decays fast enough at negative times. A standard choice is the exponential switch function, \(g(t) = e^{t}\). Consider the dynamics generated by (1.5) for \(t\in (-\infty , 0]\). Let \(P_{L}(t)\) be the solution of (1.3) with initial datum \(P_{L}(-\infty ) = \Pi _{L}(-\infty )\). Then, as proved in [7, 43] (see also the reviews [8, 32]):

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \lim _{\eta \rightarrow 0^{+}} \lim _{L\rightarrow \infty } \frac{1}{\varepsilon } \Big [ {\text {Tr}}{\mathscr {O}}_{X} P_{L}(0) - {\text {Tr}}{\mathscr {O}}_{X} P_{L}(-\infty ) \Big ] = \chi _{{\mathscr {O}},{\mathscr {P}}} \end{aligned}$$
(1.6)

where \(\chi _{{\mathscr {O}},{\mathscr {P}}}\) agrees with the well-known Kubo formula for linear response. The statement (1.6) holds provided the thermodynamic limit of \(P_{L}(-\infty )\) exists. In general, a similar result holds for \(\varepsilon , \eta , L\) fixed, where \(\varepsilon , \eta \) are small uniformly in L and where \(\varepsilon \) is small uniformly in \(\eta \). The Kubo formula is equivalent to the first order term in the Duhamel expansion for the non-autonomous evolution:

$$\begin{aligned} \chi _{{\mathscr {O}},{\mathscr {P}}} = -i\lim _{\eta \rightarrow 0^{+}} \lim _{L\rightarrow \infty }\int _{-\infty }^{0} dt\, g(\eta t) {\text {Tr}}\, \big [ {\mathscr {O}}_{X}, e^{i{\mathscr {H}}t}{\mathscr {P}}e^{-i{\mathscr {H}}t} \big ] P_{L}(-\infty ). \end{aligned}$$
(1.7)

These results have interesting applications in condensed matter physics. In particular, combined with [10, 26, 31], they make it possible to prove the quantization of the Hall conductivity for gapped interacting fermions starting from the fundamental many-body Schrödinger equation. Among other extensions obtained in the last few years we mention: the construction of non-equilibrium almost-stationary states and the application to the proof of validity of linear response for a class of perturbations that might close the spectral gap [48]; the proof of exactness of linear response for the quantum Hall effect [9]; the extension of the many-body adiabatic theorem to infinite systems with a bulk gap [33].

Despite all this progress, an important limitation of the existing approaches is that they do not allow the study of many-body quantum systems at positive temperature. In particular, the zero temperature limit is taken before the thermodynamic limit. It is of obvious physical relevance to consider the situation in which the thermodynamic limit is taken at fixed positive temperature, to make contact with experimental settings in which the temperature is possibly small but necessarily non-zero. In what follows, we will focus on interacting lattice fermionic models, which we shall describe in the grand-canonical Fock space formalism. We are interested in the following evolution equation:

$$\begin{aligned} i\partial _{t} \rho (t) = [ {\mathscr {H}}(\eta t), \rho (t) ],\qquad \rho (-\infty ) = \rho _{\beta , \mu , L}, \end{aligned}$$
(1.8)

with \(\rho _{\beta , \mu , L} = e^{-\beta ({\mathscr {H}} - \mu {\mathscr {N}})} / {\mathscr {Z}}_{\beta , \mu , L}\) the grand-canonical equilibrium Gibbs state of the Hamiltonian \({\mathscr {H}}\) at temperature \(T = 1/\beta \) and chemical potential \(\mu \). A natural question is to understand under which conditions the many-body evolution of the equilibrium state can be approximated by an instantaneous Gibbs state, in the sense of the expectation of local observables. For instance, one would like to understand under which conditions

$$\begin{aligned} {\text {Tr}}{\mathscr {O}}_{X} \rho (t) = \frac{{\text {Tr}}\, e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} {\mathscr {O}}_{X}}{ {\text {Tr}}\, e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}}) }} + o(1) \end{aligned}$$
(1.9)

with o(1) a quantity that vanishes as \(\eta \rightarrow 0^{+}\), uniformly in L (and possibly with a different temperature T than the one used to define the initial datum).

Our result. In this work, we introduce a different approach to study many-body quantum dynamics in the adiabatic regime, which applies to weakly interacting many-body systems at small positive temperature. We consider finite-range, time-dependent Hamiltonians of the form (1.5), under suitable assumptions discussed below. In our main result, Theorem 3.7, we derive a representation of \({\text {Tr}}{\mathscr {O}}_{X} \rho (t)\) via a convergent expansion in \(\varepsilon \), uniformly in \(\eta \) and in L, for small temperatures. “Small” means that the temperature parametrizing the initial Gibbs state is such that

$$\begin{aligned} T \ll |\varepsilon |^{-1} \eta ^{d+2}, \end{aligned}$$
(1.10)

uniformly in the size of the system. Under suitable assumptions on the decay of correlations of \({\mathscr {H}}\), the range of allowed \(\varepsilon \) for which convergence holds is also uniform in \(\beta \). These assumptions hold for example for finite-range Hamiltonians of the form

$$\begin{aligned} {\mathscr {H}}^{\lambda } = {\mathscr {H}}^{0} + \lambda {\mathscr {V}} \end{aligned}$$
(1.11)

with \({\mathscr {H}}^{0}\) the second-quantization of a gapped Hamiltonian, \({\mathscr {V}}\) a bounded local many-body interaction and \(|\lambda |\) small. This is the type of model considered, e.g., in [26], where the universality of the Hall conductivity for weakly interacting Fermi systems was proved. This class of weakly interacting systems can be analyzed via fermionic cluster expansion techniques, which make it possible to prove essentially optimal estimates for the decay of the Euclidean correlation functions. For these models, the assumptions on the equilibrium Euclidean correlations required by Theorem 3.7 actually hold at positive temperature even without a gap condition on \({\mathscr {H}}^{0}\); however, in this case one is forced to consider a range of \(\lambda , \varepsilon \) that shrinks as \(T \rightarrow 0\) (but still uniformly in L).

Our method then allows us to prove the validity of an adiabatic theorem for local observables, in the form of Eq. (1.9), for small temperatures in the sense of (1.10). In particular, the zero-temperature many-body adiabatic theorem (1.4) is recovered by taking the limit \(\beta \rightarrow \infty \) at finite L. Furthermore, the method can also be used to prove the validity of linear response, and more generally to compute all higher-order response coefficients in terms of equilibrium correlations, see Corollary 3.11.

The proof is based on a rigorous Wick rotation, which makes it possible to rewrite the Duhamel expansion for the quantum evolution of the system in terms of time-ordered, Euclidean (or imaginary-time) connected correlation functions. Previously, this idea has been used to rigorously study the linear and quadratic response in a number of interacting gapped or gapless systems [5, 26, 29, 42]. Here, we extend this strategy to all orders in the Duhamel expansion for the time-evolution of the state, and we use it to prove convergence of the Duhamel series for the real-time dynamics.

The method applies to a class of switch functions \(g(\eta t)\) that can be approximated, for \(\beta \) large, by functions \(g_{\beta , \eta }(t)\), decaying rapidly for \(t \rightarrow -\infty \), such that \(g_{\beta ,\eta }(t) = g_{\beta ,\eta }(t-i\beta )\). This periodicity plays a key role in the proof of the Wick rotation. This requirement of course restricts the class of switch functions that we are able to consider; however, let us anticipate that this assumption holds for the standard exponential switch function, and more generally for the Laplace transform of suitable integrable functions.
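
A minimal check of this periodicity requirement, anticipating the construction of Definition 3.5 below: any single Matsubara mode \(e^{\omega t}\) with \(\omega \in \frac{2\pi }{\beta } {\mathbb {N}}\) is invariant under the shift \(t \mapsto t - i\beta \),

$$\begin{aligned} e^{\omega (t - i\beta )} = e^{\omega t}\, e^{-i\omega \beta } = e^{\omega t}, \qquad \omega = \frac{2\pi m}{\beta },\; m \in {\mathbb {N}}, \end{aligned}$$

and the approximants \(g_{\beta ,\eta }(t)\) that we shall use are precisely absolutely convergent superpositions of such modes.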

Our method is completely different from that used in previous works on adiabatic theorems [7, 43], and it gives access to small positive temperatures. With respect to the existing results, however, we assume that the time-dependent perturbation is slowly varying and weak, since our method is ultimately based on a convergent expansion in \(\varepsilon \), whereas in the previous works [7, 43] it is only assumed that the time-dependent Hamiltonian is slowly varying. The works [7, 43] further assume that the ground state of the time-dependent Hamiltonian \({\mathscr {H}}(\eta t)\) is separated from the rest of the spectrum by a uniform spectral gap, for all times. While we do not make this assumption, for the aforementioned example (1.11) such a gap can also be proved to exist for small \(|\lambda |\) and small \(|\varepsilon |\) [20].

Besides the result itself, we believe that a relevant contribution of the present work is to import methods developed for interacting fermionic models at equilibrium into the study of real-time quantum dynamics. Looking ahead, we think that, if combined with rigorous renormalization group techniques (see [12, 40, 46] for reviews), the approach of this paper could be extended to study the evolution of the Gibbs state of metallic or semimetallic systems, where the Fermi energy of the initial datum is not separated from the spectrum of the Hamiltonian uniformly in the size of the system. There, one does not expect an adiabatic theorem to hold; however, one might still have a convergent series expansion for the expectation of local observables in terms of Euclidean correlations, in a physically relevant range of parameters. This would be useful to establish the validity of linear response for gapless systems, widely used in applications.

Specifically, the combination of cluster expansion with rigorous renormalization group has recently made it possible to study the low-temperature properties of a wide class of interacting gapless systems, and in particular to access their transport coefficients defined in the framework of linear response. Among the recent works, we mention the construction of the ground state of the two-dimensional Hubbard model on the honeycomb lattice [24] and the proof of universality of the longitudinal conductivity of graphene [25]; the construction of the topological phase diagram of the Haldane-Hubbard model [27, 28]; the proof of non-renormalization of the chiral anomaly of Weyl semimetals [29]; the proof of Luttinger liquid behavior for interacting edge modes of two-dimensional topological insulators and the proof of universality of the edge conductance [5, 41, 42]. It would be very interesting to prove the validity of linear response in the setting considered in these works, starting from many-body quantum dynamics. Renormalization group methods rely on translation invariance and on periodic boundary conditions. It would be interesting to consider a larger class of boundary conditions, for example by adapting the methods of [4].

Furthermore, it would be interesting to extend the methods presented in this work in the direction of studying spin transport, and prove the validity of linear response for spin-noncommuting many-body Hamiltonians. For non-interacting models, recent progress has been obtained in [38, 39].

The adiabatic evolution of positive temperature quantum systems has been studied in recent years, e.g. in [1, 2, 34]. The setting considered in these works is however different from the one of the present paper. The authors of [1, 2, 34] consider a small system coupled to reservoirs, and study the dynamics of the small system when the coupling with the reservoirs is introduced slowly in time. The key technical tool introduced in [1, 2, 34] is an isothermal adiabatic theorem, which proves norm convergence of the evolved equilibrium state to the instantaneous equilibrium state of the perturbed system, in the adiabatic limit. The result holds under a suitable ergodicity assumption, which, as far as we know, has not been proved for the class of extended, interacting Fermi systems considered here. Finally, we mention the recent works [35, 36] showing that the validity of a many-body adiabatic theorem for quantum spin systems in the thermodynamic limit at fixed positive temperature and as \(\eta \rightarrow 0^{+}\) is incompatible with the general notion of approach to equilibrium. We plan to further investigate the connection of our work with [35, 36] in the future.

Ideas of the proof. Let us give a few more details about the method introduced in this paper. The proof starts by approximating the real-time dynamics generated by \({\mathscr {H}}(\eta t)\) by a suitable auxiliary dynamics, obtained from \({\mathscr {H}}(\eta t)\) by replacing the switch function \(g(\eta t)\) by a function \(g_{\beta ,\eta }(t)\) such that \(\lim _{\beta \rightarrow \infty } g_{\beta ,\eta }(t) = g(\eta t)\) and \(g_{\beta ,\eta }(t) = g_{\beta ,\eta }(t-i\beta )\). This approximation of course introduces an error, whose influence on the expectation values of local observables is estimated via Lieb-Robinson bounds for non-autonomous quantum dynamics [13, 16]. This error is responsible for the main limitation in the range of temperatures that we are able to consider. The advantage of replacing \(g(\eta t)\) with \(g_{\beta , \eta }(t)\) is that it lets us write the Duhamel series in \(\varepsilon \) for the auxiliary evolution exactly in terms of Euclidean correlations, implementing a Wick rotation. This is made possible mainly because the periodicity of \(g_{\beta ,\eta }\) implies that the Kubo-Martin-Schwinger (KMS) identity remains true for the thermal expectation of modified observables of the form \(g_{\beta , \eta }(t) e^{i{\mathscr {H}}t}{\mathscr {O}}_{X}e^{-i{\mathscr {H}}t}\). Once the Duhamel series is represented in terms of Euclidean correlations, the convergence of the series follows from the good decay properties of Euclidean correlations. For weakly perturbed gapped models we use cluster expansion techniques to verify these assumptions. Finally, the connection with the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\) is obtained by noticing that, for \(\eta \) small, the Wick-rotated Duhamel series agrees with the equilibrium perturbation theory in \(\varepsilon \) for the Gibbs state of the Hamiltonian \({\mathscr {H}} + \varepsilon g(\eta t) {\mathscr {P}}\).
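
Schematically, and in the notation introduced in Sect. 2 below (\(\tau _{t}\) is the Heisenberg evolution generated by \({\mathscr {H}}\), and \(\langle \cdot \rangle _{\beta ,\mu ,L}\) the Gibbs state), the mechanism is the following: for gauge-invariant observables, the finite-volume KMS identity combined with the periodicity \(g_{\beta ,\eta }(t) = g_{\beta ,\eta }(t - i\beta )\) gives

$$\begin{aligned} \big \langle {\mathscr {B}}\, g_{\beta ,\eta }(t)\, \tau _{t}({\mathscr {O}}_{X}) \big \rangle _{\beta ,\mu ,L} = \big \langle g_{\beta ,\eta }(t - i\beta )\, \tau _{t - i\beta }({\mathscr {O}}_{X})\, {\mathscr {B}} \big \rangle _{\beta ,\mu ,L}, \end{aligned}$$

so that the damped observable \(g_{\beta ,\eta }(t) \tau _{t}({\mathscr {O}}_{X})\) obeys the same KMS-type relation as an undamped one, with the damping factor evaluated at the shifted time; this is the identity underlying the Wick rotation described above.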

An important ingredient of our proof is the complex deformation argument of Propositions 4.4, 4.5, which allows us to prove the Wick rotation at all orders in the Duhamel series. Propositions 4.4, 4.5 are the adaptation of Propositions 5.4.12, 5.4.13 of [15] to our adiabatic setting. The main difference with respect to [15] is that in our case the observables involved in the correlations are “damped” in time by \(g_{\beta ,\eta }(t)\): this allows us to rule out the presence of spurious boundary terms at infinity in the complex deformation argument. In [15], these boundary terms are controlled by a suitable clustering assumption on the real-time correlation functions of the equilibrium state, which is very hard to verify for interacting models in the infinite volume limit. We are not aware of any result in this direction for the class of many-body lattice models considered in the present work.

Structure of the paper. The paper is organized as follows. In Sect. 2 we introduce the class of models considered in this work, define their Gibbs state and Euclidean correlation functions, and define the quantum dynamics. In Sect. 3 we state our main result, Theorem 3.7, which provides a representation for the average of the real-time evolution of local observables via a convergent expansion in \(\varepsilon \). As an application, this representation establishes a many-body adiabatic theorem for the evolution of thermal states at low temperature. A relevant consequence of the proof of our main result is the validity of linear response, Corollary 3.11. The proof of the main result will be given in Sect. 4. In Appendix A we further discuss the class of switch functions considered in the present work; in Appendix B we discuss some well-known properties of time-ordered Euclidean correlations, which we include for completeness; and in Appendix C we review the verification of our Assumption 3.1, which is known to hold for many-body systems at zero temperature and with a spectral gap, or at positive temperature. This is done using a fermionic cluster expansion, whose convergence is guaranteed by the Brydges-Battle-Federbush-Kennedy formula for cumulants.

2 The Model

In this section we define the class of models we shall consider in this paper. We will focus on lattice fermionic systems, with finite-range interactions. We will then define the time-evolution of such systems, after introducing a time-dependent perturbation.

Remark 2.1

Unless otherwise specified, the constants C, K, etc. appearing in the bounds do not depend on \(\beta , L, \eta , \varepsilon \) and on time. Their values might change from line to line. Also, it will be understood that the natural numbers \({\mathbb {N}}\) include zero.

2.1 Lattice fermions

Let \(\Gamma \) be a d-dimensional lattice, namely

$$\begin{aligned} \Gamma =\text {Span}_{{\mathbb {Z}}}\{a_1,\dots , a_d\} \cong {\mathbb {Z}}^{d}, \end{aligned}$$

where \(a_1,\dots , a_d\) are d linearly independent vectors in \({\mathbb {R}}^{d}\). Let \(L\in {\mathbb {N}}\), \(L>0\). We define the lattice dilated by L as \(L\Gamma := \text {Span}_{L{\mathbb {Z}}}\{a_1,\dots , a_d\} \cong L {\mathbb {Z}}^{d}\). The finite torus of side L is defined as \(\Gamma _L:=\Gamma /(L\Gamma )\), that is:

$$\begin{aligned} \Gamma _L\cong \left\{ \sum _{i=1}^d n_i a_i\, \Big |\, n_i\in {\mathbb {Z}},\; 0\le n_i<L \right\} \end{aligned}$$

with periodic boundary conditions. The Euclidean distance on the torus \(\Gamma _L\) is given by

$$\begin{aligned} \Vert x-y\Vert _{L}:=\min _{v\in (L\Gamma )}\Vert x - y+v\Vert ,\qquad \forall \,x,y\in \Gamma _L. \end{aligned}$$
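
For instance, in \(d=1\) with \(a_{1} = 1\) and \(L = 10\),

$$\begin{aligned} \Vert 9 - 1 \Vert _{10} = \min _{v \in 10 {\mathbb {Z}}} |8 + v| = \min \{ 8, 2 \} = 2, \end{aligned}$$

reflecting the periodic identification of the torus.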

We shall denote by \(M\in {\mathbb {N}}\), \(M>0\) the total number of internal degrees of freedom of a particle. This might take into account the spin degrees of freedom, or sublattice labels. Setting \(S_M:=\{1,\dots ,M\}\), we define:

$$\begin{aligned} \Lambda _L:=\Gamma _L\times S_M. \end{aligned}$$

We equip \(\Lambda _L\) with the following distance, which only takes into account the spatial coordinates. For any \(\textbf{x}=(x,\sigma ), \textbf{y}=(y,\sigma ')\in \Lambda _L\), we define:

$$\begin{aligned} \Vert \textbf{x} - \textbf{y} \Vert _{L}:= \Vert x- y \Vert _{L}. \end{aligned}$$
(2.1)

We shall describe fermionic particles on \(\Lambda _{L}\), in a grand-canonical setting. To this end, we introduce the fermionic Fock space, as follows. Let the one-particle Hilbert space be \({\mathfrak {h}}_L:=\ell ^2(\Lambda _L)\). The corresponding N-particle Hilbert space is its N-fold anti-symmetric tensor product \({\mathfrak {H}}_{L,N}:= {\mathfrak {h}}_{L}^{\wedge N}\); notice that the antisymmetric tensor product is trivial whenever \(N > M L^{d}\). The fermionic Fock space is defined as usual:

$$\begin{aligned} {\mathscr {F}}_L:=\bigoplus _{N=0}^{M L^d}{\mathfrak {H}}_{L,N},\qquad \text {where } {\mathfrak {H}}_{L,0}:={\mathbb {C}}. \end{aligned}$$

For finite L, the fermionic Fock space is a finite-dimensional vector space. Thus, any linear operator on \({\mathscr {F}}_L\) into itself is automatically bounded, and can be represented as a matrix. For any \(\textbf{x} \in \Lambda _{L}\), let \(a_{\textbf{x}}\) and \(a^*_{\textbf{x}}\) be the standard fermionic annihilation and creation operators, satisfying the canonical anti-commutation relations:

$$\begin{aligned} \{a_{\textbf{x}},a^*_{\textbf{y}}\}=\delta _{\textbf{x}, \textbf{y}}\mathbb {1}\quad \text { and }\quad \{ a_{\textbf{x}},a_{\textbf{y}} \}=0=\{a^*_{\textbf{x}},a^*_{\textbf{y}}\}. \end{aligned}$$

For any subset \(X\subseteq \Lambda _L\), we denote by \({\mathscr {A}}_X\) the algebra of polynomials over \({\mathbb {C}}\) generated by the fermionic operators restricted to X, \(\{a_{\textbf{x}}, a^*_{\textbf{x}}\,:\, \textbf{x} \in X \}\). An example of operator in \({\mathscr {A}}_{\Lambda _{L}}\) is the number operator, defined as:

$$\begin{aligned} {\mathscr {N}}:= \sum _{\textbf{x}\in \Lambda _L} a_{\textbf{x}}^* a_{\textbf{x}}. \end{aligned}$$

The operator \({\mathscr {N}}\) counts how many particles are present in a given sector of the Fock space: given \(\psi \in {\mathscr {F}}_L\), it acts as

$$\begin{aligned} {\mathscr {N}} \psi = (0 \psi ^{(0)}, 1 \psi ^{(1)}, \ldots , n \psi ^{(n)},\ldots ). \end{aligned}$$

We shall denote by \({\mathscr {A}}_X^{\mathscr {N}}\) the subset of \({\mathscr {A}}_X\) consisting of operators commuting with \({\mathscr {N}}\), also called gauge-invariant operators. Equivalently, these operators consist of polynomials in the creation and annihilation operators where the number of creation operators equals the number of annihilation operators.

It is clear that any self-adjoint operator \({\mathscr {O}} \in {\mathscr {A}}_{\Lambda _{L}}\) can be represented as

$$\begin{aligned} {\mathscr {O}} = \sum _{X \subseteq \Lambda _{L}} {\mathscr {O}}_{X}, \end{aligned}$$
(2.2)

where \({\mathscr {O}}_{X} \in {\mathscr {A}}_{X}\) and \({\mathscr {O}}_{X} = {\mathscr {O}}_{X}^{*}\). As L varies, the operator \({\mathscr {O}}\) actually denotes a sequence of operators. In particular, the operators \({\mathscr {O}}_{X}\) in (2.2) might depend on L. With a slight abuse of notation, we will not display explicitly such dependence. Notice that if \(X\cap Y = \emptyset \), and if \({\mathscr {O}}_{X}\) and \({\mathscr {O}}_{Y}\) are even in the number of fermionic creation and annihilation operators,

$$\begin{aligned} [{\mathscr {O}}_{X}, {\mathscr {O}}_{Y}] = 0. \end{aligned}$$
(2.3)

Finally, let us define the notion of finite-range operators. Given \(X\subseteq \Lambda _{L}\), the diameter of X is defined as:

$$\begin{aligned} {\text {diam}}(X):=\max _{\textbf{x}, \textbf{y} \in X} \Vert \textbf{x} - \textbf{y} \Vert _{L}. \end{aligned}$$

Definition 2.2

(Finite-range operators). We say that \({\mathscr {O}} \in {\mathscr {A}}_{\Lambda _{L}}\) is a finite-range operator if the following holds true. There exists \(R>0\) independent of L such that \({\mathscr {O}}_{X} = 0\) whenever \(\text {diam}(X) > R\). Furthermore, there exists a constant \(S>0\) independent of L such that, for all \(X\subseteq \Lambda _{L}\):

$$\begin{aligned} \Vert {\mathscr {O}}_{X} \Vert \le S. \end{aligned}$$

Examples of finite-range operators introduced below are the Hamiltonian \({\mathscr {H}}\) and the perturbation \({\mathscr {P}}\).

2.2 Dynamics

Hamiltonian and Gibbs state. The Hamiltonian \({\mathscr {H}}\) is a self-adjoint, finite-range operator in \({\mathscr {A}}_{\Lambda _{L}}^{\mathscr {N}}\). The Heisenberg time-evolution of an observable \({\mathscr {O}} \in {\mathscr {A}}_{\Lambda _{L}}\) generated by \({\mathscr {H}}\) is, for \(t\in {\mathbb {R}}\):

$$\begin{aligned} \tau _{t}({\mathscr {O}}):= e^{i{\mathscr {H}} t} {\mathscr {O}} e^{-i{\mathscr {H}} t}. \end{aligned}$$
(2.4)

Later, we will also consider the Heisenberg evolution for complex times t, whose definition poses no problem due to the finite-dimensionality of the Hilbert space.

An example of a Hamiltonian which will play an important role in this work is

$$\begin{aligned} \sum _{\textbf{x},\textbf{y} \in \Lambda _{L}} a^{*}_{\textbf{x}} H(\textbf{x}; \textbf{y}) a_{\textbf{y}} + \sum _{\textbf{x},\textbf{y} \in \Lambda _{L}} a^{*}_{\textbf{x}} a^{*}_{\textbf{y}} v(\textbf{x};\textbf{y}) a_{\textbf{y}} a_{\textbf{x}}, \end{aligned}$$
(2.5)

with \(H(\textbf{x}; \textbf{y})\) and \(v(\textbf{x};\textbf{y})\) finite-range, that is, both \(H(\textbf{x}; \textbf{y})\) and \(v(\textbf{x};\textbf{y})\) vanish whenever \(\Vert \textbf{x} - \textbf{y}\Vert _{L} > R\). More generally, we shall say that \({\mathscr {H}}\) is the Hamiltonian for a weakly interacting lattice model if it has the form:

$$\begin{aligned} \sum _{\textbf{x},\textbf{y} \in \Lambda _{L}} a^{*}_{\textbf{x}} H(\textbf{x}; \textbf{y}) a_{\textbf{y}} + \lambda {\mathscr {V}} \end{aligned}$$
(2.6)

with \(\lambda \in {\mathbb {R}}\), \(|\lambda |\) small in a sense to be made precise, and \({\mathscr {V}}\) finite-range and of degree higher than two in the fermionic operators. We shall say that the non-interacting Hamiltonian \({\mathscr {H}}^{0} = \sum _{\textbf{x},\textbf{y} \in \Lambda _{L}} a^{*}_{\textbf{x}} H(\textbf{x}; \textbf{y}) a_{\textbf{y}}\) is gapped if the spectrum of H has a spectral gap uniformly in L.
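
A concrete example of such an interaction, given here only for illustration, is the on-site (Hubbard-type) term for \(M=2\) internal states,

$$\begin{aligned} {\mathscr {V}} = \sum _{x \in \Gamma _{L}} a^{*}_{(x,1)} a_{(x,1)}\, a^{*}_{(x,2)} a_{(x,2)}, \end{aligned}$$

which is gauge-invariant, finite-range (with \(R=0\)) and satisfies \(\Vert {\mathscr {V}}_{X} \Vert \le 1\) for all \(X \subseteq \Lambda _{L}\).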

Given \(\beta > 0\), \(\mu \in {\mathbb {R}}\), the grand-canonical equilibrium state \(\langle \cdot \rangle _{\beta , \mu , L}\) associated with the Hamiltonian \({\mathscr {H}}\), also called equilibrium Gibbs state, is defined as:

$$\begin{aligned} \langle \cdot \rangle _{\beta , \mu , L}:= {\text {Tr}}\cdot \rho _{\beta , \mu , L},\quad \rho _{\beta , \mu , L}:= \frac{e^{-\beta ({\mathscr {H}} - \mu {\mathscr {N}})}}{{\mathscr {Z}}_{\beta , \mu , L}},\quad {\mathscr {Z}}_{\beta , \mu , L}:= {\text {Tr}}\, e^{-\beta ({\mathscr {H}} - \mu {\mathscr {N}})}, \end{aligned}$$

where the trace is over the fermionic Fock space \({\mathscr {F}}_{L}\). Obviously, the Gibbs state is invariant under time evolution:

$$\begin{aligned} \langle {\mathscr {O}} \rangle _{\beta , \mu , L} = \langle \tau _{t}({\mathscr {O}}) \rangle _{\beta , \mu , L}\qquad \forall t\in {\mathbb {C}}. \end{aligned}$$

It will also be convenient to define the imaginary-time, or Euclidean, evolution of \({\mathscr {O}}\) as:

$$\begin{aligned} \gamma _{t}({\mathscr {O}}):= e^{t({\mathscr {H}} - \mu {\mathscr {N}})} {\mathscr {O}} e^{-t({\mathscr {H}} - \mu {\mathscr {N}})}\qquad t\in {\mathbb {R}}. \end{aligned}$$
(2.7)

For \({\mathscr {O}} \in {\mathscr {A}}^{{\mathscr {N}}}_{\Lambda _{L}}\), one has

$$\begin{aligned} \gamma _{t}({\mathscr {O}}) = \tau _{-it}({\mathscr {O}}) \end{aligned}$$
(2.8)

(the restriction to \( {\mathscr {A}}^{{\mathscr {N}}}_{\Lambda _{L}}\) is needed because \(\gamma \), unlike \(\tau \), includes a chemical potential term). Notice that the imaginary-time evolution is no longer unitary, and the norm of \(\gamma _{t}({\mathscr {O}})\) might grow in time. Finally, the following property, also called KMS identity, holds:

$$\begin{aligned} \langle \gamma _{t_{1}}({\mathscr {O}}_{1}) \gamma _{t_{2}}({\mathscr {O}}_{2}) \rangle _{\beta , \mu , L} = \langle \gamma _{t_{2} + \beta }({\mathscr {O}}_{2})\gamma _{t_{1}}({\mathscr {O}}_{1}) \rangle _{\beta , \mu , L} \end{aligned}$$
(2.9)

for any \({\mathscr {O}}_{1}\) and \({\mathscr {O}}_{2}\) in \({\mathscr {A}}_{\Lambda _{L}}\). For finite L, which is our case, this identity simply follows from the definition of Gibbs state, and from the cyclicity of the trace. In order for (2.9) to hold, it is crucial that the generator of the Euclidean dynamics \(\gamma _{t}\) includes the chemical potential term \(-\mu {\mathscr {N}}\) in its definition. Notice that the dynamics \(\gamma _{t}\) in (2.8) trivially extends to all complex times t; thus, the identity (2.9) actually holds replacing \(t_{1}, t_{2}\) by any two complex numbers \(z_{1}, z_{2}\). Equation (2.9) will play a fundamental role in our analysis.
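
Explicitly, setting for brevity \(K:= {\mathscr {H}} - \mu {\mathscr {N}}\) and using the cyclicity of the trace,

$$\begin{aligned} \langle \gamma _{t_{1}}({\mathscr {O}}_{1}) \gamma _{t_{2}}({\mathscr {O}}_{2}) \rangle _{\beta , \mu , L} = \frac{{\text {Tr}}\big [ e^{-\beta K}\, e^{\beta K} \gamma _{t_{2}}({\mathscr {O}}_{2}) e^{-\beta K}\, \gamma _{t_{1}}({\mathscr {O}}_{1}) \big ]}{{\mathscr {Z}}_{\beta , \mu , L}} = \langle \gamma _{t_{2}+\beta }({\mathscr {O}}_{2}) \gamma _{t_{1}}({\mathscr {O}}_{1}) \rangle _{\beta , \mu , L}, \end{aligned}$$

where in the first step we moved \(\gamma _{t_{2}}({\mathscr {O}}_{2})\) to the front by cyclicity and inserted \(\mathbb {1} = e^{-\beta K} e^{\beta K}\), and in the second step we used \(e^{\beta K} \gamma _{t_{2}}({\mathscr {O}}_{2}) e^{-\beta K} = \gamma _{t_{2}+\beta }({\mathscr {O}}_{2})\).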

Time ordering. Let \(t_1,\ldots ,t_n \in [0,\beta )\), and let \(a^{\sharp }_{\textbf{x}}\) be either \(a_{\textbf{x}}\) or \(a^{*}_{\textbf{x}}\). We define the time-ordering of the monomial \(\gamma _{t_1}(a^{\sharp _1}_{\textbf{x}_1})\cdots \gamma _{t_n}(a_{\textbf{x}_n}^{\sharp _n})\) as:

$$\begin{aligned} \textbf{T}\gamma _{t_1}(a^{\sharp _{1}}_{\textbf{x}_{1}}) \cdots \gamma _{t_n}(a^{\sharp _{n}}_{\textbf{x}_{n}}) = (-1)^{\pi } \mathbb {1}({t_{\pi {(1)}}} \ge \cdots \ge t_{\pi (n)})\gamma _{t_{\pi {(1)}}} (a^{\sharp _{\pi (1)}}_{\textbf{x}_{\pi (1)}}) \cdots \gamma _{t_{\pi (n)}}(a^{\sharp _{\pi (n)}}_{\textbf{x}_{\pi (n)}}), \end{aligned}$$
(2.10)

where \(\pi \) is the permutation that rearranges the times in decreasing order from the left, \((-1)^{\pi }\) is its sign, and \(\mathbb {1}(\text {condition})\) is equal to 1 if the condition is true and 0 otherwise. In case two or more times are equal, the ambiguity is resolved by putting the fermionic operators into normal order. Other resolutions of the ambiguity are of course possible; it is worth anticipating that in our applications this arbitrariness will play no role, since it involves a zero measure set of times. The above definition extends to operators in \({\mathscr {A}}_{\Lambda _{L}}\) by linearity. In particular, for \({\mathscr {O}}_{1}, \ldots , {\mathscr {O}}_{n}\) even in the number of creation and annihilation operators, we have:

$$\begin{aligned} \begin{aligned}&\textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \cdots \gamma _{t_{n}}({\mathscr {O}}_{n}) \\&\quad = \mathbb {1}(t_{\pi (1)} \ge t_{\pi (2)} \ge \cdots \ge t_{\pi (n)}) \gamma _{t_{\pi (1)}}({\mathscr {O}}_{\pi (1)}) \cdots \gamma _{t_{\pi (n)}}({\mathscr {O}}_{\pi (n)}). \end{aligned} \end{aligned}$$
(2.11)

The absence of the overall sign is due to the fact that the observables involve an even number of creation and annihilation operators.
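
For instance, for a single pair of fermionic operators and \(t_{1} \ne t_{2}\), the definition (2.10) reads

$$\begin{aligned} \textbf{T}\, \gamma _{t_{1}}(a_{\textbf{x}})\, \gamma _{t_{2}}(a^{*}_{\textbf{y}}) = \mathbb {1}(t_{1} > t_{2})\, \gamma _{t_{1}}(a_{\textbf{x}}) \gamma _{t_{2}}(a^{*}_{\textbf{y}}) - \mathbb {1}(t_{2} > t_{1})\, \gamma _{t_{2}}(a^{*}_{\textbf{y}}) \gamma _{t_{1}}(a_{\textbf{x}}), \end{aligned}$$

while for a pair of even observables the transposition carries no sign, consistently with (2.11).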

Euclidean correlation functions. Let \(t_{i} \in [0,\beta )\), for \(i=1,\ldots , n\). Given operators \({\mathscr {O}}_{1}, \ldots , {\mathscr {O}}_{n}\) in \({\mathscr {A}}_{\Lambda _{L}}\), we define the time-ordered Euclidean correlation function as:

$$\begin{aligned} \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \cdots \gamma _{t_{n}}({\mathscr {O}}_{n}) \rangle _{\beta , \mu , L}. \end{aligned}$$
(2.12)

From the definition of fermionic time-ordering, and from the KMS identity, it is not difficult to check that:

$$\begin{aligned} \begin{aligned}&\langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \cdots \gamma _{\beta }({\mathscr {O}}_{k}) \cdots \gamma _{t_{n}}({\mathscr {O}}_{n}) \rangle _{\beta , \mu , L} \\&\quad = (\pm 1) \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \cdots \gamma _{0}({\mathscr {O}}_{k}) \cdots \gamma _{t_{n}}({\mathscr {O}}_{n}) \rangle _{\beta , \mu , L}; \end{aligned} \end{aligned}$$
(2.13)

in the special case in which the operators involve an even number of creation and annihilation operators, which will be particularly relevant for our analysis, the overall sign is \(+1\). The property (2.13) allows one to extend the correlation functions periodically (sign \(+1\)) or antiperiodically (sign \(-1\)) to all times \(t_{i} \in {\mathbb {R}}\). From now on, when discussing time-ordered correlations we shall always assume that this extension has been taken, unless otherwise specified.

Next, we define the connected time-ordered Euclidean correlation functions, or time-ordered Euclidean cumulants, as:

$$\begin{aligned} \begin{aligned}&\langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}); \cdots ; \gamma _{t_{n}}({\mathscr {O}}_{n}) \rangle _{\beta ,\mu ,L} \\&\quad := \frac{\partial ^{n}}{\partial \lambda _{1} \cdots \partial \lambda _{n}} \log \Big \{ 1 + \sum _{I \subseteq \{1, 2,\ldots , n\}} \lambda (I) \langle \textbf{T} {\mathscr {O}}(I) \rangle _{\beta ,\mu ,L} \Big \}\Big |_{\lambda _{i} = 0} \end{aligned} \end{aligned}$$
(2.14)

where I is a non-empty ordered subset of \(\{1, 2,\ldots , n\}\), \(\lambda (I) = \prod _{i\in I} \lambda _{i}\) and \({\mathscr {O}}(I) = \prod _{i\in I} \gamma _{t_{i}}({\mathscr {O}}_{i})\). For \(n=1\), this definition reduces to \(\langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \rangle \equiv \langle \gamma _{t_{1}}({\mathscr {O}}_{1}) \rangle = \langle {\mathscr {O}}_{1} \rangle \), while for \(n=2\) one gets \(\langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}); \gamma _{t_{2}}({\mathscr {O}}_{2}) \rangle = \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \gamma _{t_{2}}({\mathscr {O}}_{2}) \rangle - \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \rangle \langle \textbf{T} \gamma _{t_{2}}({\mathscr {O}}_{2}) \rangle \). More generally, the following relation between correlation functions and connected correlation functions holds true:

$$\begin{aligned} \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}_{1}) \cdots \gamma _{t_{n}}({\mathscr {O}}_{n}) \rangle _{\beta ,\mu ,L} = \sum _{P} \prod _{J\in P} \langle \textbf{T} \gamma _{t_{j_{1}}}({\mathscr {O}}_{j_{1}}); \cdots ; \gamma _{t_{j_{|J|}}}({\mathscr {O}}_{j_{|J|}}) \rangle _{\beta ,\mu ,L}, \end{aligned}$$

where the sum runs over all partitions P of \(\{1, 2, \ldots , n\}\) into ordered subsets, and J denotes an element of the partition P, \(J = \{ j_{1}, \ldots , j_{|J|} \}\).
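
For instance, for \(n=3\), and suppressing the state subscripts,

$$\begin{aligned} \begin{aligned} \langle \textbf{T}\, \gamma _{t_{1}}({\mathscr {O}}_{1}) \gamma _{t_{2}}({\mathscr {O}}_{2}) \gamma _{t_{3}}({\mathscr {O}}_{3}) \rangle&= \langle \textbf{T}\, \gamma _{t_{1}}({\mathscr {O}}_{1}); \gamma _{t_{2}}({\mathscr {O}}_{2}); \gamma _{t_{3}}({\mathscr {O}}_{3}) \rangle + \langle \textbf{T}\, \gamma _{t_{1}}({\mathscr {O}}_{1}); \gamma _{t_{2}}({\mathscr {O}}_{2}) \rangle \langle {\mathscr {O}}_{3} \rangle \\&\quad + \langle \textbf{T}\, \gamma _{t_{1}}({\mathscr {O}}_{1}); \gamma _{t_{3}}({\mathscr {O}}_{3}) \rangle \langle {\mathscr {O}}_{2} \rangle + \langle \textbf{T}\, \gamma _{t_{2}}({\mathscr {O}}_{2}); \gamma _{t_{3}}({\mathscr {O}}_{3}) \rangle \langle {\mathscr {O}}_{1} \rangle \\&\quad + \langle {\mathscr {O}}_{1} \rangle \langle {\mathscr {O}}_{2} \rangle \langle {\mathscr {O}}_{3} \rangle , \end{aligned} \end{aligned}$$

corresponding to the five partitions of \(\{1,2,3\}\).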

Driving the system out of equilibrium. We are interested in driving the system out of its initial equilibrium, by adding a slowly varying time-dependent perturbation to the Hamiltonian \({\mathscr {H}}\). We define, for \(t\le 0\):

$$\begin{aligned} {\mathscr {H}}(\eta t):= {\mathscr {H}} + g(\eta t) \varepsilon {\mathscr {P}}, \end{aligned}$$
(2.15)

where: \(\eta > 0\), \(\varepsilon \in {\mathbb {R}}\); \(g(\cdot )\) is a smooth function vanishing at \(-\infty \), whose further properties will be specified later on; and \({\mathscr {P}} \in {\mathscr {A}}_{\Lambda _{L}}^{{\mathscr {N}}}\) is a self-adjoint and finite-range operator. As an example, we might consider:

$$\begin{aligned} {\mathscr {P}} = \sum _{\textbf{x}\in \Lambda _{L}} \mu (\textbf{x}) a^{*}_{\textbf{x}} a_{\textbf{x}}, \end{aligned}$$

with \(\mu (\textbf{x})\) bounded uniformly in L. More generally, we will not require \({\mathscr {P}}\) to be quadratic in the fermionic operators.

The Hamiltonian \({\mathscr {H}}(\eta t)\) generates the following non-autonomous Schrödinger-von Neumann evolution:

$$\begin{aligned} i\partial _{t} \rho (t) = [ {\mathscr {H}}(\eta t), \rho (t) ],\qquad \rho (-\infty ) = \rho _{\beta , \mu , L},\qquad t\le 0. \end{aligned}$$
(2.16)

We shall denote by \({\mathscr {U}}(t;s)\) the unitary propagator generated by \({\mathscr {H}}(\eta t)\):

$$\begin{aligned} i\partial _{t} {\mathscr {U}}(t;s) = {\mathscr {H}}(\eta t) {\mathscr {U}}(t;s),\qquad {\mathscr {U}}(s;s) = \mathbb {1}. \end{aligned}$$
(2.17)

Using this unitary propagator, the solution of Eq. (2.16) can be written as

$$\begin{aligned} \rho (t) = {\mathscr {U}}(t;-\infty ) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-\infty )^{*}. \end{aligned}$$
(2.18)

Let \({\mathscr {O}}_{X} \in {\mathscr {A}}_{X}^{{\mathscr {N}}}\) be a local operator. We will be interested in studying its expectation value in the time-dependent state \({\text {Tr}}{\mathscr {O}}_{X} \rho (t)\). In particular, we will be interested in understanding the dependence of this quantity on the external perturbation, and in establishing the validity of linear response, uniformly in the size of the system.

3 Main Result

In what follows, we will consider Hamiltonians \({\mathscr {H}}(\eta t) = {\mathscr {H}} + \varepsilon g(\eta t) {\mathscr {P}}\) of the form introduced above. We shall denote by \(\langle \cdot \rangle _{t}\) the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\),

$$\begin{aligned} \langle {\mathscr {O}}_{X} \rangle _{t}:= \frac{{\text {Tr}}\, e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} {\mathscr {O}}_{X}}{{\text {Tr}}\, e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})}}. \end{aligned}$$
(3.1)

Our main result holds under the following assumptions on the Hamiltonian \({\mathscr {H}}\) (through its Gibbs state) and on the switch function g(t).

Assumption 3.1

(Integrability of time-ordered cumulants) Let \(S>0\), \(R>0\). For \(n\ge 1\) and for \(i = 1, \ldots , n+1\), let \({\mathscr {O}}^{(i)}\) be finite-range operators, such that \(\Vert {\mathscr {O}}^{(i)}_{X_{i}} \Vert \le S\) and \({\mathscr {O}}^{(i)}_{X_{i}} = 0\) for \(\text {diam}(X_{i}) > R\), uniformly in L. For all \(\beta > 0\), there exists a constant \({\mathfrak {c}} \equiv {\mathfrak {c}}(\beta , S, R) > 0\) such that the following holds, for all \(L \in {\mathbb {N}}\) and for all \(n\in {\mathbb {N}}\) and for all \(X \subseteq \Lambda _L\):

$$\begin{aligned} \int _{[0,\beta ]^{n}} d{\underline{t}}\, (1+|{\underline{t}}|_{\beta }) \sum _{X_{i} \subseteq \Lambda _{L}} \big | \big \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}^{(1)}_{X_1}); \cdots ; \gamma _{t_{n}}({\mathscr {O}}^{(n)}_{X_n}); {\mathscr {O}}^{(n+1)}_{X} \big \rangle _{\beta , \mu , L} \big | \le {\mathfrak {c}}^{n} n! \end{aligned}$$
(3.2)

where:

$$\begin{aligned} |{\underline{t}}|_{\beta }:= \sum _{i=1}^{n} \min _{m \in {\mathbb {Z}}} |t_{i} - m\beta |. \end{aligned}$$
(3.3)

Remark 3.2

  1. (i)

    For weakly interacting fermionic lattice models (recall Eq. (2.6)), Assumption 3.1 can be proved via cluster expansion techniques for \(|\lambda |\) small enough: the bound (3.2) holds true for all finite \(\beta \) and L, with a constant \({\mathfrak {c}}\) that might grow with \(\beta \) but is independent of L. Moreover, if the non-interacting Hamiltonian in Eq. (2.6) is gapped, and if the chemical potential \(\mu \) is chosen in the spectral gap, the bound (3.2) holds for \(|\lambda |\) small uniformly in \(\beta \), with a constant \({\mathfrak {c}}\) that is independent of \(\beta \). We shall review these facts in Appendix C. There, we shall focus on the case of local, quartic interactions; however, the method could also be applied to cover a larger class of local interactions. The same methods can actually be used to prove the stability of the spectral gap for many-body Hamiltonians [20].

  2. (ii)

    It is known that the existence of a spectral gap for the many-body Hamiltonian implies the spatial exponential decay of correlations [30]. In general, it would be interesting to understand whether the bound (3.2) can be established under the assumption of locality and of a spectral gap for the many-body Hamiltonian, with the same n-dependence as in the right-hand side of (3.2). We are not aware of any result in this direction, at zero or at positive temperature.

The next assumption specifies the class of switch functions g(t) that we are able to consider.

Assumption 3.3

(Properties of the switch function) We assume that g(t) has the form, for all \(t\le 0\):

$$\begin{aligned} g(t) = \int _{0}^{\infty } d\xi \, e^{\xi t} h(\xi )\qquad \text {with } h(\xi )\in L^{1}({\mathbb {R}}_{+}), \end{aligned}$$
(3.4)

and where h is a function such that

$$\begin{aligned} \int _0^{1} d \xi \,\frac{|h(\xi )|}{\xi ^{d+2}}< \infty ,\qquad \int _1^\infty d \xi \, \xi |h(\xi )| < \infty . \end{aligned}$$
(3.5)

Alternatively, the function \(h(\xi )\) can be replaced by a finite linear combination of Dirac delta distributions supported on \({\mathbb {R}}_{+}\).

Remark 3.4

Thus, g is the Laplace transform of the function h. As discussed in Appendix A, the properties (3.5) are implied by suitable decay properties of the function g(z) for complex times. Our setting allows us to include the function \(g(t) = e^{t}\), a widely used switch function in applications, by choosing \(h(\xi ) = \delta (\xi - 1)\).

Next, we introduce a suitable approximation of the switch function, which will play an important role in our analysis.

Definition 3.5

(Approximation of the switch function). Let \(\eta > 0\) and suppose that g(t) satisfies Assumption 3.3. We define:

$$\begin{aligned} \begin{aligned} g_{\beta ,\eta }(t)&:= \sum _{m = 0}^\infty \int _{\frac{2\pi }{\beta \eta } m}^{\frac{2\pi }{\beta \eta }(m+1)} d\xi \, h(\xi ) e^{\frac{2\pi }{\beta \eta }(m+1) \eta t} \\&\equiv \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \tilde{g}_{\beta ,\eta }(\omega ) e^{\omega t}, \end{aligned} \end{aligned}$$
(3.6)

where \({\tilde{g}}_{\beta ,\eta }(0):= 0\) and for \(\omega \ge \frac{2\pi }{\beta }\):

$$\begin{aligned} {\tilde{g}}_{\beta ,\eta }(\omega ):= \int _{\frac{\omega }{\eta } - \frac{2\pi }{\beta \eta }}^{\frac{\omega }{\eta }} d\xi \, h(\xi ). \end{aligned}$$
(3.7)

Remark 3.6

  1. (i)

    The approximation of the switch function satisfies the following key identity:

    $$\begin{aligned} g_{\beta ,\eta }(t) = g_{\beta ,\eta }(t - i\beta ). \end{aligned}$$
    (3.8)
  2. (ii)

    The following estimate holds:

    $$\begin{aligned} \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} | \tilde{g}_{\beta ,\eta }(\omega ) | \le \sum _{m = 0}^\infty \int _{\frac{2\pi }{\beta \eta } m}^{\frac{2\pi }{\beta \eta }(m+1)} d\xi \, |h(\xi )| = \Vert h \Vert _{1}, \end{aligned}$$
    (3.9)

    where \(\Vert h\Vert _{1} \equiv \Vert h\Vert _{L^{1}({\mathbb {R}}_{+})}\).

  3. (iii)

    Using that, for \(\xi _{1} > \xi _{2}\) and \(t\le 0\),

    $$\begin{aligned} \Big |e^{\xi _1\eta t}-e^{\xi _2\eta t}\Big | = \eta |t| \Big |\int _{\xi _2}^{\xi _1}d \xi \, e^{\xi \eta t} \Big |\le \eta |t|(\xi _1-\xi _2)e^{\xi _2\eta t}, \end{aligned}$$
    (3.10)

    we have:

    $$\begin{aligned} \begin{aligned} | g_{\beta ,\eta }(t) - g(\eta t) |&\le \sum _{m = 0}^\infty \int _{\frac{2\pi }{\beta \eta } m}^{\frac{2\pi }{\beta \eta }(m+1)} d\xi \, |h(\xi )| \Big | e^{\xi \eta t} - e^{\frac{2\pi }{\beta \eta }(m+1) \eta t}\Big | \\&\le \sum _{m = 0}^\infty \int _{\frac{2\pi }{\beta \eta } m}^{\frac{2\pi }{\beta \eta }(m+1)} d \xi \, |h(\xi )| \eta |t|\frac{2\pi }{\beta \eta }e^{\xi \eta t}\\&= \frac{2\pi |t|}{\beta } \int _{0}^{\infty } d \xi \, |h(\xi )| e^{\xi \eta t}. \end{aligned} \end{aligned}$$
    (3.11)

    Therefore,

    $$\begin{aligned} \begin{aligned} | g_{\beta ,\eta }(t) - g(\eta t) |&\le \frac{2\pi }{\beta \eta } \int _{0}^{\infty }d \xi \, \frac{|h(\xi )|}{\xi } \xi \eta |t| e^{\xi \eta t} \\&\le \frac{2\pi }{e \beta \eta } \Big \Vert \frac{h}{\xi } \Big \Vert _{1}, \end{aligned} \end{aligned}$$
    (3.12)

    and the right-hand side is finite, thanks to Assumption 3.3.
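
As an illustration of Definition 3.5, consider the exponential switch function of Remark 3.4, \(h(\xi ) = \delta (\xi - 1)\), i.e. \(g(\eta t) = e^{\eta t}\). If \(\beta \eta /2\pi \notin {\mathbb {N}}\), so that \(\xi = 1\) is not an endpoint of the integration intervals in (3.6), exactly one coefficient \({\tilde{g}}_{\beta ,\eta }(\omega )\) is nonzero and

$$\begin{aligned} g_{\beta ,\eta }(t) = e^{\omega _{*} t}, \qquad \omega _{*}:= \frac{2\pi }{\beta } \Big ( \Big \lfloor \frac{\beta \eta }{2\pi } \Big \rfloor + 1 \Big ), \qquad 0< \omega _{*} - \eta \le \frac{2\pi }{\beta }; \end{aligned}$$

that is, the rate \(\eta \) is rounded up to the next Matsubara frequency, and the periodicity (3.8) is manifest.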

We are now ready to state our main result.

Theorem 3.7

(Main result). Let \(\rho (t)\) be the solution of Eq. (2.16), with time-dependent Hamiltonian (2.15). Suppose that for some \(S>0\), \(R>0\) independent of L, the Hamiltonian \({\mathscr {H}}\) and the perturbation \({\mathscr {P}}\) satisfy

$$\begin{aligned} \Vert {\mathscr {H}}_{X}\Vert \le S,\; \Vert {\mathscr {P}}_{X} \Vert \le S,\qquad {\mathscr {H}}_{X} =0,\; {\mathscr {P}}_{X} = 0\quad \text {for } \text {diam}(X) > R \end{aligned}$$
(3.13)

for all \(X\subseteq \Lambda _{L}\). Suppose that the Gibbs state \(\langle \cdot \rangle _{\beta , \mu , L}\) of \({\mathscr {H}}\) satisfies Assumption 3.1 with \({\mathfrak {c}} \equiv {\mathfrak {c}}(\beta , S, R)\), and that g(t) satisfies Assumption 3.3. Let \({\mathscr {O}}_{X} \in {\mathscr {A}}_{X}\) with \(\text {diam}(X) \le R\) and \(\Vert {\mathscr {O}}_{X} \Vert \le S\). Then there exists \(\varepsilon _{0} \equiv \varepsilon _{0}({\mathfrak {c}}, h)\) such that for \(|\varepsilon | < \varepsilon _{0}\) the following holds:

$$\begin{aligned} \begin{aligned} {\text {Tr}}{\mathscr {O}}_{X}\rho (t)&= \langle {\mathscr {O}}_{X} \rangle _{\beta ,\mu ,L} + \sum _{n\ge 1} \frac{(-\varepsilon )^{n}}{n!} I^{(n)}_{\beta ,\mu ,L}(\eta ,t) + R_{\beta , \mu , L}(\varepsilon ,\eta ,t) \end{aligned} \end{aligned}$$
(3.14)

where the functions \(I^{(n)}_{\beta ,\mu ,L}(\eta ,t)\) are given by

$$\begin{aligned} \begin{aligned}&I^{(n)}_{\beta ,\mu ,L}(\eta ,t) \\&\quad = \int _{[0,\beta )^{n}} d{\underline{s}}\, \Big [ \prod _{j=1}^{n} g_{\beta ,\eta }(t-is_{j}) \Big ] \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle _{\beta ,\mu ,L} \end{aligned} \end{aligned}$$
(3.15)

and satisfy the estimate

$$\begin{aligned} |I^{(n)}_{\beta ,\mu ,L}(\eta ,t)|\le \Vert h \Vert _{1}^{n} {\mathfrak {c}}^{n} n!. \end{aligned}$$
(3.16)

The error term \(R_{\beta , \mu , L}(\varepsilon ,\eta ,t)\) in (3.14) is bounded as:

$$\begin{aligned} |R_{\beta , \mu , L}(\varepsilon ,\eta ,t)| \le \frac{K |\varepsilon |}{\eta ^{d+2} \beta }, \end{aligned}$$
(3.17)

where the constant \(K \equiv K(S, R, h) >0\) does not depend on \({\mathfrak {c}}\). Furthermore, we also have:

$$\begin{aligned} \Big | {\text {Tr}}{\mathscr {O}}_{X} \rho (t) - \langle {\mathscr {O}}_{X} \rangle _{t}\Big |\le \frac{K|\varepsilon |}{\eta ^{d+2}\beta } + C_{1}|\varepsilon | \Big ( \eta + \frac{1}{\beta }\Big ) + \frac{C_{2}|\varepsilon |}{\beta \eta }, \end{aligned}$$
(3.18)

where \(\langle \cdot \rangle _{t}\) is the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\), Eq. (3.1), and \(C_{i}\equiv C_{i}({\mathfrak {c}}, h)\) for \(i=1,2\).
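
To relate the error bound (3.18) to the smallness condition (1.10) stated in the introduction, note that for \(0 < \eta \le 1\) and \(T = 1/\beta \) the right-hand side of (3.18) is bounded by

$$\begin{aligned} \frac{K|\varepsilon |}{\eta ^{d+2}\beta } + C_{1}|\varepsilon | \Big ( \eta + \frac{1}{\beta } \Big ) + \frac{C_{2}|\varepsilon |}{\beta \eta } \le C' |\varepsilon | \Big ( \eta + \frac{T}{\eta ^{d+2}} \Big ), \qquad C':= K + C_{1} + C_{2}, \end{aligned}$$

which is small, uniformly in L, as soon as \(\eta \) is small and \(T \ll |\varepsilon |^{-1} \eta ^{d+2}\).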

Remark 3.8

 

  1. (i)

    The series in (3.14) turns out to be equal to the Duhamel series for the quantum dynamics generated by the Hamiltonian

    $$\begin{aligned} {\mathscr {H}}_{\beta ,\eta }(t) = {\mathscr {H}} + \varepsilon g_{\beta ,\eta }(t) {\mathscr {P}}, \end{aligned}$$
    (3.19)

    after a complex deformation from real time to imaginary times (Wick rotation). Thus, our result in particular includes the statement that the Duhamel series for the dynamics generated by (3.19) is convergent in \(\varepsilon \) uniformly in L and in \(\eta \), under the Assumption 3.1. This information is very useful because, as proved later in Proposition 4.1, the dynamics generated by \({\mathscr {H}}_{\beta ,\eta }(t)\) is close to the dynamics generated by the original Hamiltonian \({\mathscr {H}}(\eta t)\), in the sense of evolution of local operators, for \(\beta \) large enough.

  2. (ii)

    If \({\mathfrak {c}}\) can be taken to be independent of \(\beta \), then the radius of convergence in \(\varepsilon \) is independent of \(\beta \) as well and we can use the results to describe the \(\beta \rightarrow \infty \) limit. As commented after Assumption 3.1, this is the case for many-body perturbations of non-interacting gapped lattice models, with Hamiltonian of the form (2.6) and for \(|\lambda |\) small enough.

  3. (iii)

    If \({\mathfrak {c}}\) does not depend on \(\beta \), our result allows one to take the zero temperature limit \(\beta \rightarrow \infty \) after the thermodynamic limit \(L\rightarrow \infty \). By Eqs. (3.14), (3.15), the existence of these limits is implied by the existence of the same limits for the equilibrium Gibbs state \(\langle \cdot \rangle _{\beta ,\mu ,L}\). To the best of our knowledge, all previous works on many-body adiabatic dynamics considered the case in which the temperature is sent to zero before the thermodynamic limit.

  4. (iv)

    For finite L, the Hilbert space is finite dimensional and the spectrum of \({\mathscr {H}}(\eta t)\) is discrete. Thus, it is straightforward to prove that as \(\beta \rightarrow \infty \) and for fixed L the average over the instantaneous Gibbs state in (3.18) converges to the average over the ground state projector, which a priori might have a nontrivial degeneracy (we do not know whether Assumption 3.1 has implications for the multiplicity of the ground state). This allows one to recover the zero temperature many-body adiabatic theorem, for the class of systems satisfying the assumptions of Theorem 3.7.

  5. (v)

    To illustrate how the adiabatic theorem (3.18) is implied by (3.14), we observe that the first two terms in the right-hand side of (3.14) reconstruct the average over the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\), after replacing the functions \(g_{\beta ,\eta }(t - is_{j})\) in Eq. (3.15) with \(g(\eta t)\). To see this, we use the representation of the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\) in terms of a convergent cumulant expansion in \(\varepsilon \); see Eq. (4.98) below.

    Let us now discuss the origin of the various error terms in the right-hand side of (3.18). The first term is due to \(R_{\beta , \mu , L}\), which arises from the approximation of the real-time dynamics generated by \({\mathscr {H}}(\eta t)\) with the real-time dynamics generated by \({\mathscr {H}}_{\beta ,\eta }(t)\). This error term is estimated via Lieb-Robinson bounds, and its estimate (3.17) does not use any information about the state. This bound introduces the strongest constraint on the range of temperatures that we are able to consider. The second error term in (3.18) arises from the replacement of \(g_{\beta ,\eta }(t-is_{j})\) with \(g_{\beta ,\eta }(t)\); the bound for the difference introduces a factor \((\eta + (1/\beta )) |s_{j}|_{\beta }\), and the factor \(|s_{j}|_{\beta }\) is controlled using the assumption on the Euclidean correlations (3.2). Finally, the last error term in (3.18) arises from the replacement of \(g_{\beta ,\eta }(t)\) with \(g(\eta t)\), and it relies on the estimate (3.12).

  6. (vi)

    If one restricts attention to switch functions \(g_{\beta ,\eta }(t)\) of the type (3.6), the first and the last error terms in (3.18) are absent. Thus, for this special class of switch functions it is possible to prove that:

    $$\begin{aligned} \Big | {\text {Tr}}{\mathscr {O}}_{X} \rho (t) - \frac{{\text {Tr}}{\mathscr {O}}_{X} e^{-\beta ({\mathscr {H}}_{\beta ,\eta }(t) - \mu {\mathscr {N}})}}{{\text {Tr}}\,e^{-\beta ({\mathscr {H}}_{\beta ,\eta }(t) - \mu {\mathscr {N}})}} \Big | \le C |\varepsilon | \Big ( \eta + \frac{1}{\beta } \Big ). \end{aligned}$$
    (3.20)

    Referring to the proof of the main result in Sect. 4, the n-th order contribution in \(\varepsilon \) to the difference in the left-hand side of (3.20) is only due to the term \(R^{(n)}_{2,1}(t)\) defined in (4.111), which is estimated in (4.116). Notice that the special switch functions \(g_{\beta ,\eta }(t)\) are superpositions of exponentials \(e^{\frac{2\pi }{\beta } (m+1)t}\) for \(m \in {\mathbb {N}}\); thus, for fixed \(\beta \), the dependence of \(g_{\beta ,\eta }\) on \(\eta \) is in general not a rescaling of time. The smallest adiabatic parameter that can be reached with this type of switch function is \(2\pi / \beta \).

  7. (vii)

    In Eq. (3.18), we compare the time-evolved state with the instantaneous Gibbs state, defined with the same temperature as the initial datum: in the small temperature regime we are considering, we cannot resolve the heating of the system due to the perturbation. A better approximation should be obtained by introducing a suitable, time-dependent renormalization of the instantaneous Gibbs state. We plan to come back to this point in the future.

The many-body adiabatic theorem (3.18) can be improved, under the additional assumption that the first m derivatives of the switch function vanish at zero.

Corollary 3.9

(Improved adiabatic convergence). Under the same assumptions as in Theorem 3.7, the following holds. Suppose that \(\partial _{t}^{j} g(0) = 0\) for all \(1\le j \le m\). Furthermore, suppose that

$$\begin{aligned} \int _{[0,\beta ]^{n}} d{\underline{t}}\, (1+|{\underline{t}}|^{m+1}_{\beta }) \sum _{X_{i} \subseteq \Lambda _{L}} \big | \big \langle \textbf{T} \gamma _{t_{1}}({\mathscr {O}}^{(1)}_{X_1}); \cdots ; \gamma _{t_{n}}({\mathscr {O}}^{(n)}_{X_n}); {\mathscr {O}}^{(n+1)}_{X} \big \rangle _{\beta , \mu , L} \big | \le D_{m+1} {\mathfrak {c}}^{n} n! \end{aligned}$$
(3.21)

with \(D_{m+1}>0\) depending only on m, and that

$$\begin{aligned} \int _1^\infty d \xi \, \xi ^{m+1}\left| h(\xi )\right| < \infty . \end{aligned}$$
(3.22)

Then, the following improved many-body adiabatic theorem holds:

$$\begin{aligned} \Big | {\text {Tr}}{\mathscr {O}}_{X} \rho (0) - \langle {\mathscr {O}}_{X} \rangle _{0}\Big |\le \frac{K|\varepsilon |}{\eta ^{d+2}\beta } + C_{1,m+1}|\varepsilon | \Big ( \eta ^{m+1}+\frac{1}{\beta }\Big ) + \frac{C_{2}|\varepsilon |}{\beta \eta }, \end{aligned}$$
(3.23)

where \(K\equiv K(S,R,h),C_{1,m+1}\equiv C_{1,m+1} (\mathfrak {c},h)\) and \(C_2\equiv C_2(\mathfrak {c},h)\).

Remark 3.10

  1. (i)

    These switch functions are allowed by our setting; for example, we might consider \(g(t) = 1-(1-e^{t})^{m+1}\), whose derivatives of order \(1, \ldots , m\) vanish at \(t=0\).

  2. (ii)

    In the \(\beta \rightarrow \infty \) setting, a similar result was first obtained in [7]. We observe that, with respect to [7], here we show that the improved convergence (Corollary 3.9) holds under the assumption that the first m derivatives at zero of the switch function are vanishing, while in [7] it is assumed that the first \(m+d+1\) derivatives vanish (with d the spatial dimension of the system).

  3. (iii)

    The assumption (3.21) holds true for many-body perturbations of gapped lattice models, and it can be proved via the analysis of Appendix C.

Combining this with a few straightforward estimates [Eqs. (4.129) to (4.131)], we also obtain the following result.

Corollary 3.11

(Validity of linear response). Under the same assumptions as Theorem 3.7,

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}}_{X} \rho (t) - \langle {\mathscr {O}}_{X} \rangle _{\beta , \mu , L} \\&\quad = -\varepsilon \int _{0}^{\beta } ds\, g_{\beta ,\eta }(t - is) \langle \gamma _{s}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle _{\beta , \mu , L} + R_{\beta , \mu , L}(\varepsilon ,\eta ,t) \end{aligned} \end{aligned}$$
(3.24)

where the error term \(R_{\beta , \mu , L}(\varepsilon ,\eta ,t)\) is bounded as:

$$\begin{aligned} |R_{\beta , \mu , L}(\varepsilon ,\eta ,t)| \le \frac{K |\varepsilon |}{\eta ^{d+2} \beta } + C |\varepsilon |^{2} \end{aligned}$$
(3.25)

with K as in (3.17) and C depending on \({\mathfrak {c}}\). In Eq. (3.24), the function \(g_{\beta ,\eta }(t - is)\) can be replaced by \(g(\eta t)\), up to replacing the error term \(R_{\beta , \mu , L}\) by \({\widetilde{R}}_{\beta , \mu , L}\), such that:

$$\begin{aligned} |{\widetilde{R}}_{\beta , \mu , L}(\varepsilon ,\eta ,t)| \le C|\varepsilon | \Big ( \eta + \frac{1}{\beta \eta } \Big ) + \frac{K |\varepsilon |}{\eta ^{d+2} \beta } + C|\varepsilon |^{2}. \end{aligned}$$
(3.26)

Furthermore, the main term in Eq. (3.24) is equal to the first order term in the Duhamel expansion, up to small errors:

$$\begin{aligned} \begin{aligned}&\Big | \int _{0}^{\beta } ds\, g_{\beta ,\eta }(t - is) \langle \gamma _{s}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle _{\beta , \mu , L} - i\int _{-\infty }^{t} d s\, g(\eta s) \langle \left[ \tau _{t}({\mathscr {O}}_{X}), \tau _{s}({\mathscr {P}}) \right] \rangle _{\beta , \mu , L}\Big |\quad \quad \\&\quad \le \frac{K}{ \eta ^{d+2}\beta }. \end{aligned} \end{aligned}$$
(3.27)

Remark 3.12

Eq. (3.26) shows that, up to an error term vanishing as \(\beta \rightarrow \infty \) and \(\eta \rightarrow 0^{+}\), the first order in \(\varepsilon \) in the Duhamel expansion for the real-time dynamics is equal to the first order in \(\varepsilon \) in the expansion for the instantaneous Gibbs state \(\langle \cdot \rangle _{t}\). To see this, we rely on the cumulant expansion in \(\varepsilon \) for the instantaneous Gibbs state, Eq. (4.98). More generally, the argument can be extended to show that the n-th order term in \(\varepsilon \) in the real-time Duhamel expansion for the dynamics generated by \({\mathscr {H}}(\eta t)\) is equal to the n-th order term in \(\varepsilon \) in the expansion of the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\), up to vanishing errors as \(\beta \rightarrow \infty \) and as \(\eta \rightarrow 0^{+}\).

The proof of the main result will be given in Sect. 4, and it is organized as follows. In Sect. 4.1 we recall how to derive the Duhamel expansion for the many-body evolution, in a finite volume. In Sect. 4.2 we introduce the auxiliary dynamics, obtained after replacing \(g(\eta t)\) with \(g_{\beta ,\eta }(t)\) in \({\mathscr {H}}(\eta t)\), and we prove, using Lieb-Robinson bounds, that the two dynamics are close for \(\beta \) large enough, in the sense of expectations of local observables. In Sect. 4.3 we represent the Duhamel expansion for the auxiliary dynamics in a finite volume via the Wick rotation: this allows us to obtain an identity for every term in the Duhamel expansion in terms of time-ordered Euclidean correlations. We then use Assumption 3.1 to establish convergence of the (Wick-rotated) Duhamel series, uniformly in the size of the system. In Sect. 4.4 we recall the cumulant expansion in \(\varepsilon \) for the instantaneous Gibbs state of \({\mathscr {H}}(\eta t)\). Finally, in Sect. 4.5 we put everything together, and we prove Theorem 3.7.

4 Proof of Theorem 3.7

4.1 Duhamel expansion

We start by recalling how to derive the well-known Duhamel series for the expectation of local observables. Given a time-dependent Hamiltonian \({\mathscr {H}}(\eta t) = {\mathscr {H}} + \varepsilon g(\eta t) {\mathscr {P}}\), let us consider the associated unitary evolution:

$$\begin{aligned} \begin{aligned} i\partial _{t} {\mathscr {U}}(t;s)&= {\mathscr {H}}(\eta t) {\mathscr {U}}(t;s) \\ {\mathscr {U}}(s;s)&= \mathbb {1}. \end{aligned} \end{aligned}$$
(4.1)

For \(\varepsilon = 0\) one trivially has \({\mathscr {U}}(t;s) = e^{-i(t-s){\mathscr {H}}}\). We are interested in deriving a perturbative expansion around the evolution generated by \({\mathscr {H}}\). To this end, we define the unitary evolution in the interaction picture as:

$$\begin{aligned} {\mathscr {U}}_{\text {I}}(t;s):= e^{i{\mathscr {H}}t} {\mathscr {U}}(t;s) e^{-i{\mathscr {H}}s}. \end{aligned}$$
(4.2)

Clearly, \({\mathscr {U}}_{\text {I}}(s;s) = \mathbb {1}\), and:

$$\begin{aligned} \begin{aligned} i\partial _{t} {\mathscr {U}}_{\text {I}}(t;s)&= e^{i{\mathscr {H}}t} (-{\mathscr {H}} + {\mathscr {H}}(\eta t)) {\mathscr {U}}(t;s) e^{-i{\mathscr {H}}s} \\&= \varepsilon g(\eta t) \tau _{t}({\mathscr {P}}) {\mathscr {U}}_{\text {I}}(t;s). \end{aligned} \end{aligned}$$
(4.3)

Next, we write, for \(T>0\) and for \(0\ge t\ge -T\):

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}} {\mathscr {U}}(t;-T) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-T)^{*} - {\text {Tr}}{\mathscr {O}} \rho _{\beta , \mu , L} \\&\quad = {\text {Tr}}\tau _{t}({\mathscr {O}}) {\mathscr {U}}_{\text {I}}(t;-T) \rho _{\beta , \mu , L} {\mathscr {U}}_{\text {I}}(t;-T)^{*} - {\text {Tr}}\tau _{t}({\mathscr {O}}) \rho _{\beta , \mu , L} \end{aligned} \end{aligned}$$
(4.4)

where we used the cyclicity of the trace and the invariance of \(\rho _{\beta , \mu , L}\) under the dynamics generated by \({\mathscr {H}}\). Finally, by Eq. (4.3):

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}} {\mathscr {U}}(t;-T) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-T)^{*} - {\text {Tr}}{\mathscr {O}} \rho _{\beta , \mu , L}\\&\quad = (-i \varepsilon ) \int _{-T}^{t} ds\, g(\eta s) {\text {Tr}}\tau _{t}({\mathscr {O}}) [ \tau _{s}({\mathscr {P}}), {\mathscr {U}}_{\text {I}}(s;-T) \rho _{\beta , \mu , L} {\mathscr {U}}_{\text {I}}(s;-T)^{*}] \\&\quad = (-i \varepsilon ) \int _{-T}^{t} ds\, g(\eta s) {\text {Tr}}[ \tau _{t}({\mathscr {O}}), \tau _{s}({\mathscr {P}})] {\mathscr {U}}_{\text {I}}(s;-T) \rho _{\beta , \mu , L} {\mathscr {U}}_{\text {I}}(s;-T)^{*}. \end{aligned} \end{aligned}$$
(4.5)

The procedure can be iterated. One gets:

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}} {\mathscr {U}}(t;-T) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-T)^{*} = {\text {Tr}}{\mathscr {O}} \rho _{\beta , \mu , L}\\&\quad + \sum _{n=1}^{m} (-i\varepsilon )^{n} \int _{-T \le s_{n} \le \ldots \le s_{1} \le t} d {\underline{s}}\, g(\eta s_{1}) \cdots g(\eta s_{n}) \\ {}&\qquad \cdot \langle [ \cdots [[ \tau _{t}({\mathscr {O}}), \tau _{s_{1}}({\mathscr {P}})], \tau _{s_{2}}({\mathscr {P}})] \cdots \tau _{s_{n}}({\mathscr {P}}) ] \rangle _{\beta , \mu , L} \\ {}&\quad + R_{\beta , \mu , L}^{(m+1)}(-T;t), \end{aligned} \end{aligned}$$
(4.6)

where \(R_{\beta , \mu , L}^{(m+1)}(-T;t)\) is the Taylor remainder of the expansion, given by:

$$\begin{aligned} \begin{aligned} R_{\beta , \mu , L}^{(m+1)}(-T;t)&= (-i\varepsilon )^{m+1} \int _{-T \le s_{m+1} \le \ldots \le s_{1} \le t} d {\underline{s}}\, g(\eta s_{1}) \cdots g(\eta s_{m+1}) \\&\quad \cdot {\text {Tr}}[ \cdots [ \tau _{t}({\mathscr {O}}), \tau _{s_{1}}({\mathscr {P}})], \cdots \tau _{s_{m+1}}({\mathscr {P}}) ] {\mathscr {U}}_{\text {I}}(s_{m+1};-T) \rho _{\beta , \mu , L} {\mathscr {U}}_{\text {I}}(s_{m+1};-T)^{*}. \end{aligned} \end{aligned}$$
(4.7)

On a finite lattice and for \(\eta > 0\), the series is absolutely convergent. In fact, by using the boundedness of the fermionic operators, and the unitarity of the time evolution, we have the following crude estimate:

$$\begin{aligned} \begin{aligned} \big |R_{\beta , \mu , L}^{(m+1)}(-T;t)\big | \le&|\varepsilon |^{m+1} \int _{-T \le s_{m+1} \le \ldots \le s_{1} \le t} d {\underline{s}}\, |g(\eta s_{1})| \cdots |g(\eta s_{m+1})| \\&\cdot 2^{m+1} \Vert {\mathscr {O}} \Vert \Vert {\mathscr {P}} \Vert ^{m+1} \\ \le&\Vert {\mathscr {O}} \Vert \frac{C^{m+1} |\varepsilon |^{m+1} |\Lambda _{L}|^{m+1} \eta ^{-m-1}}{(m+1)!} \Big [\int _{-\infty }^{0} ds\, |g(s)|\Big ]^{m+1}, \end{aligned} \end{aligned}$$
(4.8)

for a universal constant \(C>0\). Thus, taking m large enough, the error term can be made as small as desired, uniformly in T. Hence, we have, in the \(T\rightarrow \infty \) limit:

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}} {\mathscr {U}}(t;-\infty ) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-\infty )^{*} = {\text {Tr}}{\mathscr {O}} \rho _{\beta , \mu , L} \\&\quad + \sum _{n=1}^{\infty } (-i\varepsilon )^{n} \int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d {\underline{s}}\, g(\eta s_{1}) \cdots g(\eta s_{n}) \\ {}&\qquad \cdot \langle [ \cdots [[ \tau _{t}({\mathscr {O}}), \tau _{s_{1}}({\mathscr {P}})], \tau _{s_{2}}({\mathscr {P}})] \cdots \tau _{s_{n}}({\mathscr {P}}) ] \rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.9)

Equation (4.9) is the Duhamel expansion for the average of \({\mathscr {O}}\) on the time-dependent state \(\rho (t):= {\mathscr {U}}(t;-\infty ) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-\infty )^{*}\). In order to extract useful information from this representation, we need estimates for the various terms that are uniform in the size of the system. In particular, we would like to prove that the series converges for \(|\varepsilon |\) small, uniformly in L and in \(\eta \).
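As an illustration (not needed for the proof), the expansion (4.6) can be checked numerically at first order on a toy finite-dimensional system. The following Python sketch uses ad hoc \(4\times 4\) Hermitian matrices in place of \({\mathscr {H}}\), \({\mathscr {P}}\), \({\mathscr {O}}\), the switch function \(g(x) = e^{x}\), \(\mu = 0\) and final time \(t = 0\); all choices are assumptions made for this check only.

```python
# Toy check (illustration only) of the first-order Duhamel expansion (4.6):
#   Tr O U(0;-T) rho U(0;-T)^*  =  Tr O rho
#       + (-i eps) int_{-T}^{0} ds g(eta s) <[O, tau_s(P)]>  +  O(eps^2),
# with ad hoc 4x4 matrices, g(x) = exp(x), mu = 0.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, beta, eta, eps, T, dt = 4, 1.5, 0.5, 0.05, 60.0, 0.005
H, P, O = rand_herm(d), rand_herm(d), rand_herm(d)
rho = expm(-beta * H); rho /= np.trace(rho)          # unperturbed Gibbs state
g = lambda x: np.exp(x)                              # toy switch function

# Propagator U(0; -T) for H(eta t) = H + eps g(eta t) P (exponential midpoint rule)
steps = int(T / dt)
U = np.eye(d, dtype=complex)
for k in range(steps):
    s_mid = -T + (k + 0.5) * dt
    U = expm(-1j * dt * (H + eps * g(eta * s_mid) * P)) @ U

lhs = np.trace(O @ U @ rho @ U.conj().T).real
zeroth = np.trace(O @ rho).real

# First-order coefficient: (-i eps) int_{-T}^{0} ds g(eta s) Tr([O, tau_s(P)] rho)
s_grid = np.linspace(-T, 0.0, steps + 1)
vals = np.empty(steps + 1, dtype=complex)
for i, s in enumerate(s_grid):
    tauP = expm(1j * H * s) @ P @ expm(-1j * H * s)
    vals[i] = g(eta * s) * np.trace((O @ tauP - tauP @ O) @ rho)
first = (-1j * eps) * np.sum((vals[1:] + vals[:-1]) * np.diff(s_grid)) / 2

print("|lhs - zeroth|         =", abs(lhs - zeroth))
print("|lhs - zeroth - first| =", abs(lhs - (zeroth + first)))   # should be O(eps^2)
```

The second printed difference should be smaller than the first by a factor of order \(\varepsilon \), consistent with the \(O(\varepsilon ^{2})\) remainder in (4.6).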

The main difficulty in achieving this is the control of the time integral, uniformly in \(\eta \). For fixed \(\eta \), this problem might be approached using Lieb-Robinson bounds, see [44] for a review. This bound reads, for two operators \({\mathscr {O}}_{X}\) and \({\mathscr {O}}_{Y}\) supported on X and Y, respectively:

$$\begin{aligned} \Vert [ {\mathscr {O}}_{X}, \tau _{t}({\mathscr {O}}_{Y}) ] \Vert \le C e^{v|t| - c\cdot \text {dist}(X,Y)}, \end{aligned}$$
(4.10)

for suitable positive constants C, c, v. Combined with the boundedness of the fermionic operators, this estimate (and its extension to multi-commutators, [16]) can be used to prove that the series in (4.9) is convergent uniformly in L, but only for \(|\varepsilon | \le \varepsilon (\eta )\) with \(\varepsilon (\eta ) \rightarrow 0^{+}\) as \(\eta \rightarrow 0^{+}\). In the next section, we shall study the time-evolution using a different approach, which gives estimates that are uniform in \(\eta \).
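To visualize the mechanism behind (4.10), the following sketch computes \(\Vert [{\mathscr {O}}_{X}, \tau _{t}({\mathscr {O}}_{Y})]\Vert \) on a small transverse-field Ising chain; the chain is only a convenient stand-in for the lattice models considered in this paper, and all parameters are ad hoc. The commutator norm is negligible when the distance between the supports exceeds the effective light cone \(v|t|\), and grows afterwards.

```python
# Illustration only: light-cone behaviour behind the Lieb-Robinson bound (4.10),
# on a small transverse-field Ising chain. Model and parameters are ad hoc.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == j else I2)
    return out

N = 8
H = sum(site_op(sx, j, N) @ site_op(sx, j + 1, N) for j in range(N - 1)) \
    + sum(site_op(sz, j, N) for j in range(N))

OX = site_op(sz, 0, N)                                  # observable at the left edge
for t in (0.5, 1.0, 2.0):
    Ut = expm(1j * H * t)
    for dist in (1, 4, 7):
        OY_t = Ut @ site_op(sz, dist, N) @ Ut.conj().T  # tau_t(O_Y)
        comm = OX @ OY_t - OY_t @ OX
        print(f"t = {t:3.1f}, dist(X,Y) = {dist}: "
              f"||[O_X, tau_t(O_Y)]|| = {np.linalg.norm(comm, 2):.2e}")
```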

4.2 The auxiliary dynamics

Let

$$\begin{aligned} {\mathscr {H}}_{\beta ,\eta }( t):= {\mathscr {H}} + \varepsilon g_{\beta ,\eta }(t) {\mathscr {P}}, \end{aligned}$$
(4.11)

with \(g_{\beta ,\eta }(t)\) introduced in Definition 3.5. Here we will prove that the evolutions generated by \({\mathscr {H}}(\eta t)\) and by \({\mathscr {H}}_{\beta ,\eta }(t)\) are close, at small enough temperature, in the sense of the expectation of local observables. To compare the two evolutions, we will use Lieb-Robinson bounds for non-autonomous dynamics [13, 16].

Proposition 4.1

(Comparison of dynamics). Let \(\rho (t)\), \({\tilde{\rho }}(t)\) be the time-dependent states evolving with \({\mathscr {H}}(\eta t)\), \({\mathscr {H}}_{\beta ,\eta }(t)\) respectively, with initial data given by \(\rho (-\infty ) = {\tilde{\rho }}(-\infty ) = \rho _{\beta , \mu , L}\). Let \({\mathscr {O}}_{X}\) be a local observable. Then, there exists \(K>0\), independent of \({\mathfrak {c}}\) and dependent on h, such that for all \(\varepsilon , \eta , \beta , L\):

$$\begin{aligned} \big |{\text {Tr}}{\mathscr {O}}_{X} (\rho (t) - \tilde{\rho }(t))\big | \le \frac{K|\varepsilon |}{\eta ^{d+2}\beta },\qquad \text {for all } t\le 0. \end{aligned}$$
(4.12)

Proof

We start by writing:

$$\begin{aligned} g(\eta t) = g_{\beta ,\eta }(t) + \zeta _{\eta , \beta }(t), \end{aligned}$$
(4.13)

where \(g_{\beta ,\eta }(t)\) is defined in Eq. (3.6), and the error term satisfies the bound, by Eq. (3.11):

$$\begin{aligned} |\zeta _{\eta , \beta }(t)| \le \frac{2\pi |t|}{\beta } \int _{0}^{\infty }d \xi \, |h(\xi )| e^{\xi \eta t}. \end{aligned}$$
(4.14)

Next, we write:

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}}_{X} (\rho (t) - \tilde{\rho }(t)) \\&=\lim _{T\rightarrow -\infty } {\text {Tr}}{\mathscr {O}}_{X} ({\mathscr {U}}(t;-T) \rho _{\beta , \mu , L} {\mathscr {U}}(t;-T)^{*} - \widetilde{{\mathscr {U}}}(t; -T) \rho _{\beta , \mu , L} \widetilde{{\mathscr {U}}}(t;-T)^{*}) \end{aligned}\nonumber \\ \end{aligned}$$
(4.15)

where \({\mathscr {U}}(t;s)\), \(\widetilde{{\mathscr {U}}}(t;s)\) are the unitary propagators generated by \({\mathscr {H}}(\eta t)\), \({\mathscr {H}}_{\beta ,\eta }(t)\), respectively. We estimate the argument of the limit as:

$$\begin{aligned} \begin{aligned}&\Big |{\text {Tr}}\Big ({\mathscr {U}}(t;-T)^{*} {\mathscr {O}}_{X} {\mathscr {U}}(t;-T) - \widetilde{{\mathscr {U}}}(t;-T)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;-T)\Big ) \rho _{\beta , \mu , L}\Big | \\&\le \Big \Vert {\mathscr {O}}_{X} - {\mathscr {U}}(t;-T)\widetilde{{\mathscr {U}}}(t;-T)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;-T) {\mathscr {U}}(t;-T)^{*} \Big \Vert , \end{aligned} \end{aligned}$$
(4.16)

where we used that \( \rho _{\beta , \mu , L} \ge 0\), \({\text {Tr}}\rho _{\beta , \mu , L} = 1\) and the unitarity of time-evolution. Next, we rewrite the argument of the norm as:

$$\begin{aligned} \begin{aligned}&{\mathscr {O}}_{X} - {\mathscr {U}}(t;-T)\widetilde{{\mathscr {U}}}(t;-T)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;-T) {\mathscr {U}}(t;-T)^{*} \\&\quad = -i \int _{-T}^{t} ds\, i\frac{\partial }{\partial s} {\mathscr {U}}(t;s)\widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s) {\mathscr {U}}(t;s)^{*} \\&\quad = -i \int _{-T}^{t} ds\, {\mathscr {U}}(t;s)\Big [ -{\mathscr {H}}(\eta s) + {\mathscr {H}}_{\beta ,\eta }(s),\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] {\mathscr {U}}(t;s)^{*} \\&\quad \equiv i \int _{-T}^{t} ds\, \varepsilon \zeta _{\eta , \beta }(s) {\mathscr {U}}(t;s)\Big [ {\mathscr {P}},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] {\mathscr {U}}(t;s)^{*}, \end{aligned}\nonumber \\ \end{aligned}$$
(4.17)

where in the third line we used that \({\mathscr {U}}(t;s)^{*} = {\mathscr {U}}(s;t)\), and in the last line we used Eq. (4.13). Therefore,

$$\begin{aligned} \begin{aligned}&\Big \Vert {\mathscr {O}}_{X} - {\mathscr {U}}(t;-T)\widetilde{{\mathscr {U}}}(t;-T)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;-T) {\mathscr {U}}(t;-T)^{*} \Big \Vert \\&\quad \le \int _{-T}^{t} ds\, |\varepsilon | \big |\zeta _{\eta , \beta }(s)\big | \Big \Vert \Big [ {\mathscr {P}},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert . \end{aligned} \end{aligned}$$
(4.18)

Next, we claim that:

$$\begin{aligned} \Big \Vert \Big [ {\mathscr {P}},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \le C(|t-s|^{d} + 1). \end{aligned}$$
(4.19)

This inequality stems from the Lieb-Robinson bound for non-autonomous dynamics, see Theorem 4.6 of [13] for quantum spin systems, or Theorem 5.1 of [16] for the case of lattice fermions:

$$\begin{aligned} \Big \Vert \Big [ {\mathscr {O}}_{Y}, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s) \Big ] \Big \Vert \le C e^{v|t-s| - c \cdot \text {dist}(X,Y)} \end{aligned}$$
(4.20)

for any two local operators \({\mathscr {O}}_{X}\), \({\mathscr {O}}_{Y}\). The proof of (4.19) is standard, and we give it here for completeness. Representing the perturbation \({\mathscr {P}}\) in terms of its local potentials, we have:

$$\begin{aligned} \begin{aligned} \Big \Vert \Big [ {\mathscr {P}},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert&\le \sum _{Y \subseteq \Lambda _{L}} \Big \Vert \Big [ {\mathscr {P}}_{Y},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \\&= \sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y) \le D |t-s| \end{array}} \Big \Vert \Big [ {\mathscr {P}}_{Y},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \\\\&\quad + \sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y) > D |t-s| \end{array}} \Big \Vert \Big [ {\mathscr {P}}_{Y},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \end{aligned} \end{aligned}$$
(4.21)

with D large enough to be chosen below. By the boundedness of the fermionic operators, and by the unitarity of the dynamics, the first term in the right-hand side is estimated as:

$$\begin{aligned} \sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y) \le D |t-s| \end{array}} \Big \Vert \Big [ {\mathscr {P}}_{Y},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \le K(|t-s|^{d} + 1) \end{aligned}$$
(4.22)

where we used the fact that the sum is restricted to sets Y of bounded diameter. For the second term, we use the Lieb-Robinson bound (4.20), to get:

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y)> D|t-s| \end{array}} \Big \Vert \Big [ {\mathscr {P}}_{Y},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \\&\quad \le \sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y) > D|t-s|,\; \text {diam}(Y) \le R \end{array}}C e^{v|t-s| - c \cdot \text {dist}(X,Y)}. \end{aligned} \end{aligned}$$
(4.23)

Choosing D large enough, we have:

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y)> D|t-s| \end{array}} \Big \Vert \Big [ {\mathscr {P}}_{Y},\, \widetilde{{\mathscr {U}}}(t;s)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;s)\Big ] \Big \Vert \\ {}&\qquad \qquad \le \sum _{\begin{array}{c} Y \subseteq \Lambda _{L} \\ \text {dist}(X,Y) > D|t-s|,\; \text {diam}(Y) \le R \end{array}}C e^{- (c/2) \cdot \text {dist}(X,Y)} \\&\qquad \qquad \le K. \end{aligned} \end{aligned}$$
(4.24)

This concludes the check of (4.19). Using the bound (4.19) in (4.18), we get:

$$\begin{aligned} \begin{aligned}&\Big \Vert {\mathscr {O}}_{X} - {\mathscr {U}}(t;-T)\widetilde{{\mathscr {U}}}(t;-T)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;-T) {\mathscr {U}}(t;-T)^{*} \Big \Vert \\&\quad \le C\int _{-T}^{t} ds\, |\varepsilon | \big |\zeta _{\eta , \beta }(s)\big | (|t - s|^{d} + 1) \\&\quad \le \frac{K|\varepsilon |}{\beta } \int _{0}^{\infty }d \xi \, |h(\xi )| \int _{-T}^{t} ds\, e^{\xi \eta s} (|s|^{d+1} + 1), \end{aligned} \end{aligned}$$
(4.25)

where in the last step we used the bound (4.14) and we exchanged the order of integration. Using that:

$$\begin{aligned} \int _{-T}^{t} ds\, e^{\xi \eta s} |s|^{d+1} \le \frac{C}{(\eta \xi )^{d+2}}, \end{aligned}$$
(4.26)

(which follows by extending the integration to \((-\infty ,0]\) and changing variables to \(u = -\xi \eta s\), so that the integral is bounded by \((d+1)!\,(\xi \eta )^{-d-2}\)), we finally obtain:

$$\begin{aligned} \begin{aligned}&\Big \Vert {\mathscr {O}}_{X} - {\mathscr {U}}(t;-T)\widetilde{{\mathscr {U}}}(t;-T)^{*} {\mathscr {O}}_{X} \widetilde{{\mathscr {U}}}(t;-T) {\mathscr {U}}(t;-T)^{*} \Big \Vert \\&\quad \le \frac{K|\varepsilon |}{\eta ^{d+2}\beta } \int _{0}^{\infty }d \xi \, \frac{|h(\xi )|}{\xi ^{d+2}}. \end{aligned} \end{aligned}$$
(4.27)

Equation (4.12) follows from assumption (3.5), after a redefinition of the constant K. This concludes the proof. \(\square \)

4.3 Wick rotation

Here we shall represent each coefficient in the Duhamel expansion (4.9) for the auxiliary dynamics generated by (4.11) in terms of Euclidean correlation functions, via a complex deformation argument. The advantage is that useful space-time decay estimates for Euclidean correlations can be proved using statistical mechanics tools, such as the cluster expansion. This complex deformation is known in physics as Wick rotation, and here it will be established rigorously for the auxiliary dynamics. The next lemma is the main result of the section. Its proof is based on the adaptation of ideas of Section 5.4 of [15] to our adiabatic setting.

Lemma 4.2

(Wick rotation). Let \(A \in {\mathscr {A}}_{\Lambda _{L}}\), \(B \in {\mathscr {A}}^{{\mathscr {N}}}_{\Lambda _{L}}\). Let \(n\in {\mathbb {N}}\). Let a(s) be a periodic function with period \(\beta \), such that:

$$\begin{aligned} a(s) = \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \tilde{a}(\omega ) e^{-i\omega s},\qquad \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} |{\tilde{a}}(\omega )| \le C,\qquad {\tilde{a}}(0) = 0. \end{aligned}$$
(4.28)

Then, the following identity holds true, for all \(t\le 0\):

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(is_{j})\Big ] \langle [ \cdots [[\tau _{t}(A), \tau _{s_{1}}(B)], \tau _{s_{2}}(B)] \cdots \tau _{s_{n}}(B) ] \rangle _{\beta ,\mu ,L} \\&\quad = \frac{(-i)^{n}}{n!} \int _{[0,\beta )^{n}} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it+s_{j})\Big ] \langle \textbf{T} \gamma _{s_{1}}(B); \gamma _{s_{2}}(B); \cdots ; \gamma _{s_{n}}(B); A \rangle _{\beta ,\mu ,L}. \end{aligned} \end{aligned}$$
(4.29)

Remark 4.3

  1. (i)

    The function \(s\mapsto g_{\beta ,\eta }(-is)\) satisfies the properties (4.28), recall Definition 3.5.

  2. (ii)

    Notice that the function defined in (4.28) extends to a function a(z) on the closed lower half-plane, analytic for \(\text {Im}\, z < 0\) and continuous for \(\text {Im}\, z\le 0\).

The proof will be broken into a few intermediate steps. In what follows, it will be convenient to use the following notation. We define inductively:

$$\begin{aligned} C_{0}:= \tau _{t}(A),\qquad C_n(t_1,\dots ,t_n):= a(it_{n}) [ C_{n-1}(t_1,\dots ,t_{n-1}), \tau _{t_n}(B) ]. \end{aligned}$$
(4.30)

Moreover, we set:

$$\begin{aligned} B_{0}:= 1,\qquad B_{n}(t_{1}, \ldots , t_{n}):= \Big [\prod _{i=1}^{n} a(it_{i})\Big ] \tau _{t_{1}}(B) \cdots \tau _{t_{n}}(B). \end{aligned}$$
(4.31)

Also, we shall introduce the n-dimensional simplex of side \(\beta \) as:

$$\begin{aligned} \Delta ^n_{\beta }:=\big \{ (s_1,\dots ,s_n)\in {\mathbb {R}}^n:\, \beta>s_1>\dots>s_{n}>0 \big \}. \end{aligned}$$
(4.32)

The combination of Propositions 4.4, 4.5 below is the adaptation of Propositions 5.4.12, 5.4.13 of [15] to our adiabatic setting. In contrast to [15], our results hold without clustering assumptions on the real-time correlations.

Proposition 4.4

(Basic complex deformation). Let \(B\in {\mathscr {A}}^{{\mathscr {N}}}_{\Lambda _{L}}\), \(C\in {\mathscr {A}}_{\Lambda _{L}}\). For every \(j \in {\mathbb {N}}\) and for all \(t\le 0\):

$$\begin{aligned} \begin{aligned}&\int _{-\infty }^{t} d r\, a(ir) \int _{\Delta ^j_{\beta }} d {\underline{s}}\, \langle B_j\left( r - i s_1,\dots , r - i s_j \right) \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L}\\&\qquad = i \int _{\Delta ^{j+1}_{\beta }} d{\underline{s}}\, \langle B_{j+1}\left( t - i s_1,\dots , t - i s_{j+1} \right) C \rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.33)

Proof

To start, let us prove the \(j=0\) case, which reads:

$$\begin{aligned} \int _{-\infty }^{t} d r\, a(ir) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L} = i \int _{0}^{\beta } ds\,\langle B_{1}\left( t - i s\right) C \rangle _{\beta , \mu , L}. \end{aligned}$$
(4.34)

Let \(T > 0\). By the KMS identity, Eq. (2.9), and using that B commutes with the number operator:

$$\begin{aligned} \begin{aligned} \int _{-T}^{t} d r\, a(ir) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L}&= \int _{-T}^{t} d r\, a(ir) \Big [ \langle \tau _{r}(B)C \rangle _{\beta , \mu , L} - \langle C \tau _{r}(B)\rangle _{\beta , \mu , L} \Big ]\\&= \int _{-T}^{t} d r\, a(ir)\, \Big [\langle \tau _{r}(B)C \rangle _{\beta , \mu , L} - \langle \tau _{r - i\beta }(B)C \rangle _{\beta , \mu , L}\Big ]. \end{aligned}\nonumber \\ \end{aligned}$$
(4.35)

By assumption (4.28), we use the trivial but crucial fact \(a(ir) = a(ir + \beta )\) to write:

$$\begin{aligned} \begin{aligned}&\int _{-T}^{t} d r\, a(ir) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L} \\&\quad = \int _{-T}^{t} d r\, \Big [ a(ir) \langle \tau _{r}(B)C \rangle _{\beta , \mu , L} - a(i(r - i\beta )) \langle \tau _{r - i\beta }(B)C \rangle _{\beta , \mu , L}\Big ]. \end{aligned}\nonumber \\ \end{aligned}$$
(4.36)

Now, consider the function, for \(z\in {\mathbb {C}}\):

$$\begin{aligned} f(z) = a(iz)\langle \tau _{z}(B) C\rangle _{\beta , \mu , L}. \end{aligned}$$
(4.37)

For finite L and finite \(\beta \), this function is analytic on \(\text {Re}\, z <0\), and it is continuous on \(\text {Re}\, z \le 0\). In fact, the function \(z\mapsto \langle \tau _{z}(B) C\rangle _{\beta , \mu , L}\) is entire for finite \(L, \beta \), while a(iz) is analytic for \(\text {Re}\, z < 0\) and continuous for \(\text {Re}\, z\le 0\), recall the definition (4.28) and Remark 4.3.

For \(\varepsilon > 0\) small enough, let \(\Gamma \) be the complex path for \((\text {Re}\,z, \text {Im}\,z)\):

$$\begin{aligned} \Gamma = (-T,0) \rightarrow (t -\varepsilon , 0) \rightarrow (t - \varepsilon , -\beta ) \rightarrow (-T, -\beta ) \rightarrow (-T,0), \end{aligned}$$
(4.38)

where every arrow corresponds to an oriented straight line in the complex plane. By Cauchy’s integral theorem,

$$\begin{aligned} \int _{\Gamma }dz\, f(z) = 0. \end{aligned}$$
(4.39)

We start by writing:

$$\begin{aligned} \int _{-T}^{t} d r\, a(ir) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L} = \lim _{\varepsilon \rightarrow 0^{+}} \int _{-T}^{t - \varepsilon } d r\, a(i r) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L}, \end{aligned}$$
(4.40)

and using Eq. (4.39):

$$\begin{aligned} \int _{-T}^{t - \varepsilon } d r\, a(i r) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L} = i \int _0^\beta d s\, f(t - \varepsilon - i s) - i \int _0^\beta d s\, f(-T - i s).\nonumber \\ \end{aligned}$$
(4.41)

We claim that the last term vanishes as \(T\rightarrow \infty \). In fact:

$$\begin{aligned} \begin{aligned} |f(-T - i s)|&\le \Big |a(s - iT) \langle \tau _{-T - is}(B)C \rangle _{\beta , \mu , L}\Big | \\&\le \Big (\sum _{\omega } |{\tilde{a}}(\omega )|\Big ) e^{-\frac{2\pi }{\beta }T} \Vert \tau _{-T - is}(B)\Vert \Vert C \Vert \\&\le C e^{-\frac{2\pi }{\beta }T} \Vert B\Vert \Vert C\Vert e^{2s \Vert {\mathscr {H}}\Vert }, \end{aligned} \end{aligned}$$
(4.42)

where we used the unitarity of the real-time dynamics. Notice that all norms in (4.42) are finite: we are on a finite lattice with side L, and the fermionic Fock space for models on a finite lattice is finite-dimensional. Hence, the bound (4.42) shows that the \(T\rightarrow \infty \) limit of the second term in the right-hand side of (4.41) vanishes, for \(\beta \) and L finite. We thus have

$$\begin{aligned} \begin{aligned}&\lim _{T\rightarrow \infty }\int _{-T}^{t} d r\, a(ir) \langle \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L} \\&\quad = \lim _{\varepsilon \rightarrow 0^{+}} i \int _{0}^\beta d s\, a(i(t-\varepsilon ) + s)\langle \tau _{t - \varepsilon -is}(B) C \rangle _{\beta , \mu , L} \\&\quad = i \int _{0}^\beta d s\, a(it + s)\langle \tau _{t-is}(B) C \rangle _{\beta , \mu , L}, \end{aligned} \end{aligned}$$
(4.43)

which proves Eq. (4.34). Let us now discuss the \(j>0\) case, Eq. (4.33). By the KMS identity and \(a(ir) = a(ir + \beta )\), we get:

$$\begin{aligned} \begin{aligned} \int _{-T}^{t} d r\,&\int _{\Delta ^{j}_{\beta }} d {\underline{s}}\, a(ir) \langle B_j\left( r - i s_1,\dots , r - i s_j \right) \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L}\\ =&\int _{-T}^{t} d r \int _{\Delta ^{j}_{\beta }} d {\underline{s}}\, a(ir) \langle B_j\left( r - i s_1,\dots , r - i s_j \right) \tau _{r} (B) C\rangle _{\beta , \mu , L} \\ {}&- \int _{-T}^{t} d r \int _{\Delta ^{j}_{\beta }} d {\underline{s}}\, a(i (r - i\beta )) \langle \tau _{r - i\beta }(B) B_j\left( r - i s_1,\dots , r - i s_j \right) C \rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.44)

We further rewrite this expression as, recalling the definition of \(B_{j}(\cdot )\), Eq. (4.31):

$$\begin{aligned} \begin{aligned} (4.44)&= \int _{\Delta ^{j}_{\beta }} d {\underline{s}}\int _{-T}^{t} d r\, \langle B_{j+1}\left( r - i s_1,\dots , r - i s_j, r \right) C \rangle _{\beta , \mu , L} \\ {}&\qquad - \int _{\Delta ^{j}_{\beta }} d {\underline{s}} \int _{-T}^{t} d r\, \langle B_{j+1}\left( r-i\beta , r - i s_1,\dots , r - i s_j \right) C \rangle _{\beta , \mu , L} \\ {}&=: L^{T}_1 - L^{T}_2. \end{aligned} \end{aligned}$$
(4.45)

Let us now introduce the change of variables, for \(1\le k < j\):

$$\begin{aligned} s_k' = s_{k+1}- s_1 + \beta ,\qquad \quad s_j' = \beta - s_1. \end{aligned}$$
(4.46)

Notice that, since \(\beta> s_{1}> s_{2}> \cdots> s_{j} > 0\), we also have \(\beta> s'_{1}> s'_{2}> \cdots> s'_{j} > 0\), that is \((s'_{1}, \ldots , s'_{j}) \in \Delta ^{j}_{\beta }\). In terms of these variables, for \(2\le k \le j\):

$$\begin{aligned} \begin{aligned} s_{1}&= \beta - s'_{j} \\ s_{k}&= s'_{k-1} + s_{1} - \beta \equiv s'_{k-1} - s'_{j}. \end{aligned} \end{aligned}$$
(4.47)

We then rewrite the term \(L_{1}^{T}\) in (4.45) as:

$$\begin{aligned} \begin{aligned} L^{T}_1&= \int _{\Delta ^{j}_{\beta }} d{\underline{s}}' \int _{-T}^{t} d r\, \\&\quad \langle B_{j+1}\left( r - i (\beta - s'_j), r - i (s'_1 - s'_j), \dots , r - i (s'_{j-1} - s'_j),r \right) C \rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.48)

Let us now introduce the function:

$$\begin{aligned} f_{(\beta ,s_1,\dots ,s_j)}(z):= \langle B_{j+1}(z - i(\beta - s_{j}), z - i (s_1 - s_{j}),\dots , z ) C\rangle _{\beta , \mu , L}. \end{aligned}$$
(4.49)

The function \(f_{(\beta ,s_1,\dots ,s_j)}(z)\) is analytic for \(\text {Re}\, z < 0\) and continuous on \(\text {Re}\, z\le 0\). We have:

$$\begin{aligned} L^{T}_{2} = \int _{\Delta ^{j}_{\beta }} d{\underline{s}} \int _{-T}^{t} dr\, f_{(\beta ,s_1,\dots ,s_j)}(r-is_{j}); \end{aligned}$$
(4.50)

also, relabelling the \(s'\) variables as s variables in Eq. (4.48):

$$\begin{aligned} L^{T}_{1} = \int _{\Delta ^{j}_{\beta }} d{\underline{s}} \int _{-T}^{t} dr\, f_{(\beta ,s_1,\dots ,s_j)}(r). \end{aligned}$$
(4.51)

As for the \(j=0\) case, we will use a complex deformation argument to rewrite \(L_{1}^{T} - L_{2}^{T}\) in a convenient way. To this end, let us now define the complex path for \((\text {Re}\, z, \text {Im}\, z)\), for \(\varepsilon > 0\) small enough:

$$\begin{aligned} \Gamma = (-T,0) \rightarrow (t -\varepsilon , 0) \rightarrow (t - \varepsilon , -s_{j}) \rightarrow (-T, -s_{j}) \rightarrow (-T,0). \end{aligned}$$
(4.52)

By continuity of \(f_{(\beta ,s_1,\dots ,s_j)}(z)\):

$$\begin{aligned} \begin{aligned}&L^{T}_{1} - L^{T}_{2} = \int _{\Delta ^{j}_{\beta }} d{\underline{s}} \lim _{\varepsilon \rightarrow 0^{+}}\Big [ \int _{-T}^{t - \varepsilon } dr\, f(r) - \int _{-T}^{t-\varepsilon } dr\, f(r-is_{j})\Big ] \\&\quad = i\int _{\Delta ^{j}_{\beta }} d{\underline{s}} \lim _{\varepsilon \rightarrow 0^{+}}\Big [ \int _{0}^{s_{j}} ds_{j+1}\, f(t - \varepsilon - is_{j+1}) - \int _{0}^{s_{j}} ds_{j+1}\, f(-T - is_{j+1})\Big ] \\&\quad = i\int _{\Delta ^{j}_{\beta }} d{\underline{s}}\, \Big [ \int _{0}^{s_{j}} ds_{j+1}\, f(t - is_{j+1}) - \int _{0}^{s_{j}} ds_{j+1}\, f(-T - is_{j+1})\Big ], \end{aligned}\nonumber \\ \end{aligned}$$
(4.53)

where the second identity follows from Cauchy's theorem and the last from the continuity of the integrand. The last term in the right-hand side of (4.53) vanishes as \(T\rightarrow \infty \). This is implied by the following estimate, recall Eq. (4.49):

$$\begin{aligned} | f_{(\beta ,s_1,\dots ,s_j)}(-T - i s_{j+1}) | \le \Vert \tilde{a} \Vert _{1}^{j+1} \Vert C \Vert \Vert B \Vert ^{j+1} e^{2\beta (j+1) \Vert {\mathscr {H}}\Vert } e^{- \frac{2\pi }{\beta } T(j+1)}. \end{aligned}$$
(4.54)

Consider now the first term in the right-hand side of (4.53). The integrand has the form, for a suitable function F, recall (4.49):

$$\begin{aligned} \begin{aligned}&f_{(\beta ,s_1,\dots ,s_j)}(t - i s_{j+1}) \\&\quad =F(t - i(\beta - s_{j} + s_{j+1}),\ldots , t - i (s_{k-1} - s_{j} + s_{j+1}),\ldots , t - i s_{j+1}). \end{aligned} \end{aligned}$$
(4.55)

Let us introduce the change of variables, for \(2 \le k \le j\):

$$\begin{aligned} s'_1=\beta -s_j+s_{j+1},\quad s'_k=s_{k-1}-s_j+s_{j+1}, \quad s'_{j+1}=s_{j+1}. \end{aligned}$$
(4.56)

We notice that \(\beta> s'_{1}> \ldots> s'_{k}> \ldots> s'_{j+1} > 0\). Thus, the first term in the right-hand side of (4.53) can be written as an integral over the simplex \(\Delta ^{j+1}_{\beta }\):

$$\begin{aligned} \begin{aligned}&i \int _{\Delta ^{j}_{\beta }} d{\underline{s}} \int _{0}^{s_j} d s_{j+1}\, f_{(\beta ,s_1,\dots ,s_j)}(t - i s_{j+1}) \\&\quad = i \int _{\Delta ^{j+1}_{\beta }} d{\underline{s}}'\, F(t - is'_{1},\ldots , t - i s'_{k},\ldots , t - i s'_{j+1}) \\&\quad = i\int _{\Delta ^{j+1}_{\beta }} d{\underline{s}}'\, \langle B_{j+1}(t - is'_{1},\dots , t - i s'_{k},\ldots , t - i s'_{j+1} ) C\rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.57)

All in all, from (4.45), (4.53), (4.54), (4.57), relabelling the \(s'\) variables as s variables:

$$\begin{aligned} \begin{aligned}&\int _{-\infty }^{t} d r\, \int _{\Delta ^{j}_{\beta }} d {\underline{s}}\, a(ir) \langle B_j\left( r - i s_1,\dots , r - i s_j \right) \left[ \tau _{r}(B),C \right] \rangle _{\beta , \mu , L} \\&\quad = L_{1}^{\infty } - L_{2}^{\infty } \\&\quad = i\int _{\Delta ^{j+1}_{\beta }} d{\underline{s}}\, \langle B_{j+1}(t - is_{1},\dots , t - i s_{k},\ldots , t - i s_{j+1} ) C\rangle _{\beta , \mu , L} \end{aligned} \end{aligned}$$
(4.58)

which concludes the proof of the proposition. \(\square \)
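Before moving on, we note that the \(j=0\) identity (4.34) can also be tested numerically in small dimension. The following sketch (ad hoc \(4\times 4\) Hermitian matrices, \(\mu = 0\), and a single Fourier mode \(a(s) = e^{-i\frac{2\pi }{\beta } s}\), so that \({\tilde{a}}(0)=0\)) compares the two sides of (4.34) by quadrature; it is only a consistency check under these simplifying assumptions, not part of the proof.

```python
# Consistency check (illustration only) of the j = 0 identity (4.34):
#   int_{-inf}^{t} dr a(ir) <[tau_r(B), C]> = i int_0^beta ds a(it+s) <tau_{t-is}(B) C>,
# with ad hoc 4x4 matrices, mu = 0 and a(s) = exp(-i*(2*pi/beta)*s).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, beta, t = 4, 2.0, -0.3
omega = 2 * np.pi / beta                      # smallest admissible frequency
H, B, C = rand_herm(d), rand_herm(d), rand_herm(d)
rho = expm(-beta * H); rho /= np.trace(rho)

avg = lambda A: np.trace(rho @ A)

def tau(A, z):
    """Heisenberg evolution at (possibly complex) time z: e^{iHz} A e^{-iHz}."""
    return expm(1j * H * z) @ A @ expm(-1j * H * z)

def trapz(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

# Left-hand side: truncate at r = -T, since a(ir) = e^{omega r} decays as r -> -inf
T = 15 * beta
r = np.linspace(-T, t, 6001)
lhs_vals = np.array([np.exp(omega * rr) * avg(tau(B, rr) @ C - C @ tau(B, rr)) for rr in r])
lhs = trapz(lhs_vals, r)

# Right-hand side: imaginary-time integral over [0, beta]; a(it+s) = e^{omega t} e^{-i omega s}
s = np.linspace(0.0, beta, 2001)
rhs_vals = np.array([np.exp(omega * t) * np.exp(-1j * omega * ss)
                     * avg(tau(B, t - 1j * ss) @ C) for ss in s])
rhs = 1j * trapz(rhs_vals, s)

print("lhs        =", lhs)
print("rhs        =", rhs)
print("difference =", abs(lhs - rhs))   # small: quadrature and truncation error only
```

The truncation at \(r = -T\) is harmless because \(a(ir) = e^{\frac{2\pi }{\beta } r}\) decays exponentially as \(r\rightarrow -\infty \), mirroring the role of the condition \({\tilde{a}}(0) = 0\) in the proof.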

Next, we use Proposition 4.4 to rewrite the coefficients appearing in the Duhamel expansion in terms of imaginary-time correlations.

Proposition 4.5

(Multiple complex deformation). Under the same assumptions as in Lemma 4.2, the following identity holds:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d{\underline{s}}\, \Big [ \prod _{i=1}^{n} a(is_{i}) \Big ] \langle [ \cdots [[\tau _{t}(A), \tau _{s_{1}}(B)], \tau _{s_{2}}(B)] \cdots \tau _{s_{n}}(B) ] \rangle _{\beta ,\mu ,L}\\&\quad = (-i)^n \int _{0}^{\beta } d s_1 \dots \int _0^{s_{n-1}} d s_n\, \Big [\prod _{j=1}^{n} a(it+s_{j})\Big ] \langle \gamma _{s_{1}}(B) \cdots \gamma _{s_{n}}(B) A\rangle _{\beta ,\mu ,L}. \end{aligned} \end{aligned}$$
(4.59)

Proof

To avoid confusion, in the proof we shall call \(\{r_{j}\}\) the variables corresponding to real-time integrations and \(\{s_{j}\}\) the variables corresponding to imaginary-time integrations. To simplify the notations, we will omit the \(\beta ,\mu ,L\) subscript in the Gibbs state. Using the notation (4.30), we rewrite:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le r_{n} \le \ldots \le r_{1} \le t} d{\underline{r}}\, \Big [ \prod _{i=1}^{n} a(ir_{i}) \Big ] \langle [ \cdots [[\tau _{t}(A), \tau _{r_{1}}(B)], \tau _{r_{2}}(B)] \cdots \tau _{r_{n}}(B) ] \rangle \\&\quad \equiv \int _{-\infty \le r_{n} \le \ldots \le r_{1} \le t} d{\underline{r}}\, a(ir_{n}) \langle [ C_{n-1}(r_1,\dots , r_{n-1}), \tau _{r_{n}} (B) ]\rangle . \end{aligned} \end{aligned}$$
(4.60)

We have:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le r_{n} \le \ldots \le r_{1} \le t} d{\underline{r}}\, a(ir_{n}) \langle [ C_{n-1}(r_1,\dots ,r_{n-1}), \tau _{r_n} (B) ]\rangle \\&\quad = -\int _{-\infty }^{t} d r_1 \dots \int _{-\infty }^{r_{n-2}} d r_{n-1} \int _{-\infty }^{r_{n-1}} dr_{n} a(ir_{n}) \langle [ \tau _{r_n} (B), C_{n-1}(r_1,\dots ,r_{n-1}) ]\rangle \\&\quad = -i \int _{-\infty }^{t} d r_1 \dots \int _{-\infty }^{r_{n-2}} d r_{n-1} \int _{0}^{\beta } d s_1\, \langle B_1(r_{n-1} - i s_1) C_{n-1}(r_1,\dots , r_{n-1})\rangle \end{aligned}\nonumber \\ \end{aligned}$$
(4.61)

where the last equality follows from Proposition 4.4 for \(j=0\), applied to the \(r_{n}\) integration. Next, using again (4.30), we write:

$$\begin{aligned} C_{n-1}(r_1,\dots , r_{n-1}) = -a(ir_{n-1}) [ \tau _{r_{n-1}}(B), C_{n-2}(r_1,\dots , r_{n-2}) ] \end{aligned}$$
(4.62)

and hence:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le r_{n} \le \ldots \le r_{1} \le t} d{\underline{r}}\, a(ir_{n}) \langle [ C_{n-1}(r_1,\dots ,r_{n-1}), \tau _{r_n} (B) ]\rangle \\&\quad = i \int _{-\infty }^{t} d r_1 \dots \int _{-\infty }^{r_{n-2}} d r_{n-1}\, a(ir_{n-1}) \\&\qquad \cdot \int _{0}^{\beta } d s_1\, \langle B_1(r_{n-1} - i s_1) [ \tau _{r_{n-1}}(B), C_{n-2}(r_1,\dots , r_{n-2}) ]\rangle \\&\quad = (i)^{2} \int _{-\infty }^{t} d r_1 \dots \int _{-\infty }^{r_{n-3}} d r_{n-2} \\&\qquad \cdot \int _{\Delta _{\beta }^{2}} d{\underline{s}}\, \langle B_{2}(r_{n-2} - is_{1}, r_{n-2} - is_{2}) C_{n-2}(r_1,\dots , r_{n-2}) \rangle , \end{aligned} \end{aligned}$$
(4.63)

where in the last step we applied Proposition 4.4 for \(j=1\) to the \(r_{n-1}\) integration. We continue applying Proposition 4.4 until all commutators are exhausted. We find:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le r_{n} \le \ldots \le r_{1} \le t} d{\underline{r}}\, a(ir_{n}) \langle [ C_{n-1}(r_1,\dots ,r_{n-1}), \tau _{r_n} (B) ]\rangle \\&\qquad = (-i)^{n} \int _{\Delta ^{n}_{\beta }} d{\underline{s}}\, \langle B_{n} (t-is_1,\dots , t-is_n) \tau _{t}(A) \rangle . \end{aligned} \end{aligned}$$
(4.64)

To conclude, recall that by Eq. (4.31):

$$\begin{aligned} \begin{aligned} B_{n} (t-is_1,\dots , t-is_n)&= \Big [ \prod _{j=1}^{n} a(it + s_{j}) \Big ] \tau _{t - is_{1}}(B) \tau _{t - is_{2}}(B) \cdots \tau _{t - is_{n}}(B) \\&\equiv \Big [ \prod _{j=1}^{n} a(it + s_{j}) \Big ] \tau _{t} \Big ( \gamma _{s_{1}}(B) \gamma _{s_{2}} (B) \cdots \gamma _{s_{n}}(B) \Big ) \end{aligned} \end{aligned}$$
(4.65)

where in the last step we used that B commutes with the number operator, which implies \(\tau _{-is}(B) = \gamma _{s}(B)\). Plugging (4.65) into the right-hand side of (4.64), and using the invariance of the Gibbs state under time-evolution, the final claim (4.59) follows. \(\square \)

Next, we rewrite the imaginary-time expressions appearing after the Wick rotation as connected correlation functions. We recall the following relation between the expectation value of a product of operators, and the truncated expectations:

$$\begin{aligned} \langle O_{i_{1}} \cdots O_{i_{n}} \rangle = \sum _{P} \prod _{J\in P} \langle O(J) \rangle ^{T} \end{aligned}$$
(4.66)

where P are partitions of the ordered set \(\{i_1,\dots , i_{n}\}\), with elements \(J = \{ j_{1}, \ldots , j_{|J|} \}\) which inherit the order of \(\{i_1,\dots , i_{n}\}\), and

$$\begin{aligned} \langle O(J) \rangle ^{T}:= \langle O_{j_{1}}; O_{j_{2}}; \cdots ; O_{j_{|J|}} \rangle . \end{aligned}$$
(4.67)

The next result is a straightforward consequence of the definition of truncated expectation. We shall use the notation, for \(J = \{ j_{1}, \ldots , j_{m} \}\):

$$\begin{aligned} B(-i{\underline{s}}_{J}):= \gamma _{s_{j_1}}(B) \cdots \gamma _{s_{j_{m}}}(B). \end{aligned}$$
(4.68)

Proposition 4.6

(Factorization property). The following identity holds true:

$$\begin{aligned} \begin{aligned}&\langle \gamma _{s_{1}}(B) \gamma _{s_{2}} (B) \cdots \gamma _{s_{n}}(B) A\rangle \\&\quad = \sum _{J\subseteq \{1,\ldots , n\}} \big \langle \gamma _{s_{j_1}}(B); \gamma _{s_{j_2}} (B); \cdots ; \gamma _{s_{j_{|J|}}}(B); A \big \rangle \big \langle B(-i{\underline{s}}_{\{1,\ldots , n\} \setminus J}) \big \rangle \end{aligned} \end{aligned}$$
(4.69)

where the sum is over ordered subsets of \(\{1, \ldots , n\}\).

Proof

Let:

$$\begin{aligned} O_{1} = \gamma _{s_{1}}(B),\quad O_{2} = \gamma _{s_{2}}(B),\quad \ldots ,\quad O_{n}=\gamma _{s_{n}}(B),\quad O_{n+1} = A. \end{aligned}$$
(4.70)

From (4.66):

$$\begin{aligned} \begin{aligned} \langle O_{1} O_{2} \cdots O_{n+1} \rangle&= \sum _{P} \prod _{J\in P} \langle O(J) \rangle ^{T}\\&= \sum _{P} \langle O(J_{n+1}) \rangle ^{T} \prod _{J\in P:\, n+1\notin J} \langle O(J) \rangle ^{T} \end{aligned} \end{aligned}$$
(4.71)

where the sum is over partitions P of \(\{1, \ldots , n+1\}\), and J are the elements of the partition. In particular, \(J_{n+1}\) is the element of the partition that contains \(n+1\). The right-hand side of (4.71) can be rewritten as:

$$\begin{aligned} \langle O_{1} O_{2} \cdots O_{n+1} \rangle = \sum _{J_{n+1}} \langle O(J_{n+1}) \rangle ^{T} \sum _{{\widetilde{P}}\, \text {of}\, \{1,\ldots , n+1\} \setminus J_{n+1}} \prod _{J\in {\widetilde{P}}} \langle O(J) \rangle ^{T} \end{aligned}$$
(4.72)

which we rewrite as, using again (4.66):

$$\begin{aligned} \begin{aligned}&\langle O_{1} O_{2} \cdots O_{n+1} \rangle \\&\quad = \sum _{J \subseteq \{1,\ldots , n\}} \langle \gamma _{s_{j_1}}(B); \gamma _{s_{j_2}} (B); \cdots ; \gamma _{s_{j_{|J|}}}(B); A \rangle ^{T} \Big \langle \prod _{j\in \{1,\ldots , n\} \setminus J} O_{j} \Big \rangle . \end{aligned} \end{aligned}$$
(4.73)

This concludes the proof of the proposition. \(\square \)
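For instance, for \(n=2\), grouping the partitions in (4.66) according to the block containing A, identity (4.69) reads:

$$\begin{aligned} \begin{aligned} \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B) A\rangle&= \langle \gamma _{s_{1}}(B); \gamma _{s_{2}}(B); A \rangle + \langle \gamma _{s_{1}}(B); A \rangle \langle \gamma _{s_{2}}(B) \rangle \\&\quad + \langle \gamma _{s_{2}}(B); A \rangle \langle \gamma _{s_{1}}(B) \rangle + \langle A \rangle \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B) \rangle . \end{aligned} \end{aligned}$$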

The next proposition allows us to rewrite the right-hand side of (4.59) in terms of truncated correlation functions, in Euclidean time.

Proposition 4.7

(Reduction to connected Euclidean correlations). Under the same assumptions as in Lemma 4.2, the following identity holds:

$$\begin{aligned} \begin{aligned}&\int _{\Delta ^{n}_\beta } d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it+s_{j}) \Big ] \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B) \cdots \gamma _{s_{n}}(B) A\rangle _{\beta , \mu , L} \\&\quad = \int _{\Delta ^{n}_{\beta }} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it+s_{j})\Big ] \langle \gamma _{s_{1}}(B); \gamma _{s_{2}}(B); \cdots ; \gamma _{s_{n}}(B); A \rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.74)

Proof

We omit the \(\beta , \mu , L\) labels for simplicity. By Proposition 4.6 we have:

$$\begin{aligned} \begin{aligned}&\langle \gamma _{s_{1}}(B) \gamma _{s_{2}} (B) \cdots \gamma _{s_{n}}(B) A\rangle \\&\quad = \sum _{J\subseteq \{1,\ldots , n\}} \big \langle \gamma _{s_{j_1}}(B); \gamma _{s_{j_2}} (B); \cdots ; \gamma _{s_{j_{|J|}}}(B); A \big \rangle \big \langle B(-i{\underline{s}}_{\{1,\ldots , n\} \setminus J}) \big \rangle . \end{aligned} \end{aligned}$$
(4.75)

Hence, we can rewrite the left-hand side of (4.74) as:

$$\begin{aligned} \begin{aligned}&\int _{\Delta ^{n}_\beta } d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it+s_{j}) \Big ] \langle \gamma _{s_{1}}(B) \gamma _{s_{2}} (B) \cdots \gamma _{s_{n}}(B) A\rangle \\&\quad =\sum _{m=0}^{n} \int _{\Delta ^{n}_\beta } d{\underline{s}}\, \sum _{\begin{array}{c} J \subseteq \{1,\ldots , n\} \\ |J| = m \end{array}} \Big [\prod _{j\in J} a(it+s_{j}) \Big ] \big \langle \gamma _{s_{j_1}}(B); \cdots ; \gamma _{s_{j_{m}}}(B); A \big \rangle \\&\qquad \cdot \Big [\prod _{j\in \{1,\ldots , n\} \setminus J} a(it+s_{j}) \Big ] \big \langle B(-i{\underline{s}}_{\{1,\ldots , n\} \setminus J}) \big \rangle . \end{aligned} \end{aligned}$$
(4.76)

Next, we shall use the following identity:

$$\begin{aligned} \begin{aligned}&\int _{\Delta ^{n}_\beta } d{\underline{s}}\, \sum _{\begin{array}{c} J \subseteq \{1,\ldots , n\} \\ |J| = m \end{array}} \Big [\prod _{j\in J} a(it+s_{j}) \Big ] \big \langle \gamma _{s_{j_1}}(B); \cdots ; \gamma _{s_{j_{m}}}(B); A \big \rangle \\&\qquad \cdot \Big [\prod _{j\in \{1,\ldots , n\} \setminus J} a(it+s_{j}) \Big ] \big \langle B(-i{\underline{s}}_{\{1,\ldots , n\} \setminus J}) \big \rangle \\&\quad = \int _{\Delta ^{m}_\beta } d{\underline{s}}\, \Big [ \prod _{i=1}^{m} a(it+s_{i})\Big ] \big \langle \gamma _{s_{1}}(B); \cdots ; \gamma _{s_{m}}(B); A \big \rangle \\&\qquad \cdot \int _{\Delta ^{n-m}_\beta } d{\underline{s}}\, \Big [ \prod _{i=m+1}^{n} a(it+s_{i})\Big ] \langle \gamma _{s_{m+1}}(B) \gamma _{s_{m+2}}(B)\cdots \gamma _{s_{n}}(B) \rangle . \end{aligned} \end{aligned}$$
(4.77)

Equation (4.77) is obtained via the application of Proposition B.1 in Appendix B, with the following choices for the functions f and g:

$$\begin{aligned} \begin{aligned} f(s_{1}, \ldots , s_{m})&= \Big [ \prod _{i=1}^{m} a(it+s_{i})\Big ] \big \langle \gamma _{s_{1}}(B); \cdots ; \gamma _{s_{m}}(B); A \big \rangle \\ g(s_{m+1}, \ldots , s_{n})&= \Big [ \prod _{i=m+1}^{n} a(it+s_{i})\Big ] \langle \gamma _{s_{m+1}}(B) \gamma _{s_{m+2}}(B)\cdots \gamma _{s_{n}}(B) \rangle . \end{aligned} \end{aligned}$$
(4.78)

We claim that, for all \(k>0\):

$$\begin{aligned} \int _{\Delta ^{k}_\beta } d{\underline{s}}\, \Big [ \prod _{i=1}^{k} a(it+s_{i})\Big ] \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B)\cdots \gamma _{s_{k}}(B) \rangle = 0. \end{aligned}$$
(4.79)

Combined with (4.76), (4.77), this implies the final statement, Eq. (4.74): the only term contributing to the sum over m in Eq. (4.76) is \(m=n\). To prove Eq. (4.79), we proceed as follows. First, we write:

$$\begin{aligned} \begin{aligned}&\int _{\Delta ^{k}_\beta } d{\underline{s}}\, \Big [ \prod _{i=1}^{k} a(it+s_{i})\Big ] \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B)\cdots \gamma _{s_{k}}(B) \rangle \\&\quad = \frac{1}{k!} \int _{[0,\beta ]^{k}} d{\underline{s}}\, \Big [ \prod _{i=1}^{k} a(it+s_{i})\Big ] \sum _{\pi } \mathbb {1}(s_{\pi (1)}> s_{\pi (2)}> \ldots > s_{\pi (k)}) \\&\qquad \cdot \langle \gamma _{s_{\pi (1)}}(B) \gamma _{s_{\pi (2)}}(B)\cdots \gamma _{s_{\pi (k)}}(B) \rangle \end{aligned} \nonumber \\ \end{aligned}$$
(4.80)

where the sum is over permutations of \(\{1, \ldots , k\}\). Let:

$$\begin{aligned} \begin{aligned}&G(s_{1}, \ldots , s_{k}) \\&\quad := \sum _{\pi } \mathbb {1}(s_{\pi (1)}> s_{\pi (2)}> \ldots > s_{\pi (k)}) \langle \gamma _{s_{\pi (1)}}(B) \gamma _{s_{\pi (2)}}(B)\cdots \gamma _{s_{\pi (k)}}(B) \rangle . \end{aligned} \end{aligned}$$
(4.81)

We claim that G is \(\beta \)-periodic in all its arguments:

$$\begin{aligned} G(s_{1}, \ldots , s_{i-1}, 0, s_{i+1}, \ldots , s_{k}) = G(s_{1}, \ldots , s_{i-1}, \beta , s_{i+1}, \ldots , s_{k}). \end{aligned}$$
(4.82)

In particular, the function G extends to a periodic function on \({\mathbb {R}}^{k}\), with period \(\beta \) in all variables. With a slight abuse of notation, we denote this periodic extension again by G. Notice that the function \(s\mapsto a(it+s)\) is also periodic with period \(\beta \) (recall the definition (4.28)), which means that the whole integrand in the right-hand side of (4.80) can be extended to a \(\beta \)-periodic function in all its arguments. Furthermore, we claim that, for all \(\sigma \in {\mathbb {R}}\):

$$\begin{aligned} G(s_{1}, s_{2}, \ldots , s_{k}) = G(s_{1} + \sigma , s_{2} + \sigma , \ldots , s_{k} + \sigma ), \end{aligned}$$
(4.83)

that is, the function G is translation invariant. Both (4.82), (4.83) are well known; they ultimately follow from the KMS identity. For the sake of completeness, Eqs. (4.82), (4.83) will be reviewed in Appendix B, Proposition B.2. Thus, one gets:

$$\begin{aligned} \begin{aligned}&\int _{\Delta ^{k}_\beta } d{\underline{s}}\, \Big [ \prod _{i=1}^{k} a(it+s_{i})\Big ] \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B)\cdots \gamma _{s_{k}}(B) \rangle \\&\quad \equiv \frac{1}{k!} \int _{(S^{1}_{\beta })^{k}} d{\underline{s}}\, \Big [ \prod _{i=1}^{k} a(it+s_{i})\Big ] G(s_{1}, s_{2}, \ldots , s_{k}), \end{aligned} \end{aligned}$$
(4.84)

where \(S^{1}_{\beta } = {\mathbb {R}} / \beta {\mathbb {Z}}\). We rewrite this expression as:

$$\begin{aligned} \begin{aligned}&\frac{1}{k!} \sum _{\omega _{i} \in \frac{2\pi }{\beta } {\mathbb {N}}} \Big [ \prod _{i=1}^{k} {\tilde{a}}(\omega _{i}) e^{\omega _{i} t}\Big ] \\&\qquad \cdot \int _{(S^{1}_{\beta })^{k}} d{\underline{s}}\, e^{-i\sum _{j=1}^{k} \omega _{j} s_{1} } e^{-i\sum _{j=1}^{k} \omega _{j} (s_{j} - s_{1})} G(0, s_{2} - s_{1}, \ldots , s_{k} - s_{1}) \\&\quad = \frac{1}{k!} \sum _{\omega _{i} \in \frac{2\pi }{\beta } {\mathbb {N}}} \Big [ \prod _{i=1}^{k} {\tilde{a}}(\omega _{i}) e^{\omega _{i} t}\Big ] \\&\qquad \cdot \int _{S^{1}_{\beta }} ds_{1}\, e^{-i \sum _{j=1}^{k} \omega _{j} s_{1} }\int _{(S^{1}_{\beta })^{k-1}} d{\underline{s}}\,e^{-i\sum _{j=2}^{k} \omega _{j} s_{j}} G(0, s_{2}, \ldots , s_{k}), \end{aligned} \end{aligned}$$
(4.85)

where in the last step we used that \(e^{-i\sum _{j=1}^{k} \omega _{j} (s_{j} - s_{1})} G(0, s_{2} - s_{1}, \ldots , s_{k} - s_{1})\), as a function of \(s_{j}\), \(j=2,\ldots , k\), is a function on \((S^{1}_{\beta })^{k-1}\). Then, the claim (4.79) follows from (recall that we can assume \(\omega _{j} \ge \frac{2\pi }{\beta }\), since \({\tilde{a}}(0) = 0\)):

$$\begin{aligned} \int _{S^{1}_{\beta }} ds_{1}\, e^{-i \sum _{j=1}^{k} \omega _{j} s_{1} } = 0. \end{aligned}$$
(4.86)

This concludes the proof of Proposition 4.7. \(\square \)

Remark 4.8

By the same arguments used in the proof of Proposition 4.7, Eq. (4.74) can also be rephrased as:

$$\begin{aligned} \begin{aligned}&\int _{\Delta ^{n}_\beta } d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it+s_{j}) \Big ] \langle \gamma _{s_{1}}(B) \gamma _{s_{2}}(B) \cdots \gamma _{s_{n}}(B) A\rangle _{\beta , \mu , L} \\&\quad = \frac{1}{n!} \int _{(S^{1}_{\beta })^{n}} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it+s_{j}) \Big ] \langle \textbf{T} \gamma _{s_{1}}(B); \gamma _{s_{2}}(B); \cdots \gamma _{s_{n}}(B); A\rangle _{\beta , \mu , L} \end{aligned} \end{aligned}$$
(4.87)

where \(\textbf{T}\) denotes the time-ordering, as defined in Eq. (2.10). This is initially defined for operators whose imaginary-time arguments are in \([0,\beta )\). The resulting expression is then extended to a periodic function with period \(\beta \) on the whole \({\mathbb {R}}^{n}\). See Proposition B.2 and Remark B.3 for further details.

We are now ready to prove Lemma 4.2.

Proof of Lemma 4.2

By Proposition 4.5:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(is_{j})\Big ] \langle [ \cdots [[\tau _{t}(A), \tau _{s_{1}}(B)], \tau _{s_{2}}(B)] \cdots \tau _{s_{n}}(B) ] \rangle _{\beta ,\mu ,L} \\&\quad = (-i)^{n} \int _{0}^{\beta } d s_1 \dots \int _0^{s_{n-1}} d s_n\, \Big [\prod _{j=1}^{n} a(it + s_{j})\Big ] \langle \gamma _{s_{1}}(B) \cdots \gamma _{s_{n}}(B) A\rangle _{\beta ,\mu ,L}. \end{aligned} \end{aligned}$$
(4.88)

Next, by Proposition 4.7:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(is_{j})\Big ] \langle [ \cdots [[\tau _{t}(A), \tau _{s_{1}}(B)], \tau _{s_{2}}(B)] \cdots \tau _{s_{n}}(B) ] \rangle _{\beta ,\mu ,L}\\&\quad = (-i)^{n} \int _{0}^{\beta } d s_1 \dots \int _0^{s_{n-1}} d s_n\, \Big [\prod _{j=1}^{n} a(it + s_{j})\Big ] \langle \gamma _{s_{1}}(B); \cdots ; \gamma _{s_{n}}(B); A\rangle _{\beta ,\mu ,L}. \end{aligned} \end{aligned}$$
(4.89)

Finally, by Remark 4.8:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(is_{j})\Big ] \langle [ \cdots [[\tau _{t}(A), \tau _{s_{1}}(B)], \tau _{s_{2}}(B)] \cdots \tau _{s_{n}}(B) ] \rangle _{\beta ,\mu ,L} \\&\quad = \frac{(-i)^{n}}{n!}\int _{(S^{1}_{\beta })^{n}} d{\underline{s}}\, \Big [\prod _{j=1}^{n} a(it + s_{j})\Big ] \langle \textbf{T} \gamma _{s_{1}}(B); \cdots ; \gamma _{s_{n}}(B); A\rangle _{\beta ,\mu ,L}, \end{aligned} \end{aligned}$$
(4.90)

which concludes the proof of Lemma 4.2. \(\square \)

4.4 Cumulant expansion for the instantaneous Gibbs state

In this section we shall review the well-known cumulant expansion for the Gibbs state of the Hamiltonian \({\mathscr {H}}(\eta t)\),

$$\begin{aligned} {\mathscr {H}}(\eta t) = {\mathscr {H}} + \varepsilon g(\eta t) {\mathscr {P}}, \end{aligned}$$
(4.91)

that is:

$$\begin{aligned} \langle {\mathscr {O}}_{X} \rangle _{t} = \frac{ {\text {Tr}}{\mathscr {O}}_{X} e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} }{ {\text {Tr}}\, e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} }. \end{aligned}$$
(4.92)

Perturbation theory in \(\varepsilon \) is generated by the following chain of identities:

$$\begin{aligned} \begin{aligned}&e^{\beta ({\mathscr {H}}- \mu {\mathscr {N}})} e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} - \mathbb {1} \\&\quad = \int _{0}^{\beta } ds\, \frac{\partial }{\partial s} e^{s ({\mathscr {H}}- \mu {\mathscr {N}})} e^{-s ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} \\&\quad = -\varepsilon g(\eta t) \int _{0}^{\beta } ds\, e^{s ({\mathscr {H}}- \mu {\mathscr {N}})} {\mathscr {P}} e^{-s ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} \\&\quad = -\varepsilon g(\eta t) \int _{0}^{\beta } ds\, e^{s ({\mathscr {H}}- \mu {\mathscr {N}})} {\mathscr {P}} e^{-s ({\mathscr {H}} - \mu {\mathscr {N}})} e^{s ({\mathscr {H}} - \mu {\mathscr {N}})} e^{-s ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} \\&\quad \equiv -\varepsilon g(\eta t) \int _{0}^{\beta } ds\, \gamma _{s}({\mathscr {P}}) e^{s ({\mathscr {H}}- \mu {\mathscr {N}})} e^{-s ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})}. \end{aligned} \end{aligned}$$
(4.93)

Iterating:

$$\begin{aligned} \begin{aligned}&e^{\beta ({\mathscr {H}}- \mu {\mathscr {N}})} e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} \\&\quad = \mathbb {1} + \sum _{n\ge 1} (-\varepsilon g(\eta t))^{n} \int _{0}^{\beta } ds_{1} \int _{0}^{s_{1}} ds_{2} \cdots \int _{0}^{s_{n-1}} ds_{n}\, \gamma _{s_{1}}({\mathscr {P}})\cdots \gamma _{s_{n}}({\mathscr {P}}) \end{aligned} \end{aligned}$$
(4.94)

which we can also write as:

$$\begin{aligned} \begin{aligned}&e^{-\beta ({\mathscr {H}}(\eta t) - \mu {\mathscr {N}})} \\ {}&\qquad = e^{-\beta ({\mathscr {H}}- \mu {\mathscr {N}})}\Big [ \mathbb {1} + \sum _{n\ge 1} \frac{(-\varepsilon g(\eta t))^{n}}{n!} \int _{[0,\beta )^{n}} d{\underline{s}}\, \textbf{T} \gamma _{s_{1}}({\mathscr {P}})\cdots \gamma _{s_{n}}({\mathscr {P}}) \Big ]. \end{aligned} \end{aligned}$$
(4.95)

For finite L and finite \(\beta \), the series is norm convergent, thanks to the boundedness of the fermionic operators. Thus, the expectation value of a local operator on the Gibbs state of \({\mathscr {H}}(\eta t)\) can be written as:

$$\begin{aligned} \langle {\mathscr {O}}_{X} \rangle _{t} = \frac{\langle {\mathscr {O}}_{X} \rangle _{\beta ,\mu ,L} + \sum _{n\ge 1} \frac{(-\varepsilon g(\eta t))^{n}}{n!}\int _{[0,\beta )^{n}} d{\underline{t}}\, \langle \textbf{T} \gamma _{t_{1}}({\mathscr {P}})\cdots \gamma _{t_{n}}({\mathscr {P}}) {\mathscr {O}}_{X} \rangle _{\beta , \mu , L}}{1 + \sum _{n\ge 1} \frac{(-\varepsilon g(\eta t))^{n}}{n!}\int _{[0,\beta )^{n}} d{\underline{t}}\, \langle \textbf{T} \gamma _{t_{1}}({\mathscr {P}})\cdots \gamma _{t_{n}}({\mathscr {P}})\rangle _{\beta , \mu , L}}\nonumber \\ \end{aligned}$$
(4.96)

which is analytic in \(\varepsilon \) for \(|\varepsilon |\) small enough. We would like to show that analyticity in \(\varepsilon \) extends to a disk whose radius is bounded below uniformly in L. To this end, Eq. (4.96) can be further rewritten as (omitting the \(\beta , \mu , L\) labels in the state and the \([0,\beta )^{n}\) domain in the integral):

$$\begin{aligned} \begin{aligned}&\langle {\mathscr {O}}_{X} \rangle _{t} \\&\quad = \frac{\partial }{\partial \zeta } \log \Big ( \sum _{n,m \ge 0} \frac{(-\varepsilon g(\eta t))^{n}}{n!} \frac{\zeta ^{m}}{m!} \int d{\underline{s}}\, \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}})\cdots \gamma _{s_{n}}({\mathscr {P}}) {\mathscr {O}}_{X}^{m} \rangle \Big )\Big |_{\zeta = 0} \\&\quad = \sum _{n\ge 0} \frac{\varepsilon ^{n}}{n!} \\&\qquad \cdot \frac{\partial ^{n}}{\partial \varepsilon ^{n}} \frac{\partial }{\partial \zeta } \log \Big ( \sum _{\ell ,m \ge 0}\frac{(-\varepsilon g(\eta t))^{\ell }}{\ell !} \frac{\zeta ^{m}}{m!} \int d{\underline{s}}\, \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}})\cdots \gamma _{s_{\ell }}({\mathscr {P}}) {\mathscr {O}}_{X}^{m} \rangle \Big )\Big |_{\begin{array}{c} \varepsilon = 0 \\ \zeta = 0 \end{array}}. \end{aligned} \end{aligned}$$
(4.97)

Then, it is not difficult to see that the right-hand side can be written as a sum over time-ordered cumulants, defined as in Eq. (2.14). We have:

$$\begin{aligned} \langle {\mathscr {O}}_{X} \rangle _{t} = \langle {\mathscr {O}}_{X} \rangle + \sum _{n\ge 1} \frac{(-\varepsilon g(\eta t))^{n}}{n!} \int d{\underline{s}}\, \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle . \end{aligned}$$
(4.98)

Under the assumption (3.2), the series converges for \(|\varepsilon |\) small enough, uniformly in L. By using Lemma 4.2, we will show that, for \(\eta \) small enough, the Duhamel series of the auxiliary dynamics is term-by-term close to the cumulant expansion of the instantaneous Gibbs state, Eq. (4.98).
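As a sanity check of (4.98) at first order (again, not needed for the proof), one can compare the exact Gibbs expectation for \({\mathscr {H}} + \varepsilon {\mathscr {P}}\) with the truncated series on a toy system. The sketch below uses ad hoc \(4\times 4\) Hermitian matrices, \(\mu = 0\) and \(g(\eta t) \equiv 1\); for \(n=1\) the integrated time-ordered cumulant equals \(\int _{0}^{\beta } ds\, \big ( \langle \gamma _{s}({\mathscr {P}}) {\mathscr {O}}_{X} \rangle - \langle {\mathscr {P}} \rangle \langle {\mathscr {O}}_{X} \rangle \big )\), which is what the code integrates.

```python
# Toy check (illustration only) of the cumulant expansion (4.98) at first order:
#   <O>_t  ~  <O>  -  eps * int_0^beta ds ( <gamma_s(P) O> - <P><O> ),
# with ad hoc 4x4 matrices, mu = 0 and g(eta t) = 1.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, beta, eps = 4, 2.0, 1e-3
H, P, O = rand_herm(d), rand_herm(d), rand_herm(d)

def gibbs_avg(A, K):
    """<A> in the Gibbs state of K at inverse temperature beta (mu = 0)."""
    rhoK = expm(-beta * K)
    return (np.trace(rhoK @ A) / np.trace(rhoK)).real

exact = gibbs_avg(O, H + eps * P)         # instantaneous Gibbs expectation <O>_t
zeroth = gibbs_avg(O, H)

# First-order coefficient: -int_0^beta ds ( <gamma_s(P) O> - <P><O> )
rho0 = expm(-beta * H); rho0 /= np.trace(rho0)
s = np.linspace(0.0, beta, 2001)
vals = np.array([np.trace(rho0 @ expm(ss * H) @ P @ expm(-ss * H) @ O).real for ss in s])
connected = vals - gibbs_avg(P, H) * zeroth
first = -np.sum((connected[1:] + connected[:-1]) * np.diff(s)) / 2

print("exact - zeroth              =", exact - zeroth)
print("exact - zeroth - eps*first  =", exact - zeroth - eps * first)   # should be O(eps^2)
```

The first printed difference is of order \(\varepsilon \), while the second should be of order \(\varepsilon ^{2}\), consistently with the \(n\ge 2\) terms of (4.98).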

4.5 Conclusion: proof of Theorem 3.7

We are now ready to prove our main result, Theorem 3.7.

Proof of Theorem 3.7

By Proposition 4.1 we have, for all \(t\le 0\):

$$\begin{aligned} \begin{aligned} {\text {Tr}}{\mathscr {O}}_{X} \rho (t)&= {\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) + R_{1}(t) \\ |R_{1}(t)|&\le \frac{K|\varepsilon |}{\eta ^{d+2}\beta }, \end{aligned} \end{aligned}$$
(4.99)

where \({\tilde{\rho }}(t)\) is the evolution of the equilibrium state under the Hamiltonian \({{\mathscr {H}}}_{\beta ,\eta }(t)\), Eq. (4.11). Next, we rewrite the first term on the right-hand side of (4.99) via its Duhamel series, as discussed in Sect. 4.1. We have, from Eq. (4.9), replacing \(g(\eta t)\) with \(g_{\beta , \eta }(t)\):

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) \\&\quad = {\text {Tr}}{\mathscr {O}}_{X} \rho _{\beta , \mu , L} + \sum _{n=1}^{\infty } (-i\varepsilon )^{n} \int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d {\underline{s}}\, \Big [ \prod _{i=1}^{n} g_{\beta , \eta }(s_{i}) \Big ] \\&\qquad \cdot \langle [ \cdots [[\tau _{t}({\mathscr {O}}_{X}), \tau _{s_{1}}({\mathscr {P}})], \tau _{s_{2}}({\mathscr {P}})] \cdots \tau _{s_{n}}({\mathscr {P}}) ] \rangle _{\beta , \mu , L}. \end{aligned} \end{aligned}$$
(4.100)

Consider the integral appearing in the \(n\)-th order term of (4.100). We apply Lemma 4.2, choosing:

$$\begin{aligned} A = {\mathscr {O}}_{X},\qquad B = {\mathscr {P}},\qquad a(s) = g_{\beta , \eta }(-is). \end{aligned}$$
(4.101)

We have, omitting the \(\beta , \mu , L\) labels:

$$\begin{aligned} \begin{aligned}&\int _{-\infty \le s_{n} \le \ldots \le s_{1} \le t} d {\underline{s}}\, \Big [ \prod _{i=1}^{n} g_{\beta , \eta }(s_{i}) \Big ] \langle [ \cdots [[\tau _{t}({\mathscr {O}}_{X}), \tau _{s_{1}}({\mathscr {P}})], \tau _{s_{2}}({\mathscr {P}})] \cdots \tau _{s_{n}}({\mathscr {P}}) ] \rangle \\&\quad = \frac{(-i)^{n}}{n!} \int _{[0,\beta )^{n}} d{\underline{s}}\, \Big [\prod _{j=1}^{n} g_{\beta , \eta }(t-is_{j})\Big ] \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle . \end{aligned} \end{aligned}$$
(4.102)

Equations (4.99), (4.100), (4.102) prove the identity (3.14). The estimate (3.17) follows from the bound in (4.99). To prove the bound (3.16), we use that:

$$\begin{aligned} \begin{aligned}&|I^{(n)}_{\beta ,\mu ,L}(\eta ,t)| \\&\quad = \Big |\int _{[0,\beta )^{n}} d{\underline{s}}\, \Big [\prod _{j=1}^{n} g_{\beta , \eta }(t-is_{j})\Big ] \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle \Big | \\&\quad \le \Vert h \Vert _{1}^{n} \sum _{\begin{array}{c} X_{1}, \ldots , X_{n} \subseteq \Lambda _{L} \\ \text {diam} X_{i} \le R \end{array}} \int _{[0,\beta )^{n}} d{\underline{s}}\, \Big |\langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}_{X_{1}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}_{X_{n}}); {\mathscr {O}}_{X} \rangle \Big | \\&\quad \le \Vert h \Vert _{1}^{n} {\mathfrak {c}}^{n} n! \end{aligned} \end{aligned}$$
(4.103)
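
The first inequality in (4.103) uses the bound \(|g_{\beta ,\eta }(t-is_{j})| \le \Vert h \Vert _{1}\), cf. (3.9), together with the decomposition of \({\mathscr {P}}\) into local terms \({\mathscr {P}}_{X_{i}}\) with \(\text {diam}\, X_{i} \le R\) and the multilinearity of the cumulants. A sketch of the bound on \(g_{\beta ,\eta }\), for \(t \le 0\), using the representation (3.6) and the expression for \({\tilde{g}}_{\beta ,\eta }\) that appears in (4.115) below:

$$\begin{aligned} |g_{\beta ,\eta }(t-is)| \le \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} |{\tilde{g}}_{\beta ,\eta }(\omega )|\, e^{\omega t} \le \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \Big | \int _{\frac{\omega }{\eta } - \frac{2\pi }{\beta \eta }}^{\frac{\omega }{\eta }} d\xi \, h(\xi ) \Big | \le \Vert h \Vert _{1}, \end{aligned}$$

where the last step uses that the integration intervals are pairwise disjoint.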

In (4.103), the first inequality uses the estimate (3.9), while the last inequality follows from Assumption 3.1. This proves the bound (3.16), which shows that the series in Eq. (3.14) is absolutely convergent for:

$$\begin{aligned} |\varepsilon | < \frac{1}{{\mathfrak {c}} \Vert h\Vert _{1}}. \end{aligned}$$
(4.104)
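
Explicitly, combining (4.103) with the \(1/n!\) accompanying \(I^{(n)}_{\beta ,\mu ,L}(\eta ,t)\) in the expansion, the series in (3.14) is dominated by a convergent geometric series as soon as (4.104) holds:

$$\begin{aligned} \sum _{n \ge 1} \frac{|\varepsilon |^{n}}{n!}\, |I^{(n)}_{\beta ,\mu ,L}(\eta ,t)| \le \sum _{n \ge 1} \big ( |\varepsilon |\, \Vert h \Vert _{1}\, {\mathfrak {c}} \big )^{n} = \frac{|\varepsilon |\, \Vert h \Vert _{1}\, {\mathfrak {c}}}{1 - |\varepsilon |\, \Vert h \Vert _{1}\, {\mathfrak {c}}}. \end{aligned}$$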

To conclude, let us prove Eq. (3.18). Rewriting the functions \(g_{\beta , \eta }(t-is_{j})\) as in (3.6), we get:

$$\begin{aligned} \begin{aligned}&\int _{[0,\beta )^{n}} d{\underline{s}}\, \Big [\prod _{j=1}^{n} g_{\beta , \eta }(t-is_{j})\Big ] \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle \\&\quad = \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} {\tilde{g}}_{\beta , \eta }(\omega _{i}) e^{\omega _{i} t}\Big ] \\ {}&\quad \cdot \int _{[0,\beta )^{n}} d{\underline{s}}\, e^{-i\sum _{i=1}^{n} \omega _{i} s_{i}} \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle . \end{aligned} \end{aligned}$$
(4.105)

Let:

$$\begin{aligned} \begin{aligned}&\langle \textbf{T} \widehat{{\mathscr {P}}}_{\omega _{1}}; \widehat{{\mathscr {P}}}_{\omega _{2}}; \cdots ; \widehat{{\mathscr {P}}}_{\omega _{n}}; {\mathscr {O}}_{X} \rangle \\ {}&\quad := \int _{[0,\beta )^{n}} d{\underline{s}}\, e^{-i\sum _{i=1}^{n} \omega _{i} s_{i}} \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle . \end{aligned} \end{aligned}$$
(4.106)

Combining (4.102), (4.105) and (4.106), we can rewrite Eq. (4.100) in terms of these functions as (the factor \((-i\varepsilon )^{n}\) of (4.100) combines with the \((-i)^{n}/n!\) of (4.102) into the prefactor \((-\varepsilon )^{n}/n!\)):

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) = {\text {Tr}}{\mathscr {O}}_{X} \rho _{\beta , \mu , L}\\ {}&\quad + \sum _{n=1}^{\infty } \frac{(-\varepsilon )^{n}}{n!} \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} \tilde{g}_{\beta , \eta }(\omega _{i}) e^{\omega _{i} t}\Big ] \langle \textbf{T} \widehat{{\mathscr {P}}}_{\omega _{1}}; \widehat{{\mathscr {P}}}_{\omega _{2}}; \cdots ; \widehat{{\mathscr {P}}}_{\omega _{n}}; {\mathscr {O}}_{X} \rangle , \end{aligned}\nonumber \\ \end{aligned}$$
(4.107)

which is absolutely convergent for \(|\varepsilon |\) small enough, uniformly in \(L\), since \(\sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} |{\tilde{g}}_{\beta , \eta }(\omega )| \le \Vert h \Vert _{1}\) and since, as implied by Assumption 3.1,

$$\begin{aligned} |\langle \textbf{T} \widehat{{\mathscr {P}}}_{\omega _{1}}; \widehat{{\mathscr {P}}}_{\omega _{2}}; \cdots ; \widehat{{\mathscr {P}}}_{\omega _{n}}; {\mathscr {O}}_{X} \rangle | \le {\mathfrak {c}}^{n} n! \end{aligned}$$
(4.108)

To prove Eq. (3.18), we first observe that, from Eq. (4.98) and the definition (4.106) evaluated at \(\underline{\omega } = 0\):

$$\begin{aligned} \langle {\mathscr {O}}_{X} \rangle _{t} = {\text {Tr}}{\mathscr {O}}_{X} \rho _{\beta , \mu , L} + \sum _{n\ge 1} \frac{(-\varepsilon g(\eta t))^{n}}{n!} \langle \textbf{T} \widehat{{\mathscr {P}}}_{0}; \widehat{{\mathscr {P}}}_{0}; \cdots ; \widehat{{\mathscr {P}}}_{0}; {\mathscr {O}}_{X} \rangle . \end{aligned}$$
(4.109)

Therefore,

$$\begin{aligned} \begin{aligned}&{\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) - \langle {\mathscr {O}}_{X} \rangle _{t} = \sum _{n=1}^{\infty } \frac{(-\varepsilon )^{n}}{n!}\\&\qquad \cdot \Big [ \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} {\tilde{g}}_{\beta , \eta }(\omega _{i}) e^{\omega _{i} t}\Big ] \langle \textbf{T} \widehat{{\mathscr {P}}}_{\omega _{1}}; \widehat{{\mathscr {P}}}_{\omega _{2}}; \cdots ; \widehat{{\mathscr {P}}}_{\omega _{n}}; {\mathscr {O}}_{X} \rangle \\&\qquad - g(\eta t)^{n} \langle \textbf{T} \widehat{{\mathscr {P}}}_{0}; \widehat{{\mathscr {P}}}_{0}; \cdots ; \widehat{{\mathscr {P}}}_{0}; {\mathscr {O}}_{X} \rangle \Big ]; \end{aligned} \end{aligned}$$
(4.110)

the expression in the square brackets can be rewritten as

$$\begin{aligned} \begin{aligned}&\sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} {\tilde{g}}_{\beta , \eta }(\omega _{i}) e^{\omega _{i} t}\Big ] \Big (\langle \textbf{T} \widehat{{\mathscr {P}}}_{\omega _{1}}; \widehat{{\mathscr {P}}}_{\omega _{2}}; \cdots ; \widehat{{\mathscr {P}}}_{\omega _{n}}; {\mathscr {O}}_{X} \rangle - \langle \textbf{T} \widehat{{\mathscr {P}}}_{0}; \widehat{{\mathscr {P}}}_{0}; \cdots ; \widehat{{\mathscr {P}}}_{0}; {\mathscr {O}}_{X} \rangle \Big ) \\&\quad + \Big (g_{\beta , \eta }(t)^{n} - g(\eta t)^{n}\Big ) \langle \textbf{T} \widehat{{\mathscr {P}}}_{0}; \widehat{{\mathscr {P}}}_{0}; \cdots ; \widehat{{\mathscr {P}}}_{0}; {\mathscr {O}}_{X} \rangle =: R^{(n)}_{2,1}(t) + R^{(n)}_{2,2}(t). \end{aligned}\nonumber \\ \end{aligned}$$
(4.111)

Consider the term \(R^{(n)}_{2,1}(t)\). We have:

$$\begin{aligned} \begin{aligned} | R^{(n)}_{2,1}(t) |&\le \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} |{\tilde{g}}_{\beta , \eta }(\omega _{i})| e^{\omega _{i} t}\Big ] \\&\quad \cdot \int _{[0,\beta )^{n}} d{\underline{s}}\, \big | e^{i\underline{\omega }\cdot {\underline{s}}} - 1 \big | \big |\langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle \big | \\&\le \Big [ \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} |{\tilde{g}}_{\beta , \eta }(\omega _{i})| e^{\omega _{i} t}\Big ] |\underline{\omega }| \Big ] \\ {}&\quad \cdot \int _{[0,\beta )^{n}} d{\underline{s}}\, |{\underline{s}}|_{\beta } \big | \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle \big |, \end{aligned} \end{aligned}$$
(4.112)

where \(|\underline{\omega }| = \sum _{i=1}^{n} |\omega _{i}|\) and \(|{\underline{s}}|_{\beta }\) is defined in Eq. (3.3); the second inequality follows from \(|e^{i\underline{\omega }\cdot {\underline{s}}} - 1| \le \sum _{i=1}^{n} |\omega _{i}|\, |s_{i}|_{\beta } \le |\underline{\omega }|\, |{\underline{s}}|_{\beta }\), a consequence of \(|e^{ix}-1|\le |x|\) and of the \(\beta \)-periodicity of \(s_{i} \mapsto e^{i\omega _{i} s_{i}}\). The integral on the right-hand side is estimated using Assumption 3.1:

$$\begin{aligned} \int _{[0,\beta )^{n}} d{\underline{s}}\, |{\underline{s}}|_{\beta } \big | \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}});{\mathscr {O}}_{X} \rangle \big | \le {\mathfrak {c}}^{n} n! \end{aligned}$$
(4.113)

Then, the term in square brackets on the right-hand side of (4.112) is bounded as follows:

$$\begin{aligned} \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} |{\tilde{g}}_{\beta , \eta }(\omega _{i})| e^{\omega _{i} t}\Big ] |\underline{\omega }| \le n \Big (\sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \omega |{\tilde{g}}_{\beta , \eta }(\omega )| \Big ) \Vert h \Vert _{1}^{n-1} \end{aligned}$$
(4.114)
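
The step leading to (4.114) is the factorization of the \(\underline{\omega }\)-sum; a sketch, for \(t \le 0\):

$$\begin{aligned} \begin{aligned}&\sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{i=1}^{n} |{\tilde{g}}_{\beta , \eta }(\omega _{i})| e^{\omega _{i} t}\Big ] |\underline{\omega }| \\&\quad = \sum _{j=1}^{n} \Big ( \sum _{\omega _{j} \in \frac{2\pi }{\beta } {\mathbb {N}}} \omega _{j} |{\tilde{g}}_{\beta ,\eta }(\omega _{j})| e^{\omega _{j} t} \Big ) \prod _{i \ne j} \Big ( \sum _{\omega _{i} \in \frac{2\pi }{\beta } {\mathbb {N}}} |{\tilde{g}}_{\beta ,\eta }(\omega _{i})| e^{\omega _{i} t} \Big ) \\&\quad \le n \Big ( \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \omega |{\tilde{g}}_{\beta ,\eta }(\omega )| \Big ) \Big ( \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} |{\tilde{g}}_{\beta ,\eta }(\omega )| \Big )^{n-1}, \end{aligned} \end{aligned}$$

using \(e^{\omega t} \le 1\) for \(t \le 0\); the factor \(\Vert h \Vert _{1}^{n-1}\) then follows from \(\sum _{\omega } |{\tilde{g}}_{\beta ,\eta }(\omega )| \le \Vert h \Vert _{1}\).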

In (4.114) we used Eq. (3.9). The sum on the right-hand side of (4.114) is estimated as:

$$\begin{aligned} \begin{aligned} \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \omega |{\tilde{g}}_{\beta , \eta }(\omega )|&= \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \omega \Big | \int _{\frac{\omega }{\eta } - \frac{2\pi }{\beta \eta }}^{\frac{\omega }{\eta }} d\xi \, h(\xi )\Big | \\&\le \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \Big (\omega - \frac{2\pi }{\beta }\Big ) \Big | \int _{\frac{\omega }{\eta } - \frac{2\pi }{\beta \eta }}^{\frac{\omega }{\eta }} d\xi \, h(\xi )\Big | + \frac{2\pi }{\beta } \Vert h\Vert _{1} \\&\le \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \eta \int _{\frac{\omega }{\eta } - \frac{2\pi }{\beta \eta }}^{\frac{\omega }{\eta }} d\xi \, \xi |h(\xi )| + \frac{2\pi }{\beta } \Vert h\Vert _{1} = \eta \Vert \xi h \Vert _{1} + \frac{2\pi }{\beta } \Vert h\Vert _{1}. \end{aligned} \end{aligned}$$
(4.115)

Altogether,

$$\begin{aligned} | R^{(n)}_{2,1}(t) | \le n \Vert h \Vert _{1}^{n-1} \Vert (1 + \xi ) h \Vert _{1} \Big (\eta + \frac{2\pi }{\beta }\Big ) {\mathfrak {c}}^{n} n!. \end{aligned}$$
(4.116)

Consider now the error term \(R^{(n)}_{2,2}(t)\) in (4.111). We have, using that \(| g(\eta t) | \le \Vert h \Vert _{1}\) and \(| g_{\beta , \eta }(t) | \le \Vert h \Vert _{1}\), together with (4.108):

$$\begin{aligned} | R^{(n)}_{2,2}(t) | \le n 2^{n-1} \Vert h \Vert _{1}^{n-1} \big | g_{\beta , \eta }(t) - g(\eta t) \big | {\mathfrak {c}}^{n} n!. \end{aligned}$$
(4.117)
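
The combinatorial factor in (4.117) can be traced back to the elementary telescoping bound (here \(a = g_{\beta ,\eta }(t)\) and \(b = g(\eta t)\), both of modulus at most \(\Vert h \Vert _{1}\)):

$$\begin{aligned} |a^{n} - b^{n}| = \Big | \sum _{k=0}^{n-1} a^{k} (a-b)\, b^{n-1-k} \Big | \le n\, \Vert h \Vert _{1}^{n-1}\, |a-b|, \end{aligned}$$

which, combined with (4.108), gives (4.117); the additional factor \(2^{n-1}\) there is harmless for the convergence of the series in \(\varepsilon \).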

Then, using (3.12) we find:

$$\begin{aligned} | R^{(n)}_{2,2}(t) | \le n 2^{n-1} \Vert h \Vert _{1}^{n-1} \frac{2\pi }{e \beta \eta } \Big \Vert \frac{h}{\xi } \Big \Vert _{1} {\mathfrak {c}}^{n} n!. \end{aligned}$$
(4.118)

Coming back to (4.110), we have, for \(|\varepsilon | < \varepsilon _{0}\), with \(\varepsilon _{0}\) small enough, depending only on \(h\) and on \({\mathfrak {c}}\):

$$\begin{aligned} \begin{aligned} \Big |{\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) - \langle {\mathscr {O}}_{X} \rangle _{t}\Big |&\le \sum _{n=1}^{\infty } \frac{|\varepsilon |^{n}}{n!}\Big ( | R^{(n)}_{2,1}(t) | + | R^{(n)}_{2,2}(t) | \Big ) \\&\le C_{1} |\varepsilon | \Big ( \eta + \frac{1}{\beta } \Big ) + \frac{ C_{2}|\varepsilon | }{\beta \eta }, \end{aligned} \end{aligned}$$
(4.119)
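
A sketch of the summation leading to (4.119): setting \(x := 2 {\mathfrak {c}} \Vert h \Vert _{1} |\varepsilon |\) (an auxiliary parameter introduced here for brevity) and choosing \(\varepsilon _{0}\) so small that \(x \le 1/2\), the bounds (4.116) and (4.118) give

$$\begin{aligned} \begin{aligned}&\sum _{n=1}^{\infty } \frac{|\varepsilon |^{n}}{n!}\Big ( | R^{(n)}_{2,1}(t) | + | R^{(n)}_{2,2}(t) | \Big ) \\&\quad \le \frac{1}{\Vert h \Vert _{1}} \Big [ \Vert (1 + \xi ) h \Vert _{1} \Big ( \eta + \frac{2\pi }{\beta } \Big ) + \frac{2\pi }{e \beta \eta } \Big \Vert \frac{h}{\xi } \Big \Vert _{1} \Big ] \sum _{n \ge 1} n\, x^{n}, \end{aligned} \end{aligned}$$

and \(\sum _{n\ge 1} n x^{n} = x/(1-x)^{2} \le 4x\) for \(x \le 1/2\), which yields (4.119) with constants depending only on \(h\) and \({\mathfrak {c}}\).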

where the constants \(C_{1} \equiv C_{1}(\mathfrak {c}, h)\) and \(C_{2} \equiv C_{2}(\mathfrak {c}, h)\) can be obtained from (4.116), (4.118), respectively. In conclusion, combining the bound (4.119) with (4.99):

$$\begin{aligned} \begin{aligned} \Big | {\text {Tr}}{\mathscr {O}}_{X} \rho (t) - \langle {\mathscr {O}}_{X} \rangle _{t} \Big |&\le \Big | {\text {Tr}}{\mathscr {O}}_{X} \rho (t) - {\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) \Big | + \Big |{\text {Tr}}{\mathscr {O}}_{X} {\tilde{\rho }}(t) - \langle {\mathscr {O}}_{X} \rangle _{t}\Big | \\&\le \frac{K|\varepsilon |}{\eta ^{d+2}\beta } + C_{1} |\varepsilon | \Big ( \eta + \frac{1}{\beta } \Big ) + \frac{C_{2} |\varepsilon |}{\beta \eta }. \end{aligned}\nonumber \\ \end{aligned}$$
(4.120)

This proves (3.18) and concludes the proof of Theorem 3.7. \(\square \)

Proof of Corollary 3.9

Let us show how the strategy used above can be adapted to obtain the improved result (3.23). Under the assumptions of Corollary 3.9, the function g(z) is \(m+1\) times continuously differentiable for \(\text {Re}z\le 0\), and the same holds for \(g_{\beta , \eta }(z)\). We proceed as in the proof of Theorem 3.7, the only difference being the estimate for the term \(R^{(n)}_{2,1}(0)\) in Eq. (4.111). We have:

$$\begin{aligned} \begin{aligned}&R^{(n)}_{2,1}(0) = \sum _{\underline{\omega } \in \frac{2\pi }{\beta } {\mathbb {N}}^{n}} \Big [ \prod _{j=1}^{n} {\tilde{g}}_{\beta , \eta }(\omega _{j}) \Big ]\Big (\langle \textbf{T} \widehat{{\mathscr {P}}}_{\omega _{1}}; \widehat{{\mathscr {P}}}_{\omega _{2}}; \cdots ; \widehat{{\mathscr {P}}}_{\omega _{n}}; {\mathscr {O}}_{X} \rangle - \langle \textbf{T} \widehat{{\mathscr {P}}}_{0}; \widehat{{\mathscr {P}}}_{0}; \cdots ; \widehat{{\mathscr {P}}}_{0}; {\mathscr {O}}_{X} \rangle \Big ) \\&\quad = \int _{[0,\beta )^{n}} d{\underline{s}}\, \Big (\prod _{j=1}^{n} g_{\beta ,\eta }(-is_{j}) - g_{\beta ,\eta }(0)^{n}\Big )\langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \gamma _{s_{2}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle . \end{aligned} \end{aligned}$$
(4.121)

By differentiability of \(g_{\beta ,\eta }(-is)\), we have the Taylor expansion, for \(s\in [0,\beta )\) and \(s_{\beta }=s-k\beta \), \(k\in {\mathbb {Z}}\), chosen so that \(|s_{\beta }|=|s|_{\beta }\) (recall Eq. (3.3)):

$$\begin{aligned} g_{\beta ,\eta }(-is) - g_{\beta ,\eta }(0) = \sum _{j=1}^{m} \frac{\partial _s^j g_{\beta ,\eta }(0)}{j!}(-is_{\beta })^j + r_{\beta ,\eta }^{(m+1)}(s), \end{aligned}$$
(4.122)

and the remainder can be estimated in a similar way to (4.115), so that there is some \(L_{m+1}>0\) such that

$$\begin{aligned} |r_{\beta ,\eta }^{(m+1)}(s)| \le L_{m+1} \Big ( \eta + \frac{1}{\beta }\Big )^{m+1} |s|_{\beta }^{m+1}. \end{aligned}$$
(4.123)
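
A sketch of the estimate (4.123): by the Lagrange form of the Taylor remainder and the representation \(g_{\beta ,\eta }(-is) = \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} {\tilde{g}}_{\beta ,\eta }(\omega )\, e^{-i\omega s}\) (cf. (3.6)),

$$\begin{aligned} |r_{\beta ,\eta }^{(m+1)}(s)| \le \frac{|s_{\beta }|^{m+1}}{(m+1)!} \sup _{u \in {\mathbb {R}}} \big | \partial _{u}^{m+1}\, g_{\beta ,\eta }(-iu) \big | \le \frac{|s|_{\beta }^{m+1}}{(m+1)!} \sum _{\omega \in \frac{2\pi }{\beta } {\mathbb {N}}} \omega ^{m+1}\, |{\tilde{g}}_{\beta ,\eta }(\omega )|, \end{aligned}$$

and the last sum is bounded by a constant times \((\eta + 1/\beta )^{m+1}\), arguing as in (4.115): on the \(\omega \)-th integration interval one has \(\omega \le \xi \eta + 2\pi /\beta \), and the resulting moments of \(h\) are finite by the assumption (3.22).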

Concerning the first term on the right-hand side of (4.122), we use that, since by assumption \(\partial ^{j}_{s}g(0) = 0\) for all \(j\le m\),

$$\begin{aligned} \begin{aligned} \Big | \partial _{s}^{j} g_{\beta ,\eta }(0) \Big |&= \Big |\partial _{s}^{j} g_{\beta ,\eta }(0) - \eta ^{j} \partial _{s}^{j} g(0)\Big | \\&= \Big | \sum _{r=0}^{\infty } \int _{\frac{2\pi }{\beta \eta } r}^{\frac{2\pi }{\beta \eta }(r+1)} d\xi \, h(\xi ) \Big [ \Big (\frac{2\pi }{\beta } (r+1)\Big )^{j} - (\xi \eta )^{j} \Big ] \Big | \\&\le \sum _{r=0}^{\infty } \int _{\frac{2\pi }{\beta \eta } r}^{\frac{2\pi }{\beta \eta }(r+1)} d\xi \, | h(\xi ) | \Big | \Big (\xi \eta + \frac{2\pi }{\beta }\Big )^{j} - (\xi \eta )^{j} \Big | \\&\le {\widetilde{C}}_{j} \sum _{\ell = 1}^{j} \frac{\eta ^{j-\ell }}{\beta ^{\ell }} \end{aligned} \end{aligned}$$
(4.124)
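
The last inequality in (4.124) follows from the binomial expansion, since \(\xi \ge 0\) on the integration intervals:

$$\begin{aligned} \Big (\xi \eta + \frac{2\pi }{\beta }\Big )^{j} - (\xi \eta )^{j} = \sum _{\ell =1}^{j} \binom{j}{\ell } (\xi \eta )^{j-\ell } \Big (\frac{2\pi }{\beta }\Big )^{\ell }, \end{aligned}$$

so that, integrating against \(|h(\xi )|\) over \([0,\infty )\),

$$\begin{aligned} \sum _{r=0}^{\infty } \int _{\frac{2\pi }{\beta \eta } r}^{\frac{2\pi }{\beta \eta }(r+1)} d\xi \, | h(\xi ) | \Big | \Big (\xi \eta + \frac{2\pi }{\beta }\Big )^{j} - (\xi \eta )^{j} \Big | \le \sum _{\ell =1}^{j} \binom{j}{\ell } \Big (\frac{2\pi }{\beta }\Big )^{\ell } \eta ^{j-\ell }\, \Vert \xi ^{j-\ell } h \Vert _{1} \le {\widetilde{C}}_{j} \sum _{\ell = 1}^{j} \frac{\eta ^{j-\ell }}{\beta ^{\ell }}, \end{aligned}$$

with \({\widetilde{C}}_{j}\) depending only on the moments \(\Vert \xi ^{k} h \Vert _{1}\), \(k \le j-1\), which are finite by the assumption (3.22).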

In the last inequality of (4.124) we used the assumption (3.22). Therefore, from (4.122), (4.123), (4.124):

$$\begin{aligned} \big | g_{\beta ,\eta }(-is) - g_{\beta ,\eta }(0) \big | \le C_{m+1} \Big [ \left( \eta + \frac{1}{\beta }\right) ^{m+1} |s|_{\beta }^{m+1} + \frac{1}{\beta }(1 + |s|_{\beta }^{m+1})\Big ] \end{aligned}$$
(4.125)

which implies

$$\begin{aligned} \begin{aligned}&\Big | \prod _{j=1}^{n} g_{\beta ,\eta }(-is_{j}) - g_{\beta ,\eta }(0)^{n} \Big | \\&\quad \le 2^{n-1} \Vert h\Vert _{1}^{n-1} C_{m+1} \Big [ \left( \eta +\frac{1}{\beta }\right) ^{m+1} \sum _{j=1}^{n} |s_{j}|_{\beta }^{m+1} + \frac{1}{\beta }\sum _{j=1}^{n} (1+|s_{j}|_{\beta }^{m+1})\Big ]. \end{aligned} \end{aligned}$$
(4.126)

Plugging this bound into (4.121), we get

$$\begin{aligned} \begin{aligned} | R^{(n)}_{2,1}(0) |&\le 2^{n-1} \Vert h\Vert _{1}^{n-1} C_{m+1} \Big [ \left( \eta +\frac{1}{\beta }\right) ^{m+1} + \frac{1}{\beta }\Big ] \\ {}&\quad \cdot \sum _{j=1}^{n} \int _{[0,\beta )^{n}} d{\underline{s}}\, (1+|s_{j}|_{\beta }^{m+1}) \big | \langle \textbf{T} \gamma _{s_{1}}({\mathscr {P}}); \cdots ; \gamma _{s_{n}}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle \big | \end{aligned} \end{aligned}$$
(4.127)

and so from the assumption (3.21), we obtain, for a new constant \({\widetilde{C}}_{m+1}\):

$$\begin{aligned} | R^{(n)}_{2,1}(0) | \le {\widetilde{C}}_{m+1} C^{n} {\mathfrak {c}}^{n} \Big [ \Big ( \eta + \frac{1}{\beta }\Big )^{m+1} + \frac{1}{\beta }\Big ] n!. \end{aligned}$$
(4.128)

The final claim, Eq. (3.23), follows by proceeding as in the proof of Theorem 3.7, replacing the bound (4.116) with (4.128). \(\square \)

To conclude the section, we discuss the proof of Corollary 3.11.

Proof of Corollary 3.11

Equations (3.24), (3.25) follow from Eqs. (4.99), (4.102), and from the convergence of the series in (4.107). Equation (3.26) is proved by following the argument after (4.110). To prove Eq. (3.27), we use that, from (4.102):

$$\begin{aligned} \int _{0}^{\beta } ds\, g_{\beta ,\eta }(t - is) \langle \gamma _{s}({\mathscr {P}}); {\mathscr {O}}_{X} \rangle _{\beta , \mu , L} = i\int _{-\infty }^{t} d s\, g_{\beta ,\eta }(s) \langle \left[ \tau _{t}({\mathscr {O}}_{X}), \tau _{s}({\mathscr {P}}) \right] \rangle _{\beta , \mu , L}. \nonumber \\ \end{aligned}$$
(4.129)

Next, we estimate the error introduced by replacing \(g_{\beta ,\eta }(s)\) with \(g(\eta s)\). We have:

$$\begin{aligned} \begin{aligned}&\Big | \int _{-\infty }^{t} d s\, (g_{\beta ,\eta }(s) - g(\eta s)) \langle \left[ \tau _{t}({\mathscr {O}}_{X}), \tau _{s}({\mathscr {P}}) \right] \rangle _{\beta , \mu , L}\Big | \\&\quad \le \int _{-\infty }^{t} d s\, |g_{\beta ,\eta }(s) - g(\eta s)| \big | \langle \left[ \tau _{t}({\mathscr {O}}_{X}), \tau _{s}({\mathscr {P}}) \right] \rangle _{\beta , \mu , L}\big | \\&\quad \le {{\widetilde{K}}}\int _{-\infty }^{t} d s\, |g_{\beta ,\eta }(s) - g(\eta s)| (|t-s|^{d} + 1), \end{aligned} \end{aligned}$$
(4.130)

where in the last step we used the Lieb-Robinson bound, as in (4.19). Finally, proceeding as in (4.25)–(4.27), we get:

$$\begin{aligned} \begin{aligned}&\int _{-\infty }^{t} d s\, |g_{\beta ,\eta }(s) - g(\eta s)| (|t-s|^{d} + 1) \\&\quad \le \frac{{\widetilde{K}}}{\beta } \int _{-\infty }^{0} ds \int _{0}^{\infty } d\xi \, |h(\xi )| e^{\xi \eta s} (|s|^{d+1} + 1) \\&\quad \le \frac{K}{\beta \eta ^{d+2}}. \end{aligned} \end{aligned}$$
(4.131)
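
The last step relies on the elementary integral (for \(\xi \eta > 0\))

$$\begin{aligned} \int _{-\infty }^{0} ds\, e^{\xi \eta s} \big ( |s|^{d+1} + 1 \big ) = \frac{(d+1)!}{(\xi \eta )^{d+2}} + \frac{1}{\xi \eta }, \end{aligned}$$

so that, for \(\eta \le 1\), the remaining \(\xi \)-integral is bounded by \(\eta ^{-(d+2)} \int _{0}^{\infty } d\xi \, |h(\xi )| \big ( (d+1)!\, \xi ^{-(d+2)} + \xi ^{-1} \big )\), which is finite under the integrability conditions on \(h\) invoked in (4.25)–(4.27).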

This concludes the proof of (3.27) and of the corollary. \(\square \)