1 Introduction

The emergence of the heat equation from a microscopic dynamics after a diffusive rescaling of space and time is a challenging mathematical problem in non-equilibrium statistical mechanics [6]. Here we study this problem in the context of conversion of work into heat in a simple model: a pinned harmonic chain. The system is in contact at its left end with a thermal reservoir at temperature \(T_-\) which acts on the leftmost particle via a Langevin force (Ornstein–Uhlenbeck process). The rightmost particle is acted on by a deterministic periodic force which does work on the system. The work pumps energy into the system with the energy then flowing into the reservoir in the form of heat.

To describe this flow we need to know the heat conductivity of the system. As is well known, the harmonic crystal has an infinite heat conductivity [19]. To model realistic systems with finite heat conductivity we add to the harmonic dynamics a random velocity reversal. This models in a simple way the various dissipative mechanisms present in real systems and produces a finite conductivity (cf. [1, 5]).

In [14], which is the first part of the present work, we studied this system in the limit \(t\rightarrow \infty \); see Sect. 2.1 for rigorous statements of the main results obtained there. In this limit the probability distribution of the phase space configurations is periodic, with the period of the external force, see Theorem 2.1 below. We also showed that, with a proper scaling of the force and period, the averaged temperature profile satisfies the stationary heat equation with an explicitly given heat current. In the present paper we study the time dependent evolution of the system, on the diffusive time scale, starting from some specified initial distribution. We derive a heat equation for the temperature profile of the system.

The periodic forcing generates a Neumann type boundary condition for the macroscopic heat equation: the gradient of the temperature at the boundary must satisfy Fourier's law with the boundary energy current generated by the work of the periodic forcing (see (2.35) below). On the left side the boundary condition is given by the assigned value \(T_-\), the temperature of the heat bath. As \(t\rightarrow \infty \) the profile converges to the macroscopic profile obtained in [14].

The energy diffusion in a harmonic chain on a finite lattice, with energy conserving noise and Langevin heat baths at different temperatures at the boundaries, has been previously considered in [2,3,4, 13, 18]. However, complete mathematical results describing the time evolution of the macroscopic temperature profile have been obtained only for unpinned chains [4, 13].

This article gives the first proof of the heat equation for the pinned chain in a finite interval, and the method can be applied with different boundary conditions (see Remark 2.12). Investigations of energy transport in anharmonic chains under periodic forcing can be found in [10, 11], and very recently in [20]. In the review article [16] we considered various extensions of the present results to unpinned, multidimensional and anharmonic dynamics.

1.1 Structure of the Article

We start Sect. 2 with the precise description of the dynamics of the oscillator chain. Then, as already mentioned, in Sect. 2.1 we give an account of results obtained in [14]. In Sect. 2.2 we formulate our two main theorems: Theorem 2.5 about the limit current generated at the boundary by a periodic force, and Theorem 2.10 about the convergence of the energy profile to the solution of the heat equation with mixed boundary conditions.

In Sect. 3 we obtain a uniform bound on the total energy at any macroscopic time by an entropy argument. As a corollary (cf. Corollary 3.3) we obtain a bound on the time-integrated energy current that is uniform in the size of the system.

Section 4 contains the proof of the equipartition of energy: Proposition 4.1 shows that the limit profiles of the kinetic and potential energy are equal. Furthermore, we show there the fluctuation-dissipation relation (see (4.5)). It gives an exact decomposition of the energy currents into a dissipative term (given by a gradient of a local function) and a fluctuation term (given by the generator of the dynamics applied to a local function).

The fluctuation-dissipation relation (4.5) and the equipartition of energy (4.1) are two of the ingredients of the proof of the main Theorem 2.10. The third component is a local equilibrium result for the limit covariance of the positions integrated in time. It is formulated in Proposition 5.1, for the covariances in the bulk, and in Proposition 5.2, for the boundaries. The local equilibrium property allows us to correctly identify the thermal diffusivity in the proof of Theorem 2.10, see Sect. 5.

The technical part of the argument is presented in the appendices: the proof of the local equilibrium is given in Appendix D, after the analysis of the time evolution of the matrix of the time integrated covariances of positions and momenta, carried out in Appendix C. Both in Appendix C and Appendix D we use results proven in [14], when possible. Appendix B contains the proof of the current asymptotics (Theorem 2.5), which involves only the dynamics of the averages of the configurations. Appendix E contains the proof of the uniqueness of measure-valued solutions of the Dirichlet–Neumann initial-boundary value problem for the heat equation, satisfied by the limiting energy profile. Finally, in Appendix F we present an argument for the relative entropy inequality stated in Proposition 3.1.

2 Description of the Model

We consider a pinned chain of \(n+1\) harmonic oscillators, in contact on the left with a Langevin heat bath at temperature \(T_-\) and with a periodic force acting on the last particle on the right. The configuration of particle positions and momenta is specified by

$$\begin{aligned} ({\textbf{q}}, {\textbf{p}}) = (q_0, \dots , q_n, p_0, \dots , p_n) \in \Omega _n:={\mathbb R}^{n+1}\times {\mathbb R}^{n+1}. \end{aligned}$$
(2.1)

We should think of the position \(q_x\) as the relative displacement of particle x from the point x in the finite lattice \(\{0,1,\ldots ,n\}\). The total energy of the chain is given by the Hamiltonian: \(\mathcal {H}_n ({\textbf{q}}, {\textbf{p}}):= \sum _{x=0}^n {\mathcal {E}}_x ({\textbf{q}}, {\textbf{p}}),\) where the energy of particle x is defined by

$$\begin{aligned} {\mathcal {E}}_x ({\textbf{q}}, {\textbf{p}}):= \frac{p_x^2}{2} + \frac{1}{2} (q_{x}-q_{x-1})^2 +\frac{\omega _0^2 q_x^2}{2} ,\ \quad x = 0, \dots , n, \end{aligned}$$
(2.2)

where \(\omega _0>0\) is the pinning strength. We adopt the convention that \(q_{-1}:=q_0\).


The microscopic dynamics of the process \(\{({\textbf{q}}(t), {\textbf{p}}(t))\}_{t\geqslant 0}\) describing the total chain is given in the bulk by

$$\begin{aligned} \begin{aligned} \dot{q}_x(t)&= p_x(t) , \qquad x\in \{0, \dots , n\},\\ \textrm{d}p_x(t)&= \left( \Delta q_x(t)-\omega _0^2 q_x(t)\right) \textrm{d}t- 2 p_x(t-) \textrm{d}N_x(\gamma t), \quad x\in \{1, \dots , n-1\} \end{aligned} \end{aligned}$$
(2.3)

and at the boundaries by

$$\begin{aligned} \textrm{d}p_0(t)&= \; \Big (q_1(t)-q_0(t) - \omega _0^2 q_0(t) \Big ) \textrm{d}t - 2 \gamma p_0(t) \textrm{d}t +\sqrt{4 \gamma T_-} \textrm{d}{{\widetilde{w}}}_-(t), \nonumber \\ \textrm{d}p_n(t)&= \; \Big (q_{n-1}(t) -q_n(t) -\omega _0^2 q_n(t) \Big ) \textrm{d}t +{\mathcal {F}}_n(t) \textrm{d}t - 2 p_n(t-) \textrm{d}N_n(\gamma t). \end{aligned}$$
(2.4)

Here \(\Delta \) is the Neumann discrete Laplacian, corresponding to the choice \(q_{n+1}:=q_n\) and \(q_{-1}= q_0\), see (A.1) below. The processes \(\{N_x(t), x=1,\ldots ,n\}\) are independent Poisson processes of intensity 1, while \({\widetilde{w}}_-(t)\) is a standard one dimensional Wiener process, independent of the Poisson processes. The parameter \(\gamma >0\) regulates the intensity of both the random velocity reversals and the Langevin thermostat. We have chosen the same parameter for both in order to simplify the notation; this does not affect the results concerning the macroscopic properties of the dynamics.
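The dynamics (2.3)–(2.4) can be simulated directly. The following sketch (ours, not from the paper) uses a simple Euler discretization, realizes the Poisson clocks as Bernoulli(\(\gamma \,\textrm{d}t\)) events per step, and takes the illustrative force \({\mathcal {F}}(t)=\sin (2\pi t)\), which satisfies (2.6); all numerical parameter values are illustrative.

```python
import numpy as np

def simulate_chain(n=16, omega0=1.0, gamma=1.0, T_minus=1.0, theta=1.0,
                   dt=1e-3, n_steps=2000, seed=0):
    """Euler sketch of (2.3)-(2.4) with the illustrative force F(t) = sin(2*pi*t)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(n + 1)
    p = np.zeros(n + 1)
    for step in range(n_steps):
        t = step * dt
        # Neumann discrete Laplacian, conventions q_{-1} := q_0, q_{n+1} := q_n.
        qe = np.concatenate(([q[0]], q, [q[-1]]))
        lap = qe[:-2] + qe[2:] - 2.0 * q
        p += (lap - omega0**2 * q) * dt
        # Langevin thermostat (Ornstein-Uhlenbeck) on the leftmost particle.
        p[0] += -2.0 * gamma * p[0] * dt \
            + np.sqrt(4.0 * gamma * T_minus * dt) * rng.standard_normal()
        # Periodic forcing F_n(t) = F(t/theta)/sqrt(n) on the rightmost particle.
        p[-1] += np.sin(2.0 * np.pi * t / theta) / np.sqrt(n) * dt
        # Velocity reversals: Poisson clocks of rate gamma at sites 1..n.
        flips = rng.random(n) < gamma * dt
        p[1:][flips] *= -1.0
        q += p * dt
    # Per-site energies (2.2), with q_{-1} := q_0.
    dq = np.diff(np.concatenate(([q[0]], q)))
    energy = 0.5 * (p**2 + dq**2 + omega0**2 * q**2)
    return q, p, energy
```

On diffusive time scales one would run \(n^2 t/\textrm{d}t\) steps; the sketch only illustrates one micro-step of the generator's three parts (Hamiltonian drift, thermostat, flips).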

We assume that the forcing \({\mathcal {F}}_n(t)\) is given by

$$\begin{aligned} {\mathcal {F}}_n(t)= \;\frac{1}{\sqrt{n}} {\mathcal {F}}\left( \frac{t}{ \theta }\right) , \end{aligned}$$
(2.5)

where \( {\mathcal {F}}(t)\) is a 1-periodic function such that

$$\begin{aligned} \int _0^1 {\mathcal {F}}(t) \textrm{d}t = 0, \qquad \int _0^1 {\mathcal {F}}(t)^2 \textrm{d}t > 0\quad \text{ and }\quad \sum _{\ell \in {\mathbb Z}} |{\widehat{{\mathcal {F}}}}(\ell )|<+\infty . \end{aligned}$$
(2.6)

Here

$$\begin{aligned} {\widehat{{\mathcal {F}}}}(\ell )=\int _0^1 e^{-2\pi i\ell t }{\mathcal {F}}(t)\textrm{d}t,\quad {\ell \in {\mathbb Z}}, \end{aligned}$$
(2.7)

are the Fourier coefficients of the force. Note that by (2.6) we have \({\widehat{{\mathcal {F}}}}(0)=0\).
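For a trigonometric polynomial the coefficients (2.7) can be checked by an equally spaced Riemann sum, which is exact for low modes; the helper below is our own illustration, not part of the paper.

```python
import numpy as np

def fourier_coefficient(F, ell, m=4096):
    """Riemann-sum approximation of the Fourier coefficient (2.7)
    of a 1-periodic function F, exact for trigonometric polynomials
    of degree below m/2."""
    t = np.arange(m) / m
    return np.mean(np.exp(-2j * np.pi * ell * t) * F(t))
```

For \({\mathcal {F}}(t)=\sin (2\pi t)\) this gives \({\widehat{{\mathcal {F}}}}(0)=0\), in agreement with (2.6), and \({\widehat{{\mathcal {F}}}}(\pm 1)=\mp i/2\).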

For a given function \(f:\{0,\ldots ,n\}\rightarrow {\mathbb R}\) we define the Neumann Laplacian

$$\begin{aligned} \Delta f_x:=f_{x+1}+f_{x-1}-2f_x,\,x=0,\ldots ,n, \end{aligned}$$
(2.8)

with the convention \(f_{-1}:=f_0\) and \(f_{n+1}:=f_n\). The generator of the dynamics can be then written as

$$\begin{aligned} \mathcal {G}_t = \mathcal {A}_t + \gamma S_{\text {flip}} + 2 \gamma S_-, \end{aligned}$$
(2.9)

where

$$\begin{aligned} \mathcal {A}_t = \sum _{x=0}^n p_x \partial _{q_x} + \sum _{x=0}^n (\Delta q_{x}-\omega ^2_0q_x) \partial _{p_x} + {\mathcal {F}}_n(t) \partial _{p_n} \end{aligned}$$
(2.10)

and

$$\begin{aligned} S_{\text {flip}} F ({\textbf{q}},{\textbf{p}}) = \sum _{x=1}^{n} \Big ( F({\textbf{q}},{\textbf{p}}^x) - F({\textbf{q}},{\textbf{p}})\Big ). \end{aligned}$$
(2.11)

Here \(F:{\mathbb R}^{2(n+1)}\rightarrow {\mathbb R}\) is a bounded and measurable function and \({\textbf{p}}^x\) is the velocity configuration with the sign flipped at the x-th component, i.e. \({\textbf{p}}^x=(p_0^x,\ldots ,p_n^x)\), with \(p_y^x=p_y\) for \(y\not =x\) and \(p_x^x=-p_x\). Furthermore,

$$\begin{aligned} S_- = T_- \partial _{p_0}^2 - p_0 \partial _{p_0}. \end{aligned}$$
(2.12)

The microscopic energy currents are given by

$$\begin{aligned} \mathcal {G}_t \mathcal {E}_x(t) = j_{x-1,x}(t) - j_{x,x+1}(t) , \end{aligned}$$
(2.13)

with \(\mathcal {E}_x(t):={\mathcal {E}}_x \big ({\textbf{q}}(t), \textbf{p}(t)\big )\) and

$$\begin{aligned} j_{x,x+1}(t):=- p_x(t) \big (q_{x+1}(t) - q_x(t)\big ), \qquad \text{ if } \quad x \in \{0,\ldots ,n-1\} \end{aligned}$$

and at the boundaries

$$\begin{aligned} j_{-1,0} (t):= 2 { \gamma } \left( T_- - p_0^2(t) \right) , \qquad j_{n,n+1} (t):= - {\mathcal {F}}_n(t) p_n(t). \end{aligned}$$
(2.14)

2.1 Summary of Results Concerning Periodic Stationary State

The present section is devoted to the presentation of the results of [14] (some additional facts are contained in [15]). They concern the case when the chain is in its (periodic) stationary state. More precisely, we say that the family of probability measures \(\{\mu _t^P, t\in [0,+\infty )\}\) constitutes a periodic stationary state for the chain described by (2.3) and (2.4) if it is a solution of the forward equation: for any function F in the domain of \(\mathcal {G}_t\):

$$\begin{aligned} \partial _t \int F({\textbf{q}},{\textbf{p}}) \mu _t^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}}) = \int (\mathcal {G}_t F({\textbf{q}},{\textbf{p}})) \mu _t^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}}), \end{aligned}$$
(2.15)

such that \(\mu _{t + \theta }^P=\mu _{t}^P\).

Given a measurable function \(F:{\mathbb R}^{2(n+1)}\rightarrow {\mathbb R}\) we denote

$$\begin{aligned} \langle \langle F\rangle \rangle := \frac{1}{\theta }\int _0^{\theta }\textrm{d}t\int _{{\mathbb R}^{2(n+1)}}F({\textbf{q}},{\textbf{p}})\mu _t^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}}), \end{aligned}$$
(2.16)

provided that \(|F({\textbf{q}},{\textbf{p}})|\) is integrable w.r.t. the product measure \(\mu _t^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})\textrm{d}t\) on \({\mathbb R}^{2(n+1)}\times [0,\theta ]\).

It has been shown, see [14, Theorem 1.1, Proposition A.1] and also [15, Theorem A.2], that there exists a unique periodic stationary state.

Theorem 2.1

For a fixed \(n\geqslant 1\) there exists a unique periodic stationary state \(\{\mu _s^P,s\in [0,+\infty )\}\) for the system (2.3)–(2.4). The measures \(\mu _s^P\) are absolutely continuous with respect to the Lebesgue measure \(\textrm{d}{\textbf{q}}\textrm{d}{\textbf{p}}\) and the respective densities \(\mu _s^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})=f_s^P({\textbf{q}},{\textbf{p}}) \textrm{d}{\textbf{q}}\textrm{d}{\textbf{p}}\) are strictly positive. The time averages of all the second moments \(\langle \langle p_xp_y\rangle \rangle \), \(\langle \langle p_xq_y\rangle \rangle \) and \(\langle \langle q_xq_y\rangle \rangle \) are finite and \(\min _x \langle \langle p_x^2\rangle \rangle \geqslant T_-\). Furthermore, given an arbitrary initial probability distribution \(\mu \) on \({\mathbb R}^{2(n+1)}\) and \((\mu _t)\) the solution of (2.15) such that \(\mu _{0}=\mu \), we have

$$\begin{aligned} \lim _{t\rightarrow +\infty }\Vert \mu _{t}-\mu _t^P\Vert _{\textrm{TV}}=0. \end{aligned}$$
(2.17)

Here \(\Vert \cdot \Vert _{\textrm{TV}}\) denotes the total variation norm.

In the periodic stationary state the time-averaged energy current \( J_n=\langle \langle j_{x,x+1}\rangle \rangle \) does not depend on x, for \(x=-1,\ldots ,n\). In particular

$$\begin{aligned} J_n= - \frac{1}{\sqrt{n}\theta } \int _0^{\theta } {\mathcal {F}}\left( \frac{s}{\theta }\right) {\overline{p}}_n(s)\textrm{d}s , \end{aligned}$$
(2.18)

where \({\overline{p}}_x(s):= \int _{{\mathbb R}^{2(n+1)}}p_x \mu _s^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})\). It turns out that the stationary current is of size O(1/n) as can be seen from the following.

Theorem 2.2

(see Theorem 3.1 of [14]) Suppose that \(\mathcal {F}(\cdot )\) satisfies (2.6) and, in addition, we also have \(\sum _{\ell \in {\mathbb Z}}\ell ^2|\widehat{\mathcal {F}}(\ell )|^2<+\infty \). Then,

$$\begin{aligned} \lim _{n\rightarrow +\infty }n J_n= J :=-\left( \frac{2\pi }{\theta }\right) ^2 \sum _{\ell \in {\mathbb Z}} \ell ^2\mathcal {Q} (\ell ), \end{aligned}$$
(2.19)

with \(\mathcal {Q}(\ell )\) given by,

$$\begin{aligned} \begin{aligned}&\mathcal {Q}(\ell )= 4\gamma |{\widehat{{\mathcal {F}}}}(\ell )|^2 \int _0^1 \cos ^2\left( \frac{\pi z}{2}\right) \left\{ \left[ 4\sin ^2\left( \frac{\pi z}{2}\right) +\omega _0^2 -\left( \frac{2\pi \ell }{\theta }\right) ^2\right] ^2 +\left( \frac{{4} \gamma \pi \ell }{\theta }\right) ^2 \right\} ^{-1}\textrm{d}z. \end{aligned} \end{aligned}$$
(2.20)

In the more general case when the forcing \({\mathcal {F}}_n(t)\) is \(\theta _n\)-periodic, with the period \(\theta _n=n^b\theta \) and the amplitude \(n^{a}\), i.e. \( {\mathcal {F}}_n(t)= \;n^{a} {\mathcal {F}}\left( \frac{t}{ \theta _n}\right) , \) and

$$\begin{aligned} b-a=\frac{1}{2},\quad a\leqslant 0\quad \text{ and }\quad b>0 \end{aligned}$$
(2.21)

the convergence in (2.19) still holds. However, then

$$\begin{aligned} \begin{aligned} \mathcal {Q} (\ell )= 4\gamma |{\widehat{{\mathcal {F}}}}(\ell )|^2 \int _0^1 \cos ^2\left( \frac{\pi z}{2}\right) \left[ 4\sin ^2\left( \frac{\pi z}{2}\right) +\omega _0^2 \right] ^{-2}\textrm{d}z ,\quad \text{ when } b>0. \end{aligned} \end{aligned}$$
(2.22)
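The limit current (2.19) is explicit enough to evaluate numerically. The sketch below (ours) performs a trapezoidal quadrature of the z-integral in (2.20) and sums the series for the illustrative choice \({\mathcal {F}}(t)=\sin (2\pi t)\), for which only the modes \(\ell =\pm 1\) contribute, with \(|{\widehat{{\mathcal {F}}}}(\pm 1)|^2=1/4\); function names and parameter values are our own.

```python
import numpy as np

def Q_ell(ell, F_hat_sq, omega0=1.0, gamma=1.0, theta=1.0, m=100001):
    """Trapezoidal quadrature of the z-integral in (2.20) for mode ell,
    given |F_hat(ell)|^2 = F_hat_sq."""
    z = np.linspace(0.0, 1.0, m)
    c2 = np.cos(np.pi * z / 2.0)**2
    s2 = np.sin(np.pi * z / 2.0)**2
    denom = (4.0 * s2 + omega0**2 - (2.0 * np.pi * ell / theta)**2)**2 \
        + (4.0 * gamma * np.pi * ell / theta)**2
    f = c2 / denom
    dz = z[1] - z[0]
    return 4.0 * gamma * F_hat_sq * dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def limit_current(omega0=1.0, gamma=1.0, theta=1.0):
    """J from (2.19) for F(t) = sin(2*pi*t): only ell = +-1 contribute,
    with |F_hat(+-1)|^2 = 1/4."""
    return -(2.0 * np.pi / theta)**2 * sum(
        ell**2 * Q_ell(ell, 0.25, omega0, gamma, theta) for ell in (-1, 1))
```

Since each \(\mathcal {Q}(\ell )>0\), the resulting J is strictly negative, i.e. on average the work of the forcing injects energy at the right end, which then flows leftward into the reservoir.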

Concerning the convergence of the energy profile we have shown the following, see [14, Theorem 3.4].

Theorem 2.3

Under the assumptions of Theorem 2.2 we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{x=0}^n \varphi \left( \frac{x}{n+1} \right) \langle \langle p^2_x\rangle \rangle {= \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{x=0}^n \varphi \left( \frac{x}{n+1} \right) \langle \langle {\mathcal {E}}_x\rangle \rangle } = \int _0^1 \varphi (u) T(u) \textrm{d}u, \end{aligned}$$
(2.23)

with

$$\begin{aligned} T(u) = T_--\frac{4\gamma Ju}{D}, \quad u\in [0,1], \end{aligned}$$
(2.24)

for any \(\varphi \in C[0,1]\). Here J is given by (2.19) and

$$\begin{aligned} D = 1 - \omega _0^2 \Big (G_{\omega _0}(0)+ G_{\omega _0}(1)\Big ) =\frac{2}{2+\omega _0^2+\omega _0\sqrt{\omega _0^2+4}}, \end{aligned}$$
(2.25)

where \(G_{\omega _0}(\ell )\) is the Green function defined in (A.2).
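The two expressions for D in (2.25) can be cross-checked numerically. Assuming that \(G_{\omega _0}\) is the lattice Green's function of \(\omega _0^2-\Delta \) on \({\mathbb Z}\), i.e. \(G_{\omega _0}(\ell )=\int _0^1 \cos (2\pi k\ell )\,\big (\omega _0^2+4\sin ^2(\pi k)\big )^{-1}\textrm{d}k\) (our reading of (A.2), which is not reproduced here), a quadrature of \(G_{\omega _0}(0)+G_{\omega _0}(1)\) reproduces the closed form:

```python
import numpy as np

def green_sum(omega0, m=20001):
    """Trapezoidal quadrature of G(0) + G(1) for the assumed lattice
    Green's function of omega0^2 - Delta on Z (our reading of (A.2))."""
    k = np.linspace(0.0, 1.0, m)
    f = (1.0 + np.cos(2.0 * np.pi * k)) / (omega0**2 + 4.0 * np.sin(np.pi * k)**2)
    dk = k[1] - k[0]
    return dk * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def D_closed(omega0):
    """Closed-form expression for D in (2.25)."""
    return 2.0 / (2.0 + omega0**2 + omega0 * np.sqrt(omega0**2 + 4.0))

# The two expressions in (2.25) agree for several pinning strengths.
for w in (0.5, 1.0, 2.0):
    assert abs(1.0 - w**2 * green_sum(w) - D_closed(w)) < 1e-8
```

The integrand is smooth and 1-periodic, so the trapezoidal rule converges rapidly here.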

Concerning the time variance of the average kinetic energy we have shown the following.

Theorem 2.4

(Theorem 9.1, [14]) Suppose that the forcing \(\mathcal {F}_n(\cdot )\) is given by (2.5), where \(\mathcal {F}(\cdot )\) satisfies the hypotheses made in Theorem 2.2. Then, there exists a constant \(C>0\) such that

$$\begin{aligned} \sum _{x=0}^n \frac{1}{\theta }\int _0^\theta \left( \overline{p_x^2}(t) - \langle \langle p_x^2\rangle \rangle \right) ^2 \textrm{d}t \leqslant \frac{C}{n^2},\quad n=1,2,\ldots . \end{aligned}$$
(2.26)

Here \( \overline{p_x^2}(t):=\int _{{\mathbb R}^{2(n+1)}}p_x^2 \mu _t^P(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})\).

2.2 Statements of the Main Results

2.2.1 Macroscopic Energy Current Due to Work

The first result concerns the work done by the forcing in the diffusive limit, i.e.

$$\begin{aligned} J_n(t, \mu ) = \frac{1}{n} \int _0^{n^2t} {\mathbb {E}}_{\mu }\left( j_{n,n+1} (s, {\textbf{q}}(s),{\textbf{p}}(s)) \right) ds = - \frac{1}{n} \int _0^{n^2t} {\mathcal {F}}_n(s) {\mathbb {E}}_{\mu }\left( p_n(s) \right) ds, \end{aligned}$$
(2.27)

where \( {\mathbb {E}}_{\mu }\) denotes the expectation of the process with the initial configuration \(({\textbf{q}},{\textbf{p}})\) distributed according to a probability measure \(\mu \). We shall write \(J_n(t, \textbf{q},\textbf{p})\) for a deterministic initial configuration \((\textbf{q},\textbf{p})\), i.e. when \(\mu =\delta _{{\textbf{q}},\textbf{p}}\), the \(\delta \)-measure that gives probability 1 to this configuration.

Assume furthermore that \((\mu _n)\) is a sequence of initial distributions, with each \(\mu _n\) a probability measure on \({\mathbb R}^{n+1}\times {\mathbb R}^{n+1}\). We suppose that there exist \(C>0\) and \(\delta \in [0,2)\) such that for any integer \(n\geqslant 1\)

$$\begin{aligned} \mathcal {H}_n({\overline{{\textbf{q}}}}_n, {\overline{{\textbf{p}}}}_n) \leqslant Cn^\delta . \end{aligned}$$
(2.28)

Here \(({\overline{{\textbf{q}}}}_n, {\overline{{\textbf{p}}}}_n)\) is the vector of the averages of the configuration with respect to \(\mu _n\). We are interested essentially in the case \(\delta =1\), but Theorem 2.5 is valid also for any \(\delta <2\). In Proposition B.1 we prove that, in the diffusive time scaling, the energy due to the averages (2.28) becomes negligible at any time \(t>0\).

In Section B.2 of the Appendix we prove the following.

Theorem 2.5

Under the assumptions listed above, we have

$$\begin{aligned} \lim _{n\rightarrow +\infty }\sup _{t\geqslant 0}\Big |J_n(t, \mu _n)-Jt\Big |=0, \end{aligned}$$
(2.29)

where J is given by (2.19).

Remark 2.6

The asymptotic current J is the same as in the stationary state (cf. [14]) and it does not depend on the initial configuration.

Remark 2.7

Analogously to the stationary case, rescaling the period \(\theta \) with n and the strength of the force in such a way that

$$\begin{aligned} {\mathcal {F}}_n(t)= \;n^{a} {\mathcal {F}}\left( \frac{t}{ n^{b}\theta }\right) , \qquad b-a=\frac{1}{2},\quad a\leqslant 0\quad \text{ and }\quad b>0, \end{aligned}$$
(2.30)

Theorem 2.5 still holds, but with a different value of the current. Namely, J is given by (2.19) with \(\mathcal {Q}(\ell )\) defined by (2.22). Formula (2.22) corresponds to (2.20) with the value \(\theta = \infty \). If \(b-a\not =1/2\) the macroscopic current \(nJ_n\) is not of order O(1), which leads to an anomalous behavior of the heat conductivity of the chain (it vanishes if \(b-a>1/2\), and becomes unbounded if \(b-a<1/2\)). The assumption \(a\leqslant 0\) guarantees that the force acting on the system does not become infinite as \(n\rightarrow +\infty \).

Remark 2.8

Using contour integration it is possible to calculate the quantities appearing in (2.20) and (2.22), see [15, Appendix D]. In the case of (2.20) we obtain

$$\begin{aligned} {\mathcal {Q}}(\ell )&=\frac{\theta |{\widehat{{\mathcal {F}}}}(\ell )|^2}{2\pi \ell } {\textrm{Im}}\left( \left\{ \frac{2 }{\lambda (\omega _0,\ell )\sqrt{1+4/\lambda (\omega _0,\ell )}} +\frac{1}{2}\right\} \right. \\&\quad \left. \left\{ 1+\frac{ \lambda (\omega _0,\ell )}{2}\Big (1+\sqrt{1 +\frac{4}{\lambda (\omega _0,\ell )}}\Big )\right\} ^{-1}\right) , \end{aligned}$$

with

$$\begin{aligned} \lambda (\omega _0,\ell ):=\omega _0^2 -\left( \frac{2\pi \ell }{\theta }\right) ^2 +i\left( \frac{{4} \gamma \pi \ell }{\theta }\right) . \end{aligned}$$

Furthermore, in the case of (2.22) we have

$$\begin{aligned} \mathcal {Q}(\ell ) = \frac{2\gamma |{\widehat{{\mathcal {F}}}}(\ell )|^2 (4 +\omega _0^2) }{ (\omega _0^4+4 \omega _0^2+8)^{3/2}} . \end{aligned}$$

2.2.2 Macroscopic Energy Profile

Let \(\nu _{T_-} (\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})\) be defined as the product Gaussian measure on \(\Omega _n\) (see (2.1)) of zero average and variance \(T_->0\) given by

$$\begin{aligned} \begin{aligned}&\nu _{T_-} (\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}}) : = \frac{1}{Z}\prod _{x=0}^n \exp \left\{ -{\mathcal E}_x({\textbf{q}},{\textbf{p}})/T_- \right\} \textrm{d}{\textbf{q}}\textrm{d}{\textbf{p}}, \end{aligned} \end{aligned}$$
(2.31)

where Z is the normalizing constant. Let \(f({\textbf{q}},{\textbf{p}})\) be a probability density with respect to \(\nu _{T_-}\). We denote the relative entropy

$$\begin{aligned} \textbf{H}_{n}(f) := \int _{\Omega _n} f({\textbf{q}},{\textbf{p}})\log f({\textbf{q}},{\textbf{p}}) \textrm{d}\nu _{T_-}({\textbf{q}},{\textbf{p}}). \end{aligned}$$
(2.32)

We assume now that the initial distribution \(\mu _n\) has density \(f_n(0, {\textbf{q}},{\textbf{p}})\), with respect to \(\nu _{T_-}\), such that there exists a constant \(C>0\) for which

$$\begin{aligned} {\textbf{H}}_n(f_n(0)) \leqslant C n,\quad n=1,2,\ldots . \end{aligned}$$
(2.33)

For example, it can be verified that local Gibbs measures of the form

$$\begin{aligned} f_n({\textbf{q}},{\textbf{p}}) \textrm{d}\nu _{T_-}({\textbf{q}},{\textbf{p}}) = \prod _{x=0}^n \exp \left\{ -\frac{{\mathcal E}_x({\textbf{q}},{\textbf{p}})}{T_{x,n}} \right\} \textrm{d}{\textbf{q}}\textrm{d}{\textbf{p}}, \end{aligned}$$
(2.34)

with \(\inf _{x,n} T_{x,n} >0\), satisfy (2.33). At this point we only remark that, due to the entropy inequality (see the proof of Corollary 3.2 below), assumption (2.33) implies

$$\begin{aligned} \sup _{n\geqslant 1} {\mathbb {E}}_{\mu _n}\left[ \frac{1}{n+1} \sum _{x=0}^n {\mathcal {E}}_x(0)\right] <+\infty . \end{aligned}$$

Furthermore, since the Hamiltonian \(\mathcal {H}_n(\cdot ,\cdot )\) is a convex function, by the Jensen inequality

$$\begin{aligned} \sup _{n\geqslant 1}\frac{1}{n+1}\mathcal {H}_n({\overline{{\textbf{q}}}}_n, {\overline{{\textbf{p}}}}_n) \leqslant \sup _{n\geqslant 1} {\mathbb {E}}_{\mu _n}\left[ \frac{1}{n+1} \sum _{x=0}^n {\mathcal {E}}_x({\textbf{q}}, {\textbf{p}})\right] <+\infty , \end{aligned}$$

so (2.28) is satisfied with \(\delta =1\).
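To make the example (2.34) concrete: under this measure the momenta are independent centered Gaussians with variances \(T_{x,n}\), while the positions form a centered Gaussian vector whose precision matrix is tridiagonal, since the bond terms \((q_x-q_{x-1})^2\) couple nearest neighbours only. The following sketch (ours) samples one such configuration by a Cholesky factorization; the linear temperature profile is an illustrative choice.

```python
import numpy as np

def sample_local_gibbs(n=32, omega0=1.0, seed=0):
    """Draw one sample from the local Gibbs measure (2.34) with the
    illustrative profile T_{x,n} = 1 + x/(n+1) (any profile bounded
    away from 0 works).  The convention q_{-1} := q_0 removes the
    x = 0 bond term."""
    rng = np.random.default_rng(seed)
    T = 1.0 + np.arange(n + 1) / (n + 1)
    # Momenta: independent centered Gaussians with variance T_x.
    p = np.sqrt(T) * rng.standard_normal(n + 1)
    # Positions: density proportional to exp(-(1/2) q^T M q) with
    # tridiagonal precision matrix M.
    M = np.zeros((n + 1, n + 1))
    for x in range(n + 1):
        M[x, x] += omega0**2 / T[x]          # pinning term
        if x >= 1:                           # bond (q_x - q_{x-1})^2 / T_x
            M[x, x] += 1.0 / T[x]
            M[x - 1, x - 1] += 1.0 / T[x]
            M[x, x - 1] -= 1.0 / T[x]
            M[x - 1, x] -= 1.0 / T[x]
    L = np.linalg.cholesky(M)                # M = L L^T
    q = np.linalg.solve(L.T, rng.standard_normal(n + 1))  # q ~ N(0, M^{-1})
    return q, p
```

Solving \(L^{\textsf{T}} q = z\) with standard Gaussian z indeed yields covariance \(L^{-\textsf{T}}L^{-1}=M^{-1}\), as required.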

Denote by \(\mathcal {M}_{\textrm{fin}}([0,1])\), resp. \(\mathcal {M}_{+}([0,1])\), the space of Borel measures of bounded variation, resp. of positive Borel measures, on the interval [0, 1], endowed with the weak topology. Before formulating the main result we introduce the notion of a measure-valued solution of the following initial-boundary value problem

$$\begin{aligned} \begin{aligned}&\partial _t T = \frac{D}{4\gamma } \partial _u^2 T, \quad u\in (0,1),\\&T(t,0) = T_-, \quad \partial _u T (t,1) = -\frac{4\gamma J}{D},\quad T(0,\textrm{d}u) = T_0(\textrm{d}u). \end{aligned} \end{aligned}$$
(2.35)

Here J and D are defined by (2.19) and (2.25), respectively and \(T_0\in \mathcal {M}_{\textrm{fin}}([0,1])\).

Definition 2.9

We say that a function \(T:[0,+\infty )\rightarrow \mathcal {M}_{\textrm{fin}}([0,1])\) is a weak (measure-valued) solution of (2.35) if it belongs to \(C\Big ([0,+\infty ); \mathcal {M}_{\textrm{fin}}([0,1])\Big )\) and for any \(\varphi \in C^2[0,1]\) such that \(\varphi (0)=\varphi '(1)=0\) we have

$$\begin{aligned} \begin{aligned} \int _0^1\varphi (u)T(t,\textrm{d}u) - \int _0^1\varphi (u)T_0(\textrm{d}u) =&\frac{D}{4\gamma }\int _0^t\textrm{d}s\int _0^1 \varphi ''(u)T(s,\textrm{d}u) \\&+\frac{DT_- t}{4\gamma }\varphi '(0)-Jt \varphi (1). \end{aligned} \end{aligned}$$
(2.36)

The proof of the uniqueness of the solution of (2.36) is quite routine. For the sake of completeness we present it in Appendix E.
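For smooth data the limit problem (2.35) is also easy to approximate numerically. The sketch below (ours) is a standard explicit finite-difference scheme with a ghost point enforcing the Neumann condition at \(u=1\); all parameter values are illustrative, and for large times the solution should approach the stationary profile (2.24).

```python
import numpy as np

def solve_heat_mixed(T0, T_minus, J, D=0.4, gamma=1.0, t_end=5.0):
    """Explicit finite differences for (2.35): diffusivity kappa = D/(4 gamma),
    Dirichlet condition T(t,0) = T_-, Neumann condition d_u T(t,1) = -4 gamma J / D."""
    m = len(T0) - 1
    h = 1.0 / m
    kappa = D / (4.0 * gamma)
    g = -4.0 * gamma * J / D          # prescribed boundary slope at u = 1
    dt = 0.4 * h**2 / kappa           # CFL-stable time step
    T = np.array(T0, dtype=float)
    T[0] = T_minus
    for _ in range(int(t_end / dt)):
        Tg = T[m - 1] + 2.0 * h * g   # ghost value enforcing the Neumann slope
        lap = np.empty_like(T)
        lap[1:m] = T[2:] + T[:-2] - 2.0 * T[1:m]
        lap[m] = Tg + T[m - 1] - 2.0 * T[m]
        T[1:] += kappa * dt / h**2 * lap[1:]
        T[0] = T_minus                # re-impose the Dirichlet condition
    return T
```

With J<0 the prescribed slope g is positive, and the long-time limit is the linear profile \(T_- - 4\gamma J u/D\) of (2.24).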

Theorem 2.10

Suppose that the initial distributions \((\mu _n)\) satisfy (2.33). Assume furthermore that there exists \(T_0\in \mathcal {M}_+([0,1])\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbb {E}}_{\mu _n} \left[ \frac{1}{n+1} \sum _{x=0}^n \varphi \left( \frac{x}{n+1} \right) \mathcal {E}_x(0)\right] = \int _0^1 \varphi (u) T_0( \textrm{d}u), \end{aligned}$$
(2.37)

for any function \(\varphi \in C[0,1]\) - the space of continuous functions on [0, 1]. Here \(\mathcal {E}_x(t) = {\mathcal E}_x({\textbf{q}}(t),{\textbf{p}}(t))\). Then,

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n+1} \sum _{x=0}^n \varphi \left( \frac{x}{n+1} \right) {\mathbb {E}}_{\mu _n} \left( {\mathcal {E}}_x(n^2 t)\right) = \int _0^1 \varphi (u) T(t, \textrm{d}u). \end{aligned}$$
(2.38)

Here \(T(t, \textrm{d}u)\) is the unique weak solution of (2.35), with the initial data given by measure \(T_0\) in (2.37).

Remark 2.11

The initial energy \(\mathcal {E}_x(0)\) can be represented as the sum \(\mathcal {E}_x^{\textrm{th}}+\mathcal {E}_x^{\textrm{mech}}\) of the thermal energy

$$\begin{aligned} \mathcal {E}_x^{\textrm{th}}:= \frac{1}{2} \left[ (p'_x)^2 + (q'_x - q'_{x-1})^2 + \omega _0^2 (q'_x)^2\right] \end{aligned}$$

and the mechanical energy

$$\begin{aligned} \mathcal {E}_x^{\textrm{mech}}:=\frac{1}{2} \left[ {\overline{p}}_x^2 + ({\overline{q}}_x - {\overline{q}}_{x-1})^2 + \omega _0^2 {\overline{q}}_x^2\right] . \end{aligned}$$

Here \(q'_x = q_x - {\overline{q}}_x\) and \(p'_x = p_x - \overline{p}_x\), with \({\overline{p}}_x:= \int _{\Omega _n}p_x \mu _n(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})\) and \({\overline{q}}_x:= \int _{\Omega _n}q_x \mu _n(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})\).

If \(\mathcal {E}_x^{\textrm{mech}}\not =0\) and satisfies (2.28) with \(\delta =1\), then the initial measure \(T_0(\textrm{d}u)\) is the macroscopic distribution of the total energy and not of the temperature, where the latter is understood as the thermal energy. Nevertheless, as a consequence of Proposition B.1, at any positive macroscopic time the entire mechanical energy has already been transformed into thermal energy, so that \(T(t, \textrm{d}u)\) for \(t>0\) can be seen as the macroscopic temperature distribution. The situation is different for the unpinned dynamics (\(\omega _0 = 0\)), where the transfer of mechanical energy to thermal energy happens slowly, at macroscopic times (see [13]).

Remark 2.12

Concerning Theorem 2.10, a similar proof will work in the case where two Langevin heat baths at two temperatures, \(T_-\) and \(T_+\) are placed at the boundaries, in the absence of the periodic forcing. In this case the macroscopic equation will be the same but with boundary conditions \(T(t,0) = T_-\) and \(T (t,1) =T_+\).

Also, in the absence of any heat bath, we could apply two periodic forces \(\mathcal {F}_n^{(0)}(t)\) and \(\mathcal {F}_n^{(1)}(t)\), respectively at the left and right boundary. They will generate two incoming energy currents, \(J^{(0)}>0\) on the left and \(J^{(1)}<0\) on the right, given by the corresponding formula (2.19), and we will have the same equation but with boundary conditions \(\partial _u T (t,0) = -\frac{4\gamma J^{(0)}}{D}\) and \(\partial _u T (t,1) = -\frac{4\gamma J^{(1)}}{D}\). Of course in this case the total energy increases in time and periodic stationary states do not exist.

In the case where both a heat bath and a periodic force are present on the same side, say at the right endpoint, the macroscopic boundary condition arising is \(T (t,1) =T_+\), i.e. the periodic forcing is ineffective at the macroscopic level, and all the energy generated by its work flows into the heat bath. It would be interesting to investigate what happens when the amplitude of the forcing is larger than the one considered here (\(-1/2<a\leqslant 0\) in (2.30)). However, it is not yet clear to us what occurs in this case.

Remark 2.13

If the initial data \(T_0\) is \(C^1\) smooth and satisfies the boundary conditions in (2.35), then the initial-boundary value problem (2.35) has a unique strong solution T(t, u) that belongs to the intersection of the spaces \(C\big ([0,+\infty )\times [0,1]\big )\) and \( C^{1,2}\big ((0,+\infty )\times (0,1)\big )\) - the space of functions continuously differentiable once in the first and twice in the second variable, see e.g. [8, Corollary 5.3.2, p.147]. This solution then coincides with the unique weak solution in the sense of Definition 2.9.

Remark 2.14

In the proof of Theorem 2.10 we need to show a result about the equipartition of energy (cf. Proposition 4.1). As a consequence the limit profile of the energy equals the limit profile of the temperature, i.e. we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n+1} \sum _{x=0}^n \int _0^{+\infty }\varphi \left( t,\frac{x}{n+1} \right) {\mathbb {E}}_{\mu _n} \left( {p}_x^{2}(n^2 t)\right) \textrm{d}t = \int _0^{+\infty }\textrm{d}t\int _0^1 \varphi (t,u) T(t, \textrm{d}u), \end{aligned}$$
(2.39)

for any compactly supported test function.

3 Entropy, Energy and Currents Bounds

We first prove that the initial entropy bound (2.33) holds for all times.

Proposition 3.1

Suppose that the law of the initial configuration admits the density \( f_n(0, {\textbf{q}}, {\textbf{p}})\) w.r.t. the Gibbs measure \(\nu _{T_-}\) that satisfies (2.33). Then, for any \(t>0\) there exists \( f_n(t, {\textbf{q}}, {\textbf{p}})\) - the density of the law of the configuration \(\big ({\textbf{q}}(t), {\textbf{p}}(t)\big )\). In addition, for any t there exists a constant C, independent of n, such that

$$\begin{aligned} \sup _{s\in [0,t]} \textbf{H}_{n}(f_n(n^2 s)) \leqslant C n. \end{aligned}$$
(3.1)

Proof

For the sake of simplicity, we present here a proof under the additional assumption that \( f_n(t,{\textbf{q}}, {\textbf{p}})\) is a smooth function such that \(\mu _t(\textrm{d}{\textbf{q}},\textrm{d}{\textbf{p}})= f_n(t,{\textbf{q}}, {\textbf{p}}) \textrm{d}{\textbf{q}}\textrm{d}{\textbf{p}}\) is the solution of the forward equation (2.15). The general case is treated in Appendix F. Using (2.9) for the generator \(\mathcal {G}_t\) we conclude that

$$\begin{aligned} \textbf{H}_{n}(f_n(n^2 t)) - \textbf{H}_{n}(f_n(0)) =\int _0^{n^2 t} \textrm{d}s \int _{\Omega _n} f_n(s) \mathcal {G}_{s} \log f_n(s) \textrm{d}\nu _{T_-} =\textrm{I}_n+\textrm{II}_n, \end{aligned}$$

with

$$\begin{aligned}&{\textrm{I}}_n:=\gamma \int _0^{n^2 t} \textrm{d}s \int _{\Omega _n} f_n(s)\left( {\mathcal {S}}_{\textrm{flip}} + 2\mathcal {S}_{-}\right) \log f_n(s) \textrm{d}\nu _{T_-},\\&{\textrm{II}}_n :=\int _0^{n^2 t} \textrm{d}s \int _{\Omega _n} f_n(s) \mathcal {A}_{ s}\log f_n(s) \textrm{d}\nu _{T_-}. \end{aligned}$$

We have that \( {\textrm{I}}_n\leqslant 0\) because \(\mathcal {S}_{\textrm{flip}}\) and \(\mathcal {S}_{-}\) are symmetric negative operators with respect to the measure \(\nu _{T_-}\).

The only positive contribution comes from the second term where the boundary work defined by (2.27) appears:

$$\begin{aligned} {\textrm{II}}_n = \int _0^{n^2 t} \textrm{d}s {\mathcal {F}}_n( s) \int _{\Omega _n} \frac{p_n}{T_-} f_n(s) \textrm{d}\nu _{T_-} =-\frac{n}{T_-} {J_n(t,\mu _0)}, \end{aligned}$$

where \(\textrm{d}\mu _0:=f_n(0) \textrm{d}\nu _{T_-}\). Therefore

$$\begin{aligned}&\textbf{H}_{n}(f_n(n^2 t))\leqslant \textbf{H}_{n}(f_n(0)) -\frac{n }{T_-} J_n(t,\mu _0). \end{aligned}$$

The conclusion of the proposition then follows from a direct application of (2.33) and Theorem 2.5. \(\square \)

To abbreviate the notation, from now on we omit the subscript on the expectation sign that indicates the initial condition.

Corollary 3.2

(Energy bound) For any \(t_* \geqslant 0\) we have

$$\begin{aligned} \sup _{t\in [0,t_*]}\sup _{n\geqslant 1} {\mathbb {E}}\left[ \frac{1}{n+1} \sum _{x=0}^n {\mathcal {E}}_x(n^2 t)\right] =E(t_*)<+\infty . \end{aligned}$$
(3.2)

Proof

It follows from the entropy inequality, see e.g.  [9, p. 338], that for \(\alpha >0\) small enough we can find \(C_\alpha >0\) such that

$$\begin{aligned} {\mathbb {E}}\left[ \sum _{x=0}^n{\mathcal {E}}_x(n^2 t)\right] \leqslant \frac{1}{\alpha }\big (C_\alpha n+ \textbf{H}_{n}(t)\big ),\qquad t\geqslant 0 . \end{aligned}$$
(3.3)
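For the reader's convenience, we recall the form of the entropy inequality used here: for any \(\alpha >0\) and any measurable \(F\geqslant 0\),

$$\begin{aligned} \int _{\Omega _n} F f_n \,\textrm{d}\nu _{T-} \leqslant \frac{1}{\alpha }\Big ( \log \int _{\Omega _n} e^{\alpha F}\,\textrm{d}\nu _{T-} + \textbf{H}_{n}(f_n)\Big ). \end{aligned}$$

Applying it with \(F=\sum _{x=0}^n{\mathcal {E}}_x\) and using that \(\nu _{T-}\) is Gaussian, so that \(\log \int _{\Omega _n} e^{\alpha F}\textrm{d}\nu _{T-}\leqslant C_\alpha n\) for \(\alpha \) small enough, gives (3.3).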

The conclusion then follows by combining (3.3) with the entropy bound (3.1) of Proposition 3.1. \(\square \)

From Theorem 2.5 and Corollary 3.2 we immediately conclude the following.

Corollary 3.3

(Current size) For any \(t_* \geqslant 0\) there exists \(C>0\) such that

$$\begin{aligned} \sup _{x=0,\ldots ,n+1,\,t\in [0,t_*]}\left| \int _0^t \mathbb {E} \left[ j_{x-1,x}(n^2s)\right] \textrm{d}s\right| \leqslant \frac{C}{ n },\quad n=1,2,\ldots . \end{aligned}$$
(3.4)

In particular, for any \(t>0\) there exists \(C>0\) such that

$$\begin{aligned} \Big | \int _0^{t} \Big \{\mathbb {E} \big [ p_0^2(n^2s) \big ]- T_- \Big \}\textrm{d}s\Big |\leqslant \frac{C}{n}. \end{aligned}$$
(3.5)

Proof

By the local conservation of energy

$$\begin{aligned} n^{-2}\frac{\textrm{d}}{\textrm{d}t}{\mathbb {E}}[{\mathcal {E}}_x(n^2 t)]= {\mathbb {E}}\Big [j_{x-1,x}(n^2 t) -j_{x,x+1}(n^2 t)\Big ] . \end{aligned}$$
(3.6)

Therefore

$$\begin{aligned} \int _0^t {\mathbb {E}}j_{x-1,x}(n^2 s)\textrm{d}s=\int _0^t {\mathbb {E}}j_{n,n+1}(n^2 s)\textrm{d}s +n^{-2}\sum _{y=x}^n\Big ({\mathbb {E}}[{\mathcal {E}}_y(n^2 t)] -{\mathbb {E}}[{\mathcal {E}}_y(0)]\Big ), \end{aligned}$$
(3.7)

and bound (3.4) follows directly from estimates (2.29) and (3.2). Estimate (3.5) is a consequence of the definition of \(j_{-1,0}\) (see (2.14)) and (3.4). \(\square \)

4 Equipartition of Energy and Fluctuation-Dissipation Relations

4.1 Equipartition of the Energy

In the present section we show the equipartition property of the energy.

Proposition 4.1

Suppose that \(\varphi \in {C^1}[0,1]\) is such that \(\textrm{supp}\,\varphi \subset (0,1)\). Then,

$$\begin{aligned} \lim _{n\rightarrow +\infty }\frac{1}{n+1}\sum _{x=0}^n \varphi \left( \frac{x}{n+1}\right) \int _0^{t} \mathbb {E}\Big [ p_x^2(n^2s)- \big (q_x(n^2s)-q_{x-1}(n^2s)\big )^2-\omega _0^2 q_x^2(n^2s)\Big ]\textrm{d}s=0. \end{aligned}$$
(4.1)

Proof

After a simple calculation we obtain the following fluctuation-dissipation relation: for \(x= 1, \dots , n-1\),

$$\begin{aligned} p_x^2 - \omega _0^2 q_x^2 - (q_x - q_{x-1})^2 = \nabla ^\star \left[ q_x(q_{x+1} - q_x)\right] + \mathcal {G}_t\left( q_x p_x + \gamma q_x^2\right) , \end{aligned}$$
(4.2)

where the discrete gradient \(\nabla \) and its adjoint \(\nabla ^\star \) are defined in (A.1) below.
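For orientation, and assuming the usual conventions \((\nabla g)_x = g_{x+1}-g_x\), \((\nabla ^\star g)_x = g_{x-1}-g_x\) (the precise definitions are those of (A.1)), the two operators are related by the summation by parts formula

$$\begin{aligned} \sum _{x=1}^{n} \varphi _x\, (\nabla ^\star g)_x = \sum _{x=1}^{n-1} (\nabla \varphi )_x\, g_x + \varphi _1 g_0 - \varphi _n g_n . \end{aligned}$$

The same mechanism is used in Sect. 5.3 to move discrete derivatives onto the test function \(\varphi \); for \(\varphi \) supported away from the boundary the two boundary terms vanish.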

Therefore,

$$\begin{aligned}&\int _0^t \mathbb {E} \Big [p_x^2(n^2s) - \omega ^2_0q_x^2(n^2s) -(q_x(n^2s) - q_{x-1}(n^2 s))^2\Big ] \textrm{d}s\nonumber \\&\quad = \nabla ^\star \int _0^t \mathbb {E}\left[ q_x(n^2s)(q_{x+1}(n^2s) - q_x(n^2s))\right] \textrm{d}s\nonumber \\&\qquad +n^{-2} \mathbb {E} \Big [q_x(n^2t)p_x(n^2t) +\gamma q_x^2(n^2t) \Big ] - n^{-2} \mathbb {E} \Big [q_x(0)p_x(0) +\gamma q_x^2(0) \Big ]. \end{aligned}$$
(4.3)

After summing up against the test function \(\varphi \) (that has compact support strictly contained in (0, 1)) and using the energy bound (3.2) we conclude (4.1). \(\square \)

4.2 Fluctuation-Dissipation Relation

In analogy to [14, Section 5.1] define

$$\begin{aligned} \begin{aligned} \mathfrak {f}_x&:= \frac{1}{4\gamma } \left( q_{x+1} - q_x\right) \left( p_x + p_{x+1}\right) + \frac{1}{4} \left( q_{x+1} - q_x\right) ^2 ,\qquad x=0, \dots , n-1,\\ \mathfrak {F}_x&:= p_x^2 + \left( q_{x+1} - q_x\right) \left( q_{x} - q_{x-1}\right) -\omega _0^2 q_x^2,\qquad x=0,\dots , n, \end{aligned} \end{aligned}$$
(4.4)

with the convention that \(q_{-1} = q_{0}\), \(q_n=q_{n+1}\). Then

$$\begin{aligned} j_{x,x+1} = - \frac{1}{4\gamma } \nabla \mathfrak {F}_x + \mathcal {G}_t \mathfrak {f}_x - \frac{\delta _{x,n-1}}{4\gamma } {\mathcal {F}}_n(t) \left( q_{n} - q_{n-1}\right) , \quad x=0, \dots , n-1. \end{aligned}$$
(4.5)

5 Local Equilibrium and the Proof of Theorem 2.10

The fundamental ingredients in the proof of Theorem 2.10 are the identification of the work done at the boundary given by Theorem 2.5, the equipartition and fluctuation-dissipation relations of Sect. 4, and the following local equilibrium results. In the bulk we have:

Proposition 5.1

Suppose that \(\varphi \in C[0,1]\) is such that \(\textrm{supp}\,\varphi \subset (0,1)\). Then

$$\begin{aligned} \lim _{n\rightarrow +\infty }\frac{1}{n+1} \sum _{x=0}^{n} \varphi \left( \frac{x}{n+1}\right) \int _0^{t} \mathbb {E} \big [ q_x(n^2s) q_{x+\ell }(n^2s) - G_{\omega _0}(\ell ) p_x^2(n^2s) \big ] \textrm{d}s=0, \end{aligned}$$
(5.1)

for \(\ell =0,1,2\). Here \(G_{\omega _0}(\ell )\) is the Green's function of \(-\Delta _{{\mathbb Z}} + \omega ^2_0\), where \(\Delta _{{\mathbb Z}}\) is the lattice Laplacian, see (A.2).
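For concreteness, \(G_{\omega _0}\) solves \((-\Delta _{{\mathbb Z}}+\omega _0^2)G_{\omega _0}=\delta _0\), with the convention \((\Delta _{{\mathbb Z}}f)(x)=f(x+1)+f(x-1)-2f(x)\) assumed here (cf. (A.2)), which yields the explicit formula

$$\begin{aligned} G_{\omega _0}(x)=\frac{\rho ^{|x|}}{\sqrt{\omega _0^2(\omega _0^2+4)}},\qquad \rho :=\frac{2+\omega _0^2-\sqrt{\omega _0^2(\omega _0^2+4)}}{2}\in (0,1). \end{aligned}$$

In particular, evaluating the defining equation at \(x=1\) gives the identity \(2G_{\omega _0}(1)-G_{\omega _0}(0)-G_{\omega _0}(2)=-\omega _0^2 G_{\omega _0}(1)\) invoked before Corollary 5.3.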

At the left boundary the situation is a bit different, due to the fact that \(q_0=q_{-1}\), and we have

Proposition 5.2

We have

$$\begin{aligned} \lim _{n\rightarrow +\infty } \int _0^{t} \mathbb {E}\big [ q_0^2(n^2 s) - \left( G_{\omega _0}(1) + G_{\omega _0}(0)\right) p_0^2(n^2s) \big ] \textrm{d}s =0. \end{aligned}$$
(5.2)

The proofs of Propositions 5.1 and 5.2 require an analysis of the evolution of the covariance matrix of the position and momentum vector and are carried out in Appendix D. As a consequence, recalling definition (4.4), the bound (3.5) and the identity \(2G_{\omega _0} (1)-G_{\omega _0} (0) -G_{\omega _0} (2) =-\omega _0^2G_{\omega _0} (1)\), we have the following corollary.

Corollary 5.3

For any \(t>0\) and \(\varphi \in C[0,1]\) such that \(\textrm{supp}\,\varphi \subset (0,1)\) we have

$$\begin{aligned} \lim _{n\rightarrow +\infty }\frac{1}{n+1} \sum _{x=0}^{n} \varphi \left( \frac{x}{n+1}\right) \int _0^{t} \mathbb {E} \big [ \mathfrak {F}_x(n^2s) - D p_x^2(n^2s) \big ] \textrm{d}s=0 \end{aligned}$$
(5.3)

and

$$\begin{aligned} \lim _{n\rightarrow +\infty } \int _0^{t} \Big \{\mathbb {E}\big [\mathfrak {F}_0(n^2s) \big ]- DT_- \Big \}\textrm{d}s =0. \end{aligned}$$
(5.4)

Here D is defined in (2.25).
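The mechanism behind (5.3) can be sketched as follows. Expanding \(\mathfrak {F}_x\) from (4.4) and replacing, inside the time integral, each product \(q_y q_{y+\ell }\) by \(G_{\omega _0}(\ell )\, p_x^2\) in accordance with (5.1) gives

$$\begin{aligned} \mathbb {E}\big [\mathfrak {F}_x\big ]&=\mathbb {E}\big [ p_x^2 + q_{x+1}q_x - q_{x+1}q_{x-1} - q_x^2 + q_xq_{x-1} -\omega _0^2 q_x^2\big ]\\&\approx \big (1 + 2G_{\omega _0}(1) - G_{\omega _0}(2) - G_{\omega _0}(0) -\omega _0^2 G_{\omega _0}(0)\big )\, \mathbb {E}\big [p_x^2\big ]\\&=\big (1 -\omega _0^2 \big (G_{\omega _0}(0)+G_{\omega _0}(1)\big )\big )\, \mathbb {E}\big [p_x^2\big ], \end{aligned}$$

where the last step uses the identity \(2G_{\omega _0}(1)-G_{\omega _0}(0)-G_{\omega _0}(2)=-\omega _0^2G_{\omega _0}(1)\). This suggests the identification \(D=1-\omega _0^2\big (G_{\omega _0}(0)+G_{\omega _0}(1)\big )\), to be compared with the definition (2.25).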

5.1 Proof of Theorem 2.10

Consider the subset \(\mathcal {M}_{+,E_*}([0,1])\) of \(\mathcal {M}_+([0,1])\) (the space of all positive, finite Borel measures on [0, 1]) consisting of measures with total mass less than or equal to \(E_*\). It is compact in the topology of weak convergence of measures. In addition, the topology is metrizable when restricted to this set.

For any \(t\in [0,t_*]\) and \(\varphi \in C[0,1]\) define

$$\begin{aligned} \begin{aligned} \xi _n(t, \varphi ) = \frac{1}{n+1} \sum _{x=0}^n \varphi _x {\mathbb {E}}\big [{\mathcal {E}}_x(n^2t)\big ],\qquad \varphi _x := \varphi \left( \frac{x}{n+1}\right) \end{aligned} \end{aligned}$$
(5.5)

Since flips of the momenta do not affect the energies \({\mathcal {E}}_x\), we have \(\xi _n \in C\left( [0,t_*], \mathcal {M}_+([0,1])\right) \). Moreover, as a consequence of Corollary 3.2, for any \(t_*>0\) the total energy is bounded by \(E_* = E(t_*)\) (see (3.2)), so that in fact \(\xi _n \in C\left( [0,t_*], \mathcal {M}_{+,E_*}([0,1])\right) \). The latter space is endowed with the topology of uniform convergence.

5.2 Compactness

Since \(\mathcal {M}_{+,E_*}([0,1])\) is compact, in order to show that \((\xi _n)\) is compact we only need to control the modulus of continuity in time of \(\xi _n(t, \varphi )\) for any \(\varphi \in C^1[0,1]\), see e.g. [12, p. 234]. This is a consequence of the following proposition.

Proposition 5.4

$$\begin{aligned} \lim _{\delta \downarrow 0} \limsup _{n\rightarrow \infty } \sup _{0\leqslant s ,t\leqslant t_*, |t-s| < \delta } \left| \xi _n(t, \varphi ) - \xi _n(s, \varphi )\right| = 0 \end{aligned}$$
(5.6)

The proof of Proposition 5.4 is postponed until Sect. 5.4; we first use it to proceed with the limit identification argument.

5.3 Limit Identification

Consider a smooth test function \(\varphi \in C^2[0,1]\) such that

$$\begin{aligned} \varphi (0)= \varphi '(1)=0. \end{aligned}$$
(5.7)

In what follows we use the following notation. For a given function \(\varphi :[0,1]\rightarrow {\mathbb R}\) and \(n=1,2,\ldots \) we define discrete approximations of the function itself and of its gradient, respectively by

$$\begin{aligned} \varphi _x:=\varphi (\tfrac{x}{n+1}) ,\quad (\nabla _n \varphi )_x:=(n+1)\big (\varphi (\tfrac{x+1}{n+1})-\varphi (\tfrac{x}{n+1})\big ), \, \text{ for } x\in \{0,\ldots ,n\}. \end{aligned}$$
(5.8)

We use the convention \(\varphi (-\tfrac{1}{n+1})=\varphi (0)\). Let \(0<t_* <+\infty \) be fixed. In what follows we show that, for any \(t\in [0,t_*]\)

$$\begin{aligned} \begin{aligned} \xi _n(t,\varphi ) - \xi _n(0,\varphi ) = \frac{\varphi '(0)DT_-t}{4\gamma } -Jt\varphi (1) +\frac{D}{4\gamma }\int _0^{t} \xi _n(s,\varphi '') \textrm{d}s + o_n. \end{aligned} \end{aligned}$$
(5.9)

Here, and in what follows, \(o_n\) denotes a quantity satisfying \(\lim _{n\rightarrow +\infty }o_n=0\). Thus any limit point of \(\big (\xi _n(t)\big )\) has to be the unique weak solution of (2.36), which proves the conclusion of Theorem 2.10.
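To see why (5.9) identifies the limit, suppose \(T(t,y)\) is a smooth solution of the heat equation \(\partial _t T=\frac{D}{4\gamma }\partial _y^2 T\) on \([0,1]\), with \(T(t,0)=T_-\) and the Fourier boundary condition \(\frac{D}{4\gamma }\partial _y T(t,1)=-J\) (the form of (2.36) suggested by (2.35); the precise statement is in Sect. 2). For a test function \(\varphi \) satisfying (5.7), two integrations by parts give

$$\begin{aligned} \frac{\textrm{d}}{\textrm{d}t}\int _0^1 T\varphi \,\textrm{d}y&=\frac{D}{4\gamma }\int _0^1 \partial _y^2 T\,\varphi \,\textrm{d}y =\frac{D}{4\gamma }\int _0^1 T\varphi ''\,\textrm{d}y +\frac{D}{4\gamma }\Big [\partial _y T\,\varphi -T\varphi '\Big ]_0^1\\&=\frac{D}{4\gamma }\int _0^1 T\varphi ''\,\textrm{d}y -J\varphi (1)+\frac{DT_-\varphi '(0)}{4\gamma }, \end{aligned}$$

which, upon integration in time, matches term by term the right hand side of (5.9).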

By an approximation argument we can restrict ourselves to the case when \(\textrm{supp}\,\varphi ''\subset (0,1)\). Then, by the local conservation of energy and a summation by parts, we have

$$\begin{aligned} \begin{aligned} \xi _n(t, \varphi ) - \xi _n(0, \varphi )&= \frac{n^2}{n+1} \sum _{x=0}^{n-1} \left( \varphi _{x+1} - \varphi _x\right) \int _0^{t} \mathbb {E}\left[ j_{x,x+1} (n^2\tau )\right] \textrm{d}\tau \\&\quad - \frac{n^2}{n+1} \varphi _n \int _0^{t}\mathbb {E}\left[ j_{n,n+1} (n^2\tau )\right] \textrm{d}\tau , \end{aligned} \end{aligned}$$
(5.10)

By Theorem 2.5 the last term converges to \(- \varphi (1) J t\). On the other hand from (4.5) we have

$$\begin{aligned} \frac{n^2}{n+1} \sum _{x=0}^{n-1} \left( \varphi _{x+1} - \varphi _x\right) \int _0^{t} \mathbb {E}\left[ j_{x,x+1} (n^2\tau )\right] \textrm{d}\tau =\sum _{j=1}^3\textrm{I}_{n,j}, \end{aligned}$$
(5.11)

where

$$\begin{aligned}&\textrm{I}_{n,1}:=-\frac{1}{4\gamma }\left( \frac{n}{n+1}\right) ^2 \sum _{x=0}^{n-1} \nabla _n \varphi _x \int _0^{t} \mathbb {E}\big [ \; \nabla \mathfrak {F}_x(n^2s) \big ] \textrm{d}s,\\&\textrm{I}_{n,2} :=\left( \frac{1}{n+1}\right) ^2\sum _{x=0}^{n-1} \nabla _n \varphi _x {\mathbb {E}}\Big [ \mathfrak {f}_x(n^2t)- \mathfrak {f}_x(0)\Big ] ,\\&\textrm{I}_{n,3} :=-\frac{1}{4\gamma }\left( \frac{n}{n+1}\right) ^2 \nabla _n \varphi _{n-1} \int _0^{t}{\mathcal {F}}_n(n^2s) \mathbb {E}\big [ q_{n}(n^2s) - q_{n-1}(n^2s)\big ]\textrm{d}s. \end{aligned}$$

It is easy to see from Corollary 3.2 that \( \textrm{I}_{n,2} ={\overline{o}}_n(t).\) Here the symbol \({\overline{o}}_n(t)\) stands for a quantity that satisfies

$$\begin{aligned} \lim _{n\rightarrow +\infty }\sup _{s\in [0,t_*]}|{\overline{o}}_n(s)|=0. \end{aligned}$$
(5.12)

Using the fact that \(\varphi '(1)=0\) and the estimate (B.15), we conclude also that \(\textrm{I}_{n,3} ={\overline{o}}_n(t)\). Thanks to Corollary 3.2 and (5.7) we have

$$\begin{aligned}&\textrm{I}_{n,1}= \sum _{j=1}^3\textrm{I}_{n,1}^{(j)} + {\overline{o}}_n(t),\quad \text{ where }\\&\textrm{I}_{n,1}^{(1)}:= \frac{1}{4\gamma (n+1)} \sum _{x=0}^{n} \varphi ''\left( \frac{x}{n+1}\right) \int _0^{t} \mathbb {E}\big [ \; \mathfrak {F}_x(n^2s) \big ] \textrm{d}s,\\&\textrm{I}_{n,1}^{(2)}:=-\frac{1}{4\gamma }\left( \frac{n}{n+1}\right) ^2 \varphi '\left( \frac{n-1}{n+1}\right) \int _0^{t} \mathbb {E}\big [\; \mathfrak {F}_n(n^2s) \big ] \textrm{d}s = {\overline{o}}_n(t),\\&\textrm{I}_{n,1}^{(3)}:= \frac{1}{4\gamma } \varphi '(0) \int _0^{t} \mathbb {E}\big [ \; \mathfrak {F}_0(n^2s) \big ] \textrm{d}s. \end{aligned}$$

Since \({\textrm{supp}}\,\varphi ''\subset (0,1)\), by Corollary 5.3 and the equipartition property (Proposition 4.1) for a fixed \(t\in [0,t_*]\) we have

$$\begin{aligned} \textrm{I}_{n,1}^{(1)}&=\frac{D}{4\gamma (n+1)} \sum _{x=0}^{n}\varphi '' \left( \frac{x}{n+1}\right) \int _0^{t} \mathbb {E}\Big [ \; p_x^2(n^2s) \Big ] \textrm{d}s+{\overline{o}}_n(t)\nonumber \\&=\frac{D}{4\gamma (n+1)} \sum _{x=0}^{n}\varphi '' \left( \frac{x}{n+1}\right) \int _0^{t} \mathbb {E}\Big [ \; \mathcal {E}_x(n^2s) \Big ] \textrm{d}s+o_n. \end{aligned}$$
(5.13)

Finally, thanks to (5.4), we obtain

$$\begin{aligned} \lim _{n\rightarrow +\infty }\textrm{I}_{n,1}^{(3)} = \frac{\varphi '(0)DT_-t}{4\gamma }. \end{aligned}$$

Thus (5.9) follows. \(\square \)

5.4 Proof of Proposition 5.4

From the calculation made in (5.10)–(5.13) we conclude that for any function \(\varphi \in C^2[0,1]\) satisfying (5.7) we have

$$\begin{aligned} \begin{aligned}&\xi _n(t, \varphi ) - \xi _n(s, \varphi ) = \frac{\varphi '(0)DT_-}{4\gamma } (t-s) -J\varphi (1) (t-s) \\&+\frac{D}{4\gamma (n+1)} \sum _{x=0}^{n}\varphi '' \left( \frac{x}{n+1}\right) \int _s^{t} \mathbb {E}\Big [ \; p_x^2(n^2\tau ) \Big ] \textrm{d}\tau + {\overline{o}}_n(t) + {\overline{o}}_n(s) \end{aligned} \end{aligned}$$
(5.14)

for any \( 0\leqslant s<t \leqslant t_*\), and (5.6) follows immediately. A density argument completes the proof. \(\square \)