1 Introduction

The thermodynamic uncertainty relation (TUR) in a non-equilibrium steady state (NESS) provides a bound on the entropy production in terms of the mean and variance of an arbitrary current [1]. Specifically, in the NESS, after a time t a fluctuating integrated current X(t) has the mean \(\left\langle X(t)\right\rangle =j\,t\) and the diffusivity \(D=\lim _{t\rightarrow \infty }\left\langle (X(t)-j\,t)^2\right\rangle /(2\,t)\). With the entropy production rate \(\sigma \), the expectation of the total entropy production in the NESS is given by \(\sigma \,t\). These quantities satisfy the universal thermodynamic uncertainty relation

$$\begin{aligned} \sigma \ge \frac{j^2}{D}, \end{aligned}$$

i.e. \(\sigma \) is bounded from below by \(j^2/D\). The TUR has been proven for Markovian dynamics on a general network by Gingrich et al. [2, 3] and further investigated for a number of different settings, both in the classical (see, e.g., [4,5,6,7,8,9,10,11,12,13,14,15]) and the quantum domain (see, e.g., [16,17,18,19,20,21,22]). It has led to a deeper understanding of systems far from equilibrium, as it provides a lower bound on the dissipation given only knowledge of the current fluctuations. Such a relation is of interest for the modeling and analysis of, e.g., biomolecular processes, which may often be described by a Markov network (see e.g. [23,24,25]).
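As a minimal numerical illustration of the quantities entering the TUR (not part of the original discussion), consider drifted Brownian motion, \(dX=v\,dt+\sqrt{2D}\,dW\): its NESS current is \(j=v\), its diffusivity is D, and its entropy production rate is \(\sigma =v^2/D\) (in units of \(k_B\)), so the bound is saturated. The following sketch, with arbitrary parameters, estimates j and D from sampled trajectories and checks the relation.

```python
import numpy as np

# Minimal illustration (not from [1]): drifted Brownian motion
#   dX = v dt + sqrt(2 D) dW
# has NESS current j = v, diffusivity D and entropy production rate
# sigma = v**2 / D (in units of k_B), so the TUR  sigma >= j**2 / D
# is saturated.  All parameters below are arbitrary choices.
rng = np.random.default_rng(0)
v, D, t, samples = 1.3, 0.7, 50.0, 200_000

# X(t) is Gaussian with mean v*t and variance 2*D*t, so it can be sampled exactly
X = v * t + np.sqrt(2.0 * D * t) * rng.standard_normal(samples)

j_est = X.mean() / t                 # empirical current      <X(t)>/t
D_est = X.var() / (2.0 * t)          # empirical diffusivity  Var[X(t)]/(2t)
sigma = v**2 / D                     # entropy production rate of the model

print("sigma                :", sigma)
print("j^2/D (estimated)    :", j_est**2 / D_est)
print("TUR product sigma*t*eps^2 (should be ~2):",
      sigma * t * X.var() / X.mean()**2)
```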

Of particular interest is the work by Gingrich et al. [8], where the authors extend the relation from mesoscopic Markov jump processes to overdamped Langevin equations. There, a temporal coarse-graining procedure is described, which allows a discrete Markov jump process to be formulated as an overdamped Langevin equation for the mesoscopic states of the model. These authors observe that for purely dissipative dynamics the TUR is saturated. An additional spatial coarse-graining performed in [8] results in a macroscopic description, for which it is found that the tightness of the resulting uncertainty relation increases with the strength of the Gaussian potential wells (see [8], Fig. 9).

In this work, we present a field-theoretic equivalent of the TUR. Such a thermodynamic uncertainty relation for general field-theoretic Langevin equations may prove helpful in further understanding complex dynamics like turbulence in fluid flow or non-linear growth processes, described by the stochastic Navier–Stokes equation (e.g. [26]) or the Kardar–Parisi–Zhang equation [27], respectively. Both are prominent representatives of field-theoretic Langevin equations. For the latter, we highlight recent progress on the inward growth of interfaces in liquid-crystal turbulence as an experimental realization [28]. On the theory side, analytic results on the aging of two-time correlation functions for interface growth were found [29]. Furthermore, we refer the reader to three review articles [30,31,32] concerning the latest developments around the Kardar–Parisi–Zhang universality class. Recently, ‘generalized TURs’ have been derived from fluctuation relations [33, 34]. For current-like observables, the original TUR [1] is stronger than the ‘generalized TURs’. In this manuscript we use such current-like observables and thus focus on the original TUR.

The paper is organized as follows. In order to state a field-theoretic version of the thermodynamic uncertainty relation, we translate in Sect. 2 the notion of current, diffusivity and entropy production known from the setting of coupled Langevin equations to their respective equivalents for general field-theoretic Langevin equations. As an illustration of the generalizations introduced in Sect. 2, we will then study the one-dimensional Kardar–Parisi–Zhang (KPZ) equation as a paradigmatic example of such a field-theoretic Langevin equation. As the calculation of the current, diffusivity and entropy production in the NESS requires a solution to the KPZ equation, we will use spectral theory and construct an approximate solution in the weak-coupling regime of the KPZ equation in Sect. 3. With this approximation, we will then derive in Sect. 4 the thermodynamic uncertainty relation to quadratic order in the coupling parameter.

2 Thermodynamic Uncertainty Relation for a Field Theory

In this section, we will present a generalization of the thermodynamic uncertainty relation introduced in [1] to a field theory. Consider a generic field theory of the form

$$\begin{aligned} \begin{aligned} \partial _t\varPhi _\gamma (\mathbf {r},t)&=F_\gamma \left[ \{\varPhi _\mu (\mathbf {r},t)\}\right] +\eta _\gamma (\mathbf {r},t),\\ \left\langle \eta _\gamma (\mathbf {r},t)\right\rangle&=0,\\ \left\langle \eta _\gamma (\mathbf {r},t)\eta _\kappa (\mathbf {r}^\prime ,t^\prime )\right\rangle&=K(\mathbf {r}-\mathbf {r}^\prime )\delta _{\gamma ,\,\kappa }\delta (t-t^\prime ). \end{aligned} \end{aligned}$$
(1)

Here \(\varPhi _\gamma (\mathbf {r},t)\) is a scalar field or the \(\gamma \)-th component of a vector field \((\gamma \in [1,n];\,n\in \mathbb {N})\) with \(\mathbf {r}\in \varOmega \subset \mathbb {R}^d\), \(F_\gamma \left[ \{\varPhi _\mu (\mathbf {r},t)\}\right] \) represents a (possibly non-linear) functional of \(\varPhi _\mu \), and \(\eta _\gamma (\mathbf {r},t)\) denotes Gaussian noise that is white in time, with \(K(\mathbf {r}-\mathbf {r}^\prime )\) describing its spatial correlations. Prominent examples of (1) are the stochastic Navier–Stokes equation for turbulent flow (see e.g. [26]) or the Kardar–Parisi–Zhang equation for non-linear growth processes [27], to name only two. The latter will be treated in the subsequent sections within the framework established in the following.

Let us begin with the introduction of some notions. A natural choice of a local fluctuating current \(\mathbf {j}(\mathbf {r},t)\) is

$$\begin{aligned} \mathbf {j}(\mathbf {r},t)\equiv \partial _t{{\varvec{\Phi }}}(\mathbf {r},t), \end{aligned}$$
(2)

with \({{\varvec{\Phi }}}(\mathbf {r},t)=\left( \varPhi _1(\mathbf {r},t),\ldots ,\varPhi _n(\mathbf {r},t)\right) ^\top \). The local current \(\mathbf {j}(\mathbf {r},t)\) is fluctuating around its mean, i.e.

$$\begin{aligned} \mathbf {j}(\mathbf {r},t)=\left\langle \mathbf {j}(\mathbf {r},t)\right\rangle +\delta \mathbf {j}(\mathbf {r},t), \end{aligned}$$
(3)

with \(\delta \mathbf {j}(\mathbf {r},t)\) denoting the fluctuations. Given that the system (1) possesses a NESS, the long-time behavior of the local current (2) can be described as

$$\begin{aligned} \mathbf {j}(\mathbf {r},t)=\mathbf {J}(\mathbf {r})+\delta \mathbf {j}(\mathbf {r},t), \end{aligned}$$
(4)

with \(\delta \mathbf {j}(\mathbf {r},t)\) being now a stationary stochastic process with zero mean and with

$$\begin{aligned} \mathbf {J}(\mathbf {r})=\lim _{t\rightarrow \infty }\left\langle \partial _t{{\varvec{\Phi }}}(\mathbf {r},t)\right\rangle =\lim _{t\rightarrow \infty }\frac{\left\langle {{\varvec{\Phi }}}(\mathbf {r},t)\right\rangle }{t}. \end{aligned}$$
(5)

Here \(\left\langle \cdot \right\rangle \) denotes averages with respect to the noise history. Thus, in a NESS, the local current \(\mathbf {j}(\mathbf {r},t)=\partial _t{{\varvec{\Phi }}}(\mathbf {r},t)\) is in a statistically stationary state, i.e. becomes a stationary stochastic process with mean \(\mathbf {J}(\mathbf {r})\). As the thermodynamic uncertainty relation in a Markovian network is formulated for some form of integrated currents, we define in analogy the projection of the local current onto an arbitrarily directed weight function \(\mathbf {g}(\mathbf {r})\)

$$\begin{aligned} j_g(t)\equiv \int _\varOmega d\mathbf {r}\,\mathbf {j}(\mathbf {r},t)\cdot \mathbf {g}(\mathbf {r}). \end{aligned}$$
(6)

The integral in (6) represents the usual \(\mathcal {L}_2\)-product of the two vector fields \(\mathbf {j}(\mathbf {r},t)\) and \(\mathbf {g}(\mathbf {r})\) with \(\mathbf {j}(\mathbf {r},t)\cdot \mathbf {g}(\mathbf {r})=\sum _kj_k(\mathbf {r},t)g_k(\mathbf {r})\) as the scalar product between \(\mathbf {j}\) and \(\mathbf {g}\). With this projected current \(j_g(t)\), we associate a fluctuating ‘output’

$$\begin{aligned} \varPsi _g(t)\equiv \int _\varOmega d\mathbf {r}\,{{\varvec{\Phi }}}(\mathbf {r},t)\cdot \mathbf {g}(\mathbf {r}). \end{aligned}$$
(7)

Hence \(j_g(t)=\partial _t\varPsi _g(t)\) and in the NESS

$$\begin{aligned} J_g\equiv \lim _{t\rightarrow \infty }\frac{\left\langle \varPsi _g(t)\right\rangle }{t}. \end{aligned}$$
(8)
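By (5) and (7), and assuming that the long-time limit and the spatial integral may be interchanged, this is equivalently the projection of the mean local current onto the weight function,

$$\begin{aligned} J_g=\lim _{t\rightarrow \infty }\int _\varOmega d\mathbf {r}\,\frac{\left\langle {{\varvec{\Phi }}}(\mathbf {r},t)\right\rangle }{t}\cdot \mathbf {g}(\mathbf {r})=\int _\varOmega d\mathbf {r}\,\mathbf {J}(\mathbf {r})\cdot \mathbf {g}(\mathbf {r}). \end{aligned}$$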

The fluctuating output \(\varPsi _g(t)\) provides us with the means to define a measure of the precision of the system output, namely the squared variational coefficient \(\epsilon ^2\), as

$$\begin{aligned} \epsilon ^2\equiv \frac{\left\langle \left( \varPsi _g(t)-\left\langle \varPsi _g(t)\right\rangle \right) ^2\right\rangle }{\left\langle \varPsi _g(t)\right\rangle ^2}. \end{aligned}$$
(9)

If the system is in its non-equilibrium steady state, we can rewrite (9) as

$$\begin{aligned} \epsilon ^2=\frac{\left\langle \left( \varPsi _g(t)-J_g\,t\right) ^2\right\rangle }{\left( J_g\,t\right) ^2}. \end{aligned}$$
(10)

Let us now connect the variance of the output \(\varPsi _g(t)\) to the Green–Kubo diffusivity given by

$$\begin{aligned} D_g\equiv \int _0^\infty dt\,\left\langle \delta j_g(t)\,\delta j_g(0)\right\rangle . \end{aligned}$$
(11)

Using (6) and (2), it is straightforward to verify that

$$\begin{aligned} \int _0^t dt^\prime \,\delta j_g(t^\prime )=\widetilde{\varPsi }_g(t)-\left\langle \widetilde{\varPsi }_g(t)\right\rangle ,\qquad \widetilde{\varPsi }_g(t)\equiv \varPsi _g(t)-\varPsi _g(0). \end{aligned}$$

Thus,

$$\begin{aligned} \left\langle \left( \widetilde{\varPsi }_g(t)-\left\langle \widetilde{\varPsi }_g(t)\right\rangle \right) ^2\right\rangle =\int _0^tdr\int _0^tds\,\left\langle \delta j_g(r)\delta j_g(s)\right\rangle . \end{aligned}$$
(12)

By dividing both sides of (12) by 2t and taking the limit \(t\rightarrow \infty \), it is found, in analogy to [35], that

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{\int _0^tdr\int _0^tds\,\left\langle \delta j_g(r)\delta j_g(s)\right\rangle }{2t}=D_g, \end{aligned}$$

with \(D_g\) from (11) and therefore

$$\begin{aligned} D_g=\lim _{t\rightarrow \infty }\frac{\left\langle \left( \widetilde{\varPsi }_g(t)-\left\langle \widetilde{\varPsi }_g(t)\right\rangle \right) ^2\right\rangle }{2t}. \end{aligned}$$
(13)
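The limit leading to (13) follows from the stationarity of \(\delta j_g\) in the NESS: writing \(\left\langle \delta j_g(r)\delta j_g(s)\right\rangle =C(r-s)\) with an (assumed integrable) stationary correlation function C, the double integral in (12) reduces to

$$\begin{aligned} \int _0^tdr\int _0^tds\,C(r-s)=2\int _0^td\tau \,(t-\tau )\,C(\tau )\simeq 2\,t\int _0^\infty d\tau \,C(\tau )\qquad \text {for}\;t\gg 1, \end{aligned}$$

which, after division by 2t, reproduces the Green–Kubo form (11).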

Since in the NESS \(\varPsi _g(t)\) is stochastically independent of the initial configuration \(\varPsi _g(0)\), we can simplify the expression for the diffusivity according to

$$\begin{aligned} D_g=\lim _{t\rightarrow \infty }\frac{\left\langle \left( \varPsi _g(t)-\left\langle \varPsi _g(t)\right\rangle \right) ^2\right\rangle }{2t}. \end{aligned}$$
(14)

With the result of (14) and \(\epsilon ^2\) from (9), an alternative formulation of the precision in a NESS is

$$\begin{aligned} \epsilon ^2=\frac{\left\langle \left( \varPsi _g(t)-\left\langle \varPsi _g(t)\right\rangle \right) ^2\right\rangle }{\left\langle \varPsi _g(t)\right\rangle ^2}=2\frac{D_g}{J_g^2}\frac{1}{t}. \end{aligned}$$
(15)

We proceed by expressing the total entropy production \(\varDelta s_\text {tot}\). The total entropy production is given by the sum of the entropy dissipated into the medium along a single trajectory, \(\varDelta s_\text {m}\), and the stochastic entropy, \(\varDelta s\), of such a trajectory; see e.g. [36]. The medium entropy is given by

$$\begin{aligned} \varDelta s_\text {m}\equiv \ln \frac{p[{{\varvec{\Phi }}}(\mathbf {r},t)|{{\varvec{\Phi }}}(\mathbf {r},t_0)]}{p[\widetilde{{{\varvec{\Phi }}}}(\mathbf {r},t)|\widetilde{{{\varvec{\Phi }}}}(\mathbf {r},t_0)]}. \end{aligned}$$
(16)

Here \(p[{{\varvec{\Phi }}}(\mathbf {r},t)|{{\varvec{\Phi }}}(\mathbf {r},t_0)]\) denotes the functional probability density of the entire vector field \({{\varvec{\Phi }}}(\mathbf {r},t)\), i.e. the field configuration after some time t has elapsed since a starting time \(t_0<t\), conditioned on an initial value \({{\varvec{\Phi }}}(\mathbf {r},t_0)\), i.e. a certain field configuration at the starting time \(t_0\). In contrast, \(p[\widetilde{{{\varvec{\Phi }}}}(\mathbf {r},t)|\widetilde{{{\varvec{\Phi }}}}(\mathbf {r},t_0)]\) is the conditional probability density of the time-reversed process, i.e. starting in the final configuration at time \(t_0\) and ending up in the original one at time t. For the sake of simplicity, we will write in the following \(p[{{\varvec{\Phi }}}]\) and \(p[\widetilde{{{\varvec{\Phi }}}}]\) instead of \(p[{{\varvec{\Phi }}}(\mathbf {r},t)|{{\varvec{\Phi }}}(\mathbf {r},t_0)]\) and \(p[\widetilde{{{\varvec{\Phi }}}}(\mathbf {r},t)|\widetilde{{{\varvec{\Phi }}}}(\mathbf {r},t_0)]\), respectively. The functional probability density can be expressed via a so-called action functional, \(\mathcal {S}[{{\varvec{\Phi }}}]\), according to

$$\begin{aligned} p[{{\varvec{\Phi }}}]\propto \exp \left[ -\mathcal {S}[{{\varvec{\Phi }}}]\right] . \end{aligned}$$
(17)

For the system (1), the action functional (see e.g. [36,37,38,39,40,41,42,43,44] and references therein) is given by

$$\begin{aligned} \begin{aligned} \mathcal {S}[{{\varvec{\Phi }}}]&=\frac{1}{2}\sum _\gamma \int _{t_0}^tdt^\prime \int d\mathbf {r}\,\left( \dot{\varPhi }_\gamma (\mathbf {r},t^\prime )-F_\gamma [\{\varPhi _\mu (\mathbf {r},t^\prime )\}]\right) \\&\quad \times \int d\mathbf {r}^\prime \,K^{-1}(\mathbf {r}-\mathbf {r}^\prime )\left( \dot{\varPhi }_\gamma (\mathbf {r}^\prime ,t^\prime )-F_\gamma [\{\varPhi _\mu (\mathbf {r}^\prime ,t^\prime )\}]\right) , \end{aligned} \end{aligned}$$
(18)

where \(K^{-1}(\mathbf {r}-\mathbf {r}^\prime )\) is the inverse of the noise correlation kernel \(K(\mathbf {r}-\mathbf {r}^\prime )\) from (1). The two integral kernels fulfill

$$\begin{aligned} \int d\mathbf {r}^{\prime \prime }\,K(\mathbf {r}-\mathbf {r}^{\prime \prime })K^{-1}(\mathbf {r}^{\prime \prime }-\mathbf {r}^\prime )=\delta ^d(\mathbf {r}-\mathbf {r}^\prime ). \end{aligned}$$
(19)

Before we proceed with the calculation of the medium entropy, let us make the following general remarks. Throughout the paper, stochastic integrals are interpreted in the Stratonovich sense, i.e. mid-point discretization is used. This is essential for the calculation of the medium entropy \(\varDelta s_\text {m}\) via (16), where Ito discretization may lead to inconsistencies. Using Stratonovich discretization, however, gives rise to an additional term in the action functional (18), which is given by the functional derivative of the generalized force term F from (1) with respect to the field \({{\varvec{\Phi }}}\). This contribution stems from the Jacobian ensuing from the variable transformation from the noise field to the field \({{\varvec{\Phi }}}\) in the functional integral used for calculating expectation values of path-dependent observables. As this addition to the action functional does not contribute to the medium entropy (cf. [36, 45]), it is neglected in (18).

Inserting (17) and (18) into (16) and noticing that only the time-antisymmetric part of the action functional (18) and of its time-reversed counterpart survives leads to (see also [36, 45, 46])

$$\begin{aligned} \varDelta s_\text {m}=2\sum _\gamma \int _{t_0}^tdt^\prime \int d\mathbf {r}\int d\mathbf {r}^\prime \,\dot{\varPhi }_\gamma (\mathbf {r},t^\prime )K^{-1}(\mathbf {r}-\mathbf {r}^\prime )F_\gamma [\{\varPhi _\mu (\mathbf {r}^\prime ,t^\prime )\}]. \end{aligned}$$
(20)
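To make this step explicit: under time reversal \(\dot{\varPhi }_\gamma \rightarrow -\dot{\varPhi }_\gamma \), while the field configurations, and hence \(F_\gamma \), are merely traversed in reverse order, so that the quadratic terms in (18) are time-symmetric and only the cross terms change sign. Hence

$$\begin{aligned} \varDelta s_\text {m}&=\mathcal {S}[\widetilde{{{\varvec{\Phi }}}}]-\mathcal {S}[{{\varvec{\Phi }}}]\\&=\sum _\gamma \int _{t_0}^tdt^\prime \int d\mathbf {r}\int d\mathbf {r}^\prime \,\Big (\dot{\varPhi }_\gamma (\mathbf {r},t^\prime )K^{-1}(\mathbf {r}-\mathbf {r}^\prime )F_\gamma [\{\varPhi _\mu (\mathbf {r}^\prime ,t^\prime )\}]\\&\qquad +F_\gamma [\{\varPhi _\mu (\mathbf {r},t^\prime )\}]K^{-1}(\mathbf {r}-\mathbf {r}^\prime )\dot{\varPhi }_\gamma (\mathbf {r}^\prime ,t^\prime )\Big ), \end{aligned}$$

where the two contributions coincide after relabeling \(\mathbf {r}\leftrightarrow \mathbf {r}^\prime \), since the kernel \(K^{-1}\) is even, which yields (20).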

\(\varDelta s_\text {m}\) is a measure of the energy dissipated into the medium during the time interval \([t_0,\,t]\), in analogy to the Langevin case. The stochastic entropy change \(\varDelta s\) for the same trajectory is given by (see also [45])

$$\begin{aligned} \varDelta s\equiv -\ln p[{{\varvec{\Phi }}}(\mathbf {r},\tau )]\Big |_{t_0}^t. \end{aligned}$$
(21)

Thus, the total entropy production \(\varDelta s_\text {tot}\) reads

$$\begin{aligned} \begin{aligned} \varDelta s_\text {tot}&=2\sum _\gamma \int _{t_0}^tdt^\prime \int d\mathbf {r}\int d\mathbf {r}^\prime \,\dot{\varPhi }_\gamma (\mathbf {r},t^\prime )K^{-1}(\mathbf {r}-\mathbf {r}^\prime )F_\gamma [\{\varPhi _\mu (\mathbf {r}^\prime ,t^\prime )\}]\\&\quad -\ln p[{{\varvec{\Phi }}}(\mathbf {r},\tau )]\Big |_{t_0}^t. \end{aligned} \end{aligned}$$
(22)

With (22) we may also define the rate of total entropy production \(\sigma \) in a NESS according to

$$\begin{aligned} \sigma =\lim _{t\rightarrow \infty }\frac{\left\langle \varDelta s_\text {tot}\right\rangle }{t}. \end{aligned}$$
(23)

The expressions stated in (9) and (22) provide us with the necessary ingredients to formulate the field-theoretic thermodynamic uncertainty relation as

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle \,\epsilon ^2=\frac{2\,D_g\,\sigma }{J_g^2}\ge 2, \end{aligned}$$
(24)

with \(\sigma \) from (23), \(D_g\) from (13) and \(J_g\) from (8). The higher the precision, i.e. the smaller \(\epsilon ^2\), the more entropy \(\left\langle \varDelta s_\text {tot}\right\rangle \) is generated, i.e. the higher the thermodynamic cost. In other words, in order to sustain a certain NESS current \(J_g\), a minimal entropy production rate \(\sigma \ge J_g^2/D_g\) is required. We anticipate that for the case of the KPZ equation, the product on the left-hand side of (24) will turn out to be equal to five (see (111)), i.e. the TUR given in (24) is not saturated. As will be discussed below, we can attribute this larger value to the KPZ non-linearity.

3 Theoretical Background

Within this section we will lay the groundwork for the calculation of the quantities entering the TUR for the KPZ equation. The main focus is on the perturbative solution of the KPZ equation in the weak-coupling regime and on the discussion of issues with diverging terms due to a lack of regularity.

3.1 The KPZ Equation in Spectral Form

Consider the one-dimensional KPZ equation [27] on the interval [0, b], \(b>0\), with Gaussian white noise \(\eta (x,t)\)

$$\begin{aligned} \begin{aligned} \frac{\partial \, h(x,t)}{\partial t}&=\hat{L}\,h(x,t)+\frac{\lambda }{2}\,\left( \frac{\partial \, h(x,t)}{\partial x}\right) ^2+\eta (x,t)\\ \left\langle \eta (x,t)\right\rangle&=0\\ \left\langle \eta (x,t)\,\eta (x^\prime ,t^\prime )\right\rangle&=\varDelta _0\delta (x-x^\prime )\delta (t-t^\prime ), \end{aligned} \end{aligned}$$
(25)

subject to periodic boundary conditions and, for simplicity, vanishing initial condition \(h(x,0)=0\), \(x\in [0,b]\) (i.e. the growth process starts with a flat profile). Here \(\hat{L}=\nu \partial _x^2\) is a differential diffusion operator, \(\varDelta _0\) a constant noise strength, and \(\lambda \) the coupling constant of the non-linearity.

A Fourier expansion of the height field \(h(x,t)\) and the stochastic driving force \(\eta (x,t)\) reads

$$\begin{aligned} \begin{aligned} h(x,t)&=\sum _{k\in \mathbb {Z}}h_k(t)\phi _k(x),\\ \eta (x,t)&=\sum _{k\in \mathbb {Z}}\eta _k(t)\phi _k(x). \end{aligned} \end{aligned}$$
(26)

The set of \(\{\phi _k(x)\}\) is given by

$$\begin{aligned} \phi _k(x)\equiv \frac{1}{\sqrt{b}}e^{2\pi ikx/b}\qquad k\in \mathbb {Z}, \end{aligned}$$
(27)

and thus \(h_k(t)\), \(\eta _k(t)\in \mathbb {C}\) in (26). A similar proceeding for the case of the Edwards–Wilkinson equation was used in [47,48,49,50]. Inserting (26) into (25) leads to

$$\begin{aligned}&\sum _{k\in \mathbb {Z}}\dot{h}_k(t)\phi _k(x)\\&\quad =\sum _{k\in \mathbb {Z}}h_k(t)\hat{L}\phi _k(x)+\frac{\lambda }{2}\sum _{l,m\in \mathbb {Z}}h_l(t)h_m(t)\partial _x\phi _l(x)\partial _x\phi _m(x)+\sum _{k\in \mathbb {Z}}\eta _k(t)\phi _k(x)\\&\quad =\sum _{k\in \mathbb {Z}}h_k(t)\mu _k\phi _k(x)-2\pi ^2\frac{\lambda }{b^2}\sum _{l,m\in \mathbb {Z}}l\,m\,h_l(t)h_m(t)\phi _l(x)\phi _m(x)\\&\qquad +\sum _{k\in \mathbb {Z}}\eta _k(t)\phi _k(x), \end{aligned}$$

with \(\{\mu _k\}\) defined as

$$\begin{aligned} \mu _k\equiv -4\,\pi ^2\,\frac{\nu }{b^2}\,k^2\qquad k\in \mathbb {Z}. \end{aligned}$$
(28)

For the \(\{\phi _k(x)\}\) the relation \(\phi _l(x)\phi _m(x)=\phi _{l+m}(x)/\sqrt{b}\) holds and thus the double-sum in the Fourier expansion of the KPZ equation can be rewritten in convolution form setting \(k=l+m\). This yields

$$\begin{aligned} \begin{aligned}&\sum _{k\in \mathbb {Z}}\dot{h}_k(t)\phi _k(x)\\&\quad =\sum _{k\in \mathbb {Z}}h_k(t)\mu _k\phi _k(x)-2\pi ^2\frac{\lambda }{b^{5/2}}\sum _{k,l\in \mathbb {Z}}l(k-l)h_l(t)h_{k-l}(t)\phi _k(x)\\&\qquad +\sum _{k\in \mathbb {Z}}\eta _k(t)\phi _k(x), \end{aligned} \end{aligned}$$
(29)

which implies ordinary differential equations for the Fourier-coefficients \(h_k(t)\),

$$\begin{aligned} \dot{h}_k(t)=\mu _kh_k(t)-2\pi ^2\frac{\lambda }{b^{5/2}}\sum _{l\in \mathbb {Z}}l(k-l)h_l(t)h_{k-l}(t)+\eta _k(t). \end{aligned}$$
(30)

The above ODEs (30) are readily ‘solved’ by the variation of constants formula, which leads for flat initial condition \(h_k(0)\equiv 0\) to

$$\begin{aligned} h_k(t)= \int _0^tdt^\prime e^{\mu _k(t-t^\prime )}\left[ \eta _k(t^\prime )-2\pi ^2\frac{\lambda }{b^{5/2}}\sum _{l\in \mathbb {Z}\setminus \{0\}}l(k-l)h_l(t^\prime )h_{k-l}(t^\prime )\right] , \end{aligned}$$
(31)

\(k\in \mathbb {Z}\). Note that the assumption of flat initial conditions is not in conflict with (21), as in the NESS, in which the relevant quantities will be evaluated, the probability density becomes stationary. With (31), a non-linear integral equation for the k-th Fourier coefficient has been derived. In Sect. 3.4, the solution to (31) will be constructed by means of an expansion in a small coupling parameter \(\lambda \). We close this section with the following general remarks.

  (i)

    Equation (31) has been derived on a purely formal level. In particular, the integral \(\int dt^\prime e^{\mu _k(t-t^\prime )}\eta _k(t^\prime )\) has to be given a meaning. In a strict mathematical formulation, this integral has to be written as

    $$\begin{aligned} \int _0^te^{\mu _k(t-t^\prime )}dW_k(t^\prime ), \end{aligned}$$
    (32)

    which is called a stochastic convolution (see e.g. [51,52,53,54]). [Note that due to the deterministic integrand of (32), the integral can equivalently be interpreted in either the Ito or the Stratonovich sense [51].] This has its origin in the fact that the noise \(\eta (x,t)\) in (25) is, mathematically speaking, a generalized time-derivative of a Wiener process \(W(x,t)\) (see also Sect. 3.2, (35)). In this spirit, (31) with the first integral on the right hand side replaced by (32) may be called the mild form of the KPZ equation (in its spectral representation), and \(h(x,t)=\sum _{k\in \mathbb {Z}}h_k(t)\phi _k(x)\), with \(h_k(t)\) a solution of (31), is then called a mild solution of the KPZ equation. In the mathematical literature, proofs of existence and uniqueness of such a mild solution can be found for various assumptions on the spatial regularity of the noise (see e.g. [53, 55, 56] and references therein). In particular, these assumptions are reflected by conditions on the explicit form of the spatial noise correlator \(K(x-x^\prime )\) from (1) (see (46)). An assumption will be adopted (see Sect. 3.2) which guarantees the existence of \(\left\| h(x,t)\right\| _{\mathcal {L}_2 ([0,b])}\), i.e. the norm on the Hilbert space of square-integrable functions \(\mathcal {L}_2\). This norm, or respectively the corresponding \(\mathcal {L}_2\)-product, denoted in the following by \((\cdot ,\cdot )_0\), of h with any \(\mathcal {L}_2\)-function g, i.e. \((h,g)_0\), will be used in Sect. 4.1 and Sect. 4.4 to calculate the necessary contributions to a field-theoretic thermodynamic uncertainty relation. Furthermore, with this assumption on the noise, it is shown in Appendix C for the mild solution that almost surely \(h(x,t)\in \mathcal {C}([0,T],\mathcal {L}_2([0,b]))\), \(T>0\), i.e. the trajectory \(t\mapsto h(x,t)\) is a continuous function in time t with values \(h(\cdot ,t)\in \mathcal {L}_2([0,b])\). This justifies the choice \(H=\mathcal {L}_2([0,b])\) in the following calculations.

  (ii)

    The Fourier expansion applied above can be understood in a more general sense. For the case of periodic boundary conditions, the differential operator \(\hat{L}\) possesses the eigenfunctions \(\{\phi _k(x)\}\) and corresponding eigenvalues \(\{\mu _k\}\) from (27) and (28), respectively. It is well-known that the set \(\{\phi _k(x)\}\) constitutes a complete orthonormal system in the Hilbert space \(\mathcal {L}_2(0,b)\) of all square-integrable functions on (0, b). Thus the Fourier-expansion performed above can also be interpreted as an expansion in the eigenfunctions of the operator \(\hat{L}\).

  (iii)

    With this interpretation, (31) also holds for a ‘hyperdiffusive’ version of the KPZ equation in which the operator \(\hat{L}\) is replaced by \(\hat{L}_p\equiv (-1)^{p+1}\partial _x^{2p}\), with \(p\in \mathbb {N}\) and adjusted eigenvalues \(\{\mu _k^p\}\). This may be used to introduce higher regularity into the KPZ equation.

  (iv)

    Besides the complex Fourier expansion in (26) with coefficients \(h_k(t)\in \mathbb {C}\), the real expansion \(h(x,t)=\sum _{k\in \mathbb {Z}}\widetilde{h}_k(t)\gamma _k(x)\), \(\widetilde{h}_k(t)\in \mathbb {R}\) (e.g. [53]) and

    $$\begin{aligned} \gamma _0=\frac{1}{\sqrt{b}},\quad \gamma _k=\sqrt{\frac{2}{b}}\sin 2\pi k\frac{x}{b},\quad \gamma _{-k}=\sqrt{\frac{2}{b}}\cos 2\pi k\frac{x}{b}\quad k\in \mathbb {N}, \end{aligned}$$
    (33)

    will be used in the next section. The relationship between \(h_k(t)\) and \(\widetilde{h}_k(t)\) reads

    $$\begin{aligned} h_k(t)=\frac{\widetilde{h}_{-k}(t)-i\widetilde{h}_k(t)}{\sqrt{2}}\,,\qquad h_{-k}(t)=\frac{\widetilde{h}_{-k}(t)+i\widetilde{h}_k(t)}{\sqrt{2}}=\overline{h_k}(t), \end{aligned}$$
    (34)

    with \(\overline{h_k}(t)\) as the complex conjugate.

3.2 A Closer Look at the Noise

In the following discussion of the noise it is instructive to pretend, for the time being, that the noise is spatially colored with noise correlator \(K(x-x^\prime )\), instead of directly assuming spatially white noise.

The noise \(\eta (x,t)\) is given by a generalized time-derivative of a Wiener process \(W(x,t)\in \mathbb {R}\) [26, 51,52,53], i.e.

$$\begin{aligned} \eta (x,t)=\sqrt{\varDelta _0}\,\frac{\partial \, W(x,t)}{\partial t}. \end{aligned}$$
(35)

Such a Wiener process \(W(x,t)\) can be written as (e.g. [51, 53])

$$\begin{aligned} W(x,t)=\sum _{k\in \mathbb {Z}}\alpha _k\beta _k(t)\gamma _k(x). \end{aligned}$$
(36)

Here \(\{\alpha _k\}\in \mathbb {R}\) are arbitrary expansion coefficients that may be used to introduce a spatial regularization of the Wiener process, \(\{\beta _k(t)\}\in \mathbb {R}\) are stochastically independent standard Brownian motions and \(\{\gamma _k(x)\}\) are the functions from (33). A well-known result for the two-point correlation function of two stochastically independent Brownian motions \(\beta _k(t)\) reads [51]

$$\begin{aligned} \left\langle \beta _k(t)\,\beta _l(t^\prime )\right\rangle =(t\wedge t^\prime )\,\delta _{k,l}, \end{aligned}$$
(37)

with \((t\wedge t^\prime )=\min (t,t^\prime )\).

In the following it will be shown that the noise \(\eta \) defined by (35) and (36) possesses the autocorrelation

$$\begin{aligned} \left\langle \eta (x,t)\,\eta (x^\prime ,t^\prime )\right\rangle =K(x-x^\prime )\delta (t-t^\prime ), \end{aligned}$$
(38)

which for \(K(x-x^\prime )=\varDelta _0\delta (x-x^\prime )\) results in the one assumed in (25). Furthermore, an explicit expression for the kernel \(K(x-x^\prime )\) in terms of the Fourier coefficients \(\{\alpha _k\}\) of \(W(x,t)\) from (36) will be given.

To this end, first an expression for the two-point correlation function of the Wiener process itself can be derived according to

$$\begin{aligned} \begin{aligned} \left\langle W(x,t)\,W(x^\prime ,t^\prime )\right\rangle&=\frac{t\wedge t^\prime }{b}\left[ \alpha _0^2+\sum _{k\in \mathbb {N}}\left[ \alpha _{-k}^2+\alpha _k^2\right] \cos 2\pi k\frac{x-x^\prime }{b}\right. \\&\quad \left. +\sum _{k\in \mathbb {N}}\left[ \alpha _{-k}^2-\alpha _k^2\right] \cos 2\pi k\frac{x+x^\prime }{b}\right] . \end{aligned} \end{aligned}$$
(39)
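The step leading to (39) uses (36), (37) and the product-to-sum identities for the trigonometric functions in (33): \(\left\langle W(x,t)\,W(x^\prime ,t^\prime )\right\rangle =(t\wedge t^\prime )\sum _k\alpha _k^2\,\gamma _k(x)\gamma _k(x^\prime )\), and for \(k\in \mathbb {N}\)

$$\begin{aligned} \alpha _k^2\,\gamma _k(x)\gamma _k(x^\prime )+\alpha _{-k}^2\,\gamma _{-k}(x)\gamma _{-k}(x^\prime )=\frac{1}{b}\left[ \left( \alpha _{-k}^2+\alpha _k^2\right) \cos 2\pi k\frac{x-x^\prime }{b}+\left( \alpha _{-k}^2-\alpha _k^2\right) \cos 2\pi k\frac{x+x^\prime }{b}\right] , \end{aligned}$$

while the \(k=0\) term contributes \(\alpha _0^2/b\).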

To represent the noise structure dictated by (25), the expression in (39) has to be an even, translationally invariant function in space. Thus, the following relation has to be fulfilled

$$\begin{aligned} \alpha _{-k}=\alpha _k\quad \forall \,k\in \mathbb {N}. \end{aligned}$$
(40)

Then the two-point correlation function of the Wiener process is given by

$$\begin{aligned} \left\langle W(x,t)\,W(x^\prime ,t^\prime )\right\rangle =\frac{t\wedge t^\prime }{b}\left[ \alpha _0^2+2\sum _{k\in \mathbb {N}}\alpha _k^2\,\cos 2\pi k\frac{x-x^\prime }{b}\right] . \end{aligned}$$
(41)

With \(W(x,t)=\sum _{k\in \mathbb {Z}}W_k(t)\phi _k(x)\), \(\phi _k(x)\) from (27), equation (41) implies for the two-point correlation function of the Fourier coefficients \(W_k(t)\)

$$\begin{aligned} \left\langle W_k(t)\,W_l(t^\prime )\right\rangle =\alpha _k\,\alpha _l\,(t\wedge t^\prime )\,\delta _{k,-l},\qquad k,l\in \mathbb {Z}. \end{aligned}$$
(42)

This result leads immediately to

$$\begin{aligned} \left\langle \eta _k(t)\,\eta _l(t^\prime )\right\rangle \equiv \varDelta _0\frac{\partial \,^2\, \left\langle W_k(t)\,W_l(t^\prime )\right\rangle }{\partial t\,\partial t^\prime }=\varDelta _0\alpha _k\,\alpha _l\,\delta _{k,-l}\,\delta (t-t^\prime ),\qquad k,l\in \mathbb {Z}, \end{aligned}$$
(43)

using \(\partial _t\partial _{t^\prime }(t\wedge t^\prime )=\delta (t-t^\prime )\).

For the relation between (41) and the noise from (38), we differentiate (41) with respect to t and \(t^\prime \) yielding

$$\begin{aligned} \begin{aligned} \left\langle \eta (x,t)\,\eta (x^\prime ,t^\prime )\right\rangle&=\varDelta _0\frac{\partial \,^2\, \left\langle W(x,t)\,W(x^\prime ,t^\prime )\right\rangle }{\partial t\,\partial t^\prime }\\&=\frac{\varDelta _0}{b}\left[ \alpha _0^2+2\sum _{k\in \mathbb {N}}\alpha _k^2\,\cos 2\pi k\frac{x-x^\prime }{b}\right] \delta (t-t^\prime ). \end{aligned} \end{aligned}$$
(44)

The following identification can be made

$$\begin{aligned} K(x-x^\prime )=\frac{\varDelta _0}{b}\left[ \alpha _0^2+2\sum _{k\in \mathbb {N}}\alpha _k^2\,\cos 2\pi k\frac{x-x^\prime }{b}\right] =K(|x-x^\prime |), \end{aligned}$$
(45)

which structurally represents the standard implicit assumption that \(K(x-x^\prime )\) is translationally invariant, positive definite and even. Note that the regularity of the noise kernel \(K(|x-x^\prime |)\) is determined by the behavior of the coefficients \(\{\alpha _k\}\) for \(k\rightarrow \infty \), where \(\{\alpha _k\}\) are the dimensionless Fourier coefficients of the underlying Wiener process from (36). For the case of \(\alpha _k=1\) for all \(k\in \mathbb {Z}\), spatially white noise is obtained.

Thus, the derivation via the Wiener process has indeed led to a translationally invariant, real-valued two-point correlation function for \(\eta (x,t)\), given by (38) with \(K(|x-x^\prime |)\) from (45), which describes Gaussian noise that is white in time and colored in space. In the following, we will use (45) to approximate spatially white noise in order to meet the form required in (25).

Now the assumption mentioned in the remarks in Sect. 3.1 can be made more precise. In the following it will be assumed that (see Appendix C)

$$\begin{aligned} \sum _{k\in \mathbb {Z}}|k|^{\chi }\alpha _k^2<\infty ,\qquad \chi >0. \end{aligned}$$
(46)

This assumption excludes spatially white noise. However, white noise becomes accessible via the introduction of a cutoff parameter \(\varLambda \in \mathbb {N}\), arbitrarily large but finite (\(\varLambda \gg 1\)), for the range of k, i.e. for \(k\in \mathfrak {R}\) with

$$\begin{aligned} \mathfrak {R}\equiv [-\varLambda ,\varLambda ]. \end{aligned}$$
(47)

Note that for the linear case, i.e. the Edwards–Wilkinson model, the authors of [48] also introduce a cutoff, albeit in a slightly different manner. Such a cutoff amounts to an orthogonal projection of the full eigenfunction expansion of (25) onto a finite-dimensional subspace spanned by the eigenfunctions \(\phi _{-\varLambda }(x),\ldots ,\phi _\varLambda (x)\). Mathematically, this projection may be represented by a linear projection operator \(\mathcal {P}_\varLambda \), acting on (29), which maps the Hilbert space \(\mathcal {L}_2(0,b)\) to \(\text {span}\{\phi _{-\varLambda }(x),\ldots ,\phi _\varLambda (x)\}\). This mapping, however, causes a problem in the non-linear term of (29), where by mode coupling the k-th Fourier mode (\(-\varLambda \le k\le \varLambda \)) is influenced also by modes with \(|l|>\varLambda \). This issue can be resolved by choosing \(\varLambda \) large enough, since modes with \(|l|>\varLambda \), which behave as \(h_l(t)\sim \exp [\mu _lt]\) with \(\mu _l\) from (28), (61), are damped out rapidly, so that the bias introduced by limiting l to the interval \(\mathfrak {R}\) is small. Note that the restriction to \(h\in \text {span}\{\phi _{-\varLambda },\,\ldots ,\,\phi _\varLambda \}\) also implies the introduction of restricted summation boundaries in the convolution term in (31), namely

$$\begin{aligned} \sum _{l\in \mathbb {Z}}l(k-l)h_lh_{k-l}\quad \longrightarrow \quad \sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)h_lh_{k-l},\qquad k\in \mathfrak {R}, \end{aligned}$$

with \(\mathfrak {R}_k\) defined by

$$\begin{aligned} \mathfrak {R}_k\equiv [\max (-\varLambda ,-\varLambda +k),\min (\varLambda ,\varLambda +k)]\,,\qquad k\in \mathfrak {R}. \end{aligned}$$
(48)

This restriction to finitely many Fourier modes is not as harsh as it might seem, since for very large wavenumbers the dynamics of the KPZ equation is dominated by its diffusive term and the non-linearity may safely be neglected. This reasoning is based on arguments from turbulence theory, see e.g. [26, 57,58,59], and for the KPZ case e.g. [60], where a momentum-scale separation is in effect. Specifically, in the case of the one-dimensional Burgers equation, the momentum scale is divided into a small-wavenumber regime where the non-linearity is dominant and a large-wavenumber regime where dissipation dominates (see also [61,62,63,64]). Hence, in the latter wavenumber range the KPZ equation reduces to the Edwards–Wilkinson equation, which, due to its equilibrium behavior, does not affect the thermodynamic uncertainty relation (24).

With the cutoff \(\varLambda \), condition (46) is of course fulfilled for \(\alpha _k=1\) for all \(k\in \mathfrak {R}\) and \(\alpha _k=0\) for all \(k\notin \mathfrak {R}\). Inserting this choice of \(\alpha _k\) into (45) yields

$$\begin{aligned} K(x-x^\prime )=\frac{\varDelta _0}{b}\left[ 1+2\sum _{k=1}^\varLambda \cos 2\pi k\frac{x-x^\prime }{b}\right] =\varDelta _0\delta (x-x^\prime )\Big |_{\text {span}\{\phi _{-\varLambda },\ldots ,\phi _\varLambda \}}. \end{aligned}$$
(49)
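For reference, the projected delta function in (49) is the Dirichlet kernel,

$$\begin{aligned} \frac{\varDelta _0}{b}\left[ 1+2\sum _{k=1}^\varLambda \cos 2\pi k\frac{x-x^\prime }{b}\right] =\frac{\varDelta _0}{b}\sum _{k=-\varLambda }^{\varLambda }e^{2\pi ik(x-x^\prime )/b}=\frac{\varDelta _0}{b}\,\frac{\sin \left[ (2\varLambda +1)\pi (x-x^\prime )/b\right] }{\sin \left[ \pi (x-x^\prime )/b\right] }, \end{aligned}$$

which approaches \(\varDelta _0\delta (x-x^\prime )\) on the periodic interval as \(\varLambda \rightarrow \infty \).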

Also, the choice of \(\alpha _k=1\) for all \(k\in \mathfrak {R}\) implies for the correlation function of the Fourier coefficients \(\eta _k(t)\) from (43)

$$\begin{aligned} \left\langle \eta _k(t)\eta _l(t^\prime )\right\rangle =\varDelta _0\delta _{k,-l}\delta (t-t^\prime )\qquad k,l\in \mathfrak {R}. \end{aligned}$$
(50)

To end this section, a noise operator \(\hat{K}\) describing spatial noise correlations will be introduced as

$$\begin{aligned} \hat{K}(\cdot )\equiv \int _0^bdx^\prime K(x-x^\prime )(\cdot )(x^\prime ), \end{aligned}$$
(51)

with kernel \(K(x-x^\prime )\) from (49) and its inverse \(\hat{K}^{-1}\) given by

$$\begin{aligned} \hat{K}^{-1}(\cdot )=\int _0^bdx^\prime K^{-1}(x-x^\prime )(\cdot )(x^\prime ), \end{aligned}$$
(52)

where its kernel reads \(K^{-1}(x-x^\prime )=\varDelta _0^{-1}\delta (x-x^\prime )\Big |_{\text {span}\{\phi _{-\varLambda },\ldots ,\phi _\varLambda \}}\).

3.3 Dimensionless Form of the KPZ Equation

Before the KPZ equation is analyzed further, it is prudent to relate all physical quantities to suitable reference values so that the scaled quantities are dimensionless and the equation is characterized by only one dimensionless parameter. In anticipation of the calculations below, we choose this parameter to be a dimensionless effective coupling constant \(\lambda _\text {eff}\), which replaces the coupling constant \(\lambda \) from (25). To this end the following characteristic scales are introduced,

$$\begin{aligned} h=Hh_\text {s}\;;\quad \eta =N\eta _\text {s}\;;\quad x=b x_\text {s}\;;\quad t=Tt_\text {s}. \end{aligned}$$
(53)

Here H is a characteristic scale for the height field (not to be confused with the notation for the Hilbert space), N a scale for the noise field, b the characteristic length scale in space and T the time scale of the system. Choosing the three scales H, N and T according to

$$\begin{aligned} H=\sqrt{\frac{\varDelta _0\,b}{\nu }},\quad N=\sqrt{\frac{\varDelta _0\,\nu }{b^3}},\quad T=\frac{b^2}{\nu }, \end{aligned}$$
(54)

leads to the dimensionless KPZ equation on the interval \(x\in [0,1]\)

$$\begin{aligned} \partial _{t_\text {s}}h_\text {s}(x_\text {s},t_\text {s})&=\partial _{x_\text {s}}^2h_\text {s}(x_\text {s},t_\text {s})+\frac{\lambda _\text {eff}}{2}\left( \partial _{x_\text {s}}h_\text {s}(x_\text {s},t_\text {s})\right) ^2+\eta _\text {s}(x_\text {s},t_\text {s}), \end{aligned}$$
(55)
$$\begin{aligned} \left\langle \eta _\text {s}(x_\text {s},t_\text {s})\right\rangle&=0,\end{aligned}$$
(56)
$$\begin{aligned} \left\langle \eta _\text {s}(x_\text {s},t_\text {s})\eta _\text {s}(x_\text {s}^\prime ,t_\text {s}^\prime )\right\rangle&=K_\text {s}(x_\text {s}-x_\text {s}^\prime )\delta (t_\text {s}-t_\text {s}^\prime ). \end{aligned}$$
(57)

Here, the effective dimensionless coupling constant is given by

$$\begin{aligned} \lambda _\text {eff}=\frac{\lambda \,\varDelta _0^{1/2}}{\nu ^{3/2}}b^{1/2}, \end{aligned}$$
(58)

and

$$\begin{aligned} K_\text {s}(x_\text {s}-x_\text {s}^\prime )=1+2\sum _{k=1}^\varLambda \cos 2\pi k(x_\text {s}-x^\prime _\text {s}) . \end{aligned}$$
(59)
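For completeness, the form of (58) follows from inserting (53) with the scales (54) into the non-linear term of (25) and dividing by H/T,

$$\begin{aligned} \frac{T}{H}\,\frac{\lambda }{2}\,\frac{H^2}{b^2}\left( \partial _{x_\text {s}}h_\text {s}\right) ^2=\frac{\lambda }{2}\,\frac{H\,T}{b^2}\left( \partial _{x_\text {s}}h_\text {s}\right) ^2=\frac{\lambda }{2}\,\frac{\varDelta _0^{1/2}\,b^{1/2}}{\nu ^{3/2}}\left( \partial _{x_\text {s}}h_\text {s}\right) ^2=\frac{\lambda _\text {eff}}{2}\left( \partial _{x_\text {s}}h_\text {s}\right) ^2, \end{aligned}$$

while the diffusive and noise terms acquire unit prefactors under the same substitution, leading to (55) and (59).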

The effective coupling constant \(\lambda _\text {eff}\) is found in various works concerning the KPZ–Burgers equation; see e.g. [40, 65,66,67].

In the following sections we will perform all calculations for the dimensionless KPZ equation. This requires one simple adjustment in the linear differential operator \(\hat{L}\) on \(x_\text {s}\in [0,1]\), which is now given by

$$\begin{aligned} \hat{L}_\text {s}=\partial _{x_\text {s}}^2, \end{aligned}$$
(60)

with eigenvalues

$$\begin{aligned} \mu _{s,\,k}=-4\,\pi ^2\,k^2 \end{aligned}$$
(61)

to the orthonormal eigenfunctions

$$\begin{aligned} \phi _{s,\,k}(x_\text {s})=e^{2\pi ikx_\text {s}}. \end{aligned}$$
(62)

Furthermore, the noise correlation function in Fourier space from (50) now reads

$$\begin{aligned} \left\langle \eta _{s,\,k}(t_\text {s})\eta _{s,\,l}(t_\text {s}^\prime )\right\rangle =\delta _{k,-l}\delta (t_\text {s}-t_\text {s}^\prime ). \end{aligned}$$
(63)

The scaling also affects the noise operators defined in (51), (52) at the end of Sect. 3.2. The scaled ones read

$$\begin{aligned} \hat{K}_\text {s}(\cdot )=\int _0^1dx_\text {s}^\prime \,K_\text {s}(x_\text {s}-x^\prime _\text {s})(\cdot )(x^\prime _\text {s}), \end{aligned}$$
(64)

and

$$\begin{aligned} \hat{K}^{-1}_\text {s}(\cdot )=\int _0^1dx_\text {s}^\prime \,K_\text {s}^{-1}(x_\text {s}-x_\text {s}^\prime )(\cdot )(x_\text {s}^\prime ), \end{aligned}$$
(65)

with \(K_\text {s}(x_\text {s}-x_\text {s}^\prime )\) from (59) and \(K^{-1}_\text {s}(x_\text {s}-x^\prime _\text {s})\) is defined via the integral-relation \(\int dy_\text {s}\,K_\text {s}(x_\text {s}-y_\text {s})K_\text {s}^{-1}(y_\text {s}-z_\text {s})=\delta (x_\text {s}-z_\text {s})\).

Note that, for the sake of simplicity, the subscript \(\text {s}\) will be dropped in the calculations below, where all quantities are understood to be the scaled ones.

3.4 Expansion in a Small Coupling Constant

Returning to the non-linear integral equation for the k-th Fourier coefficient of the height field, \(h_k(t)\) from (31), now in its dimensionless form and with the restricted spectral range given by

$$\begin{aligned} h_k(t)= \int _0^tdt^\prime e^{\mu _k(t-t^\prime )}\left[ \eta _k(t^\prime )-2\pi ^2\lambda _\text {eff}\sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)h_l(t^\prime )h_{k-l}(t^\prime )\right] , \end{aligned}$$
(66)

\(k\in \mathfrak {R}\), with \(\{\mu _k\}\) from (61), \(\mathfrak {R}_k\) from (48) and all quantities dimensionless, an approximate solution will be constructed. Note that the summation of the discrete convolution in (66) is chosen such that it respects the above-introduced cutoff in l as well as in \(k-l\), i.e. |l|, \(|k-l|\le \varLambda \). For small values of the coupling constant we expand the solution in powers of \(\lambda _\text {eff}\), i.e.

$$\begin{aligned} h_k(t)=h_k^{(0)}(t)+\lambda _\text {eff} h_k^{(1)}(t)+\lambda _\text {eff}^2h_k^{(2)}(t)+O(\lambda _\text {eff}^3), \end{aligned}$$
(67)

with

$$\begin{aligned} h_{k}^{(0)}(t)&=\int _0^t e^{\mu _k(t-t^\prime )}dW_k(t^\prime ), \end{aligned}$$
(68)
$$\begin{aligned} h_{k}^{(1)}(t)&=-2\pi ^2\sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)\int _0^tdt^\prime e^{\mu _k(t-t^\prime )}h_{l}^{(0)}(t^\prime )h_{k-l}^{(0)}(t^\prime ),\end{aligned}$$
(69)
$$\begin{aligned} h_{k}^{(2)}(t)&=-2\pi ^2\sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)\int _0^tdt^\prime e^{\mu _k(t-t^\prime )}\nonumber \\&\quad \times \left( h_{l}^{(0)}(t^\prime )h_{k-l}^{(1)}(t^\prime )+h_{l}^{(1)}(t^\prime )h_{k-l}^{(0)}(t^\prime )\right) \end{aligned}$$
(70)

Thus every \(h_{k}^{(n)}\), \(n\ge 1\), can be expressed in terms of \(h_{m}^{(0)}\), \(m\in \mathfrak {R}\), i.e. the stochastic convolution according to (32), which is known to be Gaussian.

In the following calculations, multipoint correlation functions have to be evaluated; these can be simplified by Wick's theorem, in which the term \(\left\langle h_{k}^{(0)}(t)h_{l}^{(0)}(t^\prime )\right\rangle \) recurs. It is thus helpful to determine this correlation function once in general and to use the result later on. With (63) and \(k,l\in \mathbb {Z}\) (and therefore also for \(k,l\in \mathfrak {R}\)) it follows that:

$$\begin{aligned} \left\langle h_{k}^{(0)}(t)h_{l}^{(0)}(t^\prime )\right\rangle&=e^{\mu _kt}e^{\mu _lt^\prime }\int _0^tdr\int _0^{t^\prime }ds\,e^{-\mu _kr}e^{-\mu _ls}\left\langle \eta _k(r)\eta _l(s)\right\rangle \\&=e^{\mu _kt}e^{\mu _lt^\prime }\delta _{k,-l}\frac{1-e^{-(\mu _k+\mu _l)(t\wedge t^\prime )}}{\mu _k+\mu _l}=\varPi _{k,l}(t,t^\prime )\delta _{k,-l}, \end{aligned}$$

with

$$\begin{aligned} \varPi _{k,l}(t,t^\prime )\equiv e^{\mu _kt}e^{\mu _lt^\prime }\frac{1-e^{-(\mu _k+\mu _l)(t\wedge t^\prime )}}{\mu _k+\mu _l}. \end{aligned}$$
(71)

Since for the auxiliary expression \(\varPi _{k,l}\) the symmetries

$$\begin{aligned} \varPi _{k,l}(t,t^\prime )=\varPi _{k,-l}(t,t^\prime )=\varPi _{-k,l}(t,t^\prime )=\varPi _{-k,-l}(t,t^\prime ) \end{aligned}$$
(72)

hold, it is found that

$$\begin{aligned} \begin{aligned} \left\langle h_{k}^{(0)}(t)h_{l}^{(0)}(t^\prime )\right\rangle&=\left\langle \overline{h_{k}^{(0)}}(t)\overline{h_{l}^{(0)}}(t^\prime )\right\rangle =\varPi _{k,l}(t,t^\prime )\delta _{k,-l};\\ \left\langle h_{k}^{(0)}(t)\overline{h_{l}^{(0)}}(t^\prime )\right\rangle&=\left\langle \overline{h_{k}^{(0)}}(t)h_{l}^{(0)}(t^\prime )\right\rangle =\varPi _{k,l}(t,t^\prime )\delta _{k,l}. \end{aligned} \end{aligned}$$
(73)

4 Thermodynamic Uncertainty Relation for the KPZ Equation

In this section we will show that the thermodynamic uncertainty relation from (24) holds for the KPZ equation driven by Gaussian white noise in the weak-coupling regime. In particular, the small-\(\lambda _\text {eff}\) expansion from Sect. 3.4 will be employed.

To recapitulate, the two ingredients needed for the thermodynamic uncertainty relation are (i) the long time behavior of the squared variation coefficient or precision \(\epsilon ^2\) of \(\varPsi _g(t)\) from (9); (ii) the expectation value of the total entropy production in the steady state, \(\left\langle \varDelta s_\text {tot}\right\rangle \) from (22).

4.1 Expectation and Variance for the Height Field

With (7) adapted to the KPZ equation, namely

$$\begin{aligned} \varPsi _g(t)=\int _0^1dx\,h(x,t)g(x), \end{aligned}$$
(74)

with g(x) as any real-valued \(\mathcal {L}_2\)-function fulfilling \(\int _0^1dxg(x)\ne 0\), i.e. g(x) possessing non-zero mean, we rewrite the variance as

$$\begin{aligned} \left\langle \left( \varPsi _g(t)-\left\langle \varPsi _g(t)\right\rangle \right) ^2\right\rangle =\left\langle \left( \varPsi _g(t)\right) ^2\right\rangle -\left\langle \varPsi _g(t)\right\rangle ^2. \end{aligned}$$
(75)

As is shown below, \(\epsilon ^2\) can be evaluated for arbitrary time \(t>0\). However, the ultimate interest is in the non-equilibrium steady state of the system. Therefore, the long-time asymptotics will be studied.

4.2 Evaluation of Expectation and Variance

In the small-\(\lambda _\text {eff}\) expansion, the expectation of the output \(\varPsi _g(t)\) from (74), with \(h(x,t)\) the solution of the dimensionless KPZ equation (55) to (57), reads:

$$\begin{aligned} \begin{aligned} \left\langle \varPsi _g(t)\right\rangle&=\sum _{k,l\in \mathfrak {R}}\left\langle h_k(t)\right\rangle \overline{g_l}\left( e^{2\pi ikx},e^{2\pi ilx}\right) _0\\&=\lambda _\text {eff}\sum _{k\in \mathfrak {R}}\overline{g_k}\left\langle h_{k}^{(1)}(t)\right\rangle +O(\lambda _\text {eff}^3), \end{aligned} \end{aligned}$$
(76)

where \(g_k\) and \(\overline{g_k}\) are the k-th Fourier coefficient of the weight function g(x) and its complex conjugate, respectively. Here the result from (68) is used as well as the fact that odd moments of Gaussian random variables vanish identically. Replacing \(h_{k}^{(1)}(t)\) by the expression derived in (69) and using (73) leads to

$$\begin{aligned} \begin{aligned} \left\langle h_{k}^{(1)}(t)\right\rangle&=-2\pi ^2e^{\mu _kt}\int _0^tdt^\prime \,e^{-\mu _kt^\prime }\sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)\left\langle h_{l}^{(0)}(t^\prime )h_{k-l}^{(0)}(t^\prime )\right\rangle \\&=-2\pi ^2\sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)\left[ \frac{e^{(\mu _l+\mu _{k-l})t}-e^{\mu _kt}}{(\mu _l+\mu _{k-l})(\mu _l+\mu _{k-l}-\mu _k)}\right. \\&\quad \left. -\frac{e^{\mu _kt}-1}{(\mu _l+\mu _{k-l})\mu _k}\right] \delta _{0,k}. \end{aligned} \end{aligned}$$
(77)

Note that in the case of \(k=0\) the second term in the last line of (77) is evaluated in the limit \(\mu _k\rightarrow 0\), which yields t. Since the interest is in the NESS current, the long-time asymptotics of the two expressions in (77) above is studied. Thus, eq. (76) yields

$$\begin{aligned} \left\langle \varPsi _g(t)\right\rangle \simeq \left[ 2\pi ^2g_0\lambda _\text {eff}\sum _{l\in \mathfrak {R}\setminus \{0 \}}\frac{l^2}{2(-\mu _l)}+O\left( \lambda _\text {eff}^3\right) \right] \,t,\quad \text {for}\;t\gg 1, \end{aligned}$$
(78)

where \(\overline{g_0}=g_0\) since \(g(x)\in \mathbb {R}\). Note that the formulation of (78) reflects our claim that \(\left\langle \varPsi _g(t)\right\rangle \sim t\) for \(t\gg 1\), see also the related reasoning in Appendix D. Using the explicit form of \(\mu _k\) from (61), the expression in (78) can be simplified according to

$$\begin{aligned} \left\langle \varPsi _g(t)\right\rangle =\left[ g_0\frac{\lambda _\text {eff}}{2}\,\varLambda +O\left( \lambda _\text {eff}^3\right) \right] \,t,\quad \text {for}\;t\gg 1, \end{aligned}$$
(79)

with \(\varLambda \) from (47). Equivalently, the steady state current from (8) reads

$$\begin{aligned} J_g=g_0\frac{\lambda _\text {eff}}{2}\,\varLambda +O\left( \lambda _\text {eff}^3\right) . \end{aligned}$$
(80)
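As an illustrative numerical cross-check (not part of the derivation, and with arbitrarily chosen parameters), the NESS current (80) for \(g(x)=1\), i.e. \(g_0=1\), can be compared against a direct Euler–Maruyama integration of the truncated spectral KPZ equation (66); a minimal sketch:

```python
import numpy as np

# Illustrative cross-check (not part of the paper's derivation): Euler-Maruyama
# integration of the truncated spectral KPZ equation (66) and comparison with
# the perturbative NESS current (80), J_g = g_0*lam_eff*Lambda/2 for g(x) = 1
# (g_0 = 1).  All parameters are arbitrary choices.
rng = np.random.default_rng(1)
Lam, lam_eff, dt, T = 8, 0.1, 1e-4, 20.0
ks = np.arange(-Lam, Lam + 1)                 # retained modes k in [-Lambda, Lambda]
mu = -4.0 * np.pi**2 * ks**2                  # eigenvalues mu_k, eq. (61)
h = np.zeros(2 * Lam + 1, dtype=complex)      # h_k(0) = 0, flat initial profile

def nonlinearity(h):
    # -2*pi^2*lam_eff * sum_l l*(k-l)*h_l*h_{k-l}; the terms l = 0 and l = k
    # drop out automatically because of the factors l and (k-l).
    a = ks * h
    full = np.convolve(a, a)                  # convolution indices k = -2*Lam ... 2*Lam
    return -2.0 * np.pi**2 * lam_eff * full[Lam:3 * Lam + 1]

nsteps, nburn, drift0 = int(T / dt), int(1.0 / dt), 0.0
for n in range(nsteps):
    xi = rng.standard_normal(2 * Lam + 1)
    dW = np.empty_like(h)                     # noise increments with <dW_k dW_{-k}> = dt
    dW[Lam] = xi[Lam] * np.sqrt(dt)
    dW[Lam + 1:] = (xi[:Lam][::-1] - 1j * xi[Lam + 1:]) * np.sqrt(dt / 2.0)
    dW[:Lam] = np.conj(dW[Lam + 1:][::-1])    # reality condition h_{-k} = conj(h_k)
    drift = mu * h + nonlinearity(h)
    if n >= nburn:                            # discard a short relaxation period
        drift0 += drift[Lam].real             # k = 0 drift drives <Psi_g(t)> for g = 1
    h = h + drift * dt + dW

print("measured J_g   :", drift0 / (nsteps - nburn))
print("eq. (80) value :", lam_eff * Lam / 2.0)
```

Here the current is estimated from the time average of the deterministic drift of the zero mode, whose NESS mean equals \(J_g\); for the weak coupling chosen here the two printed numbers should agree up to small discretization and sampling errors.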

The first term of the variance as defined in (75) reads in the small-\(\lambda _\text {eff}\) expansion

$$\begin{aligned} \begin{aligned} \left\langle \left( \varPsi _g(t)\right) ^2\right\rangle&=\left\langle \sum _{k,l\in \mathfrak {R}}h_k(t)\overline{g_k}h_l(t)\overline{g_l}\right\rangle \\&=\sum _{k,l\in \mathfrak {R}}\overline{g_k}\,\overline{g_l}\left[ \left\langle h_{k}^{(0)}(t)h_{l}^{(0)}(t)\right\rangle +\lambda _\text {eff}^2\left( \left\langle h_{k}^{(1)}(t)h_{l}^{(1)}(t)\right\rangle \right. \right. \\&\quad \left. \left. +\left\langle h_{k}^{(0)}(t)h_{l}^{(2)}(t)\right\rangle +\left\langle h_{k}^{(2)}(t)h_{l}^{(0)}(t)\right\rangle \right) +O(\lambda _\text {eff}^4)\right] , \end{aligned} \end{aligned}$$
(81)

where moments proportional to \(\lambda _\text {eff}\) (and \(\lambda _\text {eff}^3\)) vanish due to (68) and (69), as the two-point correlation function \(\left\langle h_{k}^{(0)}h_{l}^{(1)}\right\rangle \) and its complex conjugate reduce to odd moments of Gaussian random variables.

In Appendix A, we present the rather technical derivation of

$$\begin{aligned} \begin{aligned} \left\langle \left( \varPsi _g(t)\right) ^2\right\rangle&\simeq g_0^2\left[ 1-2(2\pi ^2)^2\lambda _\text {eff}^2\sum _{l\in \mathfrak {R}\setminus \{0 \}}\frac{l^4}{8\mu _l^3}\right] \,t\\&\quad +g_0^2\lambda _\text {eff}^2\sum _{k\in \mathfrak {R}}\left| \left\langle h_{k}^{(1)}(t)\right\rangle \right| ^2+O(\lambda _\text {eff}^4)\quad \text {for}\;t\gg 1. \end{aligned} \end{aligned}$$
(82)

Subtraction of (78) squared from (82) leads to

$$\begin{aligned} \begin{aligned}&\left\langle \left( \varPsi _g(t)\right) ^2\right\rangle -\left\langle \varPsi _g(t)\right\rangle ^2\\&\quad \simeq \left[ g_0^2\left( 1-2(2\pi ^2)^2\lambda _\text {eff}^2\sum _{l\in \mathfrak {R}\setminus \{0 \}}\frac{l^4}{8\mu _l^3}\right) +O(\lambda _\text {eff}^4)\right] \,t,\quad \text {for}\;t\gg 1. \end{aligned} \end{aligned}$$
(83)

Here, \(\left\langle \left( \varPsi _g(t)\right) ^2\right\rangle -\left\langle \varPsi _g(t)\right\rangle ^2\sim t\), \(t\gg 1\), is expected from our reasoning in Appendix D. Again, with \(\mu _k\) from (61), the above expression in (83) can be reduced to

$$\begin{aligned} \left\langle \left( \varPsi _g(t)\right) ^2\right\rangle -\left\langle \varPsi _g(t)\right\rangle ^2=\left[ g_0^2\left( 1+\frac{\lambda _\text {eff}^2}{32\,\pi ^2}\mathcal {H}_\varLambda ^{(2)}\right) +O(\lambda _\text {eff}^4)\right] \,t. \end{aligned}$$
(84)

Here \(\mathcal {H}_\varLambda ^{(2)}=\sum _{l=1}^\varLambda 1/l^2\) is the so-called generalized harmonic number, which converges to the value \(\zeta (2)\) of the Riemann zeta function for \(\varLambda \rightarrow \infty \). Using (13), eq. (84) yields the diffusivity \(D_g\),

$$\begin{aligned} D_g=\frac{g_0^2}{2}\left[ 1+\frac{\lambda _\text {eff}^2}{32\,\pi ^2}\mathcal {H}_\varLambda ^{(2)}\right] +O(\lambda _\text {eff}^4). \end{aligned}$$
(85)
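Explicitly, inserting \(\mu _l=-4\pi ^2l^2\) from (61) into the sums appearing in (78) and (83) gives

$$\begin{aligned} 2\pi ^2\sum _{l\in \mathfrak {R}\setminus \{0 \}}\frac{l^2}{2(-\mu _l)}&=2\pi ^2\,\frac{2\varLambda }{8\pi ^2}=\frac{\varLambda }{2},\\ -2(2\pi ^2)^2\sum _{l\in \mathfrak {R}\setminus \{0 \}}\frac{l^4}{8\mu _l^3}&=8\pi ^4\sum _{l\in \mathfrak {R}\setminus \{0 \}}\frac{1}{512\,\pi ^6\,l^2}=\frac{1}{32\,\pi ^2}\,\mathcal {H}_\varLambda ^{(2)}, \end{aligned}$$

which are precisely the prefactors entering (79), (80) and (84), (85), respectively.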

With (84) and (79) squared, the first constituent of the thermodynamic uncertainty relation, \(\epsilon ^2=\text {Var}[\varPsi _g(t)]/\left\langle \varPsi _g(t)\right\rangle ^2\) from (9), is given for large times by

$$\begin{aligned} \epsilon ^2\simeq \frac{4+\lambda _\text {eff}^2/(8\pi ^2)\mathcal {H}_\varLambda ^{(2)}}{\lambda _\text {eff}^2\,\varLambda ^2}\frac{1}{t}. \end{aligned}$$
(86)

Note that, since \(\epsilon ^2\approx 4/(\lambda _\text {eff}^2\varLambda ^2t)\) to leading order, the long-time asymptotics of the total entropy production has to scale as \(\left\langle \varDelta s_\text {tot}\right\rangle \sim \lambda _\text {eff}^2t\) for the uncertainty relation to hold. Note further that the result for the precision of the projected output \(\varPsi _g(t)\) in the NESS is independent of the choice of g(x).

4.3 Alternative Formulation of the Precision

Before we continue with the calculation of the total entropy production, we would like to mention an intriguing observation. From the field-theoretic point of view, it seems natural to define the precision \(\epsilon ^2\) as

$$\begin{aligned} \epsilon ^2\equiv \frac{\left\langle \left\| h(x,t)-\left\langle h(x,t)\right\rangle \right\| _0^2\right\rangle }{\left\| \left\langle h(x,t)\right\rangle \right\| _0^2}. \end{aligned}$$
(87)

This is due to the fact that the height field \(h(x,t)\) is at every time instant an element of the Hilbert space \(\mathcal {L}_2([0,1])\), as mentioned in Sect. 3.1. Hence, the difference between \(h(x,t)\) and its expectation is measured by its \(\mathcal {L}_2\)-norm. Also the expectation squared is in this framework given by the \(\mathcal {L}_2\)-norm squared. At first glance, the definitions in (87) and (9) seem to be incompatible. However, for the case of the above calculations of \(\epsilon ^2\) for the one-dimensional KPZ equation, it holds up to \(O(\lambda _\text {eff}^3)\) in the perturbation expansion that

$$\begin{aligned} \begin{aligned} \left\langle \left( \varPsi _g(t)-\left\langle \varPsi _g(t)\right\rangle \right) ^2\right\rangle&=g_0^2\left\langle \left\| h(x,t)-\left\langle h(x,t)\right\rangle \right\| _0^2\right\rangle \qquad \text {for }t\gg 1,\\ \left\langle \varPsi _g(t)\right\rangle ^2&=g_0^2\left\| \left\langle h(x,t)\right\rangle \right\| _0^2. \end{aligned} \end{aligned}$$
(88)

Thus, with (88), it is obvious that in terms of the perturbation expansion both definitions of the precision, as in (9) and (87), respectively, are equivalent. Equation (88) can be verified by direct calculation along the same lines as above in this section. By studying these calculations it is found perturbatively that the height field \(h(x,t)\) is spatially homogeneous, which is reflected by \(\left\langle h_k(t)h_l(t)\right\rangle \sim \delta _{k,-l}\) (see (73)) for the correlation of its Fourier-coefficients. Further, the long-time behavior is solely determined by the largest eigenvalue of the differential diffusion operator \(\hat{L}=\partial _x^2\), namely by \(\mu _0=0\) (see e.g. (78) and (83), the essential quantities for deriving (88)).

In the following, we would like to give some reasoning as to why the above two statements should also hold for a broad class of field-theoretic Langevin equations as in (1). For simplicity, we restrict ourselves in (1) to the case of one-dimensional scalar fields \(\varPhi (x,t)\) and \(F[\varPhi (x,t)]=\hat{L}\varPhi (x,t)+\hat{N}[\varPhi (x,t)]\). Here \(\hat{L}\) denotes a linear differential operator and \(\hat{N}\) a non-linear (e.g. quadratic) operator. \(\hat{L}\) should be self-adjoint and possess a pure point spectrum with all eigenvalues \(\mu _k\le 0\) (e.g. \(\hat{L}=(-1)^{p+1}\partial _x^{2p}\), \(p\in \mathbb {N}\), i.e. an arbitrary diffusion operator subject to periodic boundary conditions). For this class of operators \(\hat{L}\) there exists a complete orthonormal system of corresponding eigenfunctions \(\{\phi _k\}\) in \(\mathcal {L}_2(\varOmega )\). If it is further known that the solution \(\varPhi (x,t)\) of (1) belongs at every time t to \(\mathcal {L}_2(\varOmega )\), we can calculate e.g. the second moment of the projected output \(\varPsi _g(t)\) according to \(\left\langle (\varPsi _g(t))^2\right\rangle =\left\langle (\int _\varOmega dx\,\varPhi (x,t)g(x))^2\right\rangle \), where \(g(x)\in \mathcal {L}_2(\varOmega )\) as well. As is the case in e.g. equation (81), the second moment is determined by the Fourier-coefficients \(\varPhi _k(t)\) of \(\varPhi (x,t)\) and \(g_k\) of g(x), namely

$$\begin{aligned} \left\langle \left( \varPsi _g(t)\right) ^2\right\rangle =\sum _{k,l}\overline{g_k}\,\overline{g_l}\left\langle \varPhi _k(t)\varPhi _l(t)\right\rangle . \end{aligned}$$
(89)

Like the KPZ equation, (1) is driven by spatially homogeneous Gaussian white noise \(\eta (x,t)\) with two-point correlations of the Fourier-coefficients \(\eta _k(t)\) given by \(\left\langle \eta _k(t)\eta _l(t)\right\rangle \sim \delta _{k,-l}\). Therefore, we expect the solution to (1) subject to periodic boundary conditions to be spatially homogeneous as well, at least in the steady state, which implies

$$\begin{aligned} \left\langle \varPhi _k(t)\varPhi _l(t)\right\rangle \sim \delta _{k,-l}, \end{aligned}$$
(90)

see e.g. [57, 58]. Hence, with (90), the expression in (89) becomes

$$\begin{aligned} \left\langle \left( \varPsi _g(t)\right) ^2\right\rangle =g_0^2\left\langle \left( \varPhi _0(t)\right) ^2\right\rangle +\sum _{k\ne 0}|g_k|^2\left\langle \varPhi _k(t)\varPhi _{-k}(t)\right\rangle . \end{aligned}$$
(91)

Comparing (91) to \(\left\langle \left\| \varPhi (x,t)\right\| _0^2\right\rangle \), which is given by

$$\begin{aligned} \left\langle \left\| \varPhi (x,t)\right\| _0^2\right\rangle =\sum _k\left\langle \varPhi _k(t)\varPhi _{-k}(t)\right\rangle =\left\langle \left( \varPhi _0(t)\right) ^2\right\rangle +\sum _{k\ne 0}\left\langle \varPhi _k(t)\varPhi _{-k}(t)\right\rangle , \end{aligned}$$
(92)

we find in the NESS

$$\begin{aligned} \left\langle \left( \varPsi _g(t)\right) ^2\right\rangle \simeq g_0^2\left\langle \left\| \varPhi (x,t)\right\| _0^2\right\rangle , \end{aligned}$$
(93)

provided that the long-time behavior is dominated by the Fourier-mode with largest eigenvalue, i.e. \(k=0\) with \(\mu _0=0\). Under the same condition, the first moment of the projected output reads in the NESS

$$\begin{aligned} \left\langle \varPsi _g(t)\right\rangle =\sum _k\overline{g_k}\left\langle \varPhi _k(t)\right\rangle \simeq g_0\left\langle \varPhi _0(t)\right\rangle , \end{aligned}$$
(94)

and thus

$$\begin{aligned} \left( \left\langle \varPsi _g(t)\right\rangle \right) ^2\simeq g_0^2\left( \left\langle \varPhi _0(t)\right\rangle \right) ^2. \end{aligned}$$
(95)

Similarly,

$$\begin{aligned} \left\| \left\langle \varPhi (x,t)\right\rangle \right\| _0^2=\sum _k\left| \left\langle \varPhi _k(t)\right\rangle \right| ^2\simeq \left( \left\langle \varPhi _0(t)\right\rangle \right) ^2\qquad \text {for }t\gg 1, \end{aligned}$$
(96)

which implies

$$\begin{aligned} \left( \left\langle \varPsi _g(t)\right\rangle \right) ^2\simeq g_0^2\left\| \left\langle \varPhi (x,t)\right\rangle \right\| _0^2. \end{aligned}$$
(97)

Note that \(g_0\) and \(\varPhi _0(t)\) have to be real throughout the argument (which is indeed the case for expansions with respect to the eigenfunctions of the general diffusion operators \(\hat{L}\) from above). Hence, under the assumption that the aforementioned requirements are met, which, of course, would have to be checked for every individual system (as was done in this section for the KPZ equation), the asymptotic equivalences in (93) and (97) validate the statement in (88) (and therefore, in the NESS, also (87)) for a whole class of one-dimensional scalar SPDEs from (1).
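
As an illustration of the above asymptotic argument, both sides of (93) can be evaluated in closed form for the linear (Edwards–Wilkinson) case, where \(\left\langle \varPhi _k(t)\varPhi _{-k}(t)\right\rangle =(e^{2\mu _kt}-1)/(2\mu _k)\), with the \(\mu _0=0\) mode given by its limit t. The following minimal numerical sketch performs this comparison; the test function g and the cutoff used below are illustrative choices and are not taken from the main text.

```python
import numpy as np

Lam, t = 16, 50.0
ks = np.arange(-Lam, Lam + 1)
mu = -4.0 * np.pi**2 * ks**2

# equal-time second moments <Phi_k(t) Phi_{-k}(t)> = (exp(2 mu_k t) - 1)/(2 mu_k),
# with the mu_0 = 0 mode replaced by its limit t
second = np.where(ks == 0, t, np.expm1(2 * mu * t) / (2 * mu + (ks == 0)))

gk = 1.0 / (1.0 + ks**2)                 # an illustrative square-summable test function g
lhs = np.sum(np.abs(gk)**2 * second)     # <(Psi_g)^2> according to (91)
rhs = gk[Lam]**2 * np.sum(second)        # g_0^2 <||Phi||_0^2> according to (92); g_0 = gk[Lam]
print(lhs / rhs)                         # -> 1 for t >> 1, illustrating (93)
```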

4.4 Total Entropy Production for the KPZ Equation

The total entropy production for the KPZ equation is obtained by inserting \(F_\gamma [h_\mu (\mathbf {r},t)]=\partial _x^2h(x,t)+\frac{\lambda _\text {eff}}{2}\left( \partial _xh(x,t)\right) ^2\) and the explicit expression for the one-dimensional stationary probability distribution \(p^s[h]\) into (22). The form of the latter is given in the following.

4.4.1 The Fokker–Planck Equation and its 1D Stationary Solution

Let us briefly recapitulate the Fokker–Planck equation and its stationary solution in one spatial dimension for the KPZ equation.

The Fokker–Planck equation corresponding to (55) for the functional probability distribution p[h] reads, e.g. [32, 43, 68,69,70],

$$\begin{aligned} \frac{\partial \, p[h]}{\partial t}&=-\int _0^1dx\frac{\delta }{\delta h}\left[ \left( \partial _x^2h(x,t)+\frac{\lambda _\text {eff}}{2}(\partial _xh(x,t))^2\right) p[h]-\frac{1}{2}\frac{\delta p[h]}{\delta h}\right] \nonumber \\&=-\int _0^1dx\frac{\delta j[h]}{\delta h}, \end{aligned}$$
(98)
$$\begin{aligned} j[h]&\equiv \left( \partial _x^2h(x,t)+\frac{\lambda _\text {eff}}{2}(\partial _xh(x,t))^2\right) p[h]-\frac{1}{2}\frac{\delta p[h]}{\delta h}, \end{aligned}$$
(99)

with j[h] denoting the probability current.

It is well known that for the case of pure Gaussian white noise, a stationary solution, i.e. \(\partial _tp^s[h]=0\), to the Fokker–Planck equation is given by [32, 68, 70]

$$\begin{aligned} p^s[h]\equiv \exp \left[ -\left\| \partial _xh\right\| _0^2\right] . \end{aligned}$$
(100)

This stationary solution is the same as the one for the linear case, namely for the Edwards–Wilkinson model. Note that in (100), \(\left\| \cdot \right\| _0\) denotes the standard \(\mathcal {L}_2\)-norm.
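
For completeness, we sketch the standard verification (see, e.g., [32, 68, 70]) that (100) is indeed stationary in one dimension. From (100) one obtains \(\delta p^s[h]/\delta h(x)=2\,\partial _x^2h(x)\,p^s[h]\), so that the probability current (99) reduces to its purely non-linear part. The remaining equal-point contribution \(\delta (\partial _xh(x))^2/\delta h(x)\) is proportional to \(\delta '(0)\) and vanishes under a symmetric regularization, while the surviving term vanishes for periodic boundary conditions:

$$\begin{aligned} j[h]=\frac{\lambda _\text {eff}}{2}\left( \partial _xh\right) ^2p^s[h],\qquad \frac{\partial \,p^s[h]}{\partial t}=-\lambda _\text {eff}\,p^s[h]\int _0^1dx\,\left( \partial _xh\right) ^2\partial _x^2h=-\frac{\lambda _\text {eff}}{3}\,p^s[h]\int _0^1dx\,\partial _x\left[ \left( \partial _xh\right) ^3\right] =0. \end{aligned}$$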

4.4.2 Stationary Total Entropy Production

With (22), the total entropy production in the NESS for the KPZ equation reads

$$\begin{aligned} \begin{aligned} \varDelta s_\text {tot}&=\varDelta s_m+\varDelta s=2\int _0^t dt^\prime \left( \dot{h},\left[ \partial _x^2 h+\frac{\lambda _\text {eff}}{2}(\partial _{x} h)^2\right] \right) _0-\left( h,\partial _x^{2}h\right) _0\\&=\left[ 2\int _0^{t} dt^\prime \left( \dot{h},\partial _{x}^2h\right) _0-\left( h,\partial _x^{2}h\right) _0\right] +\lambda _{\text {eff}}\int _0^t dt^\prime \left( \dot{h},(\partial _{x}h)^2\right) _0. \end{aligned} \end{aligned}$$
(101)

Using \(\left( \dot{h},\partial _x^2h\right) _0=\frac{1}{2}\frac{d\, }{d t}\left( h,\partial _x^2h\right) _0\), and the initial condition \(h(x,0)=0\), the first term in (101) vanishes and thus

$$\begin{aligned} \varDelta s_\text {tot}=\lambda _\text {eff}\int _0^t dt^\prime \left( \dot{h}(x,t^\prime ),\left( \partial _xh(x,t^\prime )\right) ^2\right) _0. \end{aligned}$$
(102)
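
The cancellation of the first term in (101) can be seen explicitly: since \(\partial _x^2\) is self-adjoint under periodic boundary conditions,

$$\begin{aligned} 2\int _0^t dt^\prime \left( \dot{h},\partial _x^2h\right) _0=\int _0^t dt^\prime \,\frac{d}{dt^\prime }\left( h,\partial _x^2h\right) _0=\left( h,\partial _x^2h\right) _0\Big |_{t^\prime =0}^{t^\prime =t}=\left( h,\partial _x^2h\right) _0, \end{aligned}$$

where the last equality uses \(h(x,0)=0\); this exactly cancels the term \(-\left( h,\partial _x^2h\right) _0\), so the square bracket in the second line of (101) vanishes.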

For Gaussian white noise, the expectation value of (102) is given by

$$\begin{aligned} \begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle&=\lambda _\text {eff}\int _0^t dt^\prime \left\langle \left( \dot{h}(x,t^\prime ),\left( \partial _xh(x,t^\prime )\right) ^2\right) _0\right\rangle \\&=\frac{\lambda _\text {eff}^2}{2}\int _0^tdt^\prime \left\langle \left\| \left( \partial _xh(x,t^\prime )\right) ^2\right\| _0^2\right\rangle . \end{aligned} \end{aligned}$$
(103)

For a derivation of this result see Appendix B. Note that (103) and its derivation remain true for \(h\in \text {span}\{\phi _{-\varLambda },\,\ldots ,\,\phi _\varLambda \}\). More generally, the expectation of the total entropy production may also be written as

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle =\frac{\lambda _\text {eff}^2}{2}\int _0^tdt^\prime \,\left\langle \left( \left( \partial _xh(x,t^\prime )\right) ^2,\hat{K}^{-1}\left( \partial _xh(x,t^\prime )\right) ^2\right) _0\right\rangle , \end{aligned}$$
(104)

with \(\hat{K}^{-1}\) from (65).

4.4.3 Evaluating the Expectation of the Stationary Total Entropy Production

Above, expressions for the stationary total entropy production \(\varDelta s_\text {tot}\) and its expectation value were derived (see (102) and (103)). Inserting the Fourier representations from (26) and (62) into (103) leads to

$$\begin{aligned} \begin{aligned}&\left\langle \varDelta s_\text {tot}\right\rangle \\&\quad =(4\pi ^2)^2\frac{\lambda _\text {eff}^2}{2}\int _0^tdt^\prime \int _0^1dx\sum _{k\in \mathfrak {R}}\sum _{m\in \mathfrak {R}}e^{2\pi ix(k-m)}\\&\qquad \times \left\langle \sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)h_l(t^\prime )h_{k-l}(t^\prime )\sum _{n\in \mathfrak {R}_{m}\setminus \{0 ,m\}}n(m-n)\overline{h_n}(t^\prime )\overline{h_{m-n}}(t^\prime )\right\rangle \\&\quad =(4\pi ^2)^2\frac{\lambda _\text {eff}^2}{2}\int _0^tdt^\prime \sum _{k\in \mathfrak {R}}\;\sum _{l,n\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)n(k-n)\\&\qquad \times \left\langle h_l(t^\prime )h_{k-l}(t^\prime )\overline{h_n}(t^\prime )\overline{h_{k-n}}(t^\prime )\right\rangle , \end{aligned} \end{aligned}$$
(105)

with \(\mathfrak {R}_k\) from (48). As (105) above is already of order \(\lambda _\text {eff}^2\), it suffices to expand the Fourier coefficients \(h_i(t^\prime )\) to zeroth order, which yields

$$\begin{aligned} \begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle&=(4\pi ^2)^2\frac{\lambda _\text {eff}^2}{2}\int _0^tdt^\prime \sum _{k\in \mathfrak {R}}\;\sum _{l,n\in \mathfrak {R}_{k}\setminus \{0 ,k\}}l(k-l)n(k-n)\\&\quad \times \left\langle h_{l}^{(0)}(t^\prime )h_{k-l}^{(0)}(t^\prime )\overline{h_{n}^{(0)}}(t^\prime )\overline{h_{k-n}^{(0)}}(t^\prime )\right\rangle +O(\lambda _\text {eff}^4), \end{aligned} \end{aligned}$$
(106)

with \(h_{i}^{(0)}(t^\prime )\) given by (68). Via a Wick contraction and using (73), the four-point correlation function in (106) reads

$$\begin{aligned} \begin{aligned}&\left\langle h_{l}^{(0)}(t^\prime )h_{k-l}^{(0)}(t^\prime )\overline{h_{n}^{(0)}}(t^\prime )\overline{h_{k-n}^{(0)}}(t^\prime )\right\rangle \\&\quad =\varPi _{l,k-l}(t^\prime ,t^\prime )\varPi _{-n,n-k}(t^\prime ,t^\prime )\delta _{0,k}+\varPi _{l,-n}(t^\prime ,t^\prime )\varPi _{k-l,n-k}(t^\prime ,t^\prime )\delta _{l,n}\\&\qquad +\varPi _{l,n-k}(t^\prime ,t^\prime )\varPi _{k-l,-n}(t^\prime ,t^\prime )\delta _{n,k-l}. \end{aligned} \end{aligned}$$
(107)

Inserting (107) into (106) leads to the following form of the total entropy production in the NESS,

$$\begin{aligned} \begin{aligned}&\left\langle \varDelta s_\text {tot}\right\rangle \\&\quad =\left[ (4\pi ^2)^2\frac{\lambda _\text {eff}^2}{2}\left( \sum _{l,n\in \mathfrak {R}\setminus \{0 \}}\frac{l^2n^2}{4\mu _l\mu _n}+2\sum _{k\in \mathfrak {R}}\;\sum _{l\in \mathfrak {R}_{k}\setminus \{0 ,k\}}\frac{l^2(k-l)^2}{4\mu _l\mu _{k-l}}\right) +O(\lambda _\text {eff}^4)\right] \,t. \end{aligned} \end{aligned}$$
(108)

Note that \(\left\langle \varDelta s_\text {tot}\right\rangle \sim t\) for \(t\gg 1\) is expected to hold due to our reasoning in Appendix D. Note further that the long-time behavior of \(\left\langle \varDelta s_\text {tot}\right\rangle \) is indeed of the form required for the uncertainty relation to hold, i.e. \(\left\langle \varDelta s_\text {tot}\right\rangle \sim \lambda _\text {eff}^2t\) (see the remark after (86)). With \(\mu _k\) from (61), the expression for the total entropy production from (108) reads

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle =\left[ \frac{\lambda _\text {eff}^2}{2}\left( \varLambda ^2+\frac{3\varLambda ^2-\varLambda }{2}\right) +O(\lambda _\text {eff}^4)\right] \,t. \end{aligned}$$
(109)
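
As a consistency check of the mode sums leading from (108) to (109), the two bracketed terms can be evaluated numerically. The sketch below assumes \(\mathfrak {R}=\{-\varLambda ,\ldots ,\varLambda \}\) and \(\mathfrak {R}_k=\{l\in \mathfrak {R}:k-l\in \mathfrak {R}\}\) for the index sets defined around (48), together with \(\mu _k=-4\pi ^2k^2\) from (61); these identifications are assumptions of this sketch.

```python
import numpy as np

def entropy_sums(Lam):
    """Evaluate the two bracketed sums in (108), including the (4 pi^2)^2 prefactor."""
    mu = lambda k: -4.0 * np.pi**2 * k**2          # eigenvalues, cf. (61)
    R = range(-Lam, Lam + 1)                       # assumed form of the index set R
    first = sum(l**2 * n**2 / (4 * mu(l) * mu(n))
                for l in R for n in R if l != 0 and n != 0)
    second = 2 * sum(l**2 * (k - l)**2 / (4 * mu(l) * mu(k - l))
                     for k in R for l in R
                     if l != 0 and l != k and -Lam <= k - l <= Lam)
    return (4 * np.pi**2)**2 * first, (4 * np.pi**2)**2 * second

for Lam in (1, 2, 5, 10):
    t1, t2 = entropy_sums(Lam)
    # expected values from (109): t1 = Lam^2 and t2 = (3 Lam^2 - Lam)/2
    print(Lam, round(t1, 6), Lam**2, round(t2, 6), (3 * Lam**2 - Lam) / 2)
```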

Thus, with (23) and (109), the total entropy production rate becomes

$$\begin{aligned} \sigma =\frac{\lambda _\text {eff}^2}{2}\left[ \varLambda ^2+\frac{3\varLambda ^2-\varLambda }{2}\right] +O(\lambda _\text {eff}^4). \end{aligned}$$
(110)

With (86) and (109), or, equivalently, (80), (85) and (110), the constituents of the thermodynamic uncertainty relation are known. Hence, the product entering the TUR from (24) for the KPZ equation reads

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle \,\epsilon ^2=\frac{2\sigma \,D_g}{J_g^2}=2+\left( 3-\frac{1}{\varLambda }\right) +O(\lambda _\text {eff}^2). \end{aligned}$$
(111)
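
To make the arithmetic behind (111) explicit: with the leading-order expressions \(J_g=g_0\lambda _\text {eff}\varLambda /2+O(\lambda _\text {eff}^3)\) (see (119) and (122)), \(D_g=g_0^2/2+O(\lambda _\text {eff}^2)\) (cf. (85) and (122)) and \(\sigma \) from (110), one finds

$$\begin{aligned} \frac{2\sigma \,D_g}{J_g^2}=\frac{2\cdot \frac{\lambda _\text {eff}^2}{2}\left[ \varLambda ^2+\frac{3\varLambda ^2-\varLambda }{2}\right] \cdot \frac{g_0^2}{2}}{\frac{g_0^2\lambda _\text {eff}^2\varLambda ^2}{4}}+O(\lambda _\text {eff}^2)=\frac{2}{\varLambda ^2}\left[ \varLambda ^2+\frac{3\varLambda ^2-\varLambda }{2}\right] +O(\lambda _\text {eff}^2)=2+\left( 3-\frac{1}{\varLambda }\right) +O(\lambda _\text {eff}^2). \end{aligned}$$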

Note that the result given in (111) holds strictly for ‘almost’ white noise only, i.e. for the truncated noise spectrum with cutoff \(\varLambda \) (see e.g. (49) and (50)). However, we choose \(\varLambda =\varLambda _0\) large enough that all contributions from modes with \(|k|>\varLambda _0\) are expected to be dominated by the diffusive term of the KPZ equation and hence to be effectively described by the Edwards–Wilkinson equation (see further the comments below (48)). Since for very large times t the Edwards–Wilkinson model displays a genuine equilibrium, it contributes neither to the current (80) nor to the entropy production rate (110). Consequently, modes with \(|k|>\varLambda _0\) do not affect the TUR in (111), and thus we expect it to hold also for ‘fully’ white noise, i.e. without the need to further increase the cutoff parameter \(\varLambda \). The TUR in (111) is the central result of this paper.

Note further that in (111) we deliberately refrain from writing \(\left\langle \varDelta s_\text {tot}\right\rangle \,\epsilon ^2=5-1/\varLambda \), as this would somewhat mask the physics behind this result. This point is discussed further in the following.

4.5 Edwards–Wilkinson Model for a Constant Driving Force

To give an interpretation of the two terms in (108) and consequently in (111), we believe it instructive to briefly calculate the precision and total entropy production for the case of the one-dimensional Edwards–Wilkinson model modified by an additional constant non-random driving ‘force’ \(v_0\) and subject to periodic boundary conditions. To be specific, we consider

$$\begin{aligned} \partial _th(x,t)=\partial _x^2h(x,t)+v_0+\eta (x,t)\qquad x\in [0,1], \end{aligned}$$
(112)

already in dimensionless form and with space-time white noise \(\eta \). The stochastic partial differential equation in (112) has the same form as the KPZ equation from (55), but with the non-linearity replaced by \(v_0\). This ensures a NESS, such that quantities like current and entropy production can be calculated without the difficulties of the mode coupling encountered with the KPZ non-linearity. In the sequel we refer to (112) as FEW, for ‘forced Edwards–Wilkinson equation’. Following the same procedure as described in Sect. 3, we find the following integral expression for the k-th Fourier coefficient of the height field of FEW,

$$\begin{aligned} h_k(t)=e^{\mu _kt}\int _0^tdt^\prime \,e^{-\mu _kt^\prime }\left[ v_0\delta _{0,k}+\eta _k(t^\prime )\right] , \end{aligned}$$
(113)

where again a flat initial configuration was assumed and \(\mu _k=-4\pi ^2k^2\) as above. With (113), we get immediately in the NESS

$$\begin{aligned} \left\langle \varPsi _g(t)\right\rangle =g_0v_0t=J_gt, \end{aligned}$$
(114)

and thus \(\left\langle \varPsi _g(t)\right\rangle ^2=g_0^2v_0^2t^2\) as well as

$$\begin{aligned}&\left\langle \left( \varPsi _g(t)\right) ^2\right\rangle \\&\quad =\sum _{k,l\in \mathfrak {R}}\overline{g_k}\,\overline{g_l}\left\langle h_k(t)h_l(t)\right\rangle \\&\quad =\sum _{k,l\in \mathfrak {R}}\overline{g_k}\,\overline{g_l}e^{(\mu _k+\mu _l)t}\int _0^tdr\int _0^tds\,e^{-\mu _kr-\mu _ls}\left( v_0^2\delta _{0,k}\delta _{0,l}+\left\langle \eta _k(r)\eta _l(s)\right\rangle \right) \\&\quad =g_0^2v_0^2t^2+\sum _{k\in \mathfrak {R}}|g_k|^2\frac{e^{2\mu _kt}-1}{2\mu _k}\\&\quad =g_0^2v_0^2t^2+g_0^2t,\qquad \text {for }t\gg 1. \end{aligned}$$

Thus,

$$\begin{aligned} \epsilon ^2=\frac{\left\langle \left( \varPsi _g(t)-\left\langle \varPsi _g(t)\right\rangle \right) ^2\right\rangle }{\left\langle \varPsi _g(t)\right\rangle ^2}\simeq \frac{t}{v_0^2t^2}=\frac{1}{v_0^2}\frac{1}{t}. \end{aligned}$$
(115)

As already discussed above in Sect. 3, the Fokker–Planck equation corresponding to (112) has the stationary solution \(p^s[h]=\exp \left[ -\int dx\,(\partial _xh)^2\right] \) and thus, with (22) and (113), the total entropy production reads in the NESS

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle =2\int _0^1dx\,\left\langle h(x,t)\right\rangle \,v_0=2\,v_0^2\,t. \end{aligned}$$
(116)

With (115) and (116), the TUR product for (112) is given by

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle \,\epsilon ^2=2, \end{aligned}$$
(117)

i.e. the thermodynamic uncertainty relation is indeed saturated for the Edwards–Wilkinson equation subject to a constant driving ‘force’ \(v_0\). For the sake of completeness we state the expressions for the current, diffusivity and rate of entropy production in the non-equilibrium steady state, namely

$$\begin{aligned} J_g^\text {FEW}=g_0v_0,\qquad D_g^\text {FEW}=\frac{g_0^2}{2},\qquad \sigma ^\text {FEW}=2\,v_0^2. \end{aligned}$$
(118)
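
Since in the long-time limit only the \(k=0\) mode contributes to the growth of mean and variance of \(\varPsi _g(t)\), the saturation in (117) is straightforward to check numerically: the zero mode of (113) is simply \(h_0(t)=v_0t+W(t)\), with W(t) a standard Wiener process. The following minimal Monte Carlo sketch assumes the noise normalization \(\left\langle \eta _0(t)\eta _0(t^\prime )\right\rangle =\delta (t-t^\prime )\) and a test function with \(g_0=1\); both are illustrative choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
v0, T, n_samp = 0.5, 50.0, 200_000

# zero mode of FEW: h_0(T) = v0*T + W(T), so Psi_g(T) ~ g_0 * h_0(T) with g_0 = 1
psi = v0 * T + np.sqrt(T) * rng.standard_normal(n_samp)

eps2 = psi.var() / psi.mean()**2      # squared relative uncertainty, cf. (115)
ds_tot = 2 * v0**2 * T                # expected total entropy production, cf. (116)
print(ds_tot * eps2)                  # fluctuates around 2, cf. (117)
```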

With the calculations for FEW, we can now give an interpretation of the two terms in (109) and (111). The first term in the inner brackets of (109) originates from the first term of (108), which represents the action of all higher-order Fourier modes on the mode \(k=0\) (see (107)). To illustrate this point further, observe that in the NESS, according to (76) to (80), the current reads:

$$\begin{aligned} J_g=2\pi ^2g_0\lambda _\text {eff}\left( \sum _{l\in \mathfrak {R}\setminus \{0\}}\frac{l^2}{2(-\mu _l)}\right) =g_0\frac{\lambda _\text {eff}}{2}\varLambda , \end{aligned}$$
(119)

and from the calculation above we see that it contains only the impact of the Fourier modes \(l\ne 0\) on the mode \(k=0\), which belongs to the constant eigenfunction \(\phi _0(x)=1\). In other words, the modes \(l\ne 0\) act like a constant external excitation, in the same manner as \(v_0\) does for FEW in (114). Comparing (119) to (114), we may set

$$\begin{aligned} v_0=2\pi ^2\lambda _\text {eff}\left( \sum _{l\in \mathfrak {R}\setminus \{0\}}\frac{l^2}{2(-\mu _l)}\right) =\frac{\lambda _\text {eff}}{2}\varLambda , \end{aligned}$$
(120)

and get \(J_g=g_0v_0\) in both cases.
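
The explicit value of \(v_0\) in (120) follows from \(\mu _l=-4\pi ^2l^2\): each of the \(2\varLambda \) non-zero modes of \(\mathfrak {R}\setminus \{0\}\) (taking \(\mathfrak {R}=\{-\varLambda ,\ldots ,\varLambda \}\)) contributes equally to the sum,

$$\begin{aligned} \sum _{l\in \mathfrak {R}\setminus \{0\}}\frac{l^2}{2(-\mu _l)}=\sum _{l\in \mathfrak {R}\setminus \{0\}}\frac{1}{8\pi ^2}=\frac{2\varLambda }{8\pi ^2}=\frac{\varLambda }{4\pi ^2},\qquad \text {hence}\qquad v_0=2\pi ^2\lambda _\text {eff}\,\frac{\varLambda }{4\pi ^2}=\frac{\lambda _\text {eff}}{2}\varLambda . \end{aligned}$$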

Following now the calculations for FEW, we would expect from (116)

$$\begin{aligned} \left\langle \varDelta s_\text {tot}\right\rangle =2v_0^2\,t=(4\pi ^2)^2\frac{\lambda _\text {eff}^2}{2}\left( \sum _{l\in \mathfrak {R}\setminus \{0\}}\frac{l^2}{2(-\mu _l)}\right) ^2\,t=\frac{\lambda _\text {eff}^2}{2}\varLambda ^2\,t, \end{aligned}$$
(121)

which is exactly the first term in the inner brackets of (108) and (109), respectively. Since, with (120), the expression for \(\epsilon ^2\) from (115) also coincides with the first summand on the r.h.s. of (86), both cases result in a saturated TUR. This explains the value 2 on the r.h.s. of (111).

Turning to the second term of (109), we see that it stems from the second term in (108). In contrast to the first \(\lambda _\text {eff}^2\)-term in (108), the second one measures the effect of the modes not only on the \(k=0\) mode but also on all other modes \(k\ne 0\). It further features interactions of the k and l modes among each other via mode coupling. Hence, the mode coupling appears responsible for the larger constant on the right-hand side of (111): if the mode-coupling term in (109) were neglected, the thermodynamic uncertainty relation would be saturated also for the KPZ equation up to \(O(\lambda _\text {eff}^2)\). To conclude this brief discussion, we give the respective relations of the KPZ current (80), diffusivity (85) and total entropy production rate (110) to those of FEW, namely

$$\begin{aligned} \begin{aligned} J_g^\text {KPZ}&=J_g^\text {FEW}+O(\lambda _\text {eff}^3),\\ D_g^\text {KPZ}&=D_g^\text {FEW}+g_0^2\frac{\lambda _\text {eff}^2}{64\,\pi ^2}\mathcal {H}_\varLambda ^{(2)}+O(\lambda _\text {eff}^4),\\ \sigma ^\text {KPZ}&=\sigma ^\text {FEW}+\lambda _\text {eff}^2\frac{3\varLambda ^2-\varLambda }{4}+O(\lambda _\text {eff}^4), \end{aligned} \end{aligned}$$
(122)

with \(J_g^\text {FEW}\), \(D_g^\text {FEW}\) and \(\sigma ^\text {FEW}\) from (118). We see that the additional mode coupling term in KPZ leads to corrections in \(D_{g}^\text {KPZ}\) and \(\sigma ^{\text {KPZ}}\) of at least second order in \(\lambda _\text {eff}\). For the case of \(\lambda _\text {eff}\rightarrow 0\) the KPZ equation becomes the standard Edwards–Wilkinson equation (EW), namely \(\partial _th(x,t)=\partial _x^2h(x,t)+\eta (x,t)\), which possesses a genuine equilibrium steady state. Therefore, for the standard EW we have \(J_g^\text {EW}=0\), \(\sigma ^\text {EW}=0\) and \(D_g^\text {EW}=g_0^2/2\). From (122) it follows that for \(\lambda _\text {eff}\rightarrow 0\), \((J_g,\sigma ,D_g)_\text {KPZ}\rightarrow (J_g,\sigma ,D_g)_\text {FEW}\) and from (118), (120) that \((J_g,\sigma ,D_g)_\text {FEW}\rightarrow (J_g,\sigma ,D_g)_\text {EW}=(0,0,g_0^2/2)\). Hence, the non-zero expressions for \(J_g^\text {KPZ}\) and \(\sigma ^\text {KPZ}\) result solely from the KPZ non-linearity. The impact of the latter on the \(k=0\) Fourier mode (i.e. the spatially constant mode) results in contributions to \(J_g^\text {KPZ}\) and \(\sigma ^\text {KPZ}\) that can be modeled exactly by FEW, the Edwards–Wilkinson equation driven by a constant force \(v_0\) from (112).

5 Conclusion

We have proposed an analog of the TUR [1, 2] in a general field-theoretic setting (see (24)) and shown its validity for the Kardar–Parisi–Zhang equation up to second order in perturbation theory. To ensure convergence of the quantities entering the thermodynamic uncertainty relation, we had to introduce an arbitrarily large but finite cutoff \(\varLambda \) of the corresponding Fourier spectrum, which restricted the considered Gaussian white-in-time noise to be only ‘almost white’ in space. However, the cutoff was chosen large enough to guarantee the dominance of the diffusive term over the non-linear term. This led us to expect our field-theoretic TUR to hold for spatially ‘fully white’ noise as well (see the reasoning below (111)).

To circumvent the introduction of a cutoff for ensuring convergence, a possible route is to induce higher regularity by treating spatially colored noise instead of Gaussian white noise and/or by choosing a higher-order diffusion operator \(\hat{L}\) (see e.g. [71, 72]). This is currently under investigation.

As is obvious from (111), the field-theoretic version of the TUR is not saturated for the KPZ equation. This is due to the mode coupling of the fields caused by the KPZ non-linearity. To illustrate this point, we also treated the Edwards–Wilkinson equation in Sect. 4.5, driven out of equilibrium by a constant velocity \(v_0\), see (112). By identifying \(v_0\) with the influence of the higher-order Fourier modes on the mode \(k=0\), we may interpret the first \(\lambda _\text {eff}^2\)-term in (108) as the contribution from the forced Edwards–Wilkinson equation, for which the TUR is saturated (see (117)), an observation in accordance with findings in [8] for finite-dimensional driven diffusive systems. The second \(\lambda _\text {eff}^2\)-term in (108) is the contribution to the entropy production made up by the interaction between Fourier modes of arbitrary order, which is due to the mode coupling generated by the KPZ non-linearity. It is this additional entropy production that prevents the TUR from being saturated. Note that the first term in (108) is also due to the mode coupling, but is special insofar as it measures only the impact of the other modes on the \(k=0\) mode and does not include a response of the mode \(k=0\) itself.

Regarding future research, an intriguing question is whether the findings in [8] concerning conditions for the saturation of the dissipation bound in the TUR for an overdamped two-dimensional Langevin equation can be recovered in the present field-theoretic setting. Furthermore, it would be of great interest to apply the developed framework to other systems with spatio-temporal noise in order to study the resulting dissipation bounds in the corresponding TURs. Of special interest in this context is the stochastic Burgers equation, excited by a noise term suitable for generating a turbulent response (see [59]). A comparison of the predictions made in the present paper with numerical simulations of the KPZ equation is another intriguing task, currently under investigation. Beyond numerical calculations, it would also be of great interest to test our predictions via experimental realizations of KPZ interfaces. Lastly, the formulation of a genuinely non-perturbative, analytic formalism would be of utmost interest.