1 Introduction

Let \(\mathcal {O}\) be a bounded domain in \(\mathbb {R}^d\), \(d\ge 1\), and denote by \(H=L^2(\mathcal {O})\) the Hilbert space of square-integrable functions on \(\mathcal {O}\). Given a self-adjoint negative operator \(A:D(A)\subset H\rightarrow H\) and a memory function \(K:\mathbb {R}\rightarrow [0,\infty )\), we are interested in the following equation for \(u(t,\mathbf {x}):\mathbb {R}\times \mathcal {O}\rightarrow \mathbb {R}\):

$$\begin{aligned} {\dot{u}}(t,\mathbf {x}) = \int _{-\infty }^t\!\!\!K(t-s)Au(s,\mathbf {x})\mathrm {d}s+\mathbf {F}(t,\mathbf {x}), \end{aligned}$$

where \(\mathbf {F}(t,\mathbf {x})\) has the representation

$$\begin{aligned} \mathbf {F}(t,\mathbf {x})=\sum _{k\ge 1} \lambda _k F_k(t)e_k(\mathbf {x}). \end{aligned}$$

Here, \(\{\lambda _k\}_{k \ge 1}\) is a sequence of positive constants, \(\{e_k\}_{k\ge 1}\) is an orthonormal basis for H, and \(\{F_k(t)\}_{k\ge 1}\) is a sequence of i.i.d. stationary Gaussian processes with autocovariance

$$\begin{aligned} \mathbb {E}[F_k(t)F_k(s)]=K(|t-s|). \end{aligned}$$
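For instance (a hypothetical illustration, not part of the model above), when \(K(t)=e^{-|t|}\) each mode \(F_k\) can be realized as a stationary Ornstein–Uhlenbeck process, whose autocovariance is exactly \(e^{-|t-s|}=K(|t-s|)\). A short numerical sketch, with the step size dt and sample length n chosen arbitrarily:

```python
import math, random

# Hypothetical illustration (not from the paper): for the memory kernel
# K(t) = e^{-|t|}, each mode F_k in (1.2)-(1.3) can be realized as a
# stationary Ornstein-Uhlenbeck process, whose autocovariance is exactly
# E[F(t)F(s)] = e^{-|t-s|} = K(|t-s|).
random.seed(0)
dt, n = 0.5, 400_000
rho = math.exp(-dt)                       # one-step correlation e^{-dt}
x = random.gauss(0.0, 1.0)                # start from the stationary law N(0,1)
path = [x]
for _ in range(n - 1):
    # exact one-step update: preserves stationarity, no discretization bias
    x = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0.0, 1.0)
    path.append(x)

def emp_cov(lag):
    m = n - lag
    return sum(path[i] * path[i + lag] for i in range(m)) / m

for lag in [0, 1, 2, 4]:                  # time separations 0, 0.5, 1, 2
    assert abs(emp_cov(lag) - math.exp(-lag * dt)) < 0.03
print("empirical OU autocovariance matches K(t) = exp(-|t|)")
```

The exact recursion \(x\mapsto \rho x+\sqrt{1-\rho ^2}\,\xi \) samples the OU transition kernel exactly, so the only error in the check is statistical.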

The goal of this note is to analyze the well-posedness and regularity of (1.1) under appropriate assumptions on A and K. Our motivation for (1.1)–(1.2)–(1.3) is as follows. Suppose that the orthonormal basis \(\{e_k\}\), as in (1.2), diagonalizes A in the sense that for each \(k \in \mathbb {N}\), there exists an \(\alpha _k>0\) such that \(Ae_k=-\alpha _ke_k\). Writing \(u(t,\mathbf {x})=\sum _{k\ge 1}u_k(t)e_k(\mathbf {x})\), we see that, formally, \(u_k(t)\) satisfies the equation

$$\begin{aligned} {\dot{u}}_k(t) = -\alpha _k\int _{-\infty }^t \!\!\! K(t-s)u_k(s)\mathrm {d}s+\lambda _k F_k(t). \end{aligned}$$

Equation (1.4) is a simpler version of the so-called generalized Langevin equation (GLE) that is used to model single-particle movements in viscoelastic fluids. The full 1d GLE has the following form [22, 23, 29]

$$\begin{aligned} {\left\{ \begin{array}{ll} m{\dot{v}}(t) = -\gamma v(t)-\beta \int _{-\infty }^t K(t-s)v(s)\mathrm {d}s + \lambda F(t)+ \sqrt{2\gamma }{\dot{B}}(t),\\ \mathbb {E}(F(t)F(s))=K(t-s), \end{array}\right. } \end{aligned}$$

where m is the mass, \(\gamma \) is the drag due to viscosity, F(t) is a stationary Gaussian process, and B(t) is a standard Brownian motion. Here the covariance of the stationary force F is the same as the memory kernel K, which is a force-balance condition resulting from the Fluctuation–Dissipation relationship [31]. Historically, the GLE was first proposed and studied in the work of [27, 31] and later popularized in [28]. The GLE has been given renewed attention in the last two decades due to its ability to produce what is known as anomalous diffusion [30]. Namely, let \(x(t):=\int _0^t v (s)\mathrm {d}s\), where v(t) satisfies (1.5). If the memory kernel K is integrable, it has been shown that the Mean-Squared Displacement \(\mathbb {E}[x(t)^2]\) grows linearly in time. On the other hand, if there exists an \(\alpha \in (0,1]\) such that \(K(t)\sim t^{-\alpha }\) as \(t\rightarrow \infty \), then for \(\alpha \in (0,1)\), \(\mathbb {E}[x(t)^2]\sim t^\alpha \) [26, 29], and for \(\alpha =1\), \(\mathbb {E}[x(t)^2]\sim t/\log (t)\) as \(t\rightarrow \infty \) [16]. Here \(f(t)\sim g(t),\ t\rightarrow \infty \) means \(\lim _{t\rightarrow \infty }f(t)/g(t)=c\in (0,\infty )\). While the GLE is useful for modeling single-particle movements, it fails to capture multi-particle interactions through fluctuating hydrodynamics. Recently, a first step in this direction was the model proposed and studied numerically in [23], which generalized the fluctuating Landau–Lifshitz Navier–Stokes equations from viscous to viscoelastic fluids. The model that we consider, (1.1), is the linearized version of the system appearing in [23], without fluid-specific terms such as the Navier–Stokes nonlinearity or the time-dependent pressure. Inspired by [16, 23, 29], we would like to investigate the regularity of the stationary solutions of (1.1) whenever they make sense.

Stochastic partial-integro-differential equations with infinite delay have been studied in the literature [3, 7, 10, 25, 34,35,36,37]. Recently, in [3], for A the usual Laplace operator, the author considered a stochastic heat equation with memory of the form

$$\begin{aligned} {\dot{u}}(t,\mathbf {x}) = k_0A\,u(t,\mathbf {x})-\int _{-\infty }^t \!\!\!K(t-s)A\, u(s,\mathbf {x})\mathrm {d}s+{\dot{W}}(t,\mathbf {x}),\quad (t,\mathbf {x})\in \mathbb {R}\times \mathcal {O}, \end{aligned}$$

where \(k_0>0\) is a constant satisfying \(k_0>\int _0^\infty \! K(s)\mathrm {d}s\) and W is a Wiener process. Equation (1.6) arises from the theory of thermal viscoelasticity, in which \(k_0\) and K represent, respectively, the instantaneous conductivity and the heat-flux memory kernel. See also [4, 5, 8,9,10] for related work on stochastic systems similar to (1.6). It is interesting to study whether and how the Hölder regularity of solutions to (1.6) differs from that of the analogous stochastic heat equation without memory:

$$\begin{aligned} {\dot{u}}(t,\mathbf {x}) = A\,u(t,\mathbf {x})+{\dot{W}}(t,\mathbf {x}),\qquad (t,\mathbf {x})\in \mathbb {R}\times \mathcal {O}. \end{aligned}$$

In fact, under suitable assumptions on W and A, the Hölder regularity of solutions of (1.6) does not differ from that of (1.7) [15, 21]. The proofs of these results rely on Kolmogorov’s criterion and estimates on the space and time increments of the solution. Motivated by these results, we would like to investigate how the memory structure intrinsic to (1.1) affects the regularity of its solutions when compared to the analogous versions of (1.6) and (1.7).

Our problem differs from those considered previously as follows. First, in preceding works, the random part is usually a Wiener process that is white-in-time and defined on an auxiliary Hilbert space that does not depend on the memory. By contrast, our noise \(\mathbf {F}\) has the decomposition (1.2), and each mode \(F_k\) is related to the memory K via relation (1.3), which follows from the Fluctuation–Dissipation relationship. Thus, one can regard \(\mathbf {F}\) in (1.1) as colored-in-time. In addition, the memory kernels in previous works [3, 7, 10] are required to be integrable on the positive real line. This condition excludes those with slow power-law decay, namely \(K(t)\sim t^{-\alpha }\) as \(t\rightarrow \infty \) for \(\alpha \in (0,1]\). As already mentioned, the Mean-Squared Displacement \(\mathbb {E}[x(t)^2]\) associated with these kernels in (1.5) obeys sub-linear growth, namely \(t^\alpha \) when \(\alpha \in (0,1)\) [29] and \(t/\log t\) when \(\alpha =1\) [16]. In our work, the memory kernels need not be integrable; moreover, the conditions that we impose allow them to have slow power-law decay, cf. Assumption 2.4. To be precise, throughout the rest of this work, we restrict our memory kernel to the class of completely monotone functions, which have been studied in great detail elsewhere [3, 8, 10]. The important feature these functions possess is that their Fourier transforms admit clean expressions, which allows us to analyze the regularity of (1.1) in Sect. 4. As a result of these assumptions, the system (1.1) admits better regularity than (1.6)–(1.7). In particular, the one-dimensional versions of (1.6)–(1.7) with white noise are both \(\gamma \)-Hölder continuous in time for \(\gamma \in (0,1/4)\) [3, 15].
In contrast, as a corollary of Theorem 2.10 below, our 1D heat equation with colored noise \(\mathbf {F}\) is \(\gamma \)-Hölder continuous in time for \(\gamma \in (0,1/2)\).

Finally, we mention that on the deterministic side, there are many related works on partial differential equations with memory kernels. For a few examples, we refer the reader to [1, 2, 6, 11, 12, 18].

The rest of the paper is organized as follows. In Sect. 2, we introduce our definition of a stationary solution for (1.1), state our assumptions, and summarize our main results, particularly on the Hölder regularity of (1.1), cf. Theorem 2.10. In Sect. 3, we present the preliminaries needed for our analysis, e.g., weak formulations of (1.4) and a Fourier analysis of the class of completely monotone functions. We then detail the proofs of the main results in Sect. 4. We conclude in Sect. 5 by discussing related problems as well as applications to stochastic heat equations with memory.

2 Assumptions and Main Results

Concerning the linear operator A, we assume the following standard condition (see also [15, Sect. 5.5.1]).

Assumption 2.1

The orthonormal basis \(\{e_k\}_{k\ge 1}\) in (1.2) belongs to D(A) and diagonalizes A, i.e., there exists an increasing sequence \(0<\alpha _1\le \alpha _2\le \cdots \) such that \(\{\alpha _k\}\uparrow \infty \) and that

$$\begin{aligned} Ae_k=-\alpha _k e_k,\quad k\ge 1. \end{aligned}$$

We further assume that

$$\begin{aligned} |e_k(\xi )|\le M c_k,\quad |\nabla e_k(\xi )|\le M\alpha _k^{1/2}c_k, \quad \forall \xi \in \mathcal {O},\quad k\ge 1, \end{aligned}$$

for some positive constant M and a non-decreasing sequence of positive numbers \(\{c_k\}_{k\ge 1}\).
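As a concrete instance of Assumption 2.1 (a standard example; the grid and the range of k below are chosen arbitrarily for illustration): for \(\mathcal {O}=(0,\pi )\) and A the Dirichlet Laplacian, \(e_k(\xi )=\sqrt{2/\pi }\sin (k\xi )\) and \(\alpha _k=k^2\), so the bounds hold with \(M=\sqrt{2/\pi }\) and \(c_k\equiv 1\). A quick numerical check:

```python
import math

# Dirichlet Laplacian on O = (0, pi): e_k(x) = sqrt(2/pi) sin(kx), alpha_k = k^2.
# Check the bounds of Assumption 2.1 with M = sqrt(2/pi) and c_k = 1 on a grid.
M = math.sqrt(2 / math.pi)
xs = [math.pi * j / 1000 for j in range(1001)]
for k in range(1, 51):
    alpha_k, c_k = k ** 2, 1.0
    e = [M * math.sin(k * x) for x in xs]          # eigenfunction values
    de = [M * k * math.cos(k * x) for x in xs]     # gradient values
    assert max(abs(v) for v in e) <= M * c_k + 1e-12
    assert max(abs(v) for v in de) <= M * math.sqrt(alpha_k) * c_k + 1e-12
print("Assumption 2.1 bounds verified for k = 1..50")
```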

One can think of A as the usual Laplacian operator, but it need not be. Up to this point, we have not defined what we mean by a stationary solution of (1.4). We postpone the formulation of these solutions (cf. Definition 3.6), as well as the elements required to construct them, to Sect. 3.1, where we follow [16, 24, 29]. We now state the following important definition of a stationary solution of (1.1).

Definition 2.2

A random field \(u(t,\mathbf {x})=\sum _{k\ge 1}u_k(t)e_k(\mathbf {x})\) is called a stationary solution of (1.1) if

  1. (a)

    for all \(t\in \mathbb {R}\), \(u(t,\cdot )\in L^2(\Omega ;H)\), i.e., \(\mathbb {E}\Vert u(t,\cdot )\Vert ^2_H<\infty \); and

  2. (b)

    the process \(u_k(t)\) satisfies \(u_k(t)=\langle V_k,\delta _{t}\rangle \), where \(\{V_k\}_{k\ge 1}\) are mutually independent weak stationary solutions of (1.4) in the sense of distributions given by Definition 3.6, \(\delta _t\) is the Dirac delta distribution centered at t, and \(\langle V_k,\delta _{t}\rangle \) denotes the action of \(V_k\) when applied to \(\delta _t\).

With regard to the memory kernel K, we first state the definition of the class of completely monotone functions, of which we will assume that K is a member.

Definition 2.3

A function \(K:(0,\infty )\rightarrow [0,\infty )\) is completely monotone if K is of class \(C^\infty (0,\infty )\) and \((-1)^n K^{(n)}(t)\ge 0\) for all \(n\ge 0\), \(t>0\).

Throughout this note, unless stated otherwise, we require that the memory kernel K satisfy the following assumption.

Assumption 2.4

We assume that \(K:\mathbb {R}\rightarrow [0,\infty )\) is symmetric and eventually decreasing to zero as \(t\rightarrow \infty \). Furthermore, K is completely monotone on \((0,\infty )\).

The class of completely monotone functions includes the special case of memory kernels that can be expressed as a sum of exponentials [19, 20, 23, 30]. In addition, Assumption 2.4 allows us to include functions with power-law decay, e.g., \((1+|t|)^{-\alpha }\), \(\alpha >0\) [29]. The advantage of completely monotone functions is that their Fourier transforms can be computed explicitly (cf. Lemma 3.9), which is helpful for our analysis. We will see later in Sect. 4 that their Fourier structures play an important role when we investigate the well-posedness and regularity of (1.1).
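To see why kernels such as \((1+|t|)^{-\alpha }\) fall within this class, one can check the representation \((1+t)^{-\alpha }=\Gamma (\alpha )^{-1}\int _0^\infty e^{-tx}x^{\alpha -1}e^{-x}\,\mathrm {d}x\), whose representing measure is a probability measure, so that in the notation of Sect. 3.2 the kernel even lies in \(\mathcal {CM}_b\). The following sketch verifies this numerically (the quadrature parameters are chosen arbitrarily):

```python
import math

# K(t) = (1+t)^(-alpha) is completely monotone: it is the Laplace transform
# of the finite (probability) measure mu(dx) = x^(alpha-1) e^(-x) / Gamma(alpha) dx,
# so K extends continuously to t = 0 with K(0) = mu([0, inf)) = 1.
def K_via_laplace(t, alpha, n=200_000, xmax=60.0):
    # crude midpoint quadrature of the Laplace representation
    h = xmax / n
    s = 0.0
    for j in range(n):
        x = (j + 0.5) * h
        s += math.exp(-t * x) * x ** (alpha - 1) * math.exp(-x)
    return s * h / math.gamma(alpha)

alpha = 0.7
for t in [0.0, 0.5, 2.0, 10.0]:
    assert abs(K_via_laplace(t, alpha) - (1 + t) ** (-alpha)) < 2e-3
print("Laplace representation of (1+t)^(-alpha) verified")
```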

With regard to the parameters \(\lambda _k\) and \(\alpha _k\), we first assume that they satisfy the following condition. See also [3, Hypothesis 2.10] and [15, Condition (5.40)].

Assumption 2.5

Let \(\{\alpha _k\}_{k\ge 1}\) be as in Assumption 2.1 and \(\{\lambda _k\}_{k\ge 1}\) be as in (1.2). We assume that

$$\begin{aligned} \sum _{k\ge 1}\frac{\lambda _k^2}{\alpha _k}<\infty . \end{aligned}$$

In Theorem 2.8 below, we will see that Assumption 2.5 is not only necessary but also sufficient to guarantee the existence of weak stationary solutions of (1.1). Moreover, we note that Assumption 2.5 involves only the pairs \(\{(\alpha _k,\lambda _k)\}_{k \ge 1}\) and does not require information on \(\{c_k\}_{k\ge 1}\) as in Assumption 2.1. Concerning Hölder continuity, we assume the following further condition on the triples \(\{(c_k,\alpha _k, \lambda _k)\}_{k \ge 1}\).

Assumption 2.6

Let \(\{\alpha _k\}_{k\ge 1}\), \(\{c_k\}_{k\ge 1}\) be as in Assumption 2.1 and \(\{\lambda _k\}_{k\ge 1}\) be as in (1.2). There exists a constant \(\eta \in (0,1)\) such that

$$\begin{aligned} \sum _{k\ge 1}\frac{\lambda _k^2c_k^2}{\alpha _k^\eta }<\infty . \end{aligned}$$

Remark 2.7

Since \(c_k>0\) and \(\alpha _k\) are non-decreasing by Assumption 2.1, it is clear that Assumption 2.6 implies Assumption 2.5. We note however that Assumption 2.6 is also standard and can be found in the literature of linear SPDEs, see for instance [3, Lemma 3.27] and [15, Lemma 5.21].
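To make Assumptions 2.5 and 2.6 concrete, consider the hypothetical one-dimensional example \(\alpha _k=k^2\), \(c_k\equiv 1\), \(\lambda _k\equiv 1\) (the Dirichlet Laplacian with white-in-space noise). Assumption 2.5 reads \(\sum 1/k^2<\infty \), which holds, while Assumption 2.6 reduces to \(\sum k^{-2\eta }<\infty \), which holds exactly when \(\eta >1/2\); Theorem 2.10 below then yields Hölder exponents \(\gamma \in (0,1/2)\), consistent with the claim made in the introduction. A numerical sketch of the dichotomy at \(\eta =1/2\):

```python
# Hypothetical illustration (not from the paper): alpha_k = k^2, c_k = 1,
# lambda_k = 1.  Assumption 2.6 reads sum_k k^(-2*eta) < inf, which holds
# iff eta > 1/2.  Dyadic blocks of the series detect the dichotomy.
def block(eta, n):
    # partial sum of k^(-2 eta) over the dyadic block [2^n, 2^(n+1))
    return sum(k ** (-2.0 * eta) for k in range(2 ** n, 2 ** (n + 1)))

# eta = 0.6 > 1/2: dyadic blocks decay geometrically (series converges)
assert block(0.6, 14) < 0.5 * block(0.6, 4)
# eta = 0.5 (excluded): dyadic blocks do not decay (series diverges)
assert block(0.5, 14) > 0.9 * block(0.5, 4)
print("Assumption 2.6 holds for eta > 1/2 in this example")
```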

We now state our first important result, giving the existence of stationary solutions for (1.1).

Theorem 2.8

(Well-posedness) Suppose that Assumptions 2.1 and 2.4 hold. Then, Eq. (1.1) admits stationary solutions \(u(t,\mathbf {x})\) in the sense of Definition 2.2 if and only if Assumption 2.5 is satisfied.

Remark 2.9

We note that in Theorem 2.8 we do not address pathwise uniqueness, as is commonly done for SPDEs. Since stationary processes are characterized by their autocovariance functions and spectral densities [13, 24], we will see later, in Sect. 4, that stationary solutions \(u(t,\mathbf {x})\) of (1.1) admit the unique covariance representation

$$\begin{aligned} \mathbb {E}[u(t,\mathbf {x})u(s,\mathbf {y})]=\sum _{k\ge 1}\int _\mathbb {R}e^{i(t-s)\omega }\rho _k(\omega )\mathrm {d}\omega \,e_k(\mathbf {x})e_k(\mathbf {y}), \end{aligned}$$

where \(\rho _k(\omega )\), given by (4.1), is the spectral density of \(u_k(t)\), as in Proposition 4.2 below. See also Proposition 4.1.

The proof of Theorem 2.8 makes use of Fourier analysis on the memory kernels and will be carried out in Sect. 4. In particular, we will generalize results in [22, 23], where K has the form of a sum of exponentials, to calculate explicitly the second moment of \(u_k(t)\) in the decomposition \(u(t,\mathbf {x})=\sum _{k\ge 1}u_k(t)e_k(\mathbf {x})\).

We now state the main result of the paper on the regularity of (1.1).

Theorem 2.10

(Time and space regularity) Suppose that Assumptions 2.1, 2.4 and 2.6 are satisfied. Let \(u(t,\mathbf {x})\) be the solution of (1.1) as in Theorem 2.8 and \(\eta \) be the constant from Assumption 2.6. Then there exists a modification \(U(t,\mathbf {x})\) of \(u(t,\mathbf {x})\) such that U is \(\gamma \)-Hölder continuous in time and space for any \(\gamma \in (0,1-\eta )\).

The proof of Theorem 2.10, carried out in Sect. 4, employs the classical Kolmogorov criterion for space-time regularity of random fields, which in turn relies heavily on delicate estimates on differences of solutions of (1.1). In Sect. 5, we will detail the application of Theorem 2.10 to stochastic heat equations. In particular, one will see that while the space regularity remains the same, the time regularity of (1.1) is better than that of (1.6) and (1.7).

3 Mathematical Preliminaries

Throughout the rest of the paper, we will use C and c to denote generic positive constants, whose values may change from one line to the next. When the dependence on parameters is important, this will be indicated in parentheses, e.g., \(c(\alpha ,q)\) depends on the parameters \(\alpha \) and q.

3.1 Weak solutions of the GLE

In order to define weak solutions of (1.4), for the reader’s convenience, we briefly review the framework in [29]. For given \(\lambda ,\,\beta >0\), we consider a stochastic process \(v:\mathbb {R}\rightarrow \mathbb {R}\) governed by the formal stochastic integro-differential equation

$$\begin{aligned} {\dot{v}}(t)=-\beta \int _{-\infty }^t \! \! \! \! K(t-s)v(s)\mathrm {d}s+\lambda F(t), \end{aligned}$$

where F(t) is a stationary Gaussian process whose autocovariance function is K(t). Our methods rely heavily on Fourier analysis of spectral functions, and so for a function \(f:\mathbb {R}\rightarrow \mathbb {C}\), we define the Fourier transform of f in the sense of improper integrals as

$$\begin{aligned} \widehat{f}(\omega )=\int _\mathbb {R}f(t) e^{-it\omega } \mathrm {d}t. \end{aligned}$$

If a function \(K(t)\in L^1_{\text {loc}}(\mathbb {R})\) is symmetric and eventually decreasing to zero as \(t\rightarrow \infty \), then the above integral is well-defined for every \(\omega \ne 0\) [32]. Let \(\mathcal {S}\) be the class of Schwartz functions defined on \(\mathbb {R}\) and \(\mathcal {S}'\) be the space of tempered distributions on \(\mathcal {S}\). The action of \(f\in \mathcal {S}'\) on \(\varphi \in \mathcal {S}\) is denoted by \(\langle f,\varphi \rangle \). Also, the Fourier transform \(\mathcal {F}\left[ f\right] \) of \(f\in \mathcal {S}'\) in the sense of distributions is defined as follows:

$$\begin{aligned} \forall \varphi \in \mathcal {S},\qquad \langle \mathcal {F}\left[ f\right] ,\varphi \rangle :=\langle f,\widehat{\varphi } \rangle . \end{aligned}$$

It is well known that the Fourier map \(\mathcal {F}:\mathcal {S}'\rightarrow \mathcal {S}'\) is a bijection. We begin with the definition of a (weak) stationary process [13].

Definition 3.1

A stochastic process \(\{F(t)\}_{t \in \mathbb {R}}\) is mean-square continuous and stationary if for all \(t, s\in \mathbb {R}\),

  1. (a)

    \(\mathbb {E}|F(t) |^2<\infty \) and \(\lim _{h\rightarrow 0}\mathbb {E}|F(t+h)-F(t) |^2 =0\);

  2. (b)

    \(\mathbb {E}[F(t)]=a\), for some constant a (we may assume \(a=0\)); and

  3. (c)

    the covariance function \(\mathbb {E}[F(t)\overline{F(s)}]\) depends only on the difference \((t-s)\).

By Bochner’s Theorem, a stationary process F is characterized by a positive definite function r and a finite Borel measure \(\nu \) such that the following holds for all \(t,\,s\in \mathbb {R}\) [13]:

$$\begin{aligned} \mathbb {E}\big [F(t)\overline{F(s)}\big ]=r(t-s)=\int _\mathbb {R}e^{i(t-s)y}\nu (\mathrm {d}y). \end{aligned}$$

In the above, r is called the covariance and \(\nu \) the spectral measure of F. Furthermore, there always exists a modification \(\widetilde{F}\) of F such that \(\widetilde{F}\) is Gaussian [13]. A generalization of stationary processes is the class of stationary random distributions, introduced in Itô’s seminal work [24]. Denote by \(\tau _h\) the shift transform on \(\mathcal {S}\), \(\tau _h\varphi (x):=\varphi (x+h)\).
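As a simple illustration of Bochner's representation (again with the hypothetical choice \(r(t)=e^{-|t|}\)): this covariance corresponds to the spectral measure \(\nu (\mathrm {d}y)=\mathrm {d}y/(\pi (1+y^2))\). A numerical check of \(r(t)=\int _\mathbb {R}e^{ity}\nu (\mathrm {d}y)\), with the truncation and grid chosen arbitrarily:

```python
import math

# Bochner pair (hypothetical illustration): the stationary covariance
# r(t) = e^{-|t|} has spectral measure nu(dy) = dy / (pi (1 + y^2)).
# Check r(t) = int e^{ity} nu(dy) = int cos(ty) nu(dy) by midpoint quadrature.
def r_from_spectrum(t, Y=1500.0, n=200_000):
    h = Y / n
    s = sum(math.cos(t * (j + 0.5) * h) / (1 + ((j + 0.5) * h) ** 2)
            for j in range(n))
    return 2 * s * h / math.pi      # even integrand: twice the [0, Y] part

for t in [0.0, 0.3, 1.0, 2.5]:
    assert abs(r_from_spectrum(t) - math.exp(-abs(t))) < 1e-3
print("spectral representation of exp(-|t|) verified")
```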

Definition 3.2

A linear functional \(F:\mathcal {S}\rightarrow L^2(\Omega )\), the space of all complex-valued random variables with finite variance, is called a stationary random distribution on \(\mathcal {S}\) if for all \(h\in \mathbb {R}\), \(\varphi _1,\varphi _2\in \mathcal {S}\),

$$\begin{aligned} \mathbb {E}\big [\langle F,\tau _h \varphi _1\rangle \overline{\langle F,\tau _h\varphi _2 \rangle }\big ]=\mathbb {E}\big [\langle F,\varphi _1\rangle \overline{\langle F,\varphi _2 \rangle }\big ]. \end{aligned}$$

Next, we recall the definition of a positive definite tempered distribution f [24], which is analogous to the positive definiteness of real-valued functions.

Definition 3.3

A tempered distribution \(f\in \mathcal {S}'\) is called positive definite if for any \(\varphi \in \mathcal {S}\),

$$\begin{aligned} \langle f,\varphi *{\widetilde{\varphi }}\rangle \ge 0, \end{aligned}$$

where \({\widetilde{\varphi }}(x)=\varphi (-x)\).

A stationary random distribution F is characterized by its associated positive definite tempered distribution r and a non-negative measure \(\nu \) in the following sense [24]:

$$\begin{aligned} \forall \varphi _1,\,\varphi _2\in \mathcal {S},\qquad \mathbb {E}\Big [\langle F,\varphi _1\rangle \overline{\langle F,\varphi _2\rangle }\Big ] = \langle r,\varphi _1*\widetilde{\varphi _2}\rangle = \int _\mathbb {R}\overline{\widehat{\varphi _1}(y)}\widehat{\varphi _2}(y)\nu (\mathrm {d}y), \end{aligned}$$

where \(\nu \) satisfies

$$\begin{aligned} \int _\mathbb {R}\frac{\nu ( \mathrm {d}x )}{(1+x^2)^k}<\infty , \end{aligned}$$

for some integer k. Furthermore, if \(\nu \) is absolutely continuous with respect to Lebesgue measure on \(\mathbb {R}\), then we are able to extend the above stationary random distribution F to a generalized random distribution \(\Phi :\text {Dom}(\Phi )\subset \mathcal {S}'\rightarrow L^2(\Omega )\) such that the following holds

$$\begin{aligned} \mathbb {E}\Big [\langle \Phi , f_1\rangle \overline{\langle \Phi ,f_2\rangle }\Big ] = \int _\mathbb {R}\overline{\mathcal {F}\left[ f_1\right] (y)}\mathcal {F}\left[ f_2\right] (y)\nu ( \mathrm {d}y ). \end{aligned}$$

Here, the domain of \(\Phi \) consists of those tempered distributions f such that \(\mathcal {F}\left[ f\right] \in L^2(\nu )\) where

$$\begin{aligned} L^2(\nu )=\big \{g:\mathbb {R}\rightarrow \mathbb {C},\int _\mathbb {R}|g(y)|^2\nu ( \mathrm {d}y )<\infty \big \}. \end{aligned}$$

Comparing (3.5) with (3.3), we readily see that \(\Phi \) is an extension of F since \(\mathcal {S}\subset \text {Dom}(\Phi )\); see [29] for a more detailed discussion. We can now define the function-valued version of \(\Phi \) as follows.

Definition 3.4

Let \(\Phi \) be a random operator associated with a measure \(\nu \) as in (3.5). Let \(\delta _t\) be the Dirac \(\delta \)-distribution centered at t. If \(\delta _t\) and \(1_{[0,t]}\) are in \(\text {Dom} (\Phi )\), then we define

$$\begin{aligned} v(t):=\langle \Phi ,\delta _t\rangle ,\quad \text { and }\quad x(t):=\langle \Phi ,1_{[0,t]}\rangle . \end{aligned}$$

We note that x(t) may be well-defined even when v(t) is not.

In view of [29, Lemma 2.17], \(\delta _t\in \text {Dom}(\Phi )\) if and only if the representation measure \(\nu \) is finite. Moreover, v(t) is a stationary Gaussian process, which is consistent with (3.2). We now turn our attention to weak solutions of (3.1). Formally, we can multiply both sides of (3.1) by a Schwartz function \(\varphi \), then perform an integration by parts on the left-hand side and a change of variables in the convolution term on the right-hand side to arrive at

$$\begin{aligned} -\int _\mathbb {R}v(t)\varphi '(t)\mathrm {d}t=-\beta \int _\mathbb {R}v(t)\int _\mathbb {R}K^+(y)\varphi (t+y) \mathrm {d}y \mathrm {d}t+\lambda \int _\mathbb {R}F(t)\varphi (t)\mathrm {d}t, \end{aligned}$$

where \(K^+(t):=K(t)1_{\{t\ge 0\}}\). Bringing the convolution to the left-hand side now yields

$$\begin{aligned} \int _\mathbb {R}v(t)\big [-\varphi '(t)+\beta \widetilde{ K^+*{\widetilde{\varphi }} }(t) \big ]\mathrm {d}t=\lambda \int _\mathbb {R}F(t)\varphi (t)\mathrm {d}t. \end{aligned}$$

Having introduced the notion of generalized random distributions, we rewrite the above equation in the following weak form

$$\begin{aligned} \langle V,-\varphi '+\beta \widetilde{ K^+*{\widetilde{\varphi }} }\rangle = \lambda \langle F,\varphi \rangle , \end{aligned}$$

where \(\widetilde{f}(x):=f(-x)\). In the above, the LHS is understood as an action of a generalized random distribution V applied to an element in its domain, whereas the RHS is the usual action of the stationary random distribution F applied to \(\varphi \in \mathcal {S}\). Furthermore, the random distribution F is characterized by its covariance structure

$$\begin{aligned} \forall \varphi _1,\varphi _2\in \mathcal {S},\qquad \mathbb {E}\Big [\langle F,\varphi _1\rangle \overline{\langle F,\varphi _2\rangle }\Big ]= \int _\mathbb {R}K(t)(\varphi _1*\widetilde{\varphi _2})(t)\mathrm {d}t. \end{aligned}$$

Remark 3.5

In general, a real-valued function K can be regarded as a tempered distribution by setting

$$\begin{aligned} \langle K,\varphi \rangle :=\int _\mathbb {R}K(t)\varphi (t)\mathrm {d}t, \end{aligned}$$

for any \(\varphi \in \mathcal {S}\). The above integral always converges as long as K belongs to \( L^1_{\text {loc}}(\mathbb {R})\) and does not grow exponentially fast [33]. In addition, if K satisfies Assumption 2.4, then K is indeed a positive definite tempered distribution [29], which in turn implies that F as in (3.7) is a stationary random distribution.

With the above observation, we have the following definition of a weak solution of (3.1).

Definition 3.6

[29] Let \(\nu \) be a non-negative measure satisfying condition (3.4) and V be the operator associated with \(\nu \) via the relation (3.5). We say that V is a weak stationary solution for Eq. (3.1) if V satisfies the following conditions.

  1. (a)

    For all \(\varphi \in \mathcal {S}\), \(K^+*\varphi \) belongs to \(\text {Dom}(V)\).

  2. (b)

    For any \(\varphi ,\psi \in \mathcal {S}\), it holds that

    $$\begin{aligned} \mathbb {E}\bigg [\langle V,-\varphi '+\beta \widetilde{K^+*{\widetilde{\varphi }}}\rangle \overline{\langle V,-\psi '+\beta \widetilde{K^+*{\widetilde{\psi }}}\rangle }\bigg ]=\lambda ^2\mathbb {E}\big [\langle F,\varphi \rangle \overline{\langle F,\psi \rangle }\big ]. \end{aligned}$$

Using this definition, we will address the well-posedness of (1.4) in Sect. 4.

3.2 Completely Monotone Functions

In this subsection, we collect several properties of the class of completely monotone functions that are needed for the analysis of the regularity of stationary solutions of (1.1). It is known that such a function K can be characterized in terms of the Laplace transform of Radon measures.

Theorem 3.7

(Hausdorff–Bernstein–Widder Theorem) A function K is completely monotone if and only if K admits the formula

$$\begin{aligned} K(t)=\int _0^\infty \!\!\! e^{-tx} \mu (\mathrm {d}x ), \end{aligned}$$

where \(\mu \) is a positive measure on \([0,\infty )\).

It is convenient to denote

  1. (a)

    \(\mathcal {CM}\), the class of all completely monotone functions; and

  2. (b)

    \(\mathcal {CM}_b\), the class of all \(K\in \mathcal {CM}\) such that the measure \(\mu \) in (3.8) is finite.

Remark 3.8

Notice that if \(K\in \mathcal {CM}_b\), then by setting \(K(0):=\int _0^\infty \mu ( \mathrm {d}x )=\mu ([0,\infty ))\), K extends continuously to \([0,\infty )\). Hence, \(\mathcal {CM}_b=C[0,\infty )\cap \mathcal {CM}\). In view of Assumption 2.4, the class of kernels that we consider is a subset of \(\mathcal {CM}_b\).

We now turn to Fourier transforms of \(\mathcal {CM}_b\) functions.

Lemma 3.9

Suppose that \(K\in \mathcal {CM}_b\) and that K is decreasing to 0 as \(t\rightarrow \infty \). Let \(\mu \) be the representation measure as in (3.8). Then for every \(\omega \ne 0\), it holds that

$$\begin{aligned} \int _0^\infty \!\!\!K(t)e^{-it\omega }\mathrm {d}t =\int _0^\infty \!\!\!\frac{x}{x^2+\omega ^2}\mu ( \mathrm {d}x )-i\int _0^\infty \!\!\!\frac{\omega }{x^2+\omega ^2}\mu ( \mathrm {d}x ). \end{aligned}$$


Proof

We first note that, since \(\mu \) is a finite measure,

$$\begin{aligned} \int _0^\infty \!\!\!\frac{x}{x^2+\omega ^2}\mu ( \mathrm {d}x ) \le \frac{1}{2|\omega |}\int _0^\infty \!\!\!\mu ( \mathrm {d}x )< \infty , \end{aligned}$$

and that

$$\begin{aligned} \int _0^\infty \!\!\!\frac{|\omega |}{x^2+\omega ^2}\mu ( \mathrm {d}x ) \le \frac{1}{|\omega |}\int _0^\infty \!\!\!\mu ( \mathrm {d}x )<\infty . \end{aligned}$$

In addition, K(t) decreasing to 0 as \(t\rightarrow \infty \) implies that

$$\begin{aligned} \mu (\{0\})=\lim _{t\rightarrow \infty }\int _0^\infty \!\!\!e^{-tx}\mu (\mathrm {d}x )=\lim _{t\rightarrow \infty }K(t)=0. \end{aligned}$$

Now by the definition of improper integrals,

$$\begin{aligned} \int _0^\infty \!\!\!K(t)e^{-it\omega }\mathrm {d}t= \lim _{A\rightarrow \infty }\int _0^A\!\!\!K(t)e^{-it\omega }\mathrm {d}t. \end{aligned}$$

Substituting \(K(t)=\int _0^\infty e^{-tx}\mu ( \mathrm {d}x )\) into the right-hand side above, we obtain the following chain of equalities:

$$\begin{aligned} \int _0^A\!\!\!K(t)e^{-it\omega }\mathrm {d}t&= \int _0^A \!\!\!\int _0^\infty \!\!\!e^{-tx}\mu ( \mathrm {d}x ) e^{-it\omega }\mathrm {d}t \\&= \int _0^\infty \!\!\!\int _0^A\!\!\!e^{-(x+i\omega )t}\mathrm {d}t \mu ( \mathrm {d}x )\\&= \int _0^\infty \!\!\frac{1-e^{-(x+i\omega )A}}{x+i\omega }\mu ( \mathrm {d}x )\\&= \int _0^\infty \!\! \frac{\left( 1-e^{-(x+i\omega )A}\right) x}{x^2+\omega ^2}\mu ( \mathrm {d}x )-i\int _0^\infty \!\!\!\frac{\left( 1-e^{-(x+i\omega )A}\right) \omega }{x^2+\omega ^2}\mu ( \mathrm {d}x ). \end{aligned}$$

Considering the first integral term, it is clear that for all \(x\ge 0\) and \(\omega \ne 0\),

$$\begin{aligned} \frac{\left| 1-e^{-(x+i\omega )A}\right| x}{x^2+\omega ^2}\le \frac{2x}{x^2+\omega ^2}, \qquad A>0, \end{aligned}$$

and that

$$\begin{aligned} \frac{\left( 1-e^{-(x+i\omega )A}\right) x}{x^2+\omega ^2}\rightarrow \frac{x}{x^2+\omega ^2}, \qquad A\rightarrow \infty . \end{aligned}$$

By the Dominated Convergence Theorem, we obtain

$$\begin{aligned} \int _0^\infty \!\! \frac{\left( 1-e^{-(x+i\omega )A}\right) x}{x^2+\omega ^2}\mu ( \mathrm {d}x )\rightarrow \int _0^\infty \!\! \frac{x}{x^2+\omega ^2}\mu ( \mathrm {d}x ),\qquad A\rightarrow \infty . \end{aligned}$$

With regard to the second term, we note that \(\mu (\{0\})=0\) as reasoned above. It follows that for \(\mu -\)almost every \(x\in [0,\infty )\), we have

$$\begin{aligned} \frac{\left( 1-e^{-(x+i\omega )A}\right) \omega }{x^2+\omega ^2}\rightarrow \frac{\omega }{x^2+\omega ^2}. \end{aligned}$$

Also, for all \(x\ge 0\) and \(\omega \ne 0\),

$$\begin{aligned} \frac{\left| (1-e^{-(x+i\omega )A})\omega \right| }{x^2+\omega ^2}\le \frac{2|\omega |}{x^2+\omega ^2}, \end{aligned}$$

which implies

$$\begin{aligned} \int _0^\infty \!\!\!\frac{\left( 1-e^{-(x+i\omega )A}\right) \omega }{x^2+\omega ^2}\mu ( \mathrm {d}x )\rightarrow \int _0^\infty \!\!\!\frac{\omega }{x^2+\omega ^2}\mu ( \mathrm {d}x ),\qquad A\rightarrow \infty , \end{aligned}$$

by the Dominated Convergence Theorem. The proof is thus complete. \(\square \)
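As a sanity check of Lemma 3.9, consider the simplest \(\mathcal {CM}_b\) kernel \(K(t)=e^{-at}\), whose representing measure is \(\mu =\delta _a\); the lemma then predicts \(\int _0^\infty e^{-at}e^{-it\omega }\mathrm {d}t=a/(a^2+\omega ^2)-i\omega /(a^2+\omega ^2)=1/(a+i\omega )\). A numerical sketch (the quadrature parameters are chosen arbitrarily):

```python
import cmath

# Sanity check of Lemma 3.9 for K(t) = e^{-a t}, whose representing measure
# is mu = delta_a.  The lemma predicts
#   int_0^inf K(t) e^{-i t w} dt = a/(a^2+w^2) - i w/(a^2+w^2) = 1/(a + i w).
a = 1.7
def half_line_ft(w, T=40.0, n=200_000):
    # midpoint quadrature of the half-line Fourier integral, truncated at T
    h = T / n
    return sum(cmath.exp(-(a + 1j * w) * (j + 0.5) * h) for j in range(n)) * h

for w in [0.5, 1.0, 4.0]:
    predicted = a / (a ** 2 + w ** 2) - 1j * w / (a ** 2 + w ** 2)
    assert abs(half_line_ft(w) - predicted) < 1e-6
print("Lemma 3.9 verified for K(t) = exp(-a t)")
```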

We finish this subsection with the following important observations on the class \(\mathcal {CM}_b\); these will be the main ingredients of our analysis of the regularity of the solutions of (1.1) in Sect. 4.

Lemma 3.10

Suppose that \(K\in \mathcal {CM}_b\) and that \(K(t)\downarrow 0\) as \(t\rightarrow \infty \). Define for \(\omega \ne 0\),

$$\begin{aligned} \mathcal {K}_{\cos }(\omega ):=\int _0^\infty \!\!\!K(t)\cos (t\omega )\mathrm {d}t,\quad \text {and} \quad \mathcal {K}_{\sin }(\omega ):=\int _0^\infty \!\!\!K(t)\sin (t\omega )\mathrm {d}t. \end{aligned}$$

Then, the following properties hold.

  1. (a)

    \(\mathcal {K}_{\cos }(\omega )\) is decreasing, \(\omega \mathcal {K}_{\cos }(\omega )\) is bounded and \(\omega ^2\mathcal {K}_{\cos }(\omega )\) is increasing on \(\omega \in [0,\infty )\).

  2. (b)

    \(\omega \mathcal {K}_{\sin }(\omega )\) is increasing on \(\omega \in [0,\infty )\) and \(\lim _{\omega \rightarrow \infty }\omega \mathcal {K}_{\sin }(\omega )=K(0)\).

  3. (c)

    The ratio \(\frac{\mathcal {K}_{\sin }(\omega )}{\omega }\) is decreasing to 0 as \(\omega \rightarrow \infty \). Consequently, for k sufficiently large, the equation

    $$\begin{aligned} 1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }=0, \end{aligned}$$

    has a unique solution \(\omega _k\) on \((0,\infty )\) where \(\alpha _k\) is as in Assumption 2.1. Moreover, \(\lim _{k\rightarrow \infty }\frac{\omega _k^2}{\alpha _k}=K(0)\).

  4. (d)

    There exists a constant \(c>0\) such that, for all sufficiently large k, we have \(\omega _k>1\) and, for any \(q \in (0,1)\),

    $$\begin{aligned} \left| \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\right| \ge \frac{c}{\omega _k}\left| \omega -\omega _k\right| , \end{aligned}$$

    for all \(\omega \in [\omega _k-\omega _k^q,\omega _k+\omega _k^q]\).
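To illustrate parts (b)–(c), take again the hypothetical kernel \(K(t)=e^{-t}\), for which \(\mu =\delta _1\), \(K(0)=1\), and \(\mathcal {K}_{\sin }(\omega )/\omega =1/(1+\omega ^2)\). The root of \(1-\alpha _k\mathcal {K}_{\sin }(\omega )/\omega =0\) is then \(\omega _k=\sqrt{\alpha _k-1}\) whenever \(\alpha _k>1\), and \(\omega _k^2/\alpha _k=1-1/\alpha _k\rightarrow K(0)=1\), as claimed:

```python
import math

# Illustration of Lemma 3.10(c) for K(t) = e^{-t} (mu = delta_1, K(0) = 1):
# here K_sin(w)/w = 1/(1+w^2), so the root of 1 - alpha/(1+w^2) = 0 is
# w = sqrt(alpha - 1) for alpha > 1, and w^2/alpha -> K(0) = 1.
def omega_k(alpha):
    # bisection for the unique positive root of w -> 1 - alpha/(1+w^2)
    lo, hi = 0.0, alpha          # the function is negative at 0, positive at alpha
    for _ in range(200):
        mid = (lo + hi) / 2
        if 1 - alpha / (1 + mid ** 2) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for k in [2, 10, 100, 10_000]:
    alpha_k = float(k ** 2)      # e.g. Laplacian eigenvalues, alpha_k = k^2
    assert abs(omega_k(alpha_k) - math.sqrt(alpha_k - 1)) < 1e-8
assert abs(omega_k(1e8) ** 2 / 1e8 - 1.0) < 1e-6   # w_k^2 / alpha_k -> K(0)
print("Lemma 3.10(c) verified for K(t) = exp(-t)")
```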

Remark 3.11

We will see later, in the proof of Lemma 3.10 (d), that the assumption \(K(0)<\infty \) is crucial for our analysis, which explains why we restrict to memory kernels belonging to \(\mathcal {CM}_b\) rather than to \(\mathcal {CM}\setminus \mathcal {CM}_b\), e.g., \(|t|^{-\alpha }\), \(\alpha >0\).

Proof of Lemma 3.10

  1. (a)

    In view of Lemma 3.9, the first assertion is evident, since each integrand below is decreasing in \(\omega \):

    $$\begin{aligned} \mathcal {K}_{\cos }(\omega )=\int _0^\infty \frac{x}{x^2+\omega ^2}\mu ( \mathrm {d}x ). \end{aligned}$$

    To see the second claim, we employ Young’s inequality to find that

    $$\begin{aligned} \omega \mathcal {K}_{\cos }(\omega )=\int _0^\infty \!\!\!\frac{x\omega }{x^2+\omega ^2}\mu (\mathrm {d}x )\le \frac{1}{2}\int _0^\infty \!\!\!\mu (\mathrm {d}x )<\infty . \end{aligned}$$


    Finally, concerning the third claim, we write

    $$\begin{aligned} \omega ^2\mathcal {K}_{\cos }(\omega )=\int _0^\infty \!\!\!\frac{x\omega ^2}{x^2+\omega ^2}\mu (\mathrm {d}x ), \end{aligned}$$

    which is clearly increasing on \(\omega \in (0,\infty )\).

  2. (b)

    We note that

    $$\begin{aligned} \omega \mathcal {K}_{\sin }(\omega ) = \int _0^\infty \!\!\!\frac{\omega ^2}{x^2+\omega ^2}\mu ( \mathrm {d}x ). \end{aligned}$$

    Since each integrand \(\frac{\omega ^2}{x^2+\omega ^2}\) is increasing in \(\omega \), so is \(\omega \mathcal {K}_{\sin }(\omega )\). The Monotone Convergence Theorem then implies that

    $$\begin{aligned} \lim _{\omega \rightarrow \infty }\omega \mathcal {K}_{\sin }(\omega )=\int _0^\infty \!\!\!\mu ( \mathrm {d}x )=K(0). \end{aligned}$$
  3. (c)

    The first assertion is evident since \(\mathcal {K}_{\sin }(\omega )/\omega \) admits the formula

    $$\begin{aligned} \frac{\mathcal {K}_{\sin }(\omega )}{\omega }=\int _0^\infty \frac{1}{x^2+\omega ^2}\mu ( \mathrm {d}x ). \end{aligned}$$

    By the Monotone Convergence Theorem, we see that \(\mathcal {K}_{\sin }(\omega )/\omega \downarrow 0\) as \(\omega \rightarrow \infty \). Now to verify that \(1-\alpha _k\mathcal {K}_{\sin }(\omega )/\omega =0\) must have a unique solution \(\omega _k\in (0,\infty )\) for k sufficiently large, we have the following observations

    $$\begin{aligned} \lim _{\omega \rightarrow 0^+}1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }=1-\alpha _k\int _0^\infty \frac{1}{x^2}\mu (\mathrm {d}x),\,\,\text {and}\,\, \lim _{\omega \rightarrow \infty }1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }=1, \end{aligned}$$

    where the first limit may be positive for small k when \(\int _0^\infty \frac{1}{x^2}\mu (\mathrm {d}x)\) is finite. If \(\int _0^\infty \frac{1}{x^2}\mu (\mathrm {d}x)\) diverges, then it is clear that

    $$\begin{aligned} \lim _{\omega \rightarrow 0^+}1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }=1-\alpha _k\int _0^\infty \frac{1}{x^2}\mu (\mathrm {d}x)=-\infty . \end{aligned}$$

    Otherwise, since \(\{\alpha _k\}_{k\ge 1}\uparrow \infty \) by Assumption 2.1, there must exist an index \(k^*\) sufficiently large such that for all \(k\ge k^*\), the above left-hand side limit is negative. Together with the monotonicity established in the first assertion, we infer the existence and uniqueness of \(\omega _k\in (0,\infty )\) solving \(1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }=0\). Finally, we note that

    $$\begin{aligned} \frac{\omega _k^2}{\alpha _k}=\omega _k\mathcal {K}_{\sin }(\omega _k)\rightarrow K(0), \qquad k\rightarrow \infty , \end{aligned}$$

    by virtue of part (b).

  4. (d)

    Since \(\{\alpha _k\}_{k\ge 1}\uparrow \infty \) and \(\omega _k^2/\alpha _k\rightarrow K(0)\) from part (c), it is clear that for k large enough, \(\omega _k>1\). To prove (3.12), we assume that \(\omega \ne \omega _k\) (it is trivial when \(\omega =\omega _k\)). Using the identity \(\omega _k=\alpha _k\mathcal {K}_{\sin }(\omega _k)\), we recast the left-hand side of (3.12) as

    $$\begin{aligned} \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1&= \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-\alpha _k\frac{\mathcal {K}_{\sin }(\omega _k)}{\omega _k}\\&=\alpha _k\int _0^\infty \!\!\!\Big (\frac{1}{x^2+\omega ^2}-\frac{1}{x^2+\omega _k^2}\Big )\mu ( \mathrm {d}x )\\&=\alpha _k\int _0^\infty \!\!\!\frac{(\omega _k-\omega )(\omega _k+\omega )}{(x^2+\omega ^2)(x^2+\omega _k^2)}\mu ( \mathrm {d}x ). \end{aligned}$$

    It follows that (3.12) is equivalent to

    $$\begin{aligned} \alpha _k\int _0^\infty \!\!\!\frac{|\omega _k-\omega |(\omega _k+\omega )}{(x^2+\omega ^2)(x^2+\omega _k^2)}\mu ( \mathrm {d}x )\ge \frac{c}{\omega _k}|\omega _k-\omega |, \end{aligned}$$

    which is the same as

    $$\begin{aligned} \int _0^\infty \!\!\!\frac{\omega _k(\omega _k+\omega )}{(x^2+\omega ^2)(x^2+\omega _k^2)}\mu ( \mathrm {d}x )\ge \frac{c}{\alpha _k}=\frac{c\mathcal {K}_{\sin }(\omega _k)}{\omega _k}=c\int _0^\infty \!\!\!\frac{1}{x^2+\omega _k^2}\mu ( \mathrm {d}x ). \end{aligned}$$

    It therefore suffices to show that

    $$\begin{aligned} \int _0^\infty \!\!\!\frac{\omega _k(\omega _k+\omega )}{(x^2+\omega ^2)(x^2+\omega _k^2)}\mu (\mathrm {d}x )\ge c\int _0^\infty \!\!\!\frac{1}{x^2+\omega _k^2}\mu (\mathrm {d}x ). \end{aligned}$$

    For \(\omega \in [\omega _k-\omega _k^q,\omega _k+\omega _k^q]\), since \(\omega \le \omega _k+\omega _k^q\le 2\omega _k\), it is clear that \(x^2+\omega ^2\le 4(x^2+\omega _k^2)\). We then find a lower estimate as follows:

    $$\begin{aligned} \int _0^\infty \frac{\omega _k(\omega _k+\omega )}{(x^2+\omega ^2)(x^2+\omega _k^2)}\mu (\mathrm {d}x )&\ge \frac{1}{4}\int _0^\infty \!\!\!\frac{\omega _k^2}{(x^2+\omega _k^2)^2}\mu ( \mathrm {d}x )\\&=\frac{\omega _k^2}{4K(0)}\int _0^\infty \!\!\!\mu ( \mathrm {d}x )\int _0^\infty \!\!\!\frac{1}{(x^2+\omega _k^2)^2}\mu ( \mathrm {d}x ). \end{aligned}$$

    We invoke Hölder’s inequality, together with the fact that \(\omega _k\ge 1\) for k sufficiently large, to find

    $$\begin{aligned} \begin{aligned} \frac{\omega _k^2}{4K(0)}\int _0^\infty \!\!\!\mu ( \mathrm {d}x )\int _0^\infty \!\!\!\frac{1}{(x^2+\omega _k^2)^2}\mu ( \mathrm {d}x )&\ge \frac{\omega _k^2}{4K(0)}\Big (\int _0^\infty \!\!\!\frac{1}{x^2+\omega _k^2}\mu ( \mathrm {d}x )\Big )^2\\&=\frac{\omega _k\mathcal {K}_{\sin }(\omega _k)}{4K(0)}\int _0^\infty \!\!\!\frac{1}{x^2+\omega _k^2}\mu (\mathrm {d}x)\\&\ge \frac{\mathcal {K}_{\sin }(1)}{4K(0)}\int _0^\infty \!\!\!\frac{1}{x^2+\omega _k^2}\mu (\mathrm {d}x). \end{aligned} \end{aligned}$$

    where in the last implication, we have employed the fact that \(\omega \mathcal {K}_{\sin }(\omega )\) is increasing, by virtue of part (b), together with \(\omega _k\ge 1\). Finally, setting \(c=\mathcal {K}_{\sin }(1)/(4K(0))\) and collecting everything, we obtain (3.12), which completes the proof.

\(\square \)
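To make Lemma 3.10 concrete, consider the illustrative kernel \(K(t)=e^{-t}\), i.e., \(\mu =\delta _1\) and \(K(0)=1\), for which \(\mathcal {K}_{\cos }(\omega )=1/(1+\omega ^2)\) and \(\mathcal {K}_{\sin }(\omega )=\omega /(1+\omega ^2)\) in closed form; Eq. (3.11) then has the explicit root \(\omega _k=\sqrt{\alpha _k-1}\) whenever \(\alpha _k>1\). A minimal numerical sketch (Python, assuming NumPy; the values of \(\alpha _k\) are arbitrary) checks parts (a)–(c):

```python
import numpy as np

# Illustrative kernel K(t) = exp(-t), i.e. mu = delta_1, so that
#   K_cos(w) = 1/(1+w^2),  K_sin(w) = w/(1+w^2),  K(0) = 1.
K_cos = lambda w: 1.0 / (1.0 + w**2)
K_sin = lambda w: w / (1.0 + w**2)

w = np.linspace(0.0, 50.0, 5001)

# (a) w*K_cos(w) is bounded by 1/2 (Young: x*w <= (x^2+w^2)/2),
#     and w^2*K_cos(w) is increasing.
assert np.all(w * K_cos(w) <= 0.5 + 1e-12)
assert np.all(np.diff(w**2 * K_cos(w)) >= 0)

# (b) w*K_sin(w) is increasing with limit K(0) = 1.
assert np.all(np.diff(w * K_sin(w)) >= 0)
assert abs(1e6 * K_sin(1e6) - 1.0) < 1e-6

# (c) for alpha_k > 1, the root of 1 - alpha_k*K_sin(w)/w is
#     w_k = sqrt(alpha_k - 1), and w_k^2/alpha_k -> K(0) = 1.
for alpha in [10.0, 100.0, 10000.0]:
    wk = np.sqrt(alpha - 1.0)
    assert abs(1.0 - alpha * K_sin(wk) / wk) < 1e-12
print("Lemma 3.10 (a)-(c) verified for K(t) = exp(-t)")
```

For this kernel, \(\omega _k^2/\alpha _k=1-1/\alpha _k\rightarrow 1=K(0)\), in agreement with part (c).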

4 Proof of Main Results

4.1 Proof of Theorem 2.8

We begin this section by recalling some results on the well-posedness of the GLE (1.4). The proof of these results can be found in [29].

Proposition 4.1

Suppose that the memory kernel K satisfies Assumption 2.4. For each k, \(V_k\) is a weak solution for (1.4) in the sense of Definition 3.6 if and only if the spectral measure \(\nu _k\) satisfies \(\nu _k(\mathrm {d}\omega )=\rho _k(\omega )\mathrm {d}\omega \) where \(\rho _k\) is given by

$$\begin{aligned} \rho _k(\omega ) :=\frac{\lambda _k^2\widehat{K}(\omega )}{2\pi \left| i\omega +\alpha _k\widehat{K^+}(\omega )\right| ^2}. \end{aligned}$$


Proof

See [29, Theorem 4.3]. \(\square \)

The function \(\rho _k\) is called the spectral density of \(V_k\). Having obtained weak solutions of (1.4), in the following proposition we assert that \(u_k(t)\) is a stationary Gaussian process.

Proposition 4.2

Under the hypotheses of Proposition 4.1, let \(V_k\) be the weak solution of (1.4) and \(\rho _k\) be the corresponding spectral density as in (4.1). Then,

  1. (a)

    \(\rho _k\in L^1(\mathbb {R})\);

  2. (b)

    \(\delta _t \) belongs to \(\text {Dom}(V_k)\) and the process \(u_k(t):=\langle V_k,\delta _t\rangle \) is a real-valued stationary Gaussian process with zero mean in the sense of Definition 3.1, which is a.s. continuous; and

  3. (c)

    the autocovariance of \(u_k(t)\) admits the representation

    $$\begin{aligned} \mathbb {E}[u_k(t)u_k(s)] = \int _\mathbb {R}e^{i(t-s)\omega }\rho _k(\omega )\mathrm {d}\omega . \end{aligned}$$


Proof

See [29, Theorem 5.4]. \(\square \)

With regard to the existence of stationary solutions for (1.1), we remark that Proposition 4.2 only guarantees the second condition of Definition 2.2. In order to fulfill the first requirement, using the decomposition \(u(t,\mathbf {x})=\sum _{k\ge 1}u_k(t)e_k(\mathbf {x})\) and the fact that the processes \(u_k(t)\) are all independent, we observe that

$$\begin{aligned} \mathbb {E}\Vert u(t,\cdot )\Vert ^2_H=\sum _{k\ge 1}\mathbb {E}|u_k(t)|^2. \end{aligned}$$

It is thus necessary to obtain useful bounds on the second moment of \(u_k\), which by virtue of Proposition 4.2 (c), cf. (4.2), is equivalent to calculating \(\int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega \). To be more precise, we have the following lemma, which will be employed to prove Theorem 2.8.

Lemma 4.3

Suppose that the memory kernel K satisfies Assumption 2.4. Let \(\rho _k\) be the spectral density as in (4.1). Then,

$$\begin{aligned} \int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega =\frac{\lambda _k^2}{\alpha _k}. \end{aligned}$$

We note that Lemma 4.3 may be considered an extension of previous results on the second moment of the solutions of the 1D GLE, e.g., [22, Equation (2.7)] and [23, Proposition 2.1], in which the kernels considered have a special finite-sum-of-exponentials form. By employing a contour integral method similar to the one used in [22, 26], we are able to generalize these results to any kernel satisfying Assumption 2.4. The proof of Lemma 4.3 will be presented in detail at the end of this section. With Lemma 4.3 in hand, we are ready to assert the existence of weak stationary solutions of (1.1). Since the argument is short, we include it here for the sake of completeness.
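Lemma 4.3 can also be sanity-checked numerically. For the illustrative kernel \(K(t)=e^{-|t|}\), the transforms are explicit, \(\mathcal {K}_{\cos }(\omega )=1/(1+\omega ^2)\) and \(\mathcal {K}_{\sin }(\omega )=\omega /(1+\omega ^2)\), and a direct quadrature of the spectral density (4.6) reproduces \(\lambda _k^2/\alpha _k\). A minimal sketch (Python, assuming NumPy; the pairs \((\alpha ,\lambda )\) are arbitrary test values):

```python
import numpy as np

# Illustrative kernel K(t) = exp(-|t|):
#   K_cos(w) = 1/(1+w^2),  K_sin(w) = w/(1+w^2).
K_cos = lambda w: 1.0 / (1.0 + w**2)
K_sin = lambda w: w / (1.0 + w**2)

def rho(w, alpha, lam):
    # Spectral density (4.6):
    #   rho = lam^2 * K_cos / (pi*(alpha^2*K_cos^2 + (w - alpha*K_sin)^2)).
    denom = alpha**2 * K_cos(w) ** 2 + (w - alpha * K_sin(w)) ** 2
    return lam**2 * K_cos(w) / (np.pi * denom)

# Uniform trapezoidal rule; rho decays like w^{-4}, so the tails
# beyond |w| = 200 are negligible.
w, dw = np.linspace(-200.0, 200.0, 2_000_001, retstep=True)
for alpha, lam in [(2.0, 1.0), (10.0, 0.5), (50.0, 2.0)]:
    y = rho(w, alpha, lam)
    integral = dw * (y.sum() - 0.5 * (y[0] + y[-1]))
    # Lemma 4.3: the integral of rho over R equals lam^2/alpha.
    assert abs(integral - lam**2 / alpha) < 1e-3
print("Lemma 4.3 verified numerically for K(t) = exp(-|t|)")
```

For this kernel the identity can also be confirmed by hand, since \(\rho _k(\omega )=\lambda _k^2/\big (\pi ((\alpha _k-\omega ^2)^2+\omega ^2)\big )\).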

Proof of Theorem 2.8

Let \(u_k\) be the stationary Gaussian process defined in Proposition 4.2. Verifying the existence of \(u(t,\mathbf {x})=\sum _{k\ge 1}u_k(t)e_k(\mathbf {x})\) essentially amounts to checking the first condition of Definition 2.2. To this end, in view of (4.2) together with (4.3), we have

$$\begin{aligned} \mathbb {E}\Vert u(t,\cdot )\Vert _H^2=\sum _{k\ge 1}\mathbb {E}|u_k(t)|^2=\sum _{k\ge 1}\int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega =\sum _{k\ge 1}\frac{\lambda _k^2}{\alpha _k}. \end{aligned}$$

This implies that Definition 2.2 (a) (\(\mathbb {E}\Vert u(t,\cdot )\Vert _H^2\) being finite) is equivalent to Assumption 2.5 being satisfied. The proof is thus complete. \(\square \)

We now discuss the proof of Lemma 4.3. As mentioned earlier, we will follow closely the strategy in [17] and make use of contour integrals on the complex plane \(\mathbb {C}\) to evaluate \(\int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega \). We note that in previous results [22, 23] for memory kernels having a sum-of-exponentials form, the typical approach [22, 26] is to employ functions similar to \(f_k(z)\) as in (4.4) below and their contour integrals on the upper half complex plane. The arguments for these results rely on a careful analysis of the locations of the poles of the functions therein. In our method, instead of working with the upper half complex plane, we shift the analysis to the lower half plane. The novelty in this approach is that the function we study, Eq. (4.4), is actually analytic there, which allows us to include more general kernels, such as those described in Assumption 2.4. To be precise, we have the following lemma, which will be employed to prove Lemma 4.3.

Lemma 4.4

Suppose that the memory kernel K satisfies Assumption 2.4. Let \(f_k(z)\) be a complex-valued function given by

$$\begin{aligned} f_k(z)=\frac{1}{i\big (z-\alpha _k\mathcal {K}_{\sin }(z)\big )+\alpha _k\mathcal {K}_{\cos }(z)}, \end{aligned}$$

where \(\alpha _k\) is as defined in Assumption 2.1. Then, \(f_k(z)\) is analytic on the lower half complex plane \(\mathbb {C}^-\setminus \{0\}\) where

$$\begin{aligned} \mathbb {C}^-=\{z\in \mathbb {C}:\text {Im} (z)\le 0\}. \end{aligned}$$


Proof of Lemma 4.4

First of all, in view of (3.9) and (3.10), we recast \(f_k(z)\) as

$$\begin{aligned} f_k(z)&=\frac{1}{\alpha _k (\mathcal {K}_{\cos }(z)-i\mathcal {K}_{\sin }(z))+iz}=\frac{1}{\alpha _k \int _0^\infty \frac{1}{iz+x}\mu (\mathrm {d}x)+iz}. \end{aligned}$$

We now proceed to prove that \((f_k(z))^{-1}\) given by

$$\begin{aligned} (f_k(z))^{-1}=\alpha _k \int _0^\infty \!\!\!\frac{1}{iz+x}\mu (\mathrm {d}x)+iz, \end{aligned}$$

is analytic and does not admit any root in \(\mathbb {C}^-\setminus \{0\}\), which in turn implies that \(f_k(z)\) is analytic in \(\mathbb {C}^-\setminus \{0\}\).

To verify the analyticity of \((f_k(z))^{-1}\), it suffices to show \(\int _0^\infty \frac{1}{iz+x}\mu (\mathrm {d}x)\) is analytic in \(\mathbb {C}^-\setminus \{0\}\). To this end, for any \(z_0\in \mathbb {C}^-\setminus \{0\}\), consider \(z\in \mathbb {C}\) such that \(|z-z_0|<|z_0|/2\) and observe that

$$\begin{aligned} \int _0^\infty \!\!\!\frac{1}{iz+x}\mu (\mathrm {d}x)&=\int _0^\infty \!\!\!\frac{1}{(iz_0+x)\big (\frac{iz-iz_0}{iz_0+x}+1\big )}\mu (\mathrm {d}x)\\&=\sum _{n\ge 0}\int _0^\infty \!\!\!\frac{(-i)^n}{(iz_0+x)^{n+1}}\mu (\mathrm {d}x)(z-z_0)^n. \end{aligned}$$

We note that the last equality above is, so far, only formal. We now claim that for any z such that \(|z-z_0|<|z_0|/2 \), the series on the right-hand side actually converges absolutely. Indeed, writing \(z_0=u-iv\in \mathbb {C}^-\setminus \{0\}\) with \(u\in \mathbb {R}\), \(v\ge 0\) and \(u^2+v^2\ne 0\), we have the estimate

$$\begin{aligned} \sum _{n\ge 0}\int _0^\infty \!\!\!\frac{1}{|iz_0+x|^{n+1}}\mu (\mathrm {d}x)|z-z_0|^n&\le \sum _{n\ge 0}\int _0^\infty \!\!\!\frac{|z_0|^n}{2^n|iz_0+x|^{n+1}}\mu (\mathrm {d}x)\\&=\sum _{n\ge 0}\int _0^\infty \!\!\!\frac{(u^2+v^2)^{n/2}}{2^n|u^2+(x+v)^2|^{(n+1)/2}}\mu (\mathrm {d}x)\\&\le \sum _{n\ge 0}\int _0^\infty \!\!\!\frac{1}{2^n|u^2+v^2|^{1/2}}\mu (\mathrm {d}x)\\&=\frac{2}{|z_0|}\mu ([0,\infty ))<\infty , \end{aligned}$$

where the last implication follows from the fact that \(\mu \) is a finite measure as \(K\in \mathcal {CM}_b\), cf. Remark 3.8. This proves the analyticity of \((f_k(z))^{-1}\).

To verify that \((f_k(z))^{-1}\) does not have any roots in \(\mathbb {C}^-\setminus \{0\}\), similarly to the above estimates, we write \(z=u-iv\) where \(u\in \mathbb {R}\), \(v\ge 0\), \(u^2+v^2\ne 0\), and observe, after a routine calculation, that

$$\begin{aligned} \text {Re}\big ((f_k(z))^{-1}\big )=\alpha _k \int _0^\infty \!\!\!\frac{x+v}{(x+v)^2+u^2}\mu (\mathrm {d}x)+v>0, \end{aligned}$$

since \(\mu \) is not null on \([0,\infty )\). The proof is thus complete. \(\square \)
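The sign condition above can be spot-checked numerically. For the single-atom measure \(\mu =\delta _1\) (i.e., \(K(t)=e^{-t}\)), the reciprocal reduces to \((f_k(z))^{-1}=\alpha _k/(iz+1)+iz\), and its real part should be strictly positive on \(\mathbb {C}^-\setminus \{0\}\). A minimal Python sketch (assuming NumPy; \(\alpha _k=5\) is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 5.0

# mu = delta_1 (the kernel K(t) = exp(-t)), so f_k(z)^{-1} = alpha/(iz+1) + iz.
f_inv = lambda z: alpha / (1j * z + 1.0) + 1j * z

# Sample points z = u - i*v in the closed lower half plane, away from 0.
u = rng.uniform(-50.0, 50.0, 10_000)
v = rng.uniform(0.0, 50.0, 10_000)
z = u - 1j * v
z = z[np.abs(z) > 1e-6]

# Lemma 4.4: Re(f_k(z)^{-1}) = alpha*(1+v)/((1+v)^2+u^2) + v > 0,
# so f_k has no poles on the sampled region.
re = f_inv(z).real
assert np.all(re > 0)
print("min Re(f_k^{-1}) over sample:", re.min())
```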

We now give the proof of Lemma 4.3, which is a slight reworking of the proof of [17, Lemma 4.4] tailored to our setting. See also [26, Theorem 4.2].

Proof of Lemma 4.3

We first note that the spectral density \(\rho _k\) in (4.1) can be written as

$$\begin{aligned} \rho _k(\omega )= \frac{1}{\pi }\cdot \frac{\lambda _k^2\mathcal {K}_{\cos }(\omega )}{\alpha ^2_k\mathcal {K}_{\cos }(\omega )^2+\big (\omega -\alpha _k\mathcal {K}_{\sin }(\omega )\big )^2}, \end{aligned}$$

since K is assumed to be even, and thus, \(\widehat{K}(\omega )=2\mathcal {K}_{\cos }(\omega )\). It follows that

$$\begin{aligned} \int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega =\frac{1}{\pi }\int _0^\infty \!\!\!\frac{2\lambda _k^2\mathcal {K}_{\cos }(\omega )}{\alpha ^2_k\mathcal {K}_{\cos }(\omega )^2+\big (\omega -\alpha _k\mathcal {K}_{\sin }(\omega )\big )^2}\mathrm {d}\omega . \end{aligned}$$

We aim to make use of contour integrals of \(f_k(z)\) as in (4.4) to calculate the above integral. For \(R>0\), we introduce the outer and inner half circles in \(\mathbb {C}^-\setminus \{0\}\) given by

$$\begin{aligned} C_R=\{Re^{i\theta }:-\pi \le \theta \le 0\} \quad \text {and}\quad C_{1/R}= \{e^{i\theta }/R:-\pi \le \theta \le 0\}. \end{aligned}$$

Also, let C(R) denote the closed curve in \(\mathbb {C}^-\setminus \{0\}\) oriented clockwise as follows:

$$\begin{aligned} C(R)=[-R,-1/R]\cup C_{1/R}\cup [1/R,R]\cup C_R. \end{aligned}$$

Recall \(f_k(z)\) from (4.4). In light of Lemma 4.4, \(f_k\) is analytic in \(\mathbb {C}^-\setminus \{0\}\), implying that for all \(R>0\)

$$\begin{aligned} \int _{C(R)}\!\!\!f_k(z)\mathrm {d}z=0. \end{aligned}$$

On the other hand, we can decompose the above contour integral as follows:

$$\begin{aligned} \int _{C(R)}\!\!\!f_k(z)\mathrm {d}z&=\Big \{\int _{-R}^{-1/R}\!\!\!+\int _{C_{1/R}}\!\!\!+\int _{1/R}^R+\int _{C_R}\Big \}f_k(z)\mathrm {d}z\\ {}&=I_1(R)+I_2(R)+I_3(R)+I_4(R). \end{aligned}$$

In view of the expression for \(f_k\) in (4.4), we have

$$\begin{aligned} I_3(R) =\int _{1/R}^R\frac{\mathrm {d}\omega }{\alpha _k\mathcal {K}_{\cos }(\omega ) + i(\omega -\alpha _k \mathcal {K}_{\sin }(\omega ))}. \end{aligned}$$

Concerning \(I_1(R)\), we recall that \(\mathcal {K}_{\cos }(\omega )\) is even whereas \(\mathcal {K}_{\sin }(\omega )\) is odd. Thus, by a change of variable \(z:=-\omega \), we obtain

$$\begin{aligned} I_1(R)&=\int _{1/R}^{R} \frac{\mathrm {d}\omega }{\alpha _k \mathcal {K}_{\cos }(\omega ) - i(\omega -\alpha _k \mathcal {K}_{\sin }(\omega ))}. \end{aligned}$$

It follows immediately that

$$\begin{aligned} I_1(R) +I_3(R) =\int _{1/R}^R\frac{2\alpha _k\mathcal {K}_{\cos }(\omega )}{\alpha _k^2 \mathcal {K}_{\cos }(\omega )^2 +\big (\omega -\alpha _k \mathcal {K}_{\sin }(\omega )\big )^2}\mathrm {d}\omega . \end{aligned}$$

Since \(\mathcal {K}_{\cos }(\omega )>0\), cf. (3.9), by the Monotone Convergence Theorem, we obtain

$$\begin{aligned} I_1(R) +I_3(R)\rightarrow \pi \frac{\alpha _k}{\lambda _k^2}\int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega \quad \text {as}\quad R\rightarrow \infty , \end{aligned}$$

where \(\rho _k\) is as in (4.6).

Concerning \(I_2(R)\) on the inner half circle \(C_{1/R}\), we aim to show that its limit is zero as R tends to infinity. Indeed, recall the form of \(f_k(z)\) given in (4.5). For \(z=u-iv\in \mathbb {C}^-\setminus \{0\}\) such that |z| is small, namely

$$\begin{aligned} |z|<\min \Big \{1,\frac{\alpha _k}{8\sqrt{2}}\int _0^\infty \frac{1}{x+1}\mu (\mathrm {d}x)\Big \}, \end{aligned}$$

observe that

$$\begin{aligned} |f_k(z)^{-1}|&=\Big |\alpha _k\int _0^\infty \!\!\!\frac{x+v}{(x+v)^2+u^2}\mu (\mathrm {d}x)+v+i\Big (u-\alpha _k\int _0^\infty \!\!\!\frac{u}{(x+v)^2+u^2}\mu (\mathrm {d}x)\Big )\Big |\\&\ge \alpha _k\Big |\int _0^\infty \!\!\!\frac{x+v}{(x+v)^2+u^2}\mu (\mathrm {d}x)-i\int _0^\infty \!\!\!\frac{u}{(x+v)^2+u^2}\mu (\mathrm {d}x)\Big |-|v+iu|\\&\ge \frac{\alpha _k}{\sqrt{2}}\int _0^\infty \!\!\!\frac{x+v+|u|}{(x+v)^2+u^2}\mu (\mathrm {d}x)-|z|. \end{aligned}$$

To obtain the last inequality, we employed the lower bound \(\sqrt{2}|a-ib|\ge |a|+|b|\) for \(a,b\in \mathbb {R}\). To further bound \(f_k(z)^{-1}\) from below, we use the elementary inequality for \(x, v\ge 0\) and \(|v|, |u|\le 1\)

$$\begin{aligned} \frac{x+v+|u|}{(x+v)^2+u^2}\ge \frac{1}{2(x+1)}, \end{aligned}$$

to estimate

$$\begin{aligned} \frac{\alpha _k}{\sqrt{2}}\int _0^\infty \!\!\!\frac{x+v+|u|}{(x+v)^2+u^2}\mu (\mathrm {d}x)-|z|&\ge \frac{\alpha _k}{2\sqrt{2}}\int _0^\infty \!\!\!\frac{1}{x+1}\mu (\mathrm {d}x)-|z|\\&\ge \frac{\alpha _k}{4\sqrt{2}}\int _0^\infty \!\!\!\frac{1}{x+1}\mu (\mathrm {d}x)>0, \end{aligned}$$

since \(\mu \) is not null on \([0,\infty )\). As a consequence, we obtain

$$\begin{aligned} |f_k(z)^{-1}|\ge \frac{\alpha _k}{4\sqrt{2}}\int _0^\infty \!\!\!\frac{1}{x+1}\mu (\mathrm {d}x). \end{aligned}$$

Thus, by making the change of variable \(z:=R^{-1}e^{i\theta }\) for all R sufficiently large, we have

$$\begin{aligned} |I_2(R)|&=\Big |\int _{-\pi }^0\frac{R^{-1}e^{i\theta }i\mathrm {d}\theta }{\alpha _k \mathcal {K}_{\cos }(R^{-1} e^{i\theta }) + i(R^{-1} e^{i\theta }-\alpha _k \mathcal {K}_{\sin }(R^{-1} e^{i\theta }))}\Big |\\&\le \frac{4\sqrt{2}R^{-1}}{\alpha _k\int _0^\infty \frac{1}{x+1}\mu (\mathrm {d}x)} \int _{-\pi }^0 \mathrm {d}\theta , \end{aligned}$$

which clearly converges to zero as R tends to infinity.

Likewise, for \(I_4(R)\), we note that for \(z=u-iv\in \mathbb {C}^-\setminus \{0\}\) such that \(|z|>1\),

$$\begin{aligned} |\mathcal {K}_{\cos }(z)-i\mathcal {K}_{\sin }(z)|&=\Big |\int _0^\infty \frac{x+v-iu}{(x+v)^2+u^2}\mu (\mathrm {d}x) \Big |\\&\le \int _0^\infty \!\!\!\frac{x+v+|u|}{(x+v)^2+u^2}\mu (\mathrm {d}x)\\&\le \int _0^\infty \!\!\!\frac{2}{x+v+|u|}\mu (\mathrm {d}x)\\&\le 2\int _0^\infty \!\!\!\mu (\mathrm {d}x)=2\mu ([0,\infty )), \end{aligned}$$

since \(\mu \) is assumed to be a finite measure on \([0,\infty )\), cf. Remark 3.8. It follows, by making the change of variable \(z:=Re^{i\theta }\), that

$$\begin{aligned} I_4(R)&=\int _0^{-\pi }\!\!\!\frac{Re^{i\theta }i\mathrm {d}\theta }{\alpha _k \mathcal {K}_{\cos }(R e^{i\theta }) +i(R e^{i\theta }-\alpha _k \mathcal {K}_{\sin }(R e^{i\theta }))}\\&=\int _0^{-\pi }\!\!\!\frac{\mathrm {d}\theta }{\frac{\alpha _k }{iRe^{i\theta }} \big (\mathcal {K}_{\cos }(Re^{i\theta })-i\mathcal {K}_{\sin }(Re^{i\theta })\big ) +1 }, \end{aligned}$$

which converges to \(-\pi \) as R tends to infinity by virtue of the Dominated Convergence Theorem.

We collect the above limits to arrive at the following

$$\begin{aligned} 0=\lim _{R\rightarrow \infty }\int _{C(R)}\!\!\!f_k(z)\mathrm {d}z = \pi \frac{\alpha _k}{\lambda _k^2}\int _\mathbb {R}\rho _k(\omega )\mathrm {d}\omega -\pi , \end{aligned}$$

which in turn implies (4.3). This finishes the proof. \(\square \)

4.2 Proof of Theorem 2.10

We now turn our attention to the main theorem of the paper, namely, the regularity of weak stationary solutions \(u(t,\mathbf {x})\). As discussed in Sect. 2, in order to prove Theorem 2.10, we will employ the classical Kolmogorov criterion to establish Hölder continuity. To this end, we must obtain useful estimates on differences in time, \(u_k(t)-u_k(s)\) (\(t,s\in \mathbb {R}\)), as well as on their second moments under Assumption 2.4. In Proposition 4.5 below, we assert a bound on the difference \(u_k(t)-u_k(s)\), which will be employed later to prove Hölder regularity in time of \(u(t,\mathbf {x})\).

Proposition 4.5

Suppose that the memory kernel K satisfies Assumption 2.4. For \(k\ge 1\), let \(u_k(t)\) be the stationary process as in Proposition 4.2. Then there exists an index \(k^*\) large enough such that for any \(\alpha ,\,q\in (0,1)\), there exists a constant \(c=c(\alpha ,q,k^*)>0\) independent of k such that for all \(t,\,s\in \mathbb {R}\) and \(1\le k\le k^*\),

$$\begin{aligned} \mathbb {E}|u_k(t)-u_k(s)|^2<c|t-s|^\alpha , \end{aligned}$$

and for all \(k>k^*\),

$$\begin{aligned} \mathbb {E}|u_k(t)-u_k(s)|^2<c \frac{\lambda _k^2}{\alpha _k^{q-\alpha /2}}|t-s|^\alpha . \end{aligned}$$

As a consequence of Proposition 4.5, we have the following corollary asserting a useful bound on the second moment of \(u_k(t)\), which will be employed to prove the spatial Hölder regularity of \(u(t,\mathbf {x})\).

Corollary 4.6

Suppose that the memory kernel K satisfies Assumption 2.4. For \(k\ge 1\), let \(u_k(t)\) be the stationary process as in Proposition 4.2. Let \(k^*\) be the same index as in Proposition 4.5. Then for any \(q\in (0,1)\), there exists a constant \(c=c(q,k^*)>0\) such that for all \(t\in \mathbb {R}\), \(1\le k\le k^*\),

$$\begin{aligned} \mathbb {E}|u_k(t)|^2< c, \end{aligned}$$

and for all \(k>k^*\),

$$\begin{aligned} \mathbb {E}|u_k(t)|^2< c\frac{\lambda _k^2}{\alpha _k^{q}}. \end{aligned}$$

The proofs of Proposition 4.5 and Corollary 4.6 will be deferred to the end of this section. We are now in a position to prove Theorem 2.10.

Proof of Theorem 2.10

Let \(\eta \) be the constant from Assumption 2.6. Since the processes \(u_k(\cdot )\) are mutually independent with zero mean, in view of Proposition 4.5, cf. (4.9) and (4.10), we have the following estimates for any \(t,\,s\in \mathbb {R}\) and \(\mathbf {x}\in \mathcal {O}\)

$$\begin{aligned} \mathbb {E}|u(t,\mathbf {x})-u(s,\mathbf {x})|^2&= \sum _{k\ge 1}\mathbb {E}|u_k(t)-u_k(s)|^2|e_k(\mathbf {x})|^2 \\&\le c|t-s|^{\alpha }\sum _{k\ge 1}\frac{\lambda _k^2c_k^2}{\alpha _k^{q-\alpha /2}}\\&< c|t-s|^{\alpha }\sum _{k\ge 1}\frac{\lambda _k^2c_k^2}{\alpha _k^{\eta }} , \end{aligned}$$

where the last implication follows from the choice of \(q,\,\alpha \in (0,1)\) satisfying \(\eta <q-\alpha /2\), which is always possible for any \(\alpha /2\in (0,1-\eta )\). Since \(u(t,\mathbf {x})-u(s,\mathbf {x})\) is Gaussian, we infer, for each \(m>0\), the existence of a constant \(C(m)>0\) such that

$$\begin{aligned} \mathbb {E}|u(t,\mathbf {x})-u(s,\mathbf {x})|^{2m} \le C(m)|t-s|^{m\alpha }. \end{aligned}$$

By Kolmogorov’s test for stochastic processes [15, Theorem 3.3], there exists a version of u that is Hölder continuous in t for every \(\gamma \in \big (0,\frac{\alpha }{2}-\frac{1}{2m}\big )\). By choosing m sufficiently large and \(\alpha /2\) as close to \(1-\eta \) as possible, we obtain Hölder continuity in time t for every \(\gamma \in (0,1-\eta )\).

With regard to spatial regularity, we note that Assumption 2.1 on \(\nabla e_k\) implies the following bound for \(\mathbf {x},\mathbf {y}\in \mathcal {O}\) and \(\alpha \in (0,1)\) (see [15, Lemma 5.21])

$$\begin{aligned} |e_k(\mathbf {x})-e_k(\mathbf {y})| \le c(\alpha ) \alpha _k^{\alpha /2}c_k|\mathbf {x}-\mathbf {y}|^\alpha . \end{aligned}$$

We then apply Corollary 4.6, cf. (4.11) and (4.12), for \(q,\,\alpha \in (0,1)\) to find

$$\begin{aligned} \mathbb {E}|u(t,\mathbf {x})-u(t,\mathbf {y})|^2&\le \sum _{k\ge 1}\mathbb {E}|u_k(t)|^2|e_k(\mathbf {x})-e_k(\mathbf {y})|^2 \\&\le c|\mathbf {x}-\mathbf {y}|^{2\alpha }\sum _{k\ge 1}\frac{\lambda _k^2 c_k^2}{\alpha _k^{q-\alpha }}\\&< c|\mathbf {x}-\mathbf {y}|^{2\alpha }\sum _{k\ge 1}\frac{\lambda _k^2 c_k^2}{\alpha _k^{\eta }}, \end{aligned}$$

where the last inequality holds for \(q,\,\alpha \in (0,1)\) chosen with \(\eta <q-\alpha \), which again is always possible for any \(\alpha \in (0,1-\eta )\). We then arrive at the estimate

$$\begin{aligned} \mathbb {E}|u(t,\mathbf {x})-u(t,\mathbf {y})|^{2m} \le C(m)|\mathbf {x}-\mathbf {y}|^{2m\alpha }. \end{aligned}$$

By the Kolmogorov test for random fields [15, Theorem 3.5], u is Hölder continuous in space for any \(\gamma \in \big (0,\frac{2m\alpha -d}{2m}\big )=\big (0,\alpha -\frac{d}{2m}\big )\), where d is the spatial dimension. We finally choose m sufficiently large and \(\alpha \) as close to \(1-\eta \) as possible to obtain \(\gamma \)-Hölder continuity in space for any \(\gamma \in (0,1-\eta )\). This finishes the proof. \(\square \)

We now turn to the proof of Proposition 4.5. In order to establish Proposition 4.5, we will make use of the following elementary inequality: for any \(\alpha \in (0,1)\) and \(x\in \mathbb {R}\), we have

$$\begin{aligned} 1-\cos (x)\le \frac{2}{\alpha }|x|^\alpha . \end{aligned}$$

Indeed, observe that if \(|x|\ge 1\), then since \(\alpha \in (0,1)\)

$$\begin{aligned} \frac{2}{\alpha }|x|^\alpha \ge 2\ge 1-\cos (x). \end{aligned}$$

On the other hand, if \(|x|<1\), then we have

$$\begin{aligned} 1-\cos (x)\le \frac{x^2}{2}\le \frac{2}{\alpha }|x|^\alpha . \end{aligned}$$
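The two cases above can be combined and confirmed on a grid; a minimal numerical sketch of Inequality (4.13) (Python, assuming NumPy; the grid and sample values of \(\alpha \) are arbitrary):

```python
import numpy as np

# Check 1 - cos(x) <= (2/alpha)*|x|^alpha on a symmetric grid
# for several alpha in (0,1); cf. inequality (4.13).
x = np.linspace(-20.0, 20.0, 400_001)
lhs = 1.0 - np.cos(x)
for alpha in (0.1, 0.5, 0.9):
    rhs = (2.0 / alpha) * np.abs(x) ** alpha
    # small slack for floating-point rounding (equality holds at x = 0)
    assert np.all(lhs <= rhs + 1e-12)
print("inequality (4.13) verified on the grid")
```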

We are now in a position to prove Proposition 4.5.

Proof of Proposition 4.5

In view of (4.2) and (4.6), a straightforward calculation shows that

$$\begin{aligned} \mathbb {E}|u_k(t)-u_k(s)|^2&=\int _\mathbb {R}\!\!\!\big (2-e^{i(t-s)\omega }-e^{i(s-t)\omega }\big )\rho _k(\omega )\mathrm {d}\omega \\&=\frac{4}{\pi }\int _0^\infty \!\!\!\!\!\!\left( 1-\cos (\omega (t-s))\right) \frac{\lambda _k^2\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \\&\le \frac{8\lambda _k^2}{\pi \alpha }|t-s|^\alpha \int _0^\infty \!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega , \end{aligned}$$

where in the last implication, for \(\alpha \in (0,1)\), we have invoked Inequality (4.13).

To verify (4.9), it suffices to prove that for any k, the above integral is finite. In view of Lemma 3.9, we see that

$$\begin{aligned} \lim _{\omega \rightarrow \infty }\mathcal {K}_{\cos }(\omega )=\lim _{\omega \rightarrow \infty }\mathcal {K}_{\sin }(\omega )=0, \end{aligned}$$

implying that the integrand is bounded by a constant multiple of \(\omega ^{\alpha -2}\) for large \(\omega \), which is integrable at infinity. On the other hand, when \(\omega \rightarrow 0\), by Fatou’s Lemma, it holds that

$$\begin{aligned} \liminf _{\omega \rightarrow 0}\mathcal {K}_{\cos }(\omega )\ge \int _0^\infty \frac{1}{x}\mu (\mathrm {d}x)>0, \end{aligned}$$

implying that the integrand is bounded by a constant multiple of \(\omega ^\alpha \) near the origin, which is integrable there. We thus combine the two cases to infer the existence of a positive constant c(k) such that

$$\begin{aligned} \int _0^\infty \!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\,\mathrm {d}\omega \le c(k), \end{aligned}$$

which proves (4.9).

Now, let \(k^*\) be a large index such that for \(k>k^*\), \(\omega _k>1\) is the unique solution on \((0,\infty )\) of Eq. (3.11), by Lemma 3.10 (c) and (d). We now decompose the last integral as follows:

$$\begin{aligned}&\int _0^\infty \!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \\&\quad =\Big \{\int _0^{\omega _k-\omega _k^q}+\int _{\omega _k-\omega _k^q}^{\omega _k-1}+\int _{\omega _k-1}^{\omega _k+1}+\int _{\omega _k+1}^{\omega _k+\omega _k^q}+\int _{\omega _k+\omega _k^q}^\infty \Big \}\\&\qquad \times \,\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega = I_1+\cdots +I_5. \end{aligned}$$

To estimate \(I_1\), we recall from Lemma 3.10 (c) that \(\mathcal {K}_{\sin }(\omega )/\omega \) is decreasing on \(\omega \in (0,\infty )\). Thus, for any \(\omega \in (0,\omega _k-\omega _k^q]\), it follows that

$$\begin{aligned} \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\ge \alpha _k\frac{\mathcal {K}_{\sin }(\omega _k-\omega _k^q)}{\omega _k-\omega _k^q}-1. \end{aligned}$$

We then substitute \(\omega :=\omega _k-\omega _k^q\) in Inequality (3.12) to obtain

$$\begin{aligned} \alpha _k\frac{\mathcal {K}_{\sin }(\omega _k-\omega _k^q)}{\omega _k-\omega _k^q}-1\ge \frac{c}{\omega _k^{1-q}}. \end{aligned}$$

It follows that

$$\begin{aligned} I_1&=\int _0^{\omega _k-\omega _k^q}\!\!\!\!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \\&=\int _0^{\omega _k-\omega _k^q}\!\!\!\!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\omega ^2\left( \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\right) ^2}\mathrm {d}\omega \\&\le \int _0^{\omega _k-\omega _k^q}\!\!\!\!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+c\omega ^2/\omega _k^{2-2q} }\mathrm {d}\omega . \end{aligned}$$

We apply Young’s product inequality to the above denominator to infer

$$\begin{aligned} \int _0^{\omega _k-\omega _k^q}\!\!\!\!\!\!\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+c\omega ^2/\omega _k^{2-2q} }\mathrm {d}\omega&\le c \int _0^{\omega _k-\omega _k^q} \frac{\omega _k^{1-q}\omega ^{\alpha -1}}{\alpha _k }\mathrm {d}\omega \le c\frac{\omega _k^{1-q+\alpha }}{\alpha _k}\\&\le \frac{c}{\alpha _k^{(1+q-\alpha )/2}}, \end{aligned}$$

where in the last implication we have employed the fact that \(\omega _k^2/\alpha _k\) is bounded uniformly with respect to k since \(\omega _k^2/\alpha _k\rightarrow K(0)\) as \(k\rightarrow \infty \), by virtue of Lemma 3.10 (c).

Similar to the argument on \(I_1\), to estimate \(I_5\), we note that if \(\omega \in [\omega _k+\omega _k^q,\infty )\) then

$$\begin{aligned} 1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }\ge 1- \alpha _k\frac{\mathcal {K}_{\sin }(\omega _k+\omega _k^q)}{\omega _k+\omega _k^q}, \end{aligned}$$

and that substituting \(\omega :=\omega _k+\omega _k^q\) in (3.12) yields

$$\begin{aligned} 1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega _k+\omega _k^q)}{\omega _k+\omega _k^q}\ge \frac{c}{\omega _k^{1-q}}. \end{aligned}$$

We then have the chain of inequalities

$$\begin{aligned} \begin{aligned} I_5&=\int _{\omega _k+\omega _k^q}^\infty \frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\omega ^2\left( \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\right) ^2}\mathrm {d}\omega \\&\le \int _{\omega _k+\omega _k^q}^\infty \frac{\omega ^{\alpha }\mathcal {K}_{\cos }(\omega )}{\omega ^2\left( 1-\alpha _k\frac{\mathcal {K}_{\sin }(\omega _k+\omega _k^q)}{\omega _k+\omega _k^q}\right) ^2} \mathrm {d}\omega \\&\le c\int _{\omega _k+\omega _k^q}^\infty \frac{\omega ^{\alpha }\mathcal {K}_{\cos }(\omega )\omega _k^{2-2q}}{\omega ^2} \mathrm {d}\omega \\&=c \int _{\omega _k+\omega _k^q}^\infty \frac{\omega \mathcal {K}_{\cos }(\omega )\omega _k^{2-2q}}{\omega ^{3-\alpha }}\mathrm {d}\omega \\&\le c\int _{\omega _k+\omega _k^q}^\infty \frac{\omega _k^{2-2q}}{\omega ^{3-\alpha }}\mathrm {d}\omega , \end{aligned} \end{aligned}$$

where the last inequality follows from the fact that \(\omega \mathcal {K}_{\cos }(\omega )\) is bounded, by virtue of Lemma 3.10 (a). Evaluating the remaining integral with respect to \(\omega \), we find

$$\begin{aligned} c\int _{\omega _k+\omega _k^q}^\infty \frac{\omega _k^{2-2q}}{\omega ^{3-\alpha }}\mathrm {d}\omega \le c\frac{\omega _k^{2-2q}}{\omega _k^{2-\alpha }}\le \frac{c}{\alpha _k^{q-\alpha /2}}, \end{aligned}$$

which implies that

$$\begin{aligned} I_5\le \frac{c}{\alpha _k^{q-\alpha /2}}. \end{aligned}$$

Regarding \(I_2\), we invoke Inequality (3.12) again to find

$$\begin{aligned} \begin{aligned} I_2&=\int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\omega ^2\left( \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\right) ^2}\mathrm {d}\omega \\&\le \int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{\omega ^{\alpha }\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+c\frac{\omega ^2}{\omega _k^2}(\omega _k-\omega )^2}\mathrm {d}\omega . \end{aligned} \end{aligned}$$

Also, since \(q\in (0,1)\) and \(\omega _k\rightarrow \infty \) as \(k\rightarrow \infty \) (Lemma 3.10 (c)), for k sufficiently large and any \(\omega \in [\omega _k-\omega _k^q,\omega _k-1]\), the ratio \(\omega /\omega _k\) is bounded from below uniformly in k. We then infer that

$$\begin{aligned} \int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{\omega ^{\alpha }\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+c\frac{\omega ^2}{\omega _k^2}(\omega _k-\omega )^2}\mathrm {d}\omega \le \int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{\omega _k^{\alpha }\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+c(\omega _k-\omega )^2}\mathrm {d}\omega . \end{aligned}$$

Young’s inequality now implies

$$\begin{aligned} \begin{aligned} \int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{\omega _k^{\alpha }\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+c(\omega _k-\omega )^2}\mathrm {d}\omega&\le \frac{c\omega _k^\alpha }{\alpha _k} \int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{1}{\omega _k-\omega }\mathrm {d}\omega = c\frac{\omega _k^\alpha \log (\omega _k^q)}{\alpha _k}\\ {}&\le c\frac{\log \alpha _k}{\alpha _k^{1-\alpha /2}}, \end{aligned} \end{aligned}$$

since \(\omega _k^2/\alpha _k=O(1)\) as \(k\rightarrow \infty \), by Lemma 3.10 (c).
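In more detail, the two bounds used in the last display read, for k sufficiently large,

$$\begin{aligned} \int _{\omega _k-\omega _k^q}^{\omega _k-1}\frac{\mathrm {d}\omega }{\omega _k-\omega }=\log (\omega _k^q)=q\log \omega _k,\qquad \frac{\omega _k^{\alpha }\log \omega _k}{\alpha _k}\le c\frac{\alpha _k^{\alpha /2}\log \alpha _k}{\alpha _k}=c\frac{\log \alpha _k}{\alpha _k^{1-\alpha /2}}, \end{aligned}$$

where we have again employed the bound \(\omega _k\le c\alpha _k^{1/2}\).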

A similar argument (writing \(\omega -\omega _k\) in place of \(\omega _k-\omega \) wherever applicable) yields the estimate

$$\begin{aligned} I_4=\int _{\omega _k+1}^{\omega _k+\omega _k^q}\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\omega ^2\left( \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\right) ^2}\mathrm {d}\omega \le c\frac{\log \alpha _k}{\alpha _k^{1-\alpha /2}}. \end{aligned}$$

To estimate \(I_3\), we recall from Lemma 3.10 (b) that \(\omega ^2\mathcal {K}_{\cos }(\omega )\) is increasing on \(\omega \in (0,\infty )\). It follows that

$$\begin{aligned} I_3=\int _{\omega _k-1}^{\omega _k+1}\frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\omega ^2\left( \alpha _k\frac{\mathcal {K}_{\sin }(\omega )}{\omega }-1\right) ^2}\mathrm {d}\omega&\le \int _{\omega _k-1}^{\omega _k+1}\frac{\omega ^\alpha }{\alpha _k^2\mathcal {K}_{\cos }(\omega )}\mathrm {d}\omega \\&\le \int _{\omega _k-1}^{\omega _k+1}\frac{\omega ^{\alpha +2}}{\alpha _k^2\mathcal {K}_{\cos }(1)}\mathrm {d}\omega \\&\le \frac{2(\omega _k+1)^{\alpha +2}}{\alpha _k^2\mathcal {K}_{\cos }(1)}\\&\le \frac{c}{\alpha _k^{1-\alpha /2}}. \end{aligned}$$

We finally collect everything to arrive at

$$\begin{aligned}&\int _0^\infty \frac{\omega ^\alpha \mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \nonumber \\&\quad \le c\bigg (\frac{1}{\alpha _k^{(1+q-\alpha )/2}}+\frac{1}{\alpha _k^{q-\alpha /2}}+\frac{\log \alpha _k}{\alpha _k^{1-\alpha /2}}\bigg )\le \frac{c}{\alpha _k^{q-\alpha /2}}, \end{aligned}$$

which holds for sufficiently large k: indeed, \(\alpha _k\uparrow \infty \) as \(k\rightarrow \infty \) by Assumption 2.1, and since \(q\in (0,1)\), the first and third terms on the right-hand side are dominated by \(c/\alpha _k^{q-\alpha /2}\) (note that \(\log \alpha _k/\alpha _k^{1-q}\rightarrow 0\)). We therefore conclude (4.10). The proof is thus complete. \(\square \)

Finally, we give the proof of Corollary 4.6.

Proof of Corollary 4.6

The argument for (4.11) is omitted, as it is similar to that for (4.9) in the proof of Proposition 4.5.

With regard to (4.12), we have

$$\begin{aligned} \begin{aligned} \mathbb {E}|u_k(t)|^2&=\frac{\lambda _k^2}{\pi }\int _0^\infty \!\!\!\frac{\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \\&=\frac{\lambda _k^2}{\pi }\Big \{\int _0^1+\int _1^\infty \Big \} \frac{\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2} \mathrm {d}\omega . \end{aligned} \end{aligned}$$

To bound the first integral on the right-hand side, we observe that

$$\begin{aligned}&\int _0^1\!\!\!\frac{\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \\&\le \int _0^1\!\!\!\frac{1}{\alpha _k^2\mathcal {K}_{\cos }(\omega )}\mathrm {d}\omega \le \int _0^1\frac{1}{\alpha _k^2\mathcal {K}_{\cos }(1)}\mathrm {d}\omega = \frac{1}{\alpha _k^2\mathcal {K}_{\cos }(1)}, \end{aligned}$$

since \(\mathcal {K}_{\cos }(\omega )\) is decreasing on \(\omega \in (0,\infty )\), by Lemma 3.10 (a). To estimate the second integral, we pick \(r_1,\,r_2\in (0,1)\) such that \(r_1-r_2/2=q\). We then invoke Inequality (4.14) as follows:

$$\begin{aligned} \int _1^\infty \!\!\!\!\!\!\frac{\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega&\le \int _1^\infty \!\!\!\!\!\!\frac{\omega ^{r_2}\mathcal {K}_{\cos }(\omega )}{\alpha _k^2\mathcal {K}_{\cos }^2(\omega )+\left( \omega -\alpha _k\mathcal {K}_{\sin }(\omega )\right) ^2}\mathrm {d}\omega \\&\le \frac{c}{\alpha _k^{r_1-r_2/2}}=\frac{c}{\alpha _k^q}. \end{aligned}$$
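Such a pair \(r_1,\,r_2\) always exists, recalling that \(q\in (0,1)\); for instance, one may take

$$\begin{aligned} r_2=1-q,\qquad r_1=\frac{1+q}{2},\qquad \text {so that}\qquad r_1-\frac{r_2}{2}=\frac{1+q}{2}-\frac{1-q}{2}=q. \end{aligned}$$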

We combine these two estimates to obtain (4.12), which concludes the proof. \(\square \)

5 Discussion

We have rigorously analyzed a stochastic integro-partial-differential equation with memory (1.1) satisfying the Fluctuation–Dissipation relationship that arises from statistical mechanical considerations in the study of thermally fluctuating viscoelastic media. Using the framework of generalized stationary processes from [24, 29], we obtain stationary solutions of  (1.1) when the memory belongs to a large subclass \(\mathcal {CM}_b\) of the completely monotone functions. Furthermore, we establish space-time Hölder regularity of the solutions. As we demonstrate below, when we compare the stochastic heat equation with memory to the classical formulation, the noise structure arising from the Fluctuation–Dissipation relationship yields greater regularity in time.

The form of the equations studied here was directly motivated by the work of [23] on thermally fluctuating viscoelastic fluids. In that work, only a finite number of Fourier modes were used to define the space-time noise, and it is natural to ask when (1.1) is well-posed if infinitely many Fourier modes are used. The result we present is quite general and requires only Assumption 2.5, which is common in the linear SPDE literature [3, 14, 15]. It is worth noting, though, that a particular form of the memory kernel was used in [23], namely a finite sum of exponentials. As we demonstrated, our well-posedness result, Theorem 2.8, applies to a subclass of completely monotone functions of which sum-of-exponentials kernels are members. We remark, however, that the sum-of-exponentials form is neither artificial nor highly restrictive: members of this family can approximate the class of completely monotone functions in such a way that the GLE exhibits what is sometimes called transient anomalous diffusion, which is to say that the associated processes are subdiffusive over arbitrarily large time intervals despite being diffusive in the large-time limit.

Before moving on to the application to stochastic heat equations, we remark that our notion of solution, as well as the subsequent analysis, relies heavily on the linear structure of (1.1) and on explicit calculations that exploit the Fluctuation–Dissipation form. It remains open to explore well-posedness and (more interestingly) Hölder regularity of solutions when the Fluctuation–Dissipation relationship is not assumed. Well-posedness becomes even more delicate when one includes nonlinear terms to encode, for example, external forces acting on the fluid. We consider this to be an important open question.

We now discuss the regularity in the case that A is the usual Laplacian operator in \(\mathbb {R}^d\) with Dirichlet boundary condition on \(\mathcal {O}\). For the reader’s convenience, we first recall the following stochastic heat equation for \(u(t,\mathbf {x}):[0,\infty )\times \mathcal {O}\rightarrow \mathbb {R}^d\)

$$\begin{aligned} {\dot{u}}(t,\mathbf {x}) = A\,u(t,\mathbf {x})+{\dot{W}}(t,\mathbf {x}),\qquad (t,\mathbf {x})\in [0,\infty )\times \mathcal {O}. \end{aligned}$$

Here \(W(t,\mathbf {x})\) is a cylindrical Wiener process with the decomposition

$$\begin{aligned} W(t,\mathbf {x})=\sum _{k\ge 1}\lambda _k W_k(t)e_k(\mathbf {x}), \end{aligned}$$

where \(\{W_k\}_{k\ge 1}\) are i.i.d. standard Brownian motions and \(\{\lambda _k\}_{k\ge 1}\) are as in Assumption 2.5. It is known [15, Example 5.24] that there exists a modification \(U(t,\mathbf {x})\) of the solution \(u(t,\mathbf {x})\) of (5.1) such that \(U(t,\mathbf {x})\) is \(\gamma \)-Hölder continuous in time for \(\gamma \in (0,(1-\eta )/2)\) and in space for \(\gamma \in (0,1-\eta )\), where \(\eta \) is as in Assumption 2.5. In particular, in the case of the 1D heat equation with white noise (\(\lambda _k=1\) for every k), the pair of Hölder exponents is (1/4, 1/2) in \((t,\mathbf {x})\) [21, P. 6].

Alternatively, we consider (5.1) with memory as follows.

$$\begin{aligned} {\dot{u}}(t,\mathbf {x}) = k_0A\,u(t,\mathbf {x})-\int _{-\infty }^t \!\!\!K(t-s)A\, u(s,\mathbf {x})\mathrm {d}s+{\dot{W}}(t),\quad (t,\mathbf {x})\in \mathbb {R}\times \mathcal {O}, \end{aligned}$$

where \(k_0\) is a positive constant such that \(k_0>\int _0^\infty K(t)\mathrm {d}t\) and K is a completely monotone function. It was shown in [3, Lemma 3.7] that under Assumption 2.5, the solution of (5.2) has the same Hölder regularity as the solution of (5.1). In other words, with the same noise but adding a small memory effect, (5.2) does not differ from (5.1) in terms of regularity. Intuitively, this invariance can be explained as the memory effect being dominated by the dissipation because of the assumption \(k_0>\int _0^\infty K(t)\mathrm {d}t\). We note that this condition also requires that K be integrable.

In contrast, recall our system (1.1):

$$\begin{aligned} {\dot{u}}(t,\mathbf {x})&=\int _{-\infty }^t \!\!\!K(t-s)A\, u(s,\mathbf {x})\mathrm {d}s+ \mathbf {F}(t,\mathbf {x}),\qquad (t,\mathbf {x})\in \mathbb {R}\times \mathcal {O},\\ \mathbf {F}(t,\mathbf {x})&=\sum _{k\ge 1}\lambda _k F_k(t)e_k(\mathbf {x}),\quad \text { and }\quad \mathbb {E}[F_k(t)F_k(s)]=K(|t-s|), \end{aligned}$$

where K is not necessarily integrable, cf. Assumption 2.4. In view of Theorem 2.10, while the spatial regularity is the same as in the case of (5.1) and (5.2), (1.1) enjoys better regularity in time, namely \(\gamma \)-Hölder continuity in time for \(\gamma \in (0,1-\eta )\). In particular, in 1D, when \(\lambda _k=1\) for every k, the pair of Hölder exponents for (1.1) is \((1/2-\epsilon ,1/2-\epsilon )\) in \((t,\mathbf {x})\) for every \(\epsilon \in (0,1/2)\).
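For the reader's convenience, this comparison may be summarized as follows, where \(\gamma _t\) and \(\gamma _{\mathbf {x}}\) denote the admissible Hölder exponents in time and space, respectively:

$$\begin{aligned} \text {(5.1) and (5.2)}:\quad \gamma _t\in \Big (0,\frac{1-\eta }{2}\Big ),\ \ \gamma _{\mathbf {x}}\in (0,1-\eta );\qquad \text {(1.1)}:\quad \gamma _t\in (0,1-\eta ),\ \ \gamma _{\mathbf {x}}\in (0,1-\eta ). \end{aligned}$$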