1 Introduction

This paper is motivated by the study of sample path properties of stochastic partial differential equations (SPDEs) and their applications to questions such as the polarity of sets for the path process and its Hausdorff dimension (a.s.). We focus on a system of stochastic linear biharmonic heat equations on a d-dimensional torus, \(d=1,2,3\) (see (2.3)). This SPDE is the linearization at zero of a Cahn–Hilliard equation with a space-time white noise forcing term (see e.g. [1]).

In the last two decades, there have been many contributions to the subject of this paper. A large part of them concerns Gaussian random fields, the case addressed in this work. A representative sample of results can be found in [2,3,4, 11, 12], and references therein. Central to the study is obtaining upper and lower bounds on the probabilities that the random field hits a Borel set A, in terms of the Hausdorff measure and the capacity, respectively, of A. In the derivation of these bounds (named criteria for hitting probabilities), a major role is played by the canonical pseudo-distance associated to the process. For a random field \((v(t,x),\ (t,x)\in [0,T]\times \mathbb {D})\), \(\mathbb {D}\subset {\mathbb {R}}^d\), this notion is defined by

$$\begin{aligned} \mathfrak {d}_{v}((t,x),(s,y))=\Vert v(t,x)-v(s,y)\Vert _{L^2(\varOmega )},\ (t,x), (s,y)\in [0,T]\times \mathbb {D}. \end{aligned}$$

When \(\mathfrak {d}_{v}((t,x),(s,y))\) compares, up to multiplicative constants, with \(\vert t-s\vert ^{\alpha _0}+\sum _{j=1}^d\vert x_j-y_j\vert ^{\alpha _j}\), \(\alpha _0, \alpha _j\in (0,1)\), [12][Theorem 7.6] and [3][Theorems 2.1, 2.4 and 2.6] provide useful criteria for hitting probabilities.

Let \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\), \(d=1,2,3\), be the random field solution to the biharmonic heat equation driven by space-time noise, given in Theorem 2.1. We prove in Theorem 3.1 that the associated canonical pseudo-distance \(\mathfrak {d}_{u}((t,x),(s,y))\) compares with

$$\begin{aligned} \left( \vert t-s\vert ^{1-d/4}+\left( \log \frac{C(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}\right) ^{\frac{1}{2}}, \ \beta =1_{\{d=2\}}. \end{aligned}$$

Thus, when \(d=2\) this example does not fall into the range of applications of the criteria cited above.

In [5][Theorems 3.2, 3.3, 3.4, 3.5], we proved extensions of [12][Theorem 7.6] to cover cases where the canonical pseudo-distance has anisotropies described by gauge functions other than power functions. This was initially motivated by the study of a linear heat equation with fractional noise (see [5][Section 4]). From the above discussion, we see that the biharmonic heat equation provides a new case of application of such extended criteria.

The structure and contents of the paper are as follows. Section 2 is about preliminaries. We formulate and prove the existence of a random field solution to the biharmonic heat equation, and recall the notions of Hausdorff measure relative to a gauge function and capacity relative to a symmetric potential. Section 3 is devoted to finding an equivalent pseudo-distance for the canonical metric, a result of independent interest. The proof relies on a careful analytical study of the Green’s function of the biharmonic operator \(\mathcal {L} = \frac{\partial }{\partial t} +(-\varDelta )^2\) on \((0,T)\times \mathbb {T}^d\). With this fundamental result at hand and some additional properties of \((u(t,x))\) proved in Sects. 4 and 5, we are in a position to apply Theorems 3.4 and 3.5 of [5]. We deduce Theorem 6.1 on upper and lower bounds for the hitting probabilities of D-dimensional random vectors consisting of independent copies of \((u(t,x))\). These are in terms of the \(\bar{g}_q\)-Hausdorff measure and the \(({\bar{g}}_q)^{-1}\)-capacity, respectively, with \({\bar{g}}_q\) defined in (6.1). Notice that for \(d=1,3\), the bounds are given by the classical Hausdorff measure and the Bessel-Riesz capacity, respectively. In the second part of Sect. 6, we highlight some consequences of Theorem 6.1 on polarity of sets and the Hausdorff dimension of the path process. The application of Theorems 3.4 and 3.5 of [5] imposes the restriction \(D>D_0\), where \(D_0=[(4-d)/8]^{-1}+d[1\wedge (2-d/2)]^{-1}\). We also discuss the case \(D<D_0\) and present some conjectures concerning the critical case \(D=D_0\) in the last part of Sect. 6.

2 Notations and preliminaries

We introduce some notation used throughout the paper. As usual, \(\mathbb {N}\) denotes the set of natural numbers \(\{ 0,1,2,...\}\); we set \(\mathbb {Z}_2 = \{ 0,1\}\), and for any integer \(d\ge 1\), \(\mathbb {N}^{d,*} = (\mathbb {N}\setminus \{ 0\})^d\). For any multi-index \(k=(k_1,\ldots ,k_d)\in {{\mathbb {N}}}^d\), we set \(|k|=(\sum _{j=1}^d k_j^2)^{1/2}\), and denote by n(k) the number of null components of k.

Let \(\mathbb {S}^1\) be the circle and \(\mathbb {T}^d= \mathbb {S}^1\times {\mathop {\ldots }\limits ^{d}}\times \mathbb {S}^1\) the d-dimensional torus. For \(x\in \mathbb {T}^d\), |x| denotes the Euclidean norm. If we identify \(\mathbb {T}^d\) with the periodic cube \([-\pi ,\pi ]^d\), meaning that opposite sides coincide, |x| can be interpreted as the distance of x to the origin.

For \(x\in [0,2\pi )\), let \(\varepsilon _{0,k}(x)=\pi ^{-1/2}\sin (kx)\), \(\varepsilon _{1,k}(x)=\pi ^{-1/2}\cos (kx)\), \(k\in \mathbb {N}^*\), and \(\varepsilon _{1,0}(x) = (2\pi )^{-1/2}\). The set of functions \(\mathbf{B} \) defined on \(\mathbb {T}^d\) consisting of

$$\begin{aligned} \varepsilon _{i,k}:=\varepsilon _{i_1,k_1}\otimes \cdots \otimes \varepsilon _{i_d,k_d},\ i=(i_1,\ldots ,i_d)\in \mathbb {Z}_2^d, \end{aligned}$$

with \(k_j\in \mathbb {N}^*\) if \(i_j=0\), and \(k_j\in {{\mathbb {N}}}\) if \(i_j=1\), is an orthonormal basis for \(L^2(\mathbb {T}^d)\).

Define

$$\begin{aligned} (\mathbb {Z}_2\times \mathbb {N})^d_{+} = \{(i,k)\in (\mathbb {Z}_2\times \mathbb {N})^d: (i_j,k_j)\ne (0,0),\ \forall j=1,\ldots ,d\}. \end{aligned}$$

Notice that \(\mathbf{B} = \{\varepsilon _{i,k} =\varepsilon _{i_1,k_1}\otimes \cdots \otimes \varepsilon _{i_d,k_d},\ (i,k)\in (\mathbb {Z}_2\times \mathbb {N})^d_{+}\}\).

The following equality is a straightforward consequence of the addition formula for the cosine: For any \(x,y\in \mathbb {T}^d\),

$$\begin{aligned} \sum _{i\in \mathbb {Z}^d_2}\varepsilon _{i,k}(x)\varepsilon _{i,k}(y) =\frac{1}{2^{n(k)}\pi ^d}\prod _{j=1}^d\cos (k_j (x_j-y_j) ),\ k\in \mathbb {N}^{d}\ {\text {with}}\ (i,k)\in (\mathbb {Z}_2\times \mathbb {N})^d_{+}. \end{aligned}$$
(2.1)
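The normalizations in (2.1) are easy to get wrong, so it can be useful to compare both sides numerically. The following sketch (Python, written for \(d=2\); the test wave numbers and evaluation points are arbitrary choices) evaluates the left-hand sum over admissible indices i and the right-hand cosine product:

```python
import math
from itertools import product

def eps(i, k, x):
    # one-dimensional factors of the basis B; (i, k) = (0, 0) is excluded
    if i == 0:
        return math.sin(k * x) / math.sqrt(math.pi)
    if k == 0:
        return 1.0 / math.sqrt(2.0 * math.pi)
    return math.cos(k * x) / math.sqrt(math.pi)

def lhs(k, x, y):
    # left-hand side of (2.1): sum over admissible i in Z_2^d
    total = 0.0
    for i in product((0, 1), repeat=len(k)):
        if any(ij == 0 and kj == 0 for ij, kj in zip(i, k)):
            continue  # (i_j, k_j) = (0, 0) is not admissible
        tx = math.prod(eps(ij, kj, xj) for ij, kj, xj in zip(i, k, x))
        ty = math.prod(eps(ij, kj, yj) for ij, kj, yj in zip(i, k, y))
        total += tx * ty
    return total

def rhs(k, x, y):
    # right-hand side of (2.1); n(k) = number of null components of k
    n = sum(1 for kj in k if kj == 0)
    prod_cos = math.prod(math.cos(kj * (xj - yj)) for kj, xj, yj in zip(k, x, y))
    return prod_cos / (2 ** n * math.pi ** len(k))

x, y = (0.3, 1.1), (2.0, 0.7)
for k in [(1, 3), (0, 2), (4, 0), (2, 5)]:
    assert abs(lhs(k, x, y) - rhs(k, x, y)) < 1e-12
```

Note that when some \(k_j=0\), only \(i_j=1\) is admissible, which is where the factor \(2^{-n(k)}\) on the right-hand side comes from.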

Let \((-\varDelta )^2\) be the biharmonic operator (also called the bilaplacian) on \(L^2(\mathbb {T}^d)\). The basis \(\mathbf{B} \) is a set of eigenfunctions of \((-\varDelta )^2\) with associated eigenvalues \(\lambda _k=\sum _{j=1}^d k_j^4\), \(k\in \mathbb {N}^{d}\). Observe that \(d^{-1}|k|^4\le \lambda _k\le |k|^4\), and \(\inf _{k\in {{\mathbb {N}}}^{d,*}} \lambda _k=d\).

The Green’s function of the biharmonic heat operator \(\mathcal {L} = \frac{\partial }{\partial t} +(-\varDelta )^2\) on \((0,T] \times \mathbb {T}^d\) is given by

$$\begin{aligned} G(t;x,y)=\sum _{(i,k)\in (\mathbb {Z}_2\times \mathbb {N})^d_{+} } e^{-\lambda _k t}\varepsilon _{i,k}(x)\varepsilon _{i,k}(y) =\sum _{k\in \mathbb {N}^d}\frac{e^{-\lambda _k t}}{2^{n(k)}\pi ^d}\prod _{j=1}^d\cos (k_j (x_j-y_j) ), \end{aligned}$$
(2.2)

the last equality being a consequence of (2.1).
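Because the expansion (2.2) diagonalizes the biharmonic heat semigroup, the truncated series can be sanity-checked numerically: for \(d=1\) it should integrate to 1 in the space variable (only the \(k=0\) mode survives integration over the circle) and satisfy the semigroup identity \(\int _{\mathbb {T}} G(t;x,z)G(s;z,y)\,dz=G(t+s;x,y)\). A sketch, where the truncation level K and the evaluation points are arbitrary choices:

```python
import math

def G(t, x, y, K=12):
    # truncated series (2.2) for d = 1: lambda_k = k^4, and n(k) = 1 only for k = 0
    s = 1.0 / (2.0 * math.pi)  # k = 0 term
    for k in range(1, K + 1):
        s += math.exp(-k ** 4 * t) * math.cos(k * (x - y)) / math.pi
    return s

n = 512
h = 2.0 * math.pi / n
t1, t2, x, y = 0.2, 0.1, 0.5, 2.3
# total mass: only the k = 0 mode survives integration over the circle
mass = sum(G(t1, x, j * h) for j in range(n)) * h
assert abs(mass - 1.0) < 1e-9
# semigroup identity: int G(t1; x, z) G(t2; z, y) dz = G(t1 + t2; x, y)
conv = sum(G(t1, x, j * h) * G(t2, j * h, y) for j in range(n)) * h
assert abs(conv - G(t1 + t2, x, y)) < 1e-9
```

The equispaced Riemann sum is exact here because the integrands are trigonometric polynomials of degree far below n.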

This paper concerns the linear stochastic biharmonic heat equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( \frac{\partial }{\partial t}+(-\varDelta )^2\right) v(t,x)= \sigma \dot{W}(t,x), &{} (t,x)\in (0,T]\times \mathbb {T}^d,\\ v(0,x)=v_0(x), \end{array}\right. } \end{aligned}$$
(2.3)

where \((\dot{W}(t,x))\) is a space-time white noise on \([0,T]\times \mathbb {T}^d\), \(\sigma \in {\mathbb {R}}\setminus \{0\}\) and \(v_0: \ \mathbb {T}^d\longrightarrow {\mathbb {R}}\).

We consider the random field solution to (2.3), that is, the stochastic process

$$\begin{aligned} v(t,x)=\int _{\mathbb {T}^d} G(t;x,z)v_0(z)dz + \sigma \int _0^t\int _{\mathbb {T}^d} G(t-r;x,z)W(dr,dz), \end{aligned}$$
(2.4)

with G given in (2.2), and the stochastic integral is a Wiener integral with respect to space-time white noise.

We assume that, for any \((t,x)\in (0,T]\times \mathbb {T}^d\), the function \(\mathbb {T}^d\ni z\mapsto G(t;x,z) v_0(z)\) belongs to \(L^1(\mathbb {T}^d)\). Along with the next theorem, this yields that \((v(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) is a well-defined Gaussian process.

Theorem 2.1

Let

$$\begin{aligned} u(t,x) = \int _0^t\int _{\mathbb {T}^d} G(t-r;x,z)W(dr,dz),\quad (t,x)\in [0,T]\times \mathbb {T}^d. \end{aligned}$$

The stochastic process \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) is well-defined if and only if \(d=1, 2, 3\). In this case,

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times \mathbb {T}^d}E(\vert u(t,x)\vert ^2)<\infty . \end{aligned}$$
(2.5)

Proof

Fix \((t,x)\in (0,T]\times \mathbb {T}^d\). By (2.2) and applying Fubini’s theorem, we have

$$\begin{aligned} \int _0^t dr \int _{\mathbb {T}^d} dz\ G^2(t-r; x,z)&= \sum _{(i,k)\in (\mathbb {Z}_2\times \mathbb {N})^d_{+} }\varepsilon _{i,k}^2(x) \left( \int _0^t dr\ e^{-2\lambda _k r}\right) \nonumber \\&=\sum _{k\in \mathbb {N}^{d}}\frac{1}{2^{n(k)}\pi ^d}\int _0^t dr\ e^{-2\lambda _k r}\nonumber \\&= \frac{t}{(2\pi )^d} + \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ 0\le n(k)\le d-1 \end{array}} \frac{1-e^{-2\lambda _k t}}{2^{n(k)+1}\pi ^d\lambda _k}. \end{aligned}$$
(2.6)

Use the inequalities \(\frac{u}{1+u}\le 1-e^{-u}\le 1\), valid for all \(u\ge 0\), to see that the series in (2.6) is comparable, up to multiplicative constants, to the series \(\sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ 0\le n(k)\le d-1 \end{array}}\frac{1}{|k|^4}\), which converges if and only if \(d\le 3\). Equivalently, the Wiener integral defining \(u(t,x)\) is well-defined if and only if \(d\le 3\). This finishes the proof of the first statement.
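The dichotomy \(d\le 3\) versus \(d\ge 4\) can be observed on partial sums of \(\sum 1/|k|^4\): for \(d=3\) the increments between doubled truncation radii shrink, while for \(d=4\) they stay roughly constant (logarithmic divergence). A rough numerical sketch, where the truncation radii are arbitrary choices:

```python
from itertools import product

def partial_sum(d, R):
    # sum of 1/|k|^4 over k in N^{d,*} with |k| <= R
    s = 0.0
    for k in product(range(1, R + 1), repeat=d):
        n2 = sum(kj * kj for kj in k)
        if n2 <= R * R:
            s += 1.0 / (n2 * n2)
    return s

# d = 3: increments between doubled truncation radii shrink (convergence) ...
inc3_a = partial_sum(3, 20) - partial_sum(3, 10)
inc3_b = partial_sum(3, 40) - partial_sum(3, 20)
assert inc3_b < inc3_a
# ... while for d = 4 they stay of the same order (logarithmic divergence)
inc4_a = partial_sum(4, 12) - partial_sum(4, 6)
inc4_b = partial_sum(4, 24) - partial_sum(4, 12)
assert inc4_b > 0.6 * inc4_a
```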

By the isometry property of the Wiener integral, \(E((u(t,x))^2)\) is equal to the right-hand side of (2.6). Taking the supremum in (2.6), we have

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times \mathbb {T}^d} E((u(t,x))^2)&\le \frac{T}{(2\pi )^d}+ \sup _{t\in [0,T]} \sum _{k\in \mathbb {N}^{d}}\frac{1-e^{-2\lambda _k t}}{2^{n(k)+1}\pi ^d\lambda _k}\\&\le \frac{T}{(2\pi )^d} + \sum _{k\in \mathbb {N}^{d}}\frac{1}{2^{n(k)+1}\pi ^d\lambda _k} \le C(T,d). \end{aligned}$$

\(\square \)

In the sequel, \(d\in \{1,2,3\}\).

In the last part of this section, we recall the notions of Hausdorff measure and capacity that will be used in Sect. 6.

g-Hausdorff measure

Let \(\varepsilon _0>0\) and \(g: [0,\varepsilon _0 ]\rightarrow {\mathbb {R}}_+\) be a continuous strictly increasing function satisfying \(g(0)=0\). The g-Hausdorff measure of a Borel set \(A\subset \mathbb {R}^D\) is defined by

$$\begin{aligned} \mathcal {H}_g(A)=\lim _{\varepsilon \downarrow 0} \inf \left\{ \sum _{i=1}^\infty g(2r_i): A\subset \bigcup _{i=1}^\infty B_{r_i}(x_i),\ \sup _{i\ge 1}r_i\le \varepsilon \right\} \end{aligned}$$

(see e.g. [10]). In this paper, we will use this notion for two examples of gauge function: (i) \(g(\tau )= \tau ^\gamma \), with \(\gamma >0\); this gives the classical \(\gamma \)-dimensional Hausdorff measure. (ii) \(g(\tau ) = \tau ^{\nu _1}\left( q^{-1}(\tau )\right) ^{-\eta }\), with \(q(\tau ) = \tau ^{\nu _2}\left( \log \frac{c}{\tau }\right) ^{\delta }\), \(\nu _1, \nu _2, \eta , \delta >0\).

For consistency with the definition of the \(\gamma \)-dimensional Hausdorff measure when \(\gamma <0\), we set \(\mathcal {H}_g(A)=\infty \) whenever \(g(0)=\infty \).

Capacity relative to a symmetric potential kernel

Let \(\mathfrak {K}:{\mathbb {R}}^D\rightarrow (0,\infty ]\) be continuous on \({\mathbb {R}}^D\setminus \{0\}\) and symmetric, with \(\mathfrak {K}(z)<\infty \) for all \(z\ne 0\). This function is called a symmetric potential. The \(\mathfrak {K}\)-capacity of a Borel set \(A\subset \mathbb {R}^D\) is defined by

$$\begin{aligned} \text {Cap}_{\mathfrak {K}}(A)=\left[ \inf _{\mu \in \mathbb {P}(A)} \int _{\mathbb {R}^D}\int _{\mathbb {R}^D}\mathfrak {K}(z-w)\,\mu (dz)\,\mu (dw)\right] ^{-1}, \end{aligned}$$

where \(\mathbb {P}(A)\) denotes the set of probability measures on A. If \(\mathbb {P}(A)=\emptyset \), we set \(\text {Cap}_{\mathfrak {K}}(A)=0\), by convention.

In this article, we will use this notion with \(\mathfrak {K}=(g)^{-1}\), that is, \(\mathfrak {K}(z)=\left( g(\vert z\vert )\right) ^{-1}\), where g is as in the examples (i) and (ii) above. Observe that, in the example (i), the \(\mathfrak {K}\)-capacity is the Bessel-Riesz capacity, usually denoted by \(\text {Cap}_\gamma (A)\) (see e.g. [7, p. 376]).

Throughout the article, positive real constants are denoted by C, or variants, like \({\bar{C}}\), \({\tilde{C}}\), c, etc. If we want to make explicit the dependence on some parameters \(a_1, a_2,\ldots \), we write \(C(a_1, a_2, \ldots )\) or \(C_{a_1,a_2,\ldots }\). When writing \(\log \left( \frac{C}{z}\right) \), we will assume that C is large enough to ensure \(\log \left( \frac{C}{z}\right) \ge 1\).

3 Equivalence for the canonical metric

For the process u of Theorem 2.1, we define

$$\begin{aligned} \mathfrak {d}_{u}((t,x),(s,y))=\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}. \end{aligned}$$
(3.1)

This is the canonical pseudo-distance associated with u. This section is devoted to establishing an equivalent (anisotropic) pseudo-distance for \(\mathfrak {d}_{u}\).

Throughout the proofs, we will make frequent use of the identity

$$\begin{aligned}&\Vert u(t,x)-u(s,y)\Vert ^2_{L^2(\varOmega )}\nonumber \\&\quad = \sum _{k\in {{\mathbb {N}}}^{d,*}} \frac{1}{2^{n(k)+1}\pi ^d}\, \frac{1-e^{-2 \lambda _k s}}{\lambda _k} \left( e^{-2\lambda _k(t-s)} + 1 - 2 e^{-\lambda _k(t-s)}\prod _{j=1}^d\cos (k_j (x_j-y_j) )\right) \nonumber \\&\qquad +\sum _{k\in {{\mathbb {N}}}^{d,*}} \frac{1}{2^{n(k)+1}\pi ^d}\,\frac{1-e^{-2\lambda _k (t-s)}}{\lambda _k} +\frac{t-s}{(2\pi )^d}, \end{aligned}$$
(3.2)

\(0\le s\le t\). This formula is proved using the Wiener isometry

$$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert ^2_{L^2(\varOmega )} =&\int _0^t dr \int _{\mathbb {T}^d} dz\ (G(t-r; x,z) - G(s-r; y,z))^2\nonumber \\ =&\int _0^s dr \int _{\mathbb {T}^d} dz\ (G(t-r; x,z) - G(s-r; y,z))^2 \nonumber \\&+ \int _s^t dr \int _{\mathbb {T}^d} dz\ G^2(t-r; x,z), \end{aligned}$$
(3.3)

(the last equality holds because the Green’s function \(G(r;y,z)\) vanishes if \(r<0\)) and using the definition (2.2). The first series in (3.2) equals the first integral on the right-hand side of (3.3), while the sum of the last two terms equals the second integral.

We start by analyzing the \(L^2(\varOmega )\)-increments in the time variable of the process \((u(t,x))\).

Proposition 3.1

  1. 1.

    There exist constants \(c_1(d,T)\) and \(c_2(d)\) such that, for all \(s, t\in [0,T]\), \(x\in \mathbb {T}^d\),

    $$\begin{aligned} c_1(d,T)\vert t-s\vert ^{1-d/4}\le \Vert u(t,x)-u(s,x)\Vert ^2_{L^2(\varOmega )} \le c_2(d)\vert t-s\vert ^{1-d/4}. \end{aligned}$$
    (3.4)
  2. 2.

    For any \((t,x), (s,y)\in [0,T]\times \mathbb {T}^d\),

    $$\begin{aligned} c_1(d,T)\vert t-s\vert ^{1-d/4}\le \Vert u(t,x)-u(s,y)\Vert ^2_{L^2(\varOmega )}, \end{aligned}$$
    (3.5)

    where \(c_1(d,T)\) is the same constant as in (3.4).

Proof

Without loss of generality, we suppose \(0\le s< t\le T\).

Use the first equality in (3.3) and then apply Lemma 7.1 with \(h:=t-s\). This yields the second inequality in (3.4).

From (3.2), we have

$$\begin{aligned} \Vert u(t,x)-u(s,x)\Vert ^2_{L^2(\varOmega )}\ge \sum _{k\in \mathbb {N}^{d,*}} \frac{1}{2^{n(k)+1}\pi ^d}\,\frac{1-e^{-2\lambda _k(t-s)}}{\lambda _k}. \end{aligned}$$
(3.6)

Let \(r\ge d\). Applying the inequality \(1-e^{-u}\ge \frac{u}{1+u}\), \(u\ge 0\), we obtain

$$\begin{aligned} \sum _{k\in \mathbb {N}^{d,*}} \frac{1-e^{-2 \lambda _k(t-s)}}{\lambda _k} \ge&2(t-s) \sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}\\ |k|> r \end{array} } \frac{1}{1+2 \lambda _k(t-s)}\\ \ge&\frac{2(t-s)}{r^{-4} + 2(t-s)}\sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}\\ |k| > r \end{array}} \frac{1}{|k|^4} \ge C_d\ \frac{2(t-s)}{r^{-4} + 2(t-s)} r^{d-4}, \end{aligned}$$

since \(\lambda _k\le |k|^4\). Choosing \(r=\left( \tfrac{d^4 T}{t-s}\right) ^{1/4}\), the inequality above yields

$$\begin{aligned} \Vert u(t,x)-u(s,x)\Vert ^2_{L^2(\varOmega )} \ge c_1(d,T)(t-s)^{1-d/4}, \end{aligned}$$

with \( c_1(d,T) = C_d \frac{2d^d T^{d/4}}{1+2d^4 T}\). This is the lower bound in (3.4).

Notice that from (3.2) we deduce

$$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert ^2_{L^2(\varOmega )}\ge \sum _{k\in \mathbb {N}^{d,*}} \frac{1}{2^{n(k)+1}\pi ^d}\,\frac{1-e^{-2\lambda _k(t-s)}}{\lambda _k}. \end{aligned}$$

Hence the proof above yields (3.5). \(\square \)
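Since \(u(0,x)=0\), for \(d=1\) the quantity \(\Vert u(h,x)-u(0,x)\Vert ^2_{L^2(\varOmega )}=E((u(h,x))^2)\) can be computed from (2.6) and compared with the predicted order \(h^{1-d/4}=h^{3/4}\). The following numerical sketch (the truncation level K and the sample values of h are arbitrary choices) checks that the ratio stays bounded over two decades, which would not happen with a wrong exponent:

```python
import math

def var_u(h, K=400):
    # E[u(h,x)^2] for d = 1, computed from (2.6):
    # h/(2*pi) + (1/(2*pi)) * sum_{k >= 1} (1 - exp(-2 k^4 h)) / k^4
    s = h / (2.0 * math.pi)
    for k in range(1, K + 1):
        s += (1.0 - math.exp(-2.0 * k ** 4 * h)) / (2.0 * math.pi * k ** 4)
    return s

# u(0,x) = 0, so E[u(h,x)^2] is the squared time increment at s = 0;
# Proposition 3.1 predicts the order h^{3/4} for d = 1
ratios = [var_u(h) / h ** 0.75 for h in (1e-4, 1e-5, 1e-6)]
assert max(ratios) / min(ratios) < 2.0
assert all(0.05 < r < 2.0 for r in ratios)
```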

For any \(j=1,\ldots ,d\), fix real numbers \(0<c_{0,j}<\pi \) and define \(J_j=[c_{0,j},2\pi -c_{0,j}]\) and \(J= J_1\times \ldots \times J_d\subsetneq \mathbb {T}^d\). The next statement deals with increments in space.

Proposition 3.2

Let \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) be the stochastic process defined in Theorem 2.1 and let J be a compact set as described before. There exist positive constants c(d), C(d), \(c_3(d)\) and \(c_4(d)\) such that, for any \(t>0\), \(x, y \in J\),

$$\begin{aligned}&c_3(d) C_t \left( \log \frac{c(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}\nonumber \\&\quad \le \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )} \le c_4(d)\left( \log \frac{C(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}, \end{aligned}$$
(3.7)

where \(C_t = (1-e^{-2dt})\), and \(\beta = 1_{\{d=2\}}\).

The upper bound holds for any \((t,x)\in [0,T]\times \mathbb {T}^d\). The lower bound holds for any \(x,y\in \mathbb {T}^d\) if \(|x-y|\) is small enough. For \(t=0\), the lower bound is uninformative.

Proof

Upper bound. From (3.2) we deduce

$$\begin{aligned} \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )} = \sum _{k\in {{\mathbb {N}}}^{d,*}} \frac{1}{2^{n(k)}\pi ^d}\,\frac{1-e^{-2\lambda _k t}}{\lambda _k} \left( 1 - \prod _{j=1}^d\cos (k_j (x_j-y_j))\right) . \end{aligned}$$
(3.8)

Observe also that, because of (2.1),

$$\begin{aligned} 1-\prod _{j=1}^d \cos (k_j(x_j-y_j))= & {} 2^{n(k)-1}\pi ^d \sum _{i\in {{\mathbb {Z}}}_2^d} (\varepsilon _{i,k}(x) - \varepsilon _{i,k}(y))^2\nonumber \\\le & {} {\bar{C}}(d)(1\wedge (\vert k\vert \ \vert x-y\vert )^2), \end{aligned}$$
(3.9)

for any \(k\in {{\mathbb {N}}}^{d}\).
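The bound (3.9) can be spot-checked numerically. Indeed, \(1-\prod _j\cos (k_j\delta _j)\le \min (2,\sum _j(k_j\delta _j)^2/2)\), so the explicit constant 2 is one admissible choice of \({\bar{C}}(d)\). The sketch below (sample size, ranges, and \(d=3\) are arbitrary choices) tests this:

```python
import math
import random

random.seed(7)
worst = 0.0
for _ in range(2000):
    k = [random.randint(0, 6) for _ in range(3)]
    if all(kj == 0 for kj in k):
        continue
    dx = [random.uniform(-math.pi, math.pi) for _ in range(3)]
    lhs = 1.0 - math.prod(math.cos(kj * dj) for kj, dj in zip(k, dx))
    kn2 = sum(kj * kj for kj in k)
    dn2 = sum(dj * dj for dj in dx)
    rhs = 2.0 * min(1.0, kn2 * dn2)  # 2 * (1 AND (|k| |x-y|)^2)
    worst = max(worst, lhs / rhs)
assert worst <= 1.0 + 1e-12
```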

Case \(d=1\). Since \(\sum _{k\ge 1}\frac{1}{k^2}< \infty \), from (3.8) and (3.9) we have

$$\begin{aligned} \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )}= \frac{1}{\pi }\sum _{k\ge 1} \frac{1-e^{-2\lambda _kt}}{\lambda _k}(1-\cos (k(x-y))) \le C_d |x-y|^2.\nonumber \\ \end{aligned}$$
(3.10)

Case \(d=2,3\). For any \(k\in {{\mathbb {N}}}^{d}\), let \(I_k=[k_1,k_1+1)\times \cdots \times [k_d,k_d+1)\). Observe that for any d-dimensional vector \(z\in I_k\), we have \(|z| \le |k| + \sqrt{d}\). Fix \(\rho _0\ge \lfloor 3\sqrt{d}\rfloor + 1\) and let \(\alpha >0\). Then,

$$\begin{aligned} T_1(\alpha ,\rho _0)&:=\sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ |k| \ge \rho _0 \end{array}} \frac{1}{|k|^\alpha } \le \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ |k| \ge \rho _0 \end{array}} \int _{I_k} \frac{dz}{(|z|-\sqrt{d})^\alpha } \le C_d\int _{\rho _0}^\infty \frac{\rho ^{d-1}\ d\rho }{(\rho -\sqrt{d})^\alpha }\nonumber \\&\le C_{d,\alpha } \int _{\rho _0}^\infty \rho ^{d-1-\alpha }\ d\rho , \end{aligned}$$
(3.11)

where the last inequality holds because on \([\rho _0,\infty )\), \(\rho -\sqrt{d}\ge \rho /2\).

Let \(\rho _0\) be as above, \(\rho _1 = \left\lfloor (3/2)\sqrt{d}\right\rfloor + 1\), and \(\theta >0\). By arguments similar to those used to obtain (3.11), we deduce

$$\begin{aligned} T_2(\theta ,\rho _0)&= \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ \rho _1\le |k|<\rho _0 \end{array}} \frac{1}{|k|^\theta } \le \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ \rho _1\le |k| <\rho _0 \end{array}} \int _{I_k} \frac{dz}{(|z|-\sqrt{d})^\theta } \le C_d\int _{\rho _1}^{\rho _0}\frac{\rho ^{d-1}\ d\rho }{(\rho -\sqrt{d})^\theta }\nonumber \\&\le C_{d,\theta } \int _{\rho _1}^{\rho _0} \rho ^{d-1-\theta }\ d\rho , \end{aligned}$$
(3.12)

where in the last inequality, we have used that on \([\rho _1,\rho _0]\), \(\rho -\sqrt{d}\ge (1/5)\rho \).

Set \(h= |x-y|\) and \(\rho _0 = \left\lfloor c_d h^{-\frac{2\wedge (4-d)}{4-d}}\right\rfloor +1\), where \(c_d= 3\sqrt{d} (2\pi \sqrt{d})^{\frac{2\wedge (4-d)}{4-d}}\). Notice that \(\rho _0\ge \lfloor 3\sqrt{d}\rfloor + 1\). Then, from (3.8) we have

$$\begin{aligned} \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )} \le C(d)\left[ T_1(4,\rho _0)+ h^2 \left( T_2(2,\rho _0) + \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ 1\le |k| <\rho _1 \end{array}} \frac{1}{|k|^2}\right) \right] .\nonumber \\ \end{aligned}$$
(3.13)

Using (3.11), with the choice of \(\rho _0\) specified above, we see that \(T_1(4,\rho _0)\le C_d h^{2\wedge (4-d)}\) and

$$\begin{aligned} T_2(2,\rho _0) \le C_d\times {\left\{ \begin{array}{ll} \log \left( \frac{C}{h}\right) ,&{} \ d=2,\\ h^{-1},&{}\ d=3. \end{array}\right. } \end{aligned}$$

Since \( \sum _{k\in \mathbb {N}^{d},\ 1\le |k|<\rho _1} \frac{1}{|k|^2}={\tilde{c}}_d <\infty \), substituting the above estimates in the right-hand side of (3.13) we obtain the upper bound in (3.7).
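The logarithmic behaviour of \(T_2(2,\rho _0)\) for \(d=2\) can also be seen numerically: doubling the truncation radius should add a roughly constant amount to the sum. A sketch, where the radii are arbitrary choices:

```python
from itertools import product

def S(R):
    # sum of 1/|k|^2 over k in N^{2,*} with |k| <= R
    s = 0.0
    for k in product(range(1, R + 1), repeat=2):
        n2 = k[0] * k[0] + k[1] * k[1]
        if n2 <= R * R:
            s += 1.0 / n2
    return s

s50, s100, s200 = S(50), S(100), S(200)
inc_a = s100 - s50
inc_b = s200 - s100
# logarithmic growth: each doubling of the radius adds roughly (pi/2) log 2
assert 0.8 < inc_b / inc_a < 1.25
```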

Lower bound. Case \(|x-y|\) small. We start from (3.8) to obtain

$$\begin{aligned} \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )} \ge \frac{1-e^{-2t}}{\pi ^d} \sum _{k\in {{\mathbb {N}}}^{d,*}} \frac{1 - \prod _{j=1}^d\cos (k_j (x_j-y_j) )}{|k|^4}.\nonumber \\ \end{aligned}$$
(3.14)

Let T(xy) denote the series on the right-hand side of (3.14). Because for any \(z\in [-\pi /2,\pi /2]\), we have \(\cos z \le 1-(\tfrac{2}{\pi }z)^2\), we deduce

$$\begin{aligned} T(x,y)\ge \sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}\\ k_j|x_j-y_j|\le \pi /2 \end{array}} \frac{1-\prod _{j=1}^d(1-[(2/\pi )k_j\vert x_j-y_j\vert ]^2)}{\vert k\vert ^4 }. \end{aligned}$$
(3.15)

Case \(d=1\). Using (3.15), we obtain

$$\begin{aligned} T(x,y)&\ge (\tfrac{2}{\pi })^2 |x-y|^2 \sum _{\begin{array}{c} k\in {{\mathbb {N}}}\setminus \{0\}\\ k|x-y|\le \pi /2 \end{array}} \frac{1}{k^2} \ge (\tfrac{2}{\pi })^2 |x-y|^2 \int _1^{\frac{\pi }{2}|x-y|^{-1}} \rho ^{-2}\ d\rho \\&= (\tfrac{2}{\pi })^2 |x-y|^2\left( 1-\frac{2}{\pi }|x-y|\right) . \end{aligned}$$

Assume \(|x-y|\le \tfrac{c_0\pi }{2}\), with \(0< c_0<1\) arbitrarily close to 1. Then \(1-\frac{2}{\pi }|x-y|\ge 1-c_0\) and, in this case,

$$\begin{aligned} \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )} \ge 4 (1-c_0)\frac{1-e^{-2t}}{\pi ^3} |x-y|^2. \end{aligned}$$
(3.16)

Case \(d=2,3\). Consider the series on the right-hand side of (3.15) and apply the formula (7.2) of Lemma 7.2 with \(m:=d\) and \(p_j=[(2/\pi ) k_j|x_j-y_j|]^2\), to see that

$$\begin{aligned} T(x,y)\ge (2/\pi )^2 \left[ S_1(x,y) - (2/\pi )^2 S_2(x,y)\right] , \end{aligned}$$
(3.17)

where

$$\begin{aligned} S_1(x,y)&=\sum _{\begin{array}{c} k\in \mathbb {N}^{d,*} \\ k_j\vert x_j-y_j\vert \le \pi /4 \end{array}}\sum _{j=1}^d\frac{(k_j\vert x_j-y_j\vert )^2}{\vert k\vert ^4 },\\ S_2(x,y)&= \sum _{\begin{array}{c} k\in \mathbb {N}^{d,*} \\ k_j\vert x_j-y_j\vert \le \pi /4 \end{array}} \sum _{\begin{array}{c} j_1,j_2\in \{1,\ldots ,d\},\\ j_1< j_2 \end{array}}\frac{(k_{j_1}\vert x_{j_1}-y_{j_1}\vert k_{j_2}\vert x_{j_2}-y_{j_2}\vert )^2}{\vert k\vert ^{4}}. \end{aligned}$$

Note that the condition \(k_j\vert x_j-y_j\vert \le \pi /4\) implies \(1-(2/\pi )^2(k_j\vert x_j-y_j\vert )^2\ge 3/4\). Hence, for \(d=2\) we see that

$$\begin{aligned}&\sum _{j=1}^2 (k_j|x_j-y_j|)^2 - (2/\pi )^2(k_1|x_1-y_1|)^2(k_2|x_2-y_2|)^2\\&\quad = (k_1|x_1-y_1|)^2\left( 1- (2/\pi )^2(k_2|x_2-y_2|)^2\right) + (k_2|x_2-y_2|)^2\\&\quad \ge \frac{3}{4} \sum _{j=1}^2 (k_j|x_j-y_j|)^2. \end{aligned}$$

Similarly, for \(d=3\) we have

$$\begin{aligned} \sum _{j=1}^3 (k_j|x_j-y_j|)^2 \left( 1-(2/\pi )^2(k_{j+1}|x_{j+1}-y_{j+1}|)^2\right) \ge \frac{3}{4} \sum _{j=1}^3 (k_j|x_j-y_j|)^2, \end{aligned}$$

where in the sum above, we set \(j+1=1\) if \(j=3\).

Thus, in both dimensions \(d=2, 3\),

$$\begin{aligned} S_1(x,y)-(2/\pi )^2 S_2(x,y) \ge (3/4)S_1(x,y). \end{aligned}$$

The next goal is to find a lower bound for \(S_1(x,y)\). Without loss of generality we may and will assume \(\vert x_1-y_1\vert \le \vert x_2-y_2\vert \le ...\le \vert x_d-y_d\vert \). Set \(\mathbb {N}^{d,*}_{\le }:=\{k\in \mathbb {N}^{d,*}:k_1\le k_2\le ...\le k_d\}\). Then,

$$\begin{aligned} S_1(x,y)\ge \sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}_{\le } \\ k_j\vert x_j-y_j\vert \le \pi /4 \end{array}}\sum _{j=1}^d\frac{(k_j\vert x_j-y_j\vert )^2}{\vert k\vert ^4 } \ge \frac{1}{\sqrt{2} d}\ \vert x-y\vert ^2\sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}_{\le }\\ k_j\vert x_j-y_j\vert \le \pi /4 \end{array}}\frac{1}{\vert k\vert ^2}. \end{aligned}$$
(3.18)

Indeed, set \(K=(k_j^2)_j\) and \(Z=(|x_j-y_j|^2)_j\). Both sequences are nondecreasing in j, so Chebyshev's sum inequality applied to the Euclidean scalar product \(K\cdot Z=\sum _{j=1}^d (k_j\vert x_j-y_j\vert )^2\) gives

$$\begin{aligned} \sum _{j=1}^d (k_j\vert x_j-y_j\vert )^2\ge \frac{1}{d}\left( \sum _{j=1}^d k_j^2\right) \left( \sum _{j=1}^d |x_j-y_j|^2\right) =\frac{|k|^2|x-y|^2}{d}\ge \frac{1}{\sqrt{2}}\frac{|k|^2|x-y|^2}{d}. \end{aligned}$$
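The inequality used in (3.18) can also be verified exhaustively on a small grid; the sketch below (the grid size, the displacement vectors, and \(d=3\) are arbitrary choices) checks that the margin is nonnegative for ordered wave numbers and ordered coordinate increments:

```python
import math
from itertools import product

d = 3
margin = float("inf")
for k in product(range(1, 5), repeat=d):
    ks = sorted(k)  # k_1 <= ... <= k_d, as in N^{d,*}_<=
    for dx in [(0.0, 0.0, 0.4), (0.0, 0.2, 0.3), (0.1, 0.2, 0.9), (0.3, 0.3, 0.3)]:
        lhs = sum((kj * dj) ** 2 for kj, dj in zip(ks, dx))
        # the bound: |k|^2 |x-y|^2 / (sqrt(2) d)
        bound = sum(kj * kj for kj in ks) * sum(dj * dj for dj in dx)
        bound /= math.sqrt(2.0) * d
        margin = min(margin, lhs - bound)
assert margin >= 0.0
```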

Assume that \(|x-y|\le \frac{\pi }{5\sqrt{d}}\). The set \(\{k\in {{\mathbb {N}}}^{d,*}: |k|\le \frac{\pi }{4}|x-y|^{-1}\}\) is nonempty and is included in \(\{k\in {{\mathbb {N}}}^{d,*}: k_j\le \frac{\pi }{4}|x_j-y_j|^{-1}, j=1,\ldots ,d\}\). Hence,

$$\begin{aligned} \sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}_{\le }\\ k_j\vert x_j-y_j\vert \le \pi /4 \end{array}}\frac{1}{\vert k\vert ^2} \ge \frac{1}{d!} \sum _{\begin{array}{c} k\in \mathbb {N}^{d,*}\\ |k|\le \frac{\pi }{4}|x-y|^{-1} \end{array}} \frac{1}{|k|^2} \ge C_d\int _{\sqrt{d}}^{\frac{\pi }{4}|x-y|^{-1}}\rho ^{d-3} \ d\rho . \end{aligned}$$

For \(d=2\), the last integral equals \(\log \left( \frac{\pi }{4\sqrt{d} |x-y|}\right) \), while for \(d=3\), it is equal to \((\pi /4) |x-y|^{-1}-\sqrt{d}\). Observe that if \(|x-y|\le \frac{\pi }{5\sqrt{d}}\) this expression is bounded below by \((\pi /20)|x-y|^{-1}\).

Summarizing, from (3.18) and assuming \(|x-y|\le \frac{\pi }{5\sqrt{d}}\), the discussion above proves

$$\begin{aligned} S_1(x,y)\ge C_d \times {\left\{ \begin{array}{ll} \log \left( \frac{\pi }{4\sqrt{d} |x-y|}\right) |x-y|^2, &{} d=2,\\ |x-y|, &{} d=3. \end{array}\right. } \end{aligned}$$
(3.19)

Therefore, for any \(x,y\in \mathbb {T}^d\) such that \(0\le |x-y|\le \frac{\pi }{5\sqrt{d}}\), we have proved that the lower bound of (3.7) holds with the constant \(c_3(d)\) depending only on d and \(C_t = 1-e^{-2t}\).

Lower bound. Case \(|x-y|\) large. We recall a standard “continuity-compactness” argument that we will use to extend the validity of the lower bound established in the previous step, to every \(x,y\in J\) satisfying \(\frac{\pi }{5\sqrt{d}}<|x-y|<2\pi \).

Consider the function

$$\begin{aligned} J^2\ni (x,y) \mapsto \varphi _t(x,y) = \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )}, \end{aligned}$$

where \(t>0\) is fixed. Because of the upper bound in (3.7), this is a continuous function. Furthermore, from (3.8), we see that it is strictly positive whenever \(x\ne y\). Thus, for any \(c_0>0\), the minimum value m of \(\varphi _t\) over the compact set \(\{(x,y)\in J^2: |x-y|\ge c_0\}\) is attained, and \(m>0\). Referring to the left-hand side of (3.7), let M be the maximum of the function

$$\begin{aligned} J^2\ni (x,y) \mapsto \left( \log \frac{c(d)}{|x-y|}\right) ^\beta \ |x-y|^{2\wedge (4-d)},\quad \beta =1_{\{d=2\}}. \end{aligned}$$

Taking \(c_0 = \tfrac{\pi }{5\sqrt{d}}\), we deduce,

$$\begin{aligned} \Vert u(t,x)-u(t,y)\Vert ^2_{L^2(\varOmega )}\ge \frac{m}{M}\left( \log \frac{c(d)}{|x-y|}\right) ^\beta \ |x-y|^{2\wedge (4-d)},\quad \beta =1_{\{d=2\}}, \end{aligned}$$

for any \(x,y\in J\) such that \(\frac{\pi }{5\sqrt{d}}<|x-y|<2\pi \).

This ends the proof of the lower bound and of the Proposition. \(\square \)

With Propositions 3.1 and 3.2 we obtain an equivalent expression of the canonical pseudo-distance (3.1), as stated in the next theorem.

Theorem 3.1

Let \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) be the stochastic process defined in Theorem 2.1.

  1. 1.

    There exist constants \(c_5(d)\), C(d) such that for any \((t,x),(s,y)\in [0,T]\times \mathbb {T}^d\),

    $$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\le c_5(d) \left( \vert t-s\vert ^{1-d/4}+\left( \log \frac{C(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}\right) ,\nonumber \\ \end{aligned}$$
    (3.20)

    with \(\beta =1_{\{d=2\}}\).

  2. 2.

    Fix \(t_0\in (0,T]\) and let J be a compact subset of \(\mathbb {T}^d\) as in Proposition 3.2. There exist constants \(c_6(d, t_0,T)\) and c(d) such that, for any \((t,x),(s,y)\in [t_0,T]\times J,\)

    $$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\ge&c_6(d,t_0,T)\nonumber \\&\times \left( \vert t-s\vert ^{1-d/4}+\left( \log \frac{c(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}\right) , \end{aligned}$$
    (3.21)

    with \(\beta =1_{\{d=2\}}\).

Proof

The estimate from above follows by applying the triangle inequality and the upper bounds in (3.4) and (3.7), which hold for any \((t,x),(s,y)\in [0,T]\times \mathbb {T}^d\). The value of the multiplicative constant in the upper bound is \(c_5(d)=2[c_2(d)+c_4(d)]\), where \(c_2(d)\), \(c_4(d)\) are given in (3.4), (3.7), respectively.

To prove the lower bound, we consider two cases (see Propositions 3.1 and 3.2 for the notations of the constants).

Case 1: \(c_2(d) |t-s|^{1-d/4}\le \frac{c_3(d)C_{t_0}}{4} \left( \log \frac{c(d)}{|x-y|}\right) ^\beta |x-y|^{2\wedge (4-d)}\), where \(C_{t_0}= 1-e^{-2t_0}\).

Applying the triangle inequality and then, using the lower bound in (3.7) and the upper bound in (3.4) we obtain,

$$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2&\ge \frac{1}{2}\Vert u(t,x)-u(t,y)\Vert _{L^2(\varOmega )}^2-\Vert u(t,y)-u(s,y)\Vert _{L^2(\varOmega )}^2\\&\ge \frac{c_3(d) C_{t_0}}{2}\left( \log \frac{c(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}-c_2(d)\vert t-s\vert ^{1-d/4}\\&\ge \frac{c_3(d) C_{t_0}}{8}\left( \log \frac{c(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}+\frac{c_2(d)}{2}\vert t-s\vert ^{1-\frac{d}{4}}. \end{aligned}$$

Case 2: \(c_2(d) |t-s|^{1-d/4}> \frac{c_3(d)C_{t_0}}{4} \left( \log \frac{c(d)}{|x-y|}\right) ^\beta |x-y|^{2\wedge (4-d)}\).

By (3.5), we have

$$\begin{aligned}&\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\ge c_1(d,T)\vert t-s\vert ^{1-d/4}= \frac{c_1(d,T)}{c_2(d)}\left[ c_2(d)\vert t-s\vert ^{1-d/4}\right] \\&\quad \ge \frac{c_1(d,T)}{c_2(d)}\left( \frac{c_2(d)}{2}\vert t-s\vert ^{1-d/4} +\frac{c_3(d)C_{t_0}}{8} \left( \log \frac{c(d)}{|x-y|}\right) ^\beta |x-y|^{2\wedge (4-d)}\right) . \end{aligned}$$

The proof of the theorem is complete. \(\square \)

4 Further second order properties of the random field u

Throughout this section, we use the notation

$$\begin{aligned} \sigma _{t,x}^2 = E((u(t,x))^2),\quad \rho _{(t,x),(s,y)} =\text {Corr}(u(t,x),u(s,y)),\quad s,t\in (0,\infty ),\ x,y\in \mathbb {T}^d. \end{aligned}$$

Lemma 4.1

  1. 1.

    There exists a constant \(c_{d,T}\) such that for all \(s,t\in (0,T]\), \(x,y\in \mathbb {T}^d\),

    $$\begin{aligned} \vert \sigma _{t,x}^2-\sigma _{s,y}^2\vert \le c_{d,T}\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^{2}. \end{aligned}$$
    (4.1)
  2. 2.

    Fix \(t_0\in (0,T]\). There exist constants \(0<c_{d,t_0} < C_{d,T}\) such that for any \((t,x)\in [t_0,T]\times \mathbb {T}^d\),

    $$\begin{aligned} c_{d,t_0} \le \sigma _{t,x}^2 \le C_{d,T}. \end{aligned}$$
    (4.2)
  3. 3.

    Fix \(t_0\in (0,T]\). For any \((t,x),(s,y)\in [t_0,T]\times \mathbb {T}^d\) such that \((t,x)\ne (s,y)\),

    $$\begin{aligned} \rho _{(t,x),(s,y)}=\frac{E(u(t,x)u(s,y))}{\sigma _{t,x}\sigma _{s,y}}<1. \end{aligned}$$

Proof

  1. 1.

    Without loss of generality we may assume \(0<s\le t\). Applying (2.6) yields

$$\begin{aligned} \vert \sigma _{t,x}^2-\sigma _{s,y}^2\vert = \frac{t-s}{(2\pi )^d} + \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ 0\le n(k)\le d-1 \end{array}} \frac{e^{-2\lambda _ks}\left( 1-e^{-2\lambda _k(t-s)}\right) }{2^{n(k)+1}\pi ^d\lambda _k}. \end{aligned}$$

    Use the inequality (3.5) to get \(\frac{t-s}{(2\pi )^d}\le {\bar{c}}_{d,T}\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\). Since \(e^{-2\lambda _ks}\le 1\) and because of (3.2), we see that the second term on the right-hand side of this equality is bounded above by \(\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\). This ends the proof of (4.1).

  2.

    The claim follows from (2.6), observing that

    $$\begin{aligned} \sigma _{t,x}^2 \ge \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ n(k)= d-1 \end{array}} \frac{1-e^{-2\lambda _k t}}{2^{n(k)+1}\pi ^d\lambda _k} \ge \frac{1-e^{-2t}}{2^d\pi ^d}, \quad t\ge 0. \end{aligned}$$
  3.

    Assume that \(\rho _{(t,x),(s,y)}=1\). Then, there would exist \(\lambda \in \mathbb {R}\setminus \{0\}\) such that \(\Vert u(t,x)-\lambda u(s,y)\Vert _{L^2(\varOmega )}=0\). This leads to a contradiction. Indeed, consider first the case \(0<s<t\). By the isometry property of the Wiener integral,

    $$\begin{aligned} \Vert u(t,x)-\lambda u(s,y)\Vert _{L^2(\varOmega )}^2 =&\int _0^s dr \int _{\mathbb {T}^d} dz (G(t-r;x,z)-\lambda G(s-r;y,z))^2\nonumber \\&+\int _s^t dr \int _{\mathbb {T}^d} dz\ G^2(t-r;x,z)\nonumber \\ \ge&\int _0^{t-s} dr \int _{\mathbb {T}^d} dz\ G^2(r;x,z)>0, \end{aligned}$$
    (4.3)

    by the properties of G.

Next, we assume \(t=s\) and \(x\ne y\). If \(\lambda =1\), we see that

$$\begin{aligned} \Vert u(t,x)-\lambda u(t,y)\Vert _{L^2(\varOmega )}=\Vert u(t,x)-u(t,y)\Vert _{L^2(\varOmega )}=0 \end{aligned}$$

is in contradiction with the lower bound in (3.7). If \(\lambda \ne 1\), we apply Lemma 3.4 in [5] to the stochastic process \((u(t,x), \ x\in \mathbb {T}^d)\), with \(t\in [t_0,T]\) fixed. Notice that, because of statements 1 and 2 proved above and Proposition 3.2, the hypotheses of that lemma hold. We deduce

$$\begin{aligned} \Vert u(t,x)-\lambda u(t,y)\Vert _{L^2(\varOmega )}^2\ge c(1-\lambda )^2>0. \end{aligned}$$

\(\square \)

5 Solution to the deterministic homogeneous equation

In this section, we consider Eq. (2.3) with \(\sigma =0\), whose solution in the classical sense, on a finite time horizon, is given by the function

$$\begin{aligned} {}[0,T]\times \mathbb {T}^d\ni (t,x)\ \longrightarrow I_0(t,x) = \int _{\mathbb {T}^d} G(t;x,z) v_0(z) dz. \end{aligned}$$

In the next proposition, we prove the joint Lipschitz continuity of this mapping.

Proposition 5.1

Let \(v_0\in L^1(\mathbb {T}^d)\). Then, the function \((t,x) \mapsto I_0(t,x)\) is jointly Lipschitz continuous.

Proof

Increments in time. Fix \(0<s\le t\le T\). Using the definition of \(G(t;x,z)\) given in (2.2), we see that for any \(x\in \mathbb {T}^d\),

$$\begin{aligned}&|I_0(t,x) - I_0(s,x)|\\&\quad = \left| \int _{\mathbb {T}^d} dz\ v_0(z) \sum _{k\in {{\mathbb {N}}}^{d,*}} \left( e^{-\lambda _k t} - e^{-\lambda _k s}\right) \sum _{\begin{array}{c} i\in {{\mathbb {Z}}}_2^d\\ (i,k)\in ({{\mathbb {Z}}}_2\times {{\mathbb {N}}})^d_+ \end{array}}\varepsilon _{i,k}(x)\varepsilon _{i,k}(z)\right| \\&\quad \le \int _{\mathbb {T}^d} dz \vert v_0(z)\vert \sum _{k\in {{\mathbb {N}}}^{d,*}}\frac{t-s}{\lambda _k}\ \frac{1}{2^{n(k)}\pi ^d} \left| \prod _{j=1}^d \cos (k_j(x_j-z_j))\right| \\&\quad \le C_d (t-s)\Vert v_0\Vert _{L^1(\mathbb {T}^d)} \sum _{k\in {{\mathbb {N}}}^{d,*}}\frac{1}{\lambda _k} \le \left[ C_d \Vert v_0\Vert _{L^1(\mathbb {T}^d)}\right] (t-s). \end{aligned}$$

Increments in space. Let \(x,y\in \mathbb {T}^d\). Then, for any \(t\in [0,T]\),

$$\begin{aligned} |I_0(t,x) - I_0(t,y)|&= \left| \int _{\mathbb {T}^d} dz\ v_0(z) \sum _{(i,k)\in ({{\mathbb {Z}}}_2\times {{\mathbb {N}}})^d_+} e^{-\lambda _k t} (\varepsilon _{i,k}(x)-\varepsilon _{i,k}(y)) \varepsilon _{i,k}(z)\right| \\&\le |x-y|\ \sum _{k\in {{\mathbb {N}}}^{d,*}} |k| e^{-\lambda _k t}\int _{\mathbb {T}^d} dz \vert v_0(z)\vert . \end{aligned}$$

Up to a multiplicative constant depending on d, the series in the above expression is bounded by \(\int _0^\infty \rho ^d e^{-\frac{\rho ^4}{2^{d-1}}}\, d\rho = C_d\varGamma _E\left( \frac{d+1}{4}\right) \), where \(\varGamma _E\) denotes the Euler Gamma function.
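For completeness, the Gamma-function identity behind this bound follows from a standard substitution (sketched here with \(a = 2^{-(d-1)}\), the constant appearing in the exponent above):

```latex
\int_0^\infty \rho^{d} e^{-a\rho^4}\, d\rho
  = \frac{1}{4}\, a^{-\frac{d+1}{4}} \int_0^\infty u^{\frac{d+1}{4}-1} e^{-u}\, du
  = \frac{1}{4}\, a^{-\frac{d+1}{4}}\, \varGamma_E\!\left(\frac{d+1}{4}\right),
  \qquad u = a\rho^4,
```

with the factor \(\frac{1}{4} a^{-(d+1)/4}\) absorbed into the constant \(C_d\).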

The proof of the proposition is complete. \(\square \)

Remark 5.1

Combining Proposition 5.1 with the estimate (3.20) yields the following. The sample paths of the stochastic process \((v(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) are Hölder continuous, jointly in \((t,x)\), of degree \((\eta _1,\eta _2)\) with

$$\begin{aligned} \eta _1 \in \left( 0, \tfrac{4-d}{8}\right) ,\quad \eta _2\in \left( 0, \left( 1\wedge \tfrac{4-d}{2}\right) \right) . \end{aligned}$$

Indeed, \(v(t,x) = I_0(t,x) + u(t,x)\), and the process \((u(t,x))\) is Gaussian. Hence, the claim follows from Kolmogorov’s continuity criterion (see e.g. [9]).
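As a quick numerical illustration of Remark 5.1 (the function name `holder_sups` is ours, not from the paper), the excluded upper endpoints of the Hölder exponent ranges in each spatial dimension are:

```python
# Upper endpoints (not attained) of the joint Hölder exponent ranges
# (eta_1, eta_2) from Remark 5.1: eta_1 < (4-d)/8 and eta_2 < 1 ∧ (4-d)/2.

def holder_sups(d: int) -> tuple[float, float]:
    """Return the (excluded) suprema of the Hölder ranges for dimension d."""
    eta1_sup = (4 - d) / 8            # time-direction exponent bound
    eta2_sup = min(1.0, (4 - d) / 2)  # space-direction exponent bound
    return eta1_sup, eta2_sup

for d in (1, 2, 3):
    print(d, holder_sups(d))
# d=1: (0.375, 1.0); d=2: (0.25, 1.0); d=3: (0.125, 0.5)
```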

6 Hitting probabilities and polarity of sets

Consider the Gaussian random field

$$\begin{aligned} V=\left( V(t,x) = ( v_1(t,x), \ldots ,v_D(t,x)),\ (t,x)\in [0,T]\times \mathbb {T}^d\right) , \end{aligned}$$

where \((v_j(t,x)),\ j=1,\ldots , D\), are independent copies of the process \((v(t,x))\) defined in (2.4). For simplicity, we will take \(\sigma = 1\) there. Recall that \(A\in \mathcal {B}({\mathbb {R}}^D)\) is called polar for the random field V if \(P(V(I\times J)\cap A\ne \emptyset )=0\), and nonpolar otherwise (the sets I and J are specified before Theorem 6.1). In this section, we discuss this notion, relying mainly on the results of [5]. We first introduce some notation. For \(\tau \in {\mathbb {R}}_+\), let

$$\begin{aligned} q_1(\tau )=&\tau ^{(4-d)/8},\quad q_2(\tau )=\left( \log \frac{C(d)}{\tau }\right) ^{\frac{\beta }{2}}\tau ^{1\wedge ((4-d)/2)},\quad \beta = 1_{\{d=2\}},\nonumber \\ \bar{g}_q(\tau )=&\tau ^D \left( q_1^{-1}(\tau )\right) ^{-1} \left( q_2^{-1}(\tau )\right) ^{-d}, \end{aligned}$$
(6.1)

where the subscript q in the last expression refers to the couple \((q_1,q_2)\).

Let \(D_0 = [(4-d)/8]^{-1}+d[1\wedge ((4-d)/2)]^{-1}\). If \(D>D_0\), the functions \(\bar{g}_q\) and \((\bar{g}_q)^{-1}\) satisfy the conditions required by the definitions of the \(\bar{g}_q\)-Hausdorff measure and the \((\bar{g}_q)^{-1}\)-capacity, respectively (see [5][Section 5] for details).
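For concreteness (the function name `critical_dimension` is ours), the critical value \(D_0\) can be computed exactly in each spatial dimension:

```python
from fractions import Fraction

def critical_dimension(d: int) -> Fraction:
    """D_0 = [(4-d)/8]^{-1} + d * [1 ∧ ((4-d)/2)]^{-1}, for d = 1, 2, 3."""
    alpha1 = Fraction(4 - d, 8)                    # time exponent
    alpha2 = min(Fraction(1), Fraction(4 - d, 2))  # space exponent
    return 1 / alpha1 + d / alpha2

for d in (1, 2, 3):
    print(d, critical_dimension(d))
# d=1: 11/3, d=2: 6, d=3: 14
```

In particular, for integer D the condition \(D>D_0\) reads \(D\ge 4\) for \(d=1\), \(D\ge 7\) for \(d=2\), and \(D\ge 15\) for \(d=3\).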

In the next theorem, \(I=[t_0,T]\) and \(J=[0,M]^d\), where \(0<t_0\le T\) and \(M\in (0,2\pi )\).

Theorem 6.1

The hitting probabilities relative to the D-dimensional random field V satisfy the following bounds.

  1.

    Let \(D>D_0\).

    (a)

      There exists a constant \(C:=C(I,J,D,d)\) such that for any Borel set \(A\in \mathcal {B}({\mathbb {R}}^D)\),

      $$\begin{aligned} P(V(I\times J)\cap A\ne \emptyset )\le C\mathcal {H}_{\bar{g}_q}(A). \end{aligned}$$
      (6.2)
    (b)

      Let \(N>0\) and \(A\in \mathcal {B}({\mathbb {R}}^D)\) be such that \(A\subset B_N(0)\). There exists a constant \(c:=c(I,J,N,D,d)\) such that

      $$\begin{aligned} P(V(I\times J)\cap A\ne \emptyset )\ge c\text {Cap}_{(\bar{g}_q)^{-1}}(A). \end{aligned}$$
      (6.3)
  2.

    Let \(D<D_0\) and \(A\in \mathcal {B}({\mathbb {R}}^D)\).

    (a)

      \(\mathcal {H}_{\bar{g}_q}(A)= \infty \) and therefore (6.2) holds, but it is not informative.

    (b)

      If A is bounded, there exists a constant \(c:=c(I,J,N,D,d)>0\) such that

      $$\begin{aligned} P(V(I\times J)\cap A\ne \emptyset )\ge c = c\,\text {Cap}_{(\bar{g}_q)^{-1}}(A). \end{aligned}$$
      (6.4)

      Hence, (6.3) holds.

Proof

Consider first the case \(D>D_0\). The upper bound (6.2) follows by applying [5][Thm. 3.3], while the lower bound (6.3) follows from [5][Thm. 3.5]. Indeed, from Theorem 3.1, Lemma 4.1 and Proposition 5.1, we deduce that the random field V satisfies the assumptions of those two theorems. As for the hypotheses required on \(q_1\), \(q_2\) and \({\bar{g}}_q\), they are proved in [5][Section 5].

Let \(D<D_0\). We have \(\lim _{\tau \downarrow 0}\bar{g}_q(\tau ):= \bar{g}_q(0)=\infty \) and then, by convention, \(\mathcal {H}_{\bar{g}_q}(A)= \infty \).

Next, we prove (6.4) by using arguments similar to those in [3][Theorem 2.1, p. 1348].

For \(\varepsilon \in (0,1)\) and \(z\in A\), we denote by \(B_\varepsilon (z)\) the ball centred at z with radius \(\varepsilon \) and define

$$\begin{aligned} J_\varepsilon (z) = \frac{1}{(2\varepsilon )^D}\int _I\int _J ds dy 1_{B_\varepsilon (z)}(V(s,y)). \end{aligned}$$

Since \(\{J_\varepsilon (z)>0\} \subset \{V(I\times J) \cap A^{(\varepsilon )}\ne \emptyset \}\), it suffices to prove that \(P(J_\varepsilon (z)>0)>C\) for some positive constant C. Using the Paley–Zygmund inequality, this amounts to checking that

$$\begin{aligned} E\left( J_\varepsilon (z)\right) > C_1,\quad E\left[ \left( J_\varepsilon (z)\right) ^2\right] < C_2, \end{aligned}$$

for some \(C_1, C_2 >0\).
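Indeed, by the Paley–Zygmund inequality (in its second-moment form), these two bounds combine into the required uniform positive lower bound:

```latex
P(J_\varepsilon(z) > 0)
  \;\ge\; \frac{\bigl[E\left(J_\varepsilon(z)\right)\bigr]^{2}}{E\left[\left(J_\varepsilon(z)\right)^{2}\right]}
  \;\ge\; \frac{C_1^{2}}{C_2} \;>\; 0.
```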

Because of (4.2), the one-point density of \(V(t,x)\) is bounded uniformly on \((t,x)\in [t_0,T]\times \mathbb {T}^d\). This yields \(E\left( J_\varepsilon (z)\right) >C_1\).

From Theorem 3.1, we deduce that the two-point densities of (V(sy), V(tx)) satisfy

$$\begin{aligned} p_{s,y;t,x}(z_1,z_2) \le \frac{C}{[\rho ((s,y),(t,x))]^D}\exp \left( -\frac{c |z_1-z_2|^2}{[\rho ((s,y),(t,x))]^2}\right) , \ z_1,z_2\in A, \end{aligned}$$

where \(\rho ((s,y),(t,x))= |t-s|^{\frac{4-d}{8}}+ \left( \log \frac{C(d)}{|x-y|}\right) ^{\frac{\beta }{2}}|x-y|^{1\wedge ((4-d)/2)}\), \(\beta =1_{\{d=2\}}\) (apply the arguments of [3][Proposition 3.1]). Consequently,

$$\begin{aligned} E\left[ \left( J_\varepsilon (z)\right) ^2\right] \le {\tilde{C}} \int _{I\times J} ds dy \int _{I\times J} dt dx\ [\rho ((s,y),(t,x))]^{-D}. \end{aligned}$$

Set \(\alpha _1 = \frac{4-d}{8}\), \(\alpha _2 = 1\wedge ((4-d)/2)\), so that \(D_0 =\frac{1}{\alpha _1} + \frac{d}{\alpha _2} \). Since the constant C(d) is such that \(\log \frac{C(d)}{|x-y|}\ge 1\), the last integral is bounded from above by

$$\begin{aligned} I= C\int _{I\times J} ds dy \int _{I\times J} dt dx\ [|t-s|^{\alpha _1} + |x-y|^{\alpha _2}]^{-D}. \end{aligned}$$

After some computations, we see that \(I\le C \int _0^{c_0} \rho ^{-D + \frac{1}{\alpha _1} + \frac{d}{\alpha _2}-1}\ d\rho \), which is finite if \(D<D_0\). This ends the proof of the inequality in (6.4).
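The computations alluded to above can be sketched as follows (a standard dyadic decomposition; the constants are ours): the set of \((s,y,t,x)\) with \(|t-s|^{\alpha _1}+|x-y|^{\alpha _2}\in [2^{-n-1}c_0, 2^{-n}c_0]\) has Lebesgue measure at most \(C(2^{-n}c_0)^{1/\alpha _1 + d/\alpha _2}\), whence

```latex
I \;\le\; C \sum_{n\ge 0} \left(2^{-n} c_0\right)^{-D}
    \left(2^{-n} c_0\right)^{\frac{1}{\alpha_1}+\frac{d}{\alpha_2}}
  \;\asymp\; \int_0^{c_0} \rho^{\,-D+\frac{1}{\alpha_1}+\frac{d}{\alpha_2}-1}\, d\rho,
```

with the region where the sum exceeds \(c_0\) contributing only a constant.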

Since \(\lim _{\tau \downarrow 0}[\bar{g}_q(\tau )]^{-1}:= [\bar{g}_q(0)]^{-1}=0\), by convention \(\text {Cap}_{(\bar{g}_q)^{-1}}(A)=1\). This yields the last equality in (6.4). \(\square \)

Part 1 of Theorem 6.1 implies the following.

Corollary 6.1

Let \(A\in \mathcal {B}({\mathbb {R}}^D)\) and assume \(D>D_0\).

  1.

    If \(\mathcal {H}_{\bar{g}_q}(A)=0\) then A is polar for V.

  2.

    If A is bounded and \(\text {Cap}_{(\bar{g}_q)^{-1}}(A)>0\), then A is nonpolar for V.

Corollary 6.2

If \(D>D_0\), points \(z\in {\mathbb {R}}^D\) are polar for V; if \(D<D_0\), they are nonpolar.

Proof

Assume first \(D>D_0\). By the definition of the \(\bar{g}_q\)-Hausdorff measure, we have \(\mathcal {H}_{\bar{g}_q}(\{z\})=0\). Hence, the polarity of \(\{z\}\) follows from (6.2).

One can give another proof of this fact without appealing to Theorem 6.1. Indeed, we have \(\lim _{\tau \downarrow 0}\bar{g}_q(\tau ):= \bar{g}_q(0)=0\). This is obvious for \(d=1,3\). For \(d=2\), it is proved in [5][Lemma 5.1]. This property implies \(P(V(I\times J)\cap \{z\}\ne \emptyset )=0\) (see [5][Corollary 3.2]). Therefore \(\{z\}\) is polar for V.

If \(D<D_0\), we apply (6.4) to \(A=\{z\}\) and deduce that \(\{z\}\) is nonpolar. Actually, if \(D<D_0\) any bounded Borel set A is nonpolar for V. \(\square \)

Consider the case \(d=1,3\), for which the \(\bar{g}_q\)-Hausdorff measure and the \((\bar{g}_q)^{-1}\)-capacity reduce to the classical Hausdorff measure and the Bessel–Riesz capacity, respectively. Assume \(D>D_0\). From Theorem 6.1, and using the same proof as that of Corollary 5.3 (a) in [2], we obtain the following geometric property of the path of V:

$$\begin{aligned} {\text {dim}_\mathrm{H}}(V(I\times J)) = D_0,\ a.s., \end{aligned}$$

where \(\text {dim}_\mathrm{H}\) refers to the Hausdorff dimension (see e.g. [6][Chapter 10, Section 2, p. 130]).

We end this section with some open questions for further investigations.

It would certainly be interesting to have a statement on the Hausdorff dimension of the path of V also in dimension \(d=2\). Looking back at (6.1), we see that, in this dimension, there is a logarithmic factor in the definition of \(\bar{g}_q\). This raises the question of defining a notion of Hausdorff dimension based on the \({\bar{g}}_q\)-Hausdorff measure. A suggestion can be found in [8]. Indeed, the family \(\mathcal {T}\) of functions

$$\begin{aligned} {\mathbb {R}}_+\ni \tau \longrightarrow f_{\nu }(\tau ):= \tau ^\nu \left( \log \frac{C}{\tau }\right) ^{1/2},\ \nu \in (0,\nu _0), \end{aligned}$$

satisfies \(f_{\nu _2}(\tau ) = \mathrm{o}(f_{\nu _1}(\tau ))\) as \(\tau \downarrow 0\) whenever \(\nu _1<\nu _2\); therefore, \(\mathcal {T}\) is a scale in the sense of [8][Definition 2.1]. According to [8][Definition 2.3], we can define the generalized notion of Hausdorff dimension (relative to \(\mathcal {T}\)),

$$\begin{aligned} \mathrm{dim}_{\mathrm{H}}^{(f)}(A)&= \sup \{\nu \in (0,\nu _0) : \mathcal {H}_{f_\nu }(A)= \infty \} =\sup \{\nu \in (0,\nu _0): \mathcal {H}_{f_\nu }(A) >0\}\\&=\inf \{\nu \in (0,\nu _0): \mathcal {H}_{f_\nu }(A) =0\} = \inf \{\nu \in (0,\nu _0): \mathcal {H}_{f_\nu }(A) <\infty \}. \end{aligned}$$

We conjecture that \(\mathrm{dim}_{\mathrm{H}}^{(f)}(V(I\times J))= D_0\), a.s.

A second conjecture, related to Corollary 6.2, is that singletons are polar if \(D=D_0\). This question may be approached using [4][Theorem 2.6], which gives sufficient conditions on Gaussian random fields ensuring the polarity of points. Preliminary investigations raise some technical challenges, due to the complex expression of the harmonizable representation of the random field V. On the other hand, in dimension \(d=1\), the process V is very regular in space; therefore, the approach based on [4] might admit a simplification or an alternative.