Abstract
Consider the linear stochastic biharmonic heat equation on a d-dimensional torus (\(d=1,2,3\)), driven by a space-time white noise and with periodic boundary conditions:
\(v(0,x)=v_0(x)\). We find the canonical pseudo-distance corresponding to the random field solution, and thereby a precise description of the anisotropies of the process. We see that for \(d=2\), they include a \(z(\log \tfrac{c}{z})^{1/2}\) term. Consider D independent copies of the random field solution to (0.1). Applying the criteria proved in Hinojosa-Calleja and Sanz-Solé (Stoch PDE Anal Comp 2021. https://doi.org/10.1007/s40072-021-00190-1), we establish upper and lower bounds for the probabilities that the path process hits bounded Borel sets. This yields results on the polarity of sets and on the Hausdorff dimension of the path process.
1 Introduction
This paper is motivated by the study of sample path properties of stochastic partial differential equations (SPDEs) and its applications to questions like the polarity of sets for the path process and its Hausdorff dimension (a.s.). We focus on a system of stochastic linear biharmonic heat equations on a d-dimensional torus, \(d=1,2,3\) (see (2.3)). This SPDE is the linearization at zero of a Cahn–Hilliard equation with a space-time white noise forcing term (see e.g. [1]).
In the last two decades, there have been many contributions to the subject of this paper. A large part of them concerns Gaussian random fields, the case addressed in this work. A representative sample of results can be found in [2,3,4, 11, 12], and references therein. Central to the study is obtaining upper and lower bounds on the probabilities that the random field hits a Borel set A, in terms of the Hausdorff measure and the capacity, respectively, of A. In the derivation of the bounds –named criteria for hitting probabilities– a major role is played by the canonical pseudo-distance associated to the process. For a random field \((v(t,x),\ (t,x)\in [0,T]\times \mathbb {D})\), \(\mathbb {D}\subset {\mathbb {R}}^d\), this notion is defined by
When \(\mathfrak {d}_{v}((t,x),(s,y))\) compares, up to multiplicative constants, with \(\vert t-s\vert ^{\alpha _0}+\sum _{j=1}^d\vert x_j-y_j\vert ^{\alpha _j}\), \(\alpha _0, \alpha _j\in (0,1)\), [12][Theorem 7.6] and [3][Theorems 2.1, 2.4 and 2.6] provide useful criteria for hitting probabilities.
Let \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\), \(d=1,2,3\), be the random field solution to the biharmonic heat equation driven by space-time noise, given in Theorem 2.1. We prove in Theorem 3.1 that the associated canonical pseudo-distance \(\mathfrak {d}_{u}((t,x),(s,y))\) compares with
Thus, when \(d=2\) this example does not fall into the range of applications of the criteria cited above.
In [5][Theorems 3.2, 3.3, 3.4, 3.5], we proved extensions of [12][Theorem 7.6] to cover cases where the canonical pseudo-distance has anisotropies described by gauge functions other than power functions. This was initially motivated by the study of a linear heat equation with fractional noise (see [5][Section 4]). From the above discussion, we see that the biharmonic heat equation provides a new case of application of such extended criteria.
The structure and contents of the paper are as follows. Section 2 is about preliminaries. We formulate and prove the existence of a random field solution to the biharmonic heat equation, and recall the notions of Hausdorff measure relative to a gauge function and capacity relative to a symmetric potential. Section 3 is devoted to finding the equivalent pseudo-distance for the canonical metric –a result of independent interest. The proof relies on a careful analytical study of the Green’s function of the biharmonic operator \(\mathcal {L} = \frac{\partial }{\partial t} +(-\varDelta )^2\) on \((0,T)\times \mathbb {T}^d\). With this fundamental result at hand and some additional properties of (u(t, x)) proved in Sects. 4 and 5, we are in a position to apply Theorems 3.4 and 3.5 of [5]. We deduce Theorem 6.1 on upper and lower bounds for the hitting probabilities of D-dimensional random vectors consisting of independent copies of (u(t, x)). These are in terms of the \(\bar{g}_q\)-Hausdorff measure and the \(({\bar{g}}_q)^{-1}\)-capacity, respectively, with \({\bar{g}}_q\) defined in (6.1). Notice that for \(d=1,3\), the bounds are given by the classical Hausdorff measure and the Bessel-Riesz capacity, respectively. In the second part of Sect. 6, we highlight some consequences of Theorem 6.1 on polarity of sets and Hausdorff dimension of the path process. The application of Theorems 3.4 and 3.5 of [5] imposes the restriction \(D>D_0\), where \(D_0=[(4-d)/8]^{-1}+d[1\wedge ((4-d)/2)]^{-1}\). We also discuss the case \(D<D_0\) and present some conjectures concerning the critical case \(D=D_0\) in the last part of Sect. 6.
2 Notations and preliminaries
We introduce some notation used throughout the paper. As usual, \(\mathbb {N}\) denotes the set of natural numbers \(\{ 0,1,2,...\}\); we set \(\mathbb {Z}_2 = \{ 0,1\}\), and for any integer \(d\ge 1\), \(\mathbb {N}^{d,*} = (\mathbb {N}\setminus \{ 0\})^d\). For any multi-index \(k=(k_1,\ldots ,k_d)\in {{\mathbb {N}}}^d\), we set \(|k|=(\sum _{j=1}^d k_j^2)^{1/2}\), and denote by n(k) the number of null components of k.
Let \(\mathbb {S}^1\) be the circle and \(\mathbb {T}^d= \mathbb {S}^1\times \cdots \times \mathbb {S}^1\) (d factors) the d-dimensional torus. For \(x\in \mathbb {T}^d\), |x| denotes the Euclidean norm. If we identify \(\mathbb {T}^d\) with the periodic cube \([-\pi ,\pi ]^d\), meaning that opposite sides coincide, |x| can be interpreted as the distance of x to the origin.
For \(x\in [0,2\pi )\), let \(\varepsilon _{0,k}(x)=\pi ^{-1/2}\sin (kx)\), \(\varepsilon _{1,k}(x)=\pi ^{-1/2}\cos (kx)\), \(k\in \mathbb {N}^*\), and \(\varepsilon _{1,0}(x) = (2\pi )^{-1/2}\). The set of functions \(\mathbf{B} \) defined on \(\mathbb {T}^d\) consisting of
with \(k_j\in \mathbb {N}^*\) if \(i_j=0\), and \(k_j\in {{\mathbb {N}}}\) if \(i_j=1\), is an orthonormal basis for \(L^2(\mathbb {T}^d)\).
Define
Notice that \(\mathbf{B} = \{\varepsilon _{i,k} =\varepsilon _{i_1,k_1}\otimes \cdots \otimes \varepsilon _{i_d,k_d},\ (i,k)\in (\mathbb {Z}_2\times \mathbb {N})^d_{+}\}\).
The following equality is a straightforward consequence of the formula for the cosine of a sum of angles: For any \(x,y\in \mathbb {T}^d\),
Let \((-\varDelta )^2\) be the biharmonic operator (also called the bilaplacian) on \(L^2(\mathbb {T}^d)\). The basis \(\mathbf{B} \) is a set of eigenfunctions of \((-\varDelta )^2\) with associated eigenvalues \(\lambda _k=\sum _{j=1}^d k_j^4\), \(k\in \mathbb {N}^{d}\). Observe that \(d^{-1}|k|^4\le \lambda _k\le |k|^4\), and \(\inf _{k\in {{\mathbb {N}}}^{d,*}} \lambda _k=d\).
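The two bounds follow from the power-mean inequality \((\sum _j k_j^2)^2\le d\sum _j k_j^4\) and from the positivity of the cross terms in \((\sum _j k_j^2)^2\). As a quick numerical sanity check (an illustration only, not part of the argument), they can be verified by brute force over a finite range of multi-indices:

```python
import itertools

# Check d^{-1}|k|^4 <= lambda_k <= |k|^4, with lambda_k = sum_j k_j^4 and
# |k|^2 = sum_j k_j^2, and that inf over N^{d,*} of lambda_k equals d.
for d in (1, 2, 3):
    for k in itertools.product(range(6), repeat=d):
        if all(kj == 0 for kj in k):
            continue  # k = 0 is excluded
        lam = sum(kj**4 for kj in k)
        norm4 = sum(kj**2 for kj in k) ** 2
        assert norm4 / d <= lam <= norm4
    # the infimum over k with all components >= 1 is attained at k = (1,...,1)
    inf_lam = min(sum(kj**4 for kj in k)
                  for k in itertools.product(range(1, 6), repeat=d))
    assert inf_lam == d
```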
The Green’s function of the biharmonic heat operator \(\mathcal {L} = \frac{\partial }{\partial t} +(-\varDelta )^2\) on \((0,T] \times \mathbb {T}^d\) is given by
the last equality being a consequence of (2.1).
This paper concerns the linear stochastic biharmonic heat equation
where \((\dot{W}(t,x))\) is a space-time white noise on \([0,T]\times \mathbb {T}^d\), \(\sigma \in {\mathbb {R}}\setminus \{0\}\) and \(v_0: \ \mathbb {T}^d\longrightarrow {\mathbb {R}}\).
We consider the random field solution to (2.3), that is, the stochastic process
with G given in (2.2), and the stochastic integral is a Wiener integral with respect to space-time white noise.
We assume that, for any \((t,x)\in (0,T]\times \mathbb {T}^d\), the function \(\mathbb {T}^d\ni z\mapsto G(t;x,z) v_0(z)\) belongs to \(L^1(\mathbb {T}^d)\). Along with the next theorem, this yields that \((v(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) is a well-defined Gaussian process.
Theorem 2.1
Let
The stochastic process \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) is well-defined if and only if \(d=1, 2, 3\). In this case,
Proof
Fix \((t,x)\in (0,T]\times \mathbb {T}^d\). By (2.2) and applying Fubini’s theorem, we have
Use the inequalities \(\frac{u}{1+u}\le 1-e^{-u}\le 1\), valid for all \(u\ge 0\), to see that the series in (2.6) is comparable, up to multiplicative constants, to the series \(\sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ 0\le n(k)\le d-1 \end{array}}\frac{1}{|k|^4}\), which converges if and only if \(d\le 3\). Equivalently, the Wiener integral defining u(t, x) is well-defined if and only if \(d\le 3\). This finishes the proof of the first statement.
By the isometry property of the Wiener integral, \(E((u(t,x))^2)\) is equal to the right-hand side of (2.6). Taking the supremum in (2.6), we have
\(\square \)
In the sequel, \(d\in \{1,2,3\}\).
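The convergence criterion above can also be seen numerically: the sum of \(1/|k|^4\) over a dyadic shell \(N\le |k|<2N\) behaves like \(\int _N^{2N}\rho ^{d-5}\,d\rho \), which halves as N doubles when \(d=3\) and stays essentially constant when \(d=4\), signalling divergence. A brute-force Python illustration (the tolerances below are ad hoc, chosen only to absorb lattice boundary effects):

```python
import itertools

def shell_sum(d, N):
    """Sum of 1/|k|^4 over k in N^d \\ {0} with N <= |k| < 2N."""
    s = 0.0
    for k in itertools.product(range(2 * N), repeat=d):
        r2 = sum(kj * kj for kj in k)
        if N * N <= r2 < 4 * N * N:
            s += 1.0 / (r2 * r2)
    return s

r3 = shell_sum(3, 16) / shell_sum(3, 8)  # expected near 1/2: summable tails
r4 = shell_sum(4, 16) / shell_sum(4, 8)  # expected near 1: divergent series
```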
In the last part of this section, we recall the notions of Hausdorff measure and capacity that will be used in Sect. 6.
g -Hausdorff measure
Let \(\varepsilon _0>0\) and \(g: [0,\varepsilon _0 ]\rightarrow {\mathbb {R}}_+\) be a continuous strictly increasing function satisfying \(g(0)=0\). The g-Hausdorff measure of a Borel set \(A\subset \mathbb {R}^D\) is defined by
(see e.g. [10]). In this paper, we will use this notion referred to two examples: (i) \(g(\tau )= \tau ^\gamma \), with \(\gamma >0\); this is the classical \(\gamma \)-dimensional Hausdorff measure. (ii) \(g(\tau ) = \tau ^{\nu _1}\left( q^{-1}(\tau )\right) ^{-\eta }\), with \(q(\tau ) = \tau ^{\nu _2}\left( \log \frac{c}{\tau }\right) ^{\delta }\), \(\nu _1, \nu _2, \eta , \delta >0\).
For consistency with the definition of the \(\gamma \)-dimensional Hausdorff measure when \(\gamma <0\), if \(g(0)=\infty \), we set \(\mathcal {H}_g(A)=\infty \).
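In example (ii), the gauge g is evaluated through the inverse of q, which has no closed form. The following Python sketch (a numerical illustration with arbitrary parameter values \(\nu _1=3\), \(\nu _2=1\), \(\eta =2\), \(\delta =1/2\), \(c=e\), chosen only so that g is a gauge near the origin; these are not the values appearing in Sect. 6) inverts q by bisection and checks that g is strictly increasing with \(g(0+)=0\):

```python
import math

# Illustrative parameters: nu1 > eta/nu2, which forces g(0+) = 0.
NU1, NU2, ETA, DELTA, C = 3.0, 1.0, 2.0, 0.5, math.e

def q(tau):
    return tau ** NU2 * math.log(C / tau) ** DELTA

def q_inv(tau):
    # q is strictly increasing on (0, 1]; invert it by bisection
    lo, hi = 1e-15, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q(mid) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def g(tau):
    # example (ii): g(tau) = tau^{nu1} (q^{-1}(tau))^{-eta}
    return tau ** NU1 * q_inv(tau) ** (-ETA)

grid = [0.001 * 1.5 ** j for j in range(12)]   # tau from 0.001 up to ~0.087
values = [g(t) for t in grid]
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing
assert values[0] < 0.02                                # small near the origin
```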
Capacity relative to a symmetric potential kernel
Let \(\mathfrak {K}:{\mathbb {R}}^D\rightarrow {\mathbb {R}}_+\cup \{\infty \}\) be continuous on \({\mathbb {R}}^D\setminus \{0\}\), symmetric, with \(\mathfrak {K}(z)<\infty \), for all \(z\ne 0\), and \(\mathfrak {K}(0)=\lim _{|z|\downarrow 0}\mathfrak {K}(z)\). This function is called a symmetric potential. The \(\mathfrak {K}\)-capacity of a Borel set \(A\subset \mathbb {R}^D\) is defined by \(\text {Cap}_{\mathfrak {K}}(A)=\left[ \inf _{\mu \in \mathbb {P}(A)}\mathcal {E}_{\mathfrak {K}}(\mu )\right] ^{-1},\)
where \(\mathcal {E}_{\mathfrak {K}}(\mu )=\int _{{\mathbb {R}}^D}\int _{{\mathbb {R}}^D}\mathfrak {K}(x-y)\, \mu (dx)\, \mu (dy)\) and \(\mathbb {P}(A)\) denotes the set of probability measures on A. If \(\mathfrak {K}(0)=0\), we set \(\text {Cap}_{\mathfrak {K}}(A)=1\), by convention.
In this article, we will use this notion with \(\mathfrak {K}=g^{-1}\), where g is as in the examples (i) and (ii) above. Observe that, in the example (i), the \(g^{-1}\)-capacity is the Bessel-Riesz capacity, usually denoted by \(\text {Cap}_\gamma (A)\) (see e.g. [7, p. 376]).
Throughout the article, positive real constants are denoted by C, or variants, like \({\bar{C}}\), \({\tilde{C}}\), c, etc. If we want to make explicit the dependence on some parameters \(a_1, a_2,\ldots \), we write \(C(a_1, a_2, \ldots )\) or \(C_{a_1,a_2,\ldots }\). When writing \(\log \left( \frac{C}{z}\right) \), we will assume that C is large enough to ensure \(\log \left( \frac{C}{z}\right) \ge 1\).
3 Equivalence for the canonical metric
For the process u of Theorem 2.1, we define
This is the canonical pseudo-distance associated with u. This section is devoted to establishing an equivalent (anisotropic) pseudo-distance for \(\mathfrak {d}_{u}\).
Throughout the proofs, we will make frequent use of the identity
\(0\le s\le t\). This formula is proved using the Wiener isometry
(the last equality holds because the Green’s function G(r; y, z) vanishes if \(r<0\)) and using the definition (2.2). The first (respectively, second) series term in (3.2) equals the first (respectively second) integral on the right-hand side of (3.3).
We start by analyzing the \(L^2(\varOmega )\)-increments in the time variable of the process (u(t, x)).
Proposition 3.1
-
1.
There exist constants \(c_1(d,T)\) and \(c_2(d)\) such that, for all \(s, t\in [0,T]\), \(x\in \mathbb {T}^d\),
$$\begin{aligned} c_1(d,T)\vert t-s\vert ^{1-d/4}\le \Vert u(t,x)-u(s,x)\Vert ^2_{L^2(\varOmega )} \le c_2(d)\vert t-s\vert ^{1-d/4}. \end{aligned}$$(3.4) -
2.
For any \((t,x), (s,y)\in [0,T]\times \mathbb {T}^d\),
$$\begin{aligned} c_1(d,T)\vert t-s\vert ^{1-d/4}\le \Vert u(t,x)-u(s,y)\Vert ^2_{L^2(\varOmega )}, \end{aligned}$$(3.5)where \(c_1(d,T)\) is the same constant as in (3.4).
Proof
Without loss of generality, we suppose \(0\le s< t\le T\).
Use the first equality in (3.3) and then apply Lemma 7.1 with \(h:=t-s\). This yields the second inequality in (3.4).
From (3.2), we have
Let \(r\ge d\). Applying the inequality \(1-e^{-u}\ge \frac{u}{1+u}\), \(u\ge 0\), we obtain
since \(\lambda _k\le |k|^4\). Choosing \(r=\left( \tfrac{d^4 T}{t-s}\right) ^{1/4}\), the inequality above yields
with \( c_1(d,T) = C_d \frac{2d^d T^{d/4}}{1+2d^4 T}\). This is the lower bound in (3.4).
Notice that from (3.2) we deduce
Hence the proof above yields (3.5). \(\square \)
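The scaling in (3.4) can be observed numerically on the one-dimensional prototype of the series appearing in the proof, \(\sum _{k\ge 1}(1-e^{-2k^4h})/k^4\) with \(h=t-s\): by the change of variables \(k\mapsto kh^{-1/4}\), it behaves like \(h^{3/4}=h^{1-d/4}\) for \(d=1\). A Python sketch (an illustration only; shrinking h by a factor 16 should shrink the series by roughly \(16^{3/4}=8\), up to discretization effects):

```python
import math

def S(h, K=400):
    # d = 1 prototype: sum over k of (1 - exp(-2 k^4 h)) / k^4
    return sum((1.0 - math.exp(-2.0 * k**4 * h)) / k**4
               for k in range(1, K + 1))

ratio = S(0.01) / S(0.01 / 16.0)
# the |t-s|^{1-d/4} scaling for d = 1 predicts a ratio near 16**0.75 = 8
```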
For any \(j=1,\ldots ,d\), fix real numbers \(0<c_{0,j}<2\pi \) and define \(J_j=[c_{0,j},2\pi -c_{0,j}]\) and \(J= J_1\times \ldots \times J_d\subsetneq \mathbb {T}^d\). The next statement deals with increments in space.
Proposition 3.2
Let \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) be the stochastic process defined in Theorem 2.1 and let J be a compact set as described before. There exist positive constants c(d), C(d), \(c_3(d)\) and \(c_4(d)\) such that, for any \(t>0\), \(x, y \in J\),
where \(C_t = (1-e^{-2dt})\), and \(\beta = 1_{\{d=2\}}\).
The upper bound holds for any \((t,x)\in [0,T]\times \mathbb {T}^d\). The lower bound holds for any \(x,y\in \mathbb {T}^d\) if \(|x-y|\) is small enough. For \(t=0\), the lower bound is not informative.
Proof
Upper bound. From (3.2) we deduce
Observe also that, because of (2.1),
for any \((i,k)\in ({{\mathbb {Z}}}_2\times {{\mathbb {N}}})^d\).
Case \(d=1\). Since \(\sum _{k\ge 1}\frac{1}{k^2}< \infty \), from (3.8) and (3.9) we have
Case \(d=2,3\). For any \(k\in {{\mathbb {N}}}^{d}\), let \(I_k=[k_1,k_1+1)\times \cdots \times [k_d,k_d+1)\). Observe that for any d-dimensional vector \(z\in I_k\), we have \(|z| \le |k| + \sqrt{d}\). Fix \(\rho _0\ge \lfloor 3\sqrt{d}\rfloor + 1\) and let \(\alpha >0\). Then,
where the last inequality holds because on \([\rho _0,\infty )\), \(\rho -\sqrt{d}\ge \rho /2\).
Let \(\rho _0\) be as above, \(\rho _1 = \left\lfloor (3/2)\sqrt{d}\right\rfloor + 1\), and \(\beta >0\). By arguments similar to those used to obtain (3.11), we deduce
where in the last inequality, we have used that on \([\rho _1,\rho _0]\), \(\rho -\sqrt{d}\ge (1/5)\rho \).
Set \(h= |x-y|\) and \(\rho _0 = \left\lfloor c_d h^{-\frac{2\wedge (4-d)}{4-d}}\right\rfloor +1\), where \(c_d= 3\sqrt{d} (2\pi \sqrt{d})^{\frac{2\wedge (4-d)}{4-d}}\). Notice that \(\rho _0\ge \lfloor 3\sqrt{d}\rfloor + 1\). Then, from (3.8) we have
Using (3.11), with the choice of \(\rho _0\) specified above, we see that \(T_1(4,\rho _0)\le C_d h^{2\wedge (4-d)}\) and
Since \( \sum _{k\in \mathbb {N}^{d},\ 1\le |k|<\rho _1} \frac{1}{|k|^2}={\tilde{c}}_d <\infty \), substituting the above estimates in the right-hand side of (3.13) we obtain the upper bound in (3.7).
Lower bound. Case \(|x-y|\) small. We start from (3.8) to obtain
Let T(x, y) denote the series on the right-hand side of (3.14). Because for any \(z\in [-\pi /2,\pi /2]\), we have \(\cos z \le 1-(\tfrac{2}{\pi }z)^2\), we deduce
Case \(d=1\). Using (3.15), we obtain
Assume \(|x-y|\le \tfrac{c_0\pi }{2}\), with \(0< c_0<1\) arbitrarily close to 1. Then \(1-\frac{2}{\pi }|x-y|\ge 1-c_0\) and, in this case,
Case \(d=2,3\). Consider the series on the right-hand side of (3.15) and apply the formula (7.2) of Lemma 7.2 with \(m:=d\) and \(p_j=[(2/\pi ) k_j|x_j-y_j|]^2\), to see that
where
Note that the condition \(k_j\vert x_j-y_j\vert \le \pi /4\) implies \(1-(2/\pi )^2(k_j\vert x_j-y_j\vert )^2\ge 3/4\). Hence, for \(d=2\) we see that
Similarly, for \(d=3\) we have
where in the sum above, we set \(j+1=1\) if \(j=3\).
Thus, in both dimensions \(d=2, 3\),
The next goal is to find a lower bound for \(S_1(x,y)\). Without loss of generality we may and will assume \(\vert x_1-y_1\vert \le \vert x_2-y_2\vert \le \cdots \le \vert x_d-y_d\vert \). Set \(\mathbb {N}^{d,*}_{\le }:=\{k\in \mathbb {N}^{d,*}:k_1\le k_2\le \cdots \le k_d\}\). Then,
Indeed, set \(K=(k_j^2)_j\), \(Z=(|x_j-y_j|^2)_j\) and let \(\xi \) be the angle between the vectors K and Z. Because \(\sum _{j=1}^d (k_j\vert x_j-y_j\vert )^2\) is the Euclidean scalar product between K and Z and \(\xi \in [0,\pi /4]\),
Assume that \(|x-y|\le \frac{\pi }{5\sqrt{d}}\). The set \(\{k\in {{\mathbb {N}}}^{d,*}: |k|\le \frac{\pi }{4}|x-y|^{-1}\}\) is nonempty and is included in \(\{k\in {{\mathbb {N}}}^{d,*}: k_j\le \frac{\pi }{4}|x_j-y_j|^{-1}, j=1,\ldots ,d\}\). Hence,
For \(d=2\), the last integral equals \(\log \left( \frac{\pi }{4\sqrt{d} |x-y|}\right) \), while for \(d=3\), it is equal to \((\pi /4) |x-y|^{-1}-\sqrt{d}\). Observe that if \(|x-y|\le \frac{\pi }{5\sqrt{d}}\) this expression is bounded below by \((\pi /20)|x-y|^{-1}\).
Summarizing, from (3.18) and assuming \(|x-y|\le \frac{\pi }{5\sqrt{d}}\), the discussion above proves
Therefore, for any \(x,y\in \mathbb {T}^d\) such that \(0\le |x-y|\le \frac{\pi }{5\sqrt{d}}\), we have proved that the lower bound of (3.7) holds with the constant \(c_3(d)\) depending only on d and \(C_t = 1-e^{-2t}\).
Lower bound. Case \(|x-y|\) large. We recall a standard “continuity-compactness” argument that we will use to extend the validity of the lower bound established in the previous step to every \(x,y\in J\) satisfying \(\frac{\pi }{5\sqrt{d}}<|x-y|<2\pi \).
Consider the function
where \(t>0\) is fixed. Because of the upper bound in (3.7), this is a continuous function. Furthermore, from (3.8), we see that it is strictly positive. Thus, for any \(c_0>0\), the minimun value m of \(\varphi _t\) over the compact set \(\{\varphi _t(x,y); (x,y)\in J^2: |x-y|\ge c_0\}\) is achieved, and \(m>0\). Referring to the left hand-side of (3.7), let M be the maximum of the function
Taking \(c_0 = \tfrac{\pi }{5\sqrt{d}}\), we deduce,
for any \(x,y\in J\) such that \(\frac{\pi }{5\sqrt{d}}<|x-y|<2\pi \).
This ends the proof of the lower bound and of the Proposition. \(\square \)
With Propositions 3.1 and 3.2 we obtain an equivalent expression of the canonical pseudo-distance (3.1), as stated in the next theorem.
Theorem 3.1
Let \((u(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) be the stochastic process defined in Theorem 2.1.
-
1.
There exist constants \(c_5(d)\), C(d) such that for any \((t,x),(s,y)\in [0,T]\times \mathbb {T}^d\),
$$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\le c_5(d) \left( \vert t-s\vert ^{1-d/4}+\left( \log \frac{C(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}\right) ,\nonumber \\ \end{aligned}$$(3.20)with \(\beta =1_{\{d=2\}}\).
-
2.
Fix \(t_0\in (0,T]\) and let J be a compact subset of \(\mathbb {T}^d\) as in Proposition 3.2. There exist constants \(c_6(d, t_0,T)\) and c(d) such that, for any \((t,x),(s,y)\in [t_0,T]\times J,\)
$$\begin{aligned} \Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\ge&c_6(d,t_0,T)\nonumber \\&\times \left( \vert t-s\vert ^{1-d/4}+\left( \log \frac{c(d)}{\vert x-y\vert }\right) ^\beta \vert x-y\vert ^{2\wedge (4-d)}\right) , \end{aligned}$$(3.21)with \(\beta =1_{\{d=2\}}\).
Proof
The estimate from above follows by applying the triangle inequality and the upper bounds in (3.4) and (3.7), which hold for any \((t,x),(s,y)\in [0,T]\times \mathbb {T}^d\). The value of the multiplicative constant in the upper bound is \(c_5(d)=2[c_2(d)+c_4(d)]\), where \(c_2(d)\), \(c_4(d)\) are given in (3.4), (3.7), respectively.
To prove the lower bound, we consider two cases (see Propositions 3.1 and 3.2 for the notations of the constants).
Case 1: \(c_2(d) |t-s|^{1-d/4}\le \frac{c_3(d)C_{t_0}}{4} \left( \log \frac{c(d)}{|x-y|}\right) ^\beta |x-y|^{2\wedge (4-d)}\), where \(C_{t_0}= 1-e^{-2t_0}\).
Applying the triangle inequality and then, using the lower bound in (3.7) and the upper bound in (3.4) we obtain,
Case 2: \(c_2(d) |t-s|^{1-d/4}> \frac{c_3(d)C_{t_0}}{4} \left( \log \frac{c(d)}{|x-y|}\right) ^\beta |x-y|^{2\wedge (4-d)}\).
By (3.5), we have
The proof of the theorem is complete. \(\square \)
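The logarithmic correction for \(d=2\) in (3.20)–(3.21) can also be seen numerically. Taking \(x-y=(h,0)\) and, as a proxy for the squared spatial increment (up to the n(k)-dependent constants of the series in the proof of Proposition 3.2), the series \(\sum _k (1-e^{-2\lambda _k t})(1-\cos (k_1h))/\lambda _k\), the normalization by \(h^2\) stays bounded as \(h\downarrow 0\) for \(d=1\), but grows like \(\log (1/h)\) for \(d=2\). A Python sketch with ad hoc thresholds:

```python
import math

def sq_increment(d, h, K=300, t=1.0):
    # proxy (up to constants) for E|u(t,x) - u(t,y)|^2 with x - y = (h, 0):
    # sum over k of (1 - exp(-2 lambda_k t)) (1 - cos(k_1 h)) / lambda_k
    s = 0.0
    for k1 in range(1, K + 1):
        k2_range = (0,) if d == 1 else range(0, K + 1)
        for k2 in k2_range:
            lam = k1**4 + k2**4
            s += (1.0 - math.exp(-2.0 * lam * t)) \
                 * (1.0 - math.cos(k1 * h)) / lam
    return s

def normalized_growth(d):
    # ratio of h^{-2} * (squared increment) between h = 0.02 and h = 0.1
    return (sq_increment(d, 0.02) / 0.02**2) / (sq_increment(d, 0.1) / 0.1**2)

g1 = normalized_growth(1)   # d = 1: bounded, ratio close to 1
g2 = normalized_growth(2)   # d = 2: log(1/h) growth, noticeably larger
```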
4 Further second order properties of the random field u
Throughout this section, we use the notation
Lemma 4.1
-
1.
There exists a constant \(c_{d,T}\) such that for all \(s,t\in (0,T]\), \(x,y\in \mathbb {T}^d\),
$$\begin{aligned} \vert \sigma _{t,x}^2-\sigma _{s,y}^2\vert \le c_{d,T}\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^{2}. \end{aligned}$$(4.1) -
2.
Fix \(t_0\in (0,T]\). There exist constants \(0<c_{d,t_0} < C_{d,T}\) such that for any \((t,x)\in [t_0,T]\times \mathbb {T}^d\),
$$\begin{aligned} c_{d,t_0} \le \sigma _{t,x}^2 \le C_{d,T}. \end{aligned}$$(4.2) -
3.
Fix \(t_0\in (0,T]\). For any \((t,x),(s,y)\in [t_0,T]\times \mathbb {T}^d\) such that \((t,x)\ne (s,y)\),
$$\begin{aligned} \rho _{(t,x),(s,y)}=\frac{E(u(t,x)u(s,y))}{\sigma _{t,x}\sigma _{s,y}}<1. \end{aligned}$$
Proof
-
1.
Without loss of generality we may assume \(0<s\le t\). Applying (2.6) yields
$$\begin{aligned} \vert \sigma _{t,x}^2-\sigma _{s,y}^2\vert = \frac{t-s}{(2\pi )^d} + \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ 0\le n(k)\le d-1 \end{array}} \frac{1}{2^{n(k)+1}\pi ^d}\, \frac{e^{-2\lambda _ks}\left( 1-e^{-2\lambda _k(t-s)}\right) }{\lambda _k}. \end{aligned}$$Use the inequality (3.5) to get \(\frac{t-s}{(2\pi )^d}\le {\bar{c}}_{d,T}\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\). Since \(e^{-2\lambda _ks}\le 1\) and because of (3.2), we see that the second term on the right-hand side of this equality is bounded above by \(\Vert u(t,x)-u(s,y)\Vert _{L^2(\varOmega )}^2\). This ends the proof of (4.1).
-
2.
The claim follows from (2.6), observing that
$$\begin{aligned} \sigma _{t,x}^2 \ge \sum _{\begin{array}{c} k\in \mathbb {N}^{d}\\ n(k)= d-1 \end{array}} \frac{1-e^{-2\lambda _k t}}{2^{n(k)+1}\pi ^d\lambda _k} \ge \frac{1-e^{-2t}}{2^d\pi ^d}, \quad t\ge 0. \end{aligned}$$ -
3.
Assume that \(\rho _{(t,x),(s,y)}=1\). Then, there would exist \(\lambda \in \mathbb {R}\setminus \{0\}\) such that \(\Vert u(t,x)-\lambda u(s,y)\Vert _{L^2(\varOmega )}=0\). This leads to a contradiction. Indeed, consider first the case \(0<s<t\). By the isometry property of the Wiener integral,
$$\begin{aligned} \Vert u(t,x)-\lambda u(s,y)\Vert _{L^2(\varOmega )}^2 =&\int _0^s dr \int _{\mathbb {T}^d} dz (G(t-r;x,z)-\lambda G(s-r;y,z))^2\nonumber \\&+\int _s^t dr \int _{\mathbb {T}^d} dz\ G^2(t-r;x,z)\nonumber \\ \ge&\int _0^{t-s} dr \int _{\mathbb {T}^d} dz\ G^2(r;x,z)>0, \end{aligned}$$(4.3)by the properties of G.
Next, we assume \(t=s\) and \(x\ne y\). If \(\lambda =1\), we see that
is in contradiction with the lower bound in (3.7). If \(\lambda \ne 1\), we apply Lemma 3.4 in [5] to the stochastic process \((u(t,x), \ x\in \mathbb {T}^d)\), with \(t\in [t_0,T]\) fixed. Notice that, because of the statements 1. and 2. proved above and Proposition 3.2, the hypotheses of that Lemma hold. We deduce
\(\square \)
5 Solution to the deterministic homogeneous equation
In this section, we consider the Eq. (2.3) with \(\sigma =0\), whose solution in the classical sense and on a finite time horizon is given by the function
In the next proposition, we prove the joint continuity of this mapping.
Proposition 5.1
Let \(v_0\in L^1(\mathbb {T}^d)\). Then, the function \((t,x) \mapsto I_0(t,x)\) is jointly Lipschitz continuous.
Proof
Increments in time. Fix \(0<s\le t\le T\). Using the definition of G(t; x, z) given in (2.2), we see that for any \(x\in \mathbb {T}^d\),
Increments in space. Let \(x,y\in \mathbb {T}^d\). Then, for any \(t\in [0,T]\),
Up to a multiplicative constant depending on d, the series in the above expression is bounded by \(\int _0^\infty \rho ^d e^{-\frac{\rho ^4}{2^{d-1}}}\,d\rho = C_d\varGamma _E\left( \frac{d+1}{4}\right) \), where \(\varGamma _E\) denotes the Euler Gamma function.
The proof of the proposition is complete. \(\square \)
Remark 5.1
Combining Proposition 5.1 with the estimate (3.20) yields the following. The sample paths of the stochastic process \((v(t,x),\ (t,x)\in [0,T]\times \mathbb {T}^d)\) are Hölder continuous, jointly in (t, x), of degree \((\eta _1,\eta _2)\) with
Indeed, \(v(t,x) = I_0(t,x) + u(t,x)\), and the process (u(t, x)) is Gaussian. Hence, the claim follows from Kolmogorov’s continuity criterion (see e.g. [9]).
6 Hitting probabilities and polarity of sets
Consider the Gaussian random field
where \((v_j(t,x)),\ j=1,\ldots , D\), are independent copies of the process (v(t, x)) defined in (2.4). For simplicity, we will take \(\sigma = 1\) there. Recall that \(A\in \mathcal {B}({\mathbb {R}}^D)\) is called polar for the random field V if \(P(V(I\times J)\cap A\ne \emptyset )=0\), and is nonpolar otherwise. In this section, we discuss this notion relying mainly on the results of [5]. We first introduce some notation. For \(\tau \in {\mathbb {R}}_+\), let
where the subscript q in the last expression refers to the couple \((q_1,q_2)\).
Let \(D_0 = [(4-d)/8]^{-1}+d[1\wedge ((4-d)/2)]^{-1}\). If \(D>D_0\), the functions \(\bar{g}_q\) and \((\bar{g}_q)^{-1}\) satisfy the conditions required by the definitions of the \(\bar{g}_q\)-Hausdorff measure and the \((\bar{g}_q)^{-1}\)-capacity, respectively (see [5][Section 5] for details).
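For concreteness, the critical dimension evaluates to \(D_0=11/3\) for \(d=1\), \(D_0=6\) for \(d=2\) and \(D_0=14\) for \(d=3\); a trivial check in Python:

```python
def D0(d):
    # D_0 = [(4 - d)/8]^{-1} + d [1 ∧ ((4 - d)/2)]^{-1}, with 1 ∧ x = min(1, x)
    return 8.0 / (4 - d) + d / min(1.0, (4 - d) / 2.0)

vals = {d: D0(d) for d in (1, 2, 3)}   # {1: 11/3, 2: 6.0, 3: 14.0}
```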
In the next theorem, \(I=[t_0,T]\) and \(J=[0,M]^d\), where \(0<t_0\le T\) and \(M\in (0,2\pi )\).
Theorem 6.1
The hitting probabilities relative to the D-dimensional random field V satisfy the following bounds.
-
1.
Let \(D>D_0\).
-
(a)
There exists a constant \(C:=C(I,J,D,d)\) such that for any Borel set \(A\in \mathcal {B}({\mathbb {R}}^D)\),
$$\begin{aligned} P(V(I\times J)\cap A\ne \emptyset ))\le C\mathcal {H}_{\bar{g}_q}(A). \end{aligned}$$(6.2) -
(b)
Let \(N>0\) and \(A\in \mathcal {B}({\mathbb {R}}^D)\) be such that \(A\subset B_N(0)\). There exists a constant \(c:=c(I,J,N,D,d)\) such that
$$\begin{aligned} P(V(I\times J)\cap A\ne \emptyset ))\ge c\text {Cap}_{(\bar{g}_q)^{-1}}(A). \end{aligned}$$(6.3)
-
(a)
-
2.
Let \(D<D_0\) and \(A\in \mathcal {B}({\mathbb {R}}^D)\).
-
(a)
\(\mathcal {H}_{\bar{g}_q}(A)= \infty \) and therefore (6.2) holds, but is not informative.
-
(b)
If A is bounded, there exists a constant \(c:=c(I,J,N,D,d)>0\) such that
$$\begin{aligned} P(V(I\times J)\cap A\ne \emptyset ))\ge c = c\text {Cap}_{(\bar{g}_q)^{-1}}(A). \end{aligned}$$(6.4)Hence, (6.3) holds.
-
(a)
Proof
Consider first the case \(D>D_0\). The upper bound (6.2) follows by applying [5][Thm. 3.3], while the lower bound (6.3) follows from [5][Thm. 3.5]. Indeed, from Theorem 3.1, Lemma 4.1 and Proposition 5.1, we deduce that the random field V satisfies the assumptions of those two theorems. As for the hypotheses required on \(q_1\), \(q_2\) and \({\bar{g}}_q\), they are proved in [5][Section 5].
Let \(D<D_0\). We have \(\lim _{\tau \downarrow 0}\bar{g}_q(\tau ):= \bar{g}_q(0)=\infty \) and then, by convention, \(\mathcal {H}_{\bar{g}_q}(A)= \infty \).
Next, we prove (6.4) by using arguments similar to those in [3][Theorem 2.1, p. 1348].
For \(\varepsilon \in (0,1)\) and \(z\in A\), we denote by \(B_\varepsilon (z)\) the ball centred at z with radius \(\varepsilon \) and define
Since \(\{J_\varepsilon (z)>0\} \subset \{V(I\times J) \cap A^{(\varepsilon )}\ne \emptyset \}\), it suffices to prove that \(P(J_\varepsilon (z)>0)>C\), for some positive constant C. Using the Paley-Zygmund inequality, this amounts to checking
for some \(C_1, C_2 >0\).
Because of (4.2), the one-point density of V(t, x) is bounded below, uniformly on \((t,x)\in [t_0,T]\times \mathbb {T}^d\) and on compact sets of \({\mathbb {R}}^D\). This yields \(E\left( J_\varepsilon (z)\right) >C_1\).
From Theorem 3.1, we deduce that the two-point densities of (V(s, y), V(t, x)) satisfy
where \(\rho ((s,y),(t,x))= |t-s|^{\frac{4-d}{8}}+ \left( \log \frac{C(d)}{|x-y|}\right) ^{\frac{\beta }{2}}|x-y|^{1\wedge ((4-d)/2)}\), \(\beta =1_{\{d=2\}}\) (apply the arguments of [3][Proposition 3.1]). Consequently,
Set \(\alpha _1 = \frac{4-d}{8}\), \(\alpha _2 = 1\wedge ((4-d)/2)\), so that \(D_0 =\frac{1}{\alpha _1} + \frac{d}{\alpha _2} \). Since the constant C(d) is such that \(\log \frac{C(d)}{|x-y|}\ge 1\), the last integral is bounded from above by
After some computations, we see that \(I\le C \int _0^{c_0} \rho ^{-D + \frac{1}{\alpha _1} + \frac{d}{\alpha _2}-1}\ d\rho \), which is finite if \(D<D_0\). This ends the proof of the inequality in (6.4).
Since \(\lim _{\tau \downarrow 0}[\bar{g}_q(\tau )]^{-1}:= [\bar{g}_q(0)]^{-1}=0\), by convention \(\text {Cap}_{(\bar{g}_q)^{-1}}(A)=1\). This yields the last equality in (6.4). \(\square \)
Part 1 of Theorem 6.1 implies the following.
Corollary 6.1
Let \(A\in \mathcal {B}({\mathbb {R}}^D)\) and assume \(D>D_0\).
-
1.
If \(\mathcal {H}_{\bar{g}_q}(A)=0\) then A is polar for V.
-
2.
If A is bounded and \(\text {Cap}_{(\bar{g}_q)^{-1}}(A)>0\), then A is nonpolar for V.
Corollary 6.2
If \(D>D_0\), points \(z\in {\mathbb {R}}^D\) are polar for V and are nonpolar if \(D<D_0\).
Proof
Assume first \(D>D_0\). By the definition of the \(\bar{g}_q\)-Hausdorff measure, we have \(\mathcal {H}_{\bar{g}_q}(\{z\})=0\). Hence, the polarity of \(\{z\}\) follows from (6.2).
One can give another proof of this fact without appealing to Theorem 6.1. Indeed, we have \(\lim _{\tau \downarrow 0}\bar{g}_q(\tau ):= \bar{g}_q(0)=0\). This is obvious for \(d=1,3\). For \(d=2\), it is proved in [5][Lemma 5.1]. This property implies \(P(V(I\times J)\cap \{z\}\ne \emptyset )=0\) (see [5][Corollary 3.2]). Therefore \(\{z\}\) is polar for V.
If \(D<D_0\), we apply (6.4) to \(A=\{z\}\) and deduce that \(\{z\}\) is nonpolar. Actually, if \(D<D_0\) any bounded Borel set A is nonpolar for V. \(\square \)
Consider the case \(d=1,3\), for which the \(\mathcal {H}_{\bar{g}_q}\)-Hausdorff measure and the \((\bar{g}_q)^{-1}\)-capacity are the classical Hausdorff measure and the Bessel-Riesz capacity, respectively. Assume \(D>D_0\). From Theorem 6.1 and using the same proof as that of Corollary 5.3 (a) in [2], we obtain the following geometric property of the path of V:
where \(\text {dim}_\mathrm{H}\) refers to the Hausdorff dimension (see e.g. [6][Chapter 10, Section 2, p. 130]).
We end this section with some open questions for further investigations.
It would certainly be interesting to have a statement on the Hausdorff dimension of the path of V also in dimension \(d=2\). Looking back to (6.1), we see that, in this dimension, there is a logarithmic factor in the definition of \(\bar{g}_q\). This leads to the question of giving a notion of Hausdorff dimension based on the \({\bar{g}}_q\)-Hausdorff measure. A suggestion can be found in [8]. Indeed, the family \(\mathcal {T}\) of functions
satisfies \(f_{\nu _1}(\tau ) = \mathrm{o} (f_{\nu _2}(\tau )), \ \tau \downarrow 0\), whenever \(\nu _1<\nu _2\); therefore, \(\mathcal {T}\) is a scale in the sense of [8][Definition 2.1]. According to [8][Definition 2.3], we can define the generalized notion of Hausdorff dimension (relative to \(\mathcal {T}\)),
We conjecture that \(\mathrm{dim}_{\mathrm{H}}^{(f)}(V(I\times J))= D_0\), a.s.
A second conjecture, related to Corollary 6.2, is that singletons are polar if \(D=D_0\). This question may be approached using [4][Theorem 2.6], which gives sufficient conditions on Gaussian random fields ensuring polarity of points. Preliminary investigations raise some technical challenges, due to the complex expression of the harmonizable representation of the random field V. On the other hand, in dimension \(d=1\), the process V is very regular in space and therefore, the approach based on [4] might admit a simplification or an alternative.
Availability of data and material.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
References
Cardon-Weber, C.: Cahn–Hilliard stochastic equation: existence of the solution and of its density. Bernoulli 7(5), 777–816 (2001)
Dalang, R.C., Khoshnevisan, D., Nualart, E.: Hitting probabilities for systems of non-linear stochastic heat equations with additive noise. Latin Am. J. Prob. Math. Stat. 3, 231–271 (2007)
Dalang, R.C., Sanz-Solé, M.: Criteria for hitting probabilities with applications to systems of stochastic wave equations. Bernoulli 16, 1343–1368 (2010)
Dalang, R.C., Mueller, C., Xiao, Y.: Polarity of points for Gaussian random fields. Ann. Probab. 45(6B), 4700–4751 (2017)
Hinojosa-Calleja, A., Sanz-Solé, M.: Anisotropic Gaussian random fields: criteria for hitting probabilities and applications. Stoch PDE Anal. Comp. (2021). https://doi.org/10.1007/s40072-021-00190-1
Kahane, J.-P.: Some Random Series of Functions, 2nd edn. Cambridge University Press, New York (1985)
Khoshnevisan, D.: Multiparametric Processes: An Introduction to Random Fields. Springer, Berlin (2002)
Kloeckner, B.: A generalization of Hausdorff dimension applied to Hilbert cubes and Wasserstein spaces. J. Topol. Anal. 4(2), 203–235 (2012)
Kunita, H.: Stochastic differential equations and stochastic flows of diffeomorphisms. In: Hennequin, P.L. (ed.) École d'Été de Probabilités de Saint-Flour XII–1982. Lecture Notes in Mathematics, vol. 1097, pp. 144–303. Springer, Berlin (1984)
Rogers, C.A.: Hausdorff Measures. Cambridge Mathematical Library. Cambridge University Press, Cambridge (1998)
Sanz-Solé, M., Viles, N.: Systems of stochastic Poisson equations: Hitting probabilities. Stochast. Process. Appl. 128, 1857–1888 (2018)
Xiao, Y.: Sample path properties of anisotropic Gaussian random fields. In: Dalang, R., Khoshnevisan, D., Mueller, C., Nualart, D., Xiao, Y. (eds.) A Minicourse on Stochastic Partial Differential Equations. Lecture Notes in Mathematics, vol. 1962, pp. 145–212. Springer, Berlin (2009)
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. M. Sanz-Solé is partially supported by Grant PID2020-118339GB-I00 and A. Hinojosa-Calleja is supported by Grant BES-2016077051, both from Ministerio de Ciencia e Innovación, Spain.
Ethics declarations
Conflict of interest.
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is dedicated to István Gyöngy on the occasion of his 70th birthday.
Appendix
In this section, we gather some auxiliary results used in the paper.
Lemma 7.1
Let \(d\in \{1,2,3\}\). There exists a constant \(C_d\) such that for any \(h\ge 0\) and \(x\in \mathbb {T}^d\),
Proof
Using the expression (2.2), we see that
Case \(h\ge 1\). We have \(\min \left( |k|^{-4}, |k|^4h^2\right) = |k|^{-4}\). Thus, \(T(h)= C<\infty \) and, since \(h\ge 1\), \(T(h)\le C h\).
Case \(0<h <1\). Write \(T(h)\le T_1(h) + T_2(h)\), where
For the first term, we have
For the second term, we have
Since \(1-d/4<1\), the estimates obtained in the two instances of h imply (7.1). \(\square \)
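The splitting argument above can be illustrated numerically. The sketch below assumes the series takes the one-dimensional form \(T(h)=\sum_{k\ge 1}\min \left( k^{-4}, k^{4}h^{2}\right)\) (our reading of the summands displayed in the proof; the exact definition of \(T(h)\) is not reproduced here) and checks that the ratio \(T(h)/h^{1-d/4}\), with \(d=1\), stays bounded as \(h\downarrow 0\):

```python
def T(h, kmax=100_000):
    # Assumed d = 1 form of the series: sum over k >= 1 of min(k^{-4}, k^4 h^2).
    # The two regimes meet at k of order h^{-1/4}, as in the proof's splitting.
    return sum(min(k ** -4, k ** 4 * h * h) for k in range(1, kmax + 1))

# The proof's bound is T(h) <= C_d h^{1 - d/4}; for d = 1 the exponent is 3/4.
ratios = [T(h) / h ** 0.75 for h in (1e-2, 1e-3, 1e-4)]
print(ratios)  # roughly constant, consistent with T(h) ~ h^{3/4}
```

The printed ratios cluster near a single constant, which is what the bound \(T(h)\le C_d h^{1-d/4}\) predicts.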
Lemma 7.2
For \(p_j\in [0,1]\), \(j=1,\ldots ,m\), the following formula holds:
\[
1-\prod _{j=1}^{m}(1-p_j)=\sum _{k=1}^{m}(-1)^{k+1}\sum _{1\le j_1<\cdots <j_k\le m} p_{j_1}\cdots p_{j_k}. \tag{7.2}
\]
Proof
On a probability space, consider independent events \((A_j)_{1\le j\le m}\) such that \(p_j = P(A_j)\). Then, by independence,
\[
P\Big(\bigcup _{j=1}^{m}A_j\Big)=1-P\Big(\bigcap _{j=1}^{m}A_j^{c}\Big)=1-\prod _{j=1}^{m}(1-p_j),
\]
and (7.2) follows from the well-known inclusion-exclusion formula in probability theory. \(\square \)
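The identity of Lemma 7.2 is easy to check numerically. A minimal sketch (the function names are ours, for illustration): one side is \(1-\prod _j(1-p_j)\), the other is the inclusion-exclusion sum over nonempty index subsets.

```python
from itertools import combinations
from math import prod

def union_prob_product(p):
    # P(union of independent events A_j with P(A_j) = p_j): 1 - prod(1 - p_j)
    return 1.0 - prod(1.0 - pj for pj in p)

def union_prob_inclusion_exclusion(p):
    # Inclusion-exclusion: alternating sum over nonempty subsets of indices
    m = len(p)
    total = 0.0
    for k in range(1, m + 1):
        sign = (-1) ** (k + 1)
        total += sign * sum(prod(p[j] for j in idx)
                            for idx in combinations(range(m), k))
    return total

p = [0.2, 0.5, 0.9]
print(union_prob_product(p))              # 0.96
print(union_prob_inclusion_exclusion(p))  # 0.96, agreeing with (7.2)
```

Both functions return the same value for any \(p_j\in [0,1]\), which is exactly the content of (7.2).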
Cite this article
Hinojosa-Calleja, A., Sanz-Solé, M. A linear stochastic biharmonic heat equation: hitting probabilities. Stoch PDE: Anal Comp 10, 735–756 (2022). https://doi.org/10.1007/s40072-021-00234-6
Keywords
- Systems of linear SPDEs
- Sample paths properties
- Hitting probabilities
- Polar sets
- Capacity
- Hausdorff measure