1 Introduction

The stochastic wave equation is a fundamental stochastic partial differential equation (SPDE) of hyperbolic type. The wave equation driven by space-time white noise and by temporally white, spatially correlated noise has been studied by many authors; see, e.g., [12, 13, 21, 22, 37, 39, 42, 48]. With the recent development of stochastic calculus with respect to fractional Brownian motion, there has been growing interest in the study of SPDEs driven by fractional Brownian motion and other fractional Gaussian noises in time and/or in space, for which we refer, among others, to [3, 4, 6, 15, 28, 29, 36, 40, 44].

In this paper, we study the stochastic wave equation driven by a Gaussian noise that is fractional with Hurst index \(1/2< H < 1\) in time (or white in time), and is colored in space with some spatial covariance. More precisely, we consider the following system of linear stochastic wave equations

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \frac{\partial ^2}{\partial t^2} u_j(t, x) = \Delta u_j(t, x) + \dot{W}_j(t, x) &{} \text {for } t \ge 0, x \in {\mathbb R}^N,\\ u_j(0, x) = 0, \quad \displaystyle \frac{\partial }{\partial t}u_j(0, x) = 0, &{} j = 1, \dots , d, \end{array}\right. } \end{aligned} \end{aligned}$$
(1.1)

where \(\Delta \) is the Laplacian in variable x and \(\dot{W} = (\dot{W}_1, \dots , \dot{W}_d)\) is a d-dimensional Gaussian noise. We assume that \(\dot{W}_1, \dots , \dot{W}_d\) are i.i.d. and formally, each \(\dot{W}_j\) has covariance

$$\begin{aligned} {\mathbb E}[\dot{W}_j(t, x) \dot{W}_j(s, y)] = \rho _H(t-s) f_\beta (x-y), \end{aligned}$$
(1.2)

where

$$\begin{aligned} \rho _H(t-s) = {\left\{ \begin{array}{ll} \delta (t-s) &{} \text {if }H = 1/2,\\ |t-s|^{2H-2} &{} \text {if }1/2< H < 1 \end{array}\right. } \end{aligned}$$

and the spatial covariance function \(f_\beta \) is the Fourier transform of a nonnegative tempered measure \(\mu _\beta \) which has a density \(h_\beta (\xi )\) (with respect to the Lebesgue measure) that is comparable to \(|\xi |^{-\beta }\), where \(0< \beta < N\).

For the stochastic wave equation driven by space-time white noise, there is no function-valued solution when the spatial dimension N is greater than 1. An approach to studying this equation in higher dimensions is to consider a Gaussian noise that is white in time but has some correlation in space, so that the equation admits a solution as a real-valued process (or a random field); see [17, 20, 21]. In particular, the results of [17, 20, 21] show that when \(\dot{W}\) is white in time, i.e., \(H = 1/2\), Eq. (1.1) has a unique random field solution if and only if \(N-\beta < 2\). The fractional case \(1/2< H < 1\) is studied in [5], where it is proved that (1.1) has a unique random field solution if and only if \(N-\beta < 2H+1\). Besides existence and uniqueness, results on space-time Hölder regularity and hitting probabilities for the solution of (1.1) can be found in [16, 23]. To the best of the author’s knowledge, few other fine properties of (1.1) are known in the case \(H \ne 1/2\) and \(N \ge 2\).
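To illustrate the improvement gained from temporal fractionality, take for instance \(N = 3\) and \(\beta = 1\). The existence condition \(N - \beta < 2H + 1\) then reads

$$\begin{aligned} 3 - 1 < 2H + 1 \quad \Longleftrightarrow \quad H > \frac{1}{2}, \end{aligned}$$

so (1.1) admits a random field solution for every \(H > 1/2\), while the corresponding condition \(N - \beta < 2\) fails in the white-in-time case \(H = 1/2\).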

The purpose of this paper is to study the local times (or occupation densities) of the stochastic wave equation driven by fractional-colored noise. The solution of (1.1) is an \({\mathbb R}^d\)-valued Gaussian random field \(u = \{ u(t, x) : t \ge 0, x \in {\mathbb R}^N\}\). Our approach to the study of local times of u is based on the Fourier analytic method due to Berman [7, 11] and the use of local nondeterminism, which is one of the main tools for studying local times of Gaussian random fields. This approach was used in [41] to study local times of the stochastic heat equation in the time variable; a different approach was used in [47].

The notion of local nondeterminism (LND) for Gaussian processes was first introduced by Berman [11] to study the existence of jointly continuous local times, and was extended by Pitt [43] to study the local times of multivariate Gaussian random fields. Various forms of LND have been studied in the literature, e.g., [18, 19, 31, 38, 53]. In particular, the property of strong local nondeterminism has been developed. An example of a Gaussian random field that satisfies this stronger form of LND is the multiparameter fractional Brownian motion [43]. The investigation of the strong LND property is of interest because it has found applications in studying various properties of Gaussian random fields, such as the exact modulus of continuity of sample paths, small ball probabilities and fractal properties. We refer to the survey of Xiao [52] for details.

A result of Walsh [48, Theorem 3.1] shows that the solution of the linear stochastic wave equation in one spatial dimension driven by the space-time white noise can be represented by a modified Brownian sheet. It is known that the Brownian sheet does not satisfy the LND property in the sense of Berman or Pitt, but it satisfies a different type of LND called sectorial LND [30, 31]. This leads to the natural question of whether the solution of the stochastic wave equation satisfies the LND property.

Recently, Lee and Xiao [34] considered the linear stochastic wave equation driven by a Gaussian noise that is white in time and colored in space with spatial covariance given by the Riesz kernel, and showed that the solution satisfies a new type of strong LND in the form of a spherical integral, which turns out to be useful in proving the exact modulus of continuity of the sample functions.

The contributions of the present paper are as follows. We extend the result of [34] to the case of fractional-colored noise and prove that the solution u of (1.1) satisfies a spherical integral form of strong LND in the joint variables \((t, x)\) (Proposition 3.2). We also study the LND property of u in t or in x when the other variable is held fixed. Since the LND property in the joint variables \((t, x)\) takes a different form from those studied previously, e.g., in [11, 27, 43, 50, 51], the local times results of those papers cannot be applied directly. Instead, we exploit this new type of LND property to study the local times of the stochastic wave equation. More specifically, in Theorem 4.8, we prove that u, as a Gaussian random field \((t, x) \mapsto u(t, x)\), has a jointly continuous local time if \(\frac{1}{2}(2H+1-N+\beta )d < 1+N\). Moreover, we obtain differentiability results for the local times in the space variable, and the local and uniform moduli of continuity of the local times in the set variable (Theorems 5.1 and 5.3). Our results lead to sample function properties that are new for the stochastic wave equation with fractional-colored noise, including the exact uniform modulus of continuity of \((t, x) \mapsto u(t, x)\) (Theorem 3.7) and the property that \((t, x) \mapsto u(t, x)\) is nowhere differentiable (Theorem 5.5).

The rest of the paper is organized as follows. In Section 2, we recall the theory of the stochastic wave equation with fractional-colored noise and some facts about local times. In Section 3, we study the local nondeterminism property of the solution u of (1.1) and give the exact modulus of continuity of the sample functions. In Section 4, we derive moment estimates for the local times of u using the LND property and prove the existence of jointly continuous local times. Finally, in Section 5, we study regularity properties of the local times, including differentiability in the space variable and the moduli of continuity in the set variable, and we use the latter to derive lower envelopes for the oscillations of the sample paths.

2 Preliminaries

This section contains preliminaries on the stochastic wave equation and local times. We first introduce some notation and review the theory of fractional-colored noise and the solution of Eq. (1.1). After that, we recall the definition and some properties of the local times of stochastic processes.

The Fourier transform of a function \(\phi : {\mathbb R}^N \rightarrow {\mathbb R}\) (or \(\mathbb {C}\)) is defined by

$$\begin{aligned} \mathscr {F}\phi (\xi ) = \widehat{\phi }(\xi ) = \int _{{\mathbb R}^N} e^{-ix \cdot \xi }\phi (x) dx\end{aligned}$$

whenever the integral is well-defined. We write \(x \cdot \xi = \sum _{j = 1}^N x_j \xi _j\) for the usual dot product in \({\mathbb R}^N\). We assume that the function \(f_\beta \) in (1.2), where \(0< \beta < N\), is the Fourier transform of a nonnegative tempered measure \(\mu _\beta \) in the sense that

$$\begin{aligned} \int _{{\mathbb R}^N} f_\beta (x) \phi (x) dx = \int _{{\mathbb R}^N} \mathscr {F}\phi (\xi ) \mu _\beta (d\xi ) \end{aligned}$$

for all \(\phi \) in the Schwartz space \(\mathcal {S}({\mathbb R}^N)\) of rapidly decreasing smooth functions, and that \(\mu _\beta \) has a density \(h_\beta (\xi )\) with respect to the Lebesgue measure satisfying

$$\begin{aligned} C_1|\xi |^{-\beta } \le h_\beta (\xi ) \le C_2 |\xi |^{-\beta }, \end{aligned}$$
(2.1)

where \(C_1\) and \(C_2\) are positive finite constants. A typical example of \(f_\beta \) is the Riesz kernel:

$$\begin{aligned} f_\beta (x) = |x|^{-(N-\beta )}, \quad 0< \beta < N, \end{aligned}$$

which is the Fourier transform of \(\mu _\beta (d\xi ) = C |\xi |^{-\beta } d\xi \), where C is a suitable constant depending on \(\beta \) and N. See [45, §V].

Let us recall the random field approach for the fractional-colored noise \(\dot{W}\) and the construction of the solution of (1.1) proposed by Balan and Tudor [4, 5]. Let \(C_c^\infty ({\mathbb R}_+ \times {\mathbb R}^N)\) denote the space of smooth, compactly supported functions on \({\mathbb R}_+ \times {\mathbb R}^N\), and define the inner product \(\langle \cdot , \cdot \rangle _{\mathcal {HP}}\) on \(C_c^\infty ({\mathbb R}_+ \times {\mathbb R}^N)\) by

$$\begin{aligned} \begin{aligned} \langle \varphi , \psi \rangle _{\mathcal {HP}}&= \int _{{\mathbb R}_+} dt \int _{{\mathbb R}_+} ds \int _{{\mathbb R}^N} dx \int _{{\mathbb R}^N} dy\, \varphi (t, x) \, \rho _H(t-s) f_\beta (x-y)\, \psi (s, y)\\&= C \int _{{\mathbb R}_+} dt \int _{{\mathbb R}_+} ds \int _{{\mathbb R}^N} d\xi \, \mathscr {F}\varphi (t, \cdot )(\xi )\, \overline{\mathscr {F}\psi (s, \cdot )(\xi )} \, \rho _H(t-s) \, h_\beta (\xi ). \end{aligned} \end{aligned}$$
(2.2)

The Hilbert space \(\mathcal {HP}\) associated with the noise \(\dot{W}\) is defined as the completion of \(C_c^\infty ({\mathbb R}_+ \times {\mathbb R}^N)\) with respect to the inner product \(\langle \cdot , \cdot \rangle _{\mathcal {HP}}\). This Hilbert space can be identified with the space of all distribution-valued functions \(S: {\mathbb R}_+ \rightarrow \mathcal {S}'({\mathbb R}^N)\) such that for each \(t \ge 0\), \(\mathscr {F}S(t)\) is a function, and

$$\begin{aligned} \int _{{\mathbb R}_+} dt \int _{{\mathbb R}_+} ds \int _{{\mathbb R}^N} d\xi \, \mathscr {F}S(t)(\xi )\, \overline{\mathscr {F}S(s)(\xi )} \, \rho _H(t-s) \, h_\beta (\xi ) < \infty . \end{aligned}$$

The Gaussian noise \(\dot{W}\) can be defined as a generalized Gaussian process \(\{ W(\varphi ) : \varphi \in C^\infty _c({\mathbb R}_+\times {\mathbb R}^N)\}\) with mean zero and covariance \({\mathbb E}[W(\varphi )W(\psi )] = \langle \varphi , \psi \rangle _{\mathcal {HP}}\). Then W extends to an isometry from \(\mathcal {HP}\) into a Gaussian subspace of \(L^2({\mathbb P})\),

$$\begin{aligned} \varphi \mapsto W(\varphi ) =: \int _{{\mathbb R}_+} \int _{{\mathbb R}^N} \varphi (s, y) W(ds, dy), \end{aligned}$$

such that \({\mathbb E}[W(\varphi )W(\psi )] = \langle \varphi , \psi \rangle _{\mathcal {HP}}\) for all \(\varphi , \psi \in \mathcal {HP}\).

Let \(g_{t, x}(s, y) = G(t-s, x-y) \mathbf{1}_{[0, t]}(s)\), where \(G(t, x)\) is the fundamental solution of the wave equation. By Theorem 3.1 of [5], under assumption (2.1), \(g_{t, x} \in \mathcal {HP}\) if and only if

$$\begin{aligned} \beta > N - 2H - 1, \end{aligned}$$
(2.3)

and when (2.3) holds, (1.1) has a random field solution that is mean square continuous in \((t, x)\):

$$\begin{aligned} u(t, x) = W(g_{t, x}) = \int _0^t \int _{{\mathbb R}^N} G(t-s, x-y) W(ds, dy). \end{aligned}$$

Note that \(u = \{ u(t, x) : t \ge 0, x \in {\mathbb R}^N \}\) is an \({\mathbb R}^d\)-valued Gaussian field with i.i.d. components.

If the Fourier transforms of \(s \mapsto \mathscr {F}\varphi (s, \cdot )(\xi )\) and \(s \mapsto \mathscr {F}\psi (s, \cdot )(\xi )\) exist, they are equal to the Fourier transforms of \(\varphi \) and \(\psi \) in the \((t, x)\)-variables, denoted by \(\mathscr {F}\varphi (\tau , \xi )\) and \(\mathscr {F}\psi (\tau , \xi )\), respectively. In this case, it follows from (2.2) and the Plancherel theorem that

$$\begin{aligned} \langle \varphi , \psi \rangle _{\mathcal {HP}} = C \int _{\mathbb R}d\tau \int _{{\mathbb R}^N} d\xi \, \mathscr {F}\varphi (\tau , \xi )\, \overline{\mathscr {F}\psi (\tau , \xi )}\, |\tau |^{1-2H} \,h_\beta (\xi ). \end{aligned}$$
(2.4)

This formula remains true when \(H = 1/2\). Recall that the Fourier transform of the fundamental solution in the space variable is \(\mathscr {F}G(t, \cdot )(\xi ) = |\xi |^{-1}\sin (t|\xi |)\); see [26, Ch.5]. Then

$$\begin{aligned} \mathscr {F}g_{t, x}(s, \cdot )(\xi ) = e^{-ix\cdot \xi }\, \frac{\sin ((t-s)|\xi |)}{|\xi |} \mathbf{1}_{[0, t]}(s) \end{aligned}$$

and the Fourier transform of \(s \mapsto \mathscr {F}g_{t, x}(s, \cdot )(\xi )\) is

$$\begin{aligned} \mathscr {F}g_{t, x}(\tau , \xi ) = \frac{e^{-ix\cdot \xi }}{2|\xi |} \left( \frac{e^{-it\tau } - e^{it|\xi |}}{\tau +|\xi |} - \frac{e^{-it\tau }-e^{-it|\xi |}}{\tau - |\xi |} \right) . \end{aligned}$$
(2.5)

This and (2.4) provide a formula for the covariance of \(u(t, x)\).
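The formula \(\mathscr {F}G(t, \cdot )(\xi ) = |\xi |^{-1}\sin (t|\xi |)\) recalled above can also be checked directly: taking the spatial Fourier transform of the homogeneous wave equation with initial data \(G(0, \cdot ) = 0\) and \(\partial _t G(0, \cdot ) = \delta _0\) gives, for each fixed \(\xi \), the initial value problem

$$\begin{aligned} \frac{d^2}{dt^2}\, \mathscr {F}G(t, \cdot )(\xi ) = -|\xi |^2\, \mathscr {F}G(t, \cdot )(\xi ), \qquad \mathscr {F}G(0, \cdot )(\xi ) = 0, \quad \frac{d}{dt}\mathscr {F}G(t, \cdot )(\xi )\Big |_{t=0} = 1, \end{aligned}$$

whose unique solution is \(\mathscr {F}G(t, \cdot )(\xi ) = |\xi |^{-1}\sin (t|\xi |)\).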

Next, let us recall the definition and properties of local times. Let \(u = \{u(z) : z \in {\mathbb R}^k \}\) be a random field with values in \({\mathbb R}^d\). The occupation measure of u on a Borel set \(T \in \mathscr {B}({\mathbb R}^k)\) is the random measure defined by

$$\begin{aligned} \nu _T(B) = \nu _T(B, \omega ) = \lambda _k\{ z \in T : u(z) \in B \}, \quad B \in \mathscr {B}({\mathbb R}^d), \end{aligned}$$

where \(\lambda _k\) denotes the Lebesgue measure on \({\mathbb R}^k\). We say that u has a local time on T if \(\nu _T\) is a.s. absolutely continuous with respect to \(\lambda _d\), the Lebesgue measure on \({\mathbb R}^d\). In this case, any version of the Radon–Nikodym derivative, denoted by \(L(v, T) = L(v, T, \omega )\), \(v \in {\mathbb R}^d\), is called a version of the local time of u on T. It follows from the definition that for all \(B \in \mathscr {B}({\mathbb R}^d)\),

$$\begin{aligned} \lambda _k\{ z \in T : u(z) \in B \} = \int _B L(v, T) \, dv. \end{aligned}$$
(2.6)

Obviously, if u has a local time on T, then it has a local time on every S in \(\mathscr {B}(T)\), the collection of all Borel subsets of T. We say that L(vS) is a kernel if

  (i) For each \(S \in \mathscr {B}(T)\), the function \((v, \omega ) \mapsto L(v, S, \omega )\) is \(\mathscr {B}({\mathbb R}^d) \times \mathscr {F}\)-measurable;

  (ii) For each \((v, \omega ) \in {\mathbb R}^d \times \Omega \), the set function \(S \mapsto L(v, S, \omega )\) is a measure on \((T, \mathscr {B}(T))\).

It is desirable to work with a version of the local time that is a kernel because it satisfies the following properties [27, Theorem (6.4)]:

  (i) Occupation density formula: for every nonnegative Borel function \(f(z, v)\) on \(T \times {\mathbb R}^d\),

    $$\begin{aligned} \int _T f(z, u(z)) \, dz = \int _{{\mathbb R}^d} dv \int _T f(z, v) L(v, dz). \end{aligned}$$

  (ii) \(L(v, M_v^c) = 0\) for a.e. v, where \(M_v = \{ z \in T : u(z) = v \}\), i.e., the support of the measure \(L(v, \cdot )\) is contained in the v-level set \(M_v\) of u.
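For instance, taking \(f(z, v) = \mathbf{1}_S(z)\mathbf{1}_B(v)\) in the occupation density formula recovers (2.6) with T replaced by any \(S \in \mathscr {B}(T)\):

$$\begin{aligned} \lambda _k\{ z \in S : u(z) \in B \} = \int _T \mathbf{1}_S(z)\mathbf{1}_B(u(z))\, dz = \int _B L(v, S)\, dv. \end{aligned}$$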

For any \(z = (z_1, \dots , z_k) \in {\mathbb R}^k\), let \((-\infty , z]\) denote the unbounded interval \(\prod _{j=1}^k (-\infty , z_j]\) in \({\mathbb R}^k\) with upper right corner at z. Let T be a compact interval in \({\mathbb R}^k\) and \(Q_z = (-\infty , z] \cap T\). We say that a version of the local time L is jointly continuous on T if \((v, z) \mapsto L(v, Q_z)\) is jointly continuous on \({\mathbb R}^d \times T\). It is known that if L is jointly continuous, then \(L(v, Q_z)\) can be uniquely extended to a kernel L(vS), \(S \in \mathscr {B}(T)\), satisfying the property that \(L(v, J) = 0\) for all \(v \not \in \overline{u(J)}\) and all intervals \(J \subset T\) with rational endpoints; and if, in addition, u has continuous sample paths, then \(L(v, M_v^c) = 0\) for all \(v \in {\mathbb R}^d\); see [9, 27, p.12], [1, p.223]. Since the local times serve as a natural measure on the level sets of u, they are useful in studying the properties of the level sets of u [10, 27, 38, 51].

3 Local Nondeterminism

This section is devoted to studying the LND property of the solution \(u(t, x)\) of (1.1). In what follows, we denote

$$\begin{aligned} \alpha = \frac{2H+1-N+\beta }{2} \end{aligned}$$

and assume

$$\begin{aligned} N - 2H - 1< \beta < N - 2H + 1 \end{aligned}$$
(3.1)

so that the solution \(u(t, x)\) exists by (2.3) and \(0< \alpha < 1\). In this case, the proposition below implies that the sample functions \((t, x) \mapsto u(t, x)\) are a.s. locally Hölder continuous of any order strictly less than \(\alpha \).
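The Hölder continuity claim can be deduced from (3.2) below via the Kolmogorov continuity theorem: since \(u(t, x) - u(s, y)\) is a centered Gaussian vector with i.i.d. components, the equivalence of Gaussian moments gives, for every integer \(p \ge 1\),

$$\begin{aligned} {\mathbb E}\big (|u(t, x) - u(s, y)|^{2p}\big ) \le C_p \big ({\mathbb E}|u(t, x) - u(s, y)|^2\big )^p \le C_p' \,(|t-s| + |x-y|)^{2p\alpha }, \end{aligned}$$

and applying the Kolmogorov continuity theorem on the \((1+N)\)-dimensional parameter space yields a.s. local Hölder continuity of every order less than \(\alpha - (1+N)/(2p)\); letting \(p \rightarrow \infty \) gives every order strictly less than \(\alpha \).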

Proposition 3.1

For any \(0< a < b\) and \(M > 0\), there exist positive finite constants \(C_1\) and \(C_2\) such that for all \((t, x), (s, y) \in [a, b] \times [-M, M]^N\),

$$\begin{aligned} C_1 (|t-s| + |x-y|)^{2\alpha } \le {\mathbb E}(|u(t, x) - u(s, y)|^2) \le C_2 (|t-s| + |x-y|)^{2\alpha }. \end{aligned}$$
(3.2)

Proof

We may assume that \(d = 1\). By (2.4), we have

$$\begin{aligned} {\mathbb E}(|u(t, x) - u(s, y)|^2) = C \int _{\mathbb R}d\tau \int _{{\mathbb R}^N} d\xi \, |\mathscr {F} g_{t, x}(\tau , \xi ) - \mathscr {F} g_{s, y}(\tau , \xi )|^2\, |\tau |^{1-2H}\, h_\beta (\xi ). \end{aligned}$$

Then, by assumption (2.1) on \(h_\beta \), it is enough to consider the case \(h_\beta (\xi ) = |\xi |^{-\beta }\), for which (3.2) has been proved in [16]. \(\square \)

The following LND result is the basis for the study of regularity properties of local times in this paper. This result extends Proposition 2.1 of [34] and shows that u(tx) satisfies a strong LND property in the form of a spherical integral. The proof involves a Fourier analytic method.

Proposition 3.2

For any \(0< a < \infty \), there exist constants \(C > 0\) and \(r_0 > 0\) such that for all integers \(n \ge 1\), for all \((t, x), (t^1, x^1), \dots , (t^n, x^n) \in [a, \infty ) \times {\mathbb R}^N\) with \(\max _j(|t-t^j| + |x-x^j|) \le r_0\), we have

$$\begin{aligned} \mathrm {Var}(u_1(t, x)|u_1(t^1, x^1), \dots , u_1(t^n, x^n)) \ge C \int _{\mathbb {S}^{N-1}} \min _{1 \le j \le n}|(t-t^j) + (x-x^j) \cdot w|^{2\alpha } \,\sigma (dw), \end{aligned}$$
(3.3)

where \(\sigma \) is the surface measure on the unit sphere \(\mathbb {S}^{N-1}\).

Proof

Take \(r_0 = a/2\). For each \(w \in \mathbb {S}^{N-1}\), let

$$\begin{aligned} r(w) = \min _{1 \le j \le n} |(t-t^j) + (x - x^j) \cdot w|. \end{aligned}$$

Since u is Gaussian, the conditional variance \(\mathrm {Var}(u_1(t, x)|u_1(t^1, x^1), \dots , u_1(t^n, x^n))\) is the squared \(L^2({\mathbb P})\)-distance between \(u_1(t, x)\) and the linear subspace spanned by \(u_1(t^1, x^1), \dots , u_1(t^n, x^n)\). Thus, it suffices to prove that there exist constants \(C> 0\) and \(r_0 > 0\) such that for any \(n \ge 1\), for any \((t, x), (t^1, x^1), \dots , (t^n, x^n) \in [a, \infty ) \times {\mathbb R}^N\) with \(\max _j(|t-t^j| + |x-x^j|) \le r_0\), and for any choice of real numbers \(a_1, \dots , a_n\), we have

$$\begin{aligned} {\mathbb E}\bigg [\Big (u_1(t, x) - \sum _{j=1}^n a_j u_1(t^j, x^j)\Big )^2\bigg ] \ge C \int _{\mathbb {S}^{N-1}} r(w)^{2H+1-N+\beta } \,\sigma (dw). \end{aligned}$$
(3.4)

To this end, we note that by (2.1) and (2.4),

$$\begin{aligned} \begin{aligned}&{\mathbb E}\bigg [\Big (u_1(t, x) - \sum _{j=1}^n a_j u_1(t^j, x^j)\Big )^2\bigg ]\\&\quad \ge C \int _{\mathbb R}d\tau \int _{{\mathbb R}^N} d\xi \,\Big |\mathscr {F}g_{t, x}(\tau , \xi ) - \sum _{j=1}^n a_j \mathscr {F}g_{t^j, x^j}(\tau , \xi )\Big |^2 |\tau |^{1-2H} |\xi |^{-\beta }. \end{aligned} \end{aligned}$$

Then, we use (2.5) and spherical coordinates \(\xi = \rho w\) to rewrite the last integral as

$$\begin{aligned}&C \int _{\mathbb R}d\tau \int _{{\mathbb R}_+} d\rho \,\int _{\mathbb {S}^{N-1}} \sigma (dw) \Big | F(t, x \cdot w, \tau , \rho ) \\&\quad - \sum _{j=1}^n a_j F(t^j, x^j \cdot w, \tau , \rho )\Big |^2 |\tau |^{1-2H} \rho ^{N-\beta - 3}, \end{aligned}$$

where

$$\begin{aligned} F(t, y, \tau , \rho ) = \frac{e^{-i \rho y}}{2} \left( \frac{e^{-it\tau } - e^{it\rho }}{\tau +\rho } - \frac{e^{-it\tau }-e^{-it\rho }}{\tau - \rho } \right) . \end{aligned}$$

Since \( F(t, x \cdot w, -\tau , -\rho ) =- \overline{ F(t, x \cdot w, \tau , \rho )}\), it follows that

$$\begin{aligned} \begin{aligned}&{\mathbb E}\bigg [\Big (u_1(t, x) - \sum _{j=1}^n a_j u_1(t^j, x^j)\Big )^2\bigg ]\\&\quad \ge \frac{C}{2} \int _{\mathbb {S}^{N-1}} \sigma (dw) \underbrace{\int _{\mathbb R}d\tau \int _{\mathbb R}d\rho \, \Big | F(t, x \cdot w, \tau , \rho ) - \sum _{j=1}^n a_j F(t^j, x^j \cdot w, \tau , \rho )\Big |^2 |\tau |^{1-2H} |\rho |^{N-\beta -3}}_{=:A(w)}. \end{aligned} \end{aligned}$$
(3.5)

Choose and fix any two nonnegative smooth test functions \(\phi \), \(\psi : {\mathbb R}\rightarrow {\mathbb R}\) satisfying the following properties: \(\phi \) is supported on [0, a/2] and \(\int \phi (s) ds = 1\); \(\psi \) is supported on \([-1, 1]\) and \(\psi (0) = 1\). Let \(\psi _r(x) = r^{-1} \psi (r^{-1} x)\). For each \(w \in \mathbb {S}^{N-1}\) with \(r(w) > 0\), write \(\widehat{\psi }_{r(w)} = \mathscr {F}(\psi _{r(w)})\) and consider

$$\begin{aligned} I(w)&:= \int _{\mathbb R}d\rho \int _{\mathbb R}d\tau \, \overline{\bigg ( F(t, x \cdot w, \tau , \rho ) - \sum _{j=1}^n a_j F(t^j, x^j \cdot w, \tau , \rho )\bigg )}\\&\quad \times \, e^{-it\rho }e^{-i\rho x\cdot w}\widehat{\phi }(\tau -\rho ) \widehat{\psi }_{r(w)}(\rho ). \end{aligned}$$

Note that for \(\rho \) fixed, \(\tau \mapsto \widehat{\phi }(\tau -\rho )\) is the Fourier transform of \(s \mapsto e^{is\rho } \phi (s)\), and \(\tau \mapsto F(t, x\cdot w, \tau , \rho )\) is the Fourier transform of \(s \mapsto e^{-i\rho x\cdot w}\sin ((t-s)\rho ) \mathbf{1}_{[0, t]}(s)\). Then apply the Plancherel theorem to the integral in \(\tau \) to get that

$$\begin{aligned} I(w)&= 2\pi \int _{\mathbb R}d\rho \int _{\mathbb R}ds \, \overline{\bigg ( e^{-i\rho x\cdot w}\sin ((t-s)\rho ) \mathbf{1}_{[0, t]}(s) - \sum _{j=1}^n a_j e^{-i\rho x^j \cdot w} \sin ((t^j-s)\rho ) \mathbf{1}_{[0, t^j]}(s) \bigg )}\\&\quad \times e^{-i(t-s)\rho } e^{-i\rho x\cdot w}\phi (s) \widehat{\psi }_{r(w)}(\rho ). \end{aligned}$$

Since \(\sin (z) =\frac{1}{2i} (e^{iz} - e^{-iz})\) and \(\phi \) is supported on [0, a/2], this is

$$\begin{aligned}&= -\pi i \int _0^{a/2} ds \int _{\mathbb R}d\rho \, \bigg [ e^{i\rho x\cdot w}\Big (e^{i(t-s)\rho } - e^{-i(t-s)\rho }\Big ) \\&\quad - \sum _{j=1}^n a_j e^{i\rho x^j \cdot w} \Big (e^{i(t^j-s)\rho } - e^{-i(t^j-s)\rho }\Big )\bigg ] e^{-i(t-s)\rho } e^{-i\rho x\cdot w}\phi (s) \widehat{\psi }_{r(w)}(\rho ). \end{aligned}$$

Then, apply the Fourier inversion theorem to \(\widehat{\psi }_{r(w)} = \mathscr {F}(\psi _{r(w)})\) to get

$$\begin{aligned} =&-2\pi ^2 i \int _0^{a/2} \phi (s) \bigg [ \psi _{r(w)}(0) - \psi _{r(w)}(-2(t-s)) - \sum _{j=1}^n a_j \Big (\psi _{r(w)}((x^j-x)\cdot w + (t^j-t)) \\&- \psi _{r(w)}((x^j-x)\cdot w + (t^j-t) - 2(t^j-s))\Big )\bigg ] ds. \end{aligned}$$

Since \(t \ge a\) and \(r(w) \le r_0 = a/2\), we see that for all \(s \in [0, a/2]\), \(2(t-s)/r(w) \ge 2\) and thus

$$\begin{aligned} \psi _{r(w)}(-2(t-s)) = 0. \end{aligned}$$

By the definition of r(w), we have \(|(x^j-x)\cdot w + (t^j - t)|/r(w) \ge 1\), which implies

$$\begin{aligned} \psi _{r(w)}((x^j-x)\cdot w + (t^j-t)) = 0. \end{aligned}$$

Moreover, we have \((x^j-x)\cdot w + (t^j-t) - 2(t^j-s) \le r_0 - a = -a/2 \le -r(w)\), hence

$$\begin{aligned} \psi _{r(w)}((x^j-x)\cdot w + (t^j-t) - 2(t^j-s)) = 0. \end{aligned}$$

It follows that

$$\begin{aligned} |I(w)| = 2\pi ^2 \psi _{r(w)}(0) \int _0^{a/2} \phi (s)\, ds = 2\pi ^2 r(w)^{-1}. \end{aligned}$$
(3.6)

On the other hand, by the Cauchy–Schwarz inequality,

$$\begin{aligned} |I(w)|^2 \le A(w) \times \int _{\mathbb R}d\tau \int _{\mathbb R}d\rho \, |\widehat{\phi }(\tau -\rho )|^2 |\widehat{\psi }_{r(w)}(\rho )|^2 |\tau |^{2H-1} |\rho |^{3-N+\beta }. \end{aligned}$$
(3.7)

Note that \(\widehat{\psi }_{r(w)}(\rho ) = \widehat{\psi }(r(w)\rho )\) and both \(\widehat{\phi }\) and \(\widehat{\psi }\) are rapidly decreasing functions. To estimate the double integral in (3.7), we consider two regions: (i) \(|\tau |\le |\rho |\) and (ii) \(|\tau | > |\rho |\). For region (i), by \(|\tau |^{2H-1} \le |\rho |^{2H-1}\) and scaling in \(\rho \), we have

$$\begin{aligned}&\int _{\mathbb R}d\rho \int _{|\tau | \le |\rho |} d\tau \, |\widehat{\phi }(\tau -\rho )|^2 |\widehat{\psi }_{r(w)}(\rho )|^2 |\tau |^{2H-1} |\rho |^{3-N+\beta }\\&\quad \le \int _{\mathbb R}d\rho \, |\widehat{\psi }(r(w)\rho )|^2 |\rho |^{2H+2-N+\beta } \int _{\mathbb R}d\tau \,|\widehat{\phi }(\tau )|^2\\&\quad = C r(w)^{-2H-3+N-\beta }. \end{aligned}$$

For region (ii), note that \(3-N+\beta> 2H+1-N+\beta > 0\) by (3.1), so \(|\rho |^{3-N+\beta } \le |\tau |^{3-N+\beta }\). By letting \(z = \tau - \rho \) and then by scaling,

$$\begin{aligned}&\int _{\mathbb R}d\rho \int _{|\tau | > |\rho |} d\tau \, |\widehat{\phi }(\tau -\rho )|^2 |\widehat{\psi }_{r(w)}(\rho )|^2 |\tau |^{2H-1} |\rho |^{3-N+\beta }\\&\quad \le \int _{\mathbb R}d\rho \, |\widehat{\psi }(r(w)\rho )|^2 \int _{\mathbb R}dz \,|\widehat{\phi }(z)|^2 |z + \rho |^{2H+2-N+\beta } \\&\quad \le C \int _{\mathbb R}d\rho \, |\widehat{\psi }(r(w)\rho )|^2 \int _{\mathbb R}dz\, |\widehat{\phi }(z)|^2 |z|^{2H+2-N+\beta }\\&\qquad + C \int _{\mathbb R}d\rho \, |\widehat{\psi }(r(w)\rho )|^2 |\rho |^{2H+2-N+\beta } \int _{\mathbb R}dz\, |\widehat{\phi }(z)|^2\\&\quad \le C r(w)^{-1} + C r(w)^{-2H-3+N-\beta }, \end{aligned}$$

which is \(\le C r(w)^{-2H-3+N-\beta }\) for some larger constant C because \(2H+3-N+\beta > 1\) and \(r(w) \le r_0\). Hence

$$\begin{aligned} |I(w)|^2 \le C A(w) r(w)^{-2H-3+N-\beta }. \end{aligned}$$
(3.8)

Now, combining (3.6) and (3.8), we get that

$$\begin{aligned} A(w) \ge C r(w)^{2H+1-N+\beta }, \end{aligned}$$
(3.9)

and this remains true if \(r(w) = 0\). Therefore, we can integrate both sides of (3.9) over \(\mathbb {S}^{N-1}\) with respect to \(\sigma (dw)\) and use (3.5) to get (3.4). \(\square \)
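As a consistency check, take \(n = 1\) in (3.3) and write \(a = t - t^1\), \(b = x - x^1\). The function \((a, b) \mapsto \int _{\mathbb {S}^{N-1}} |a + b \cdot w|^{2\alpha }\, \sigma (dw)\) is continuous, homogeneous of order \(2\alpha \), and strictly positive for \((a, b) \ne (0, 0)\), since \(a + b \cdot w\) can vanish for \(\sigma \)-a.e. \(w \in \mathbb {S}^{N-1}\) only if \(a = 0\) and \(b = 0\). By compactness of the set \(\{(a, b) : |a| + |b| = 1\}\) and homogeneity, it follows that

$$\begin{aligned} \mathrm {Var}(u_1(t, x)|u_1(t^1, x^1)) \ge C \int _{\mathbb {S}^{N-1}} |a + b \cdot w|^{2\alpha }\, \sigma (dw) \ge C' (|t-t^1| + |x-x^1|)^{2\alpha }, \end{aligned}$$

which matches the order of the lower bound in (3.2).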

When \(N = 1\), \(\sigma \) is supported on \(\{-1, 1\}\). In this case, \(u(t, x)\) satisfies sectorial LND under the change of coordinates \((t, x) \mapsto (t+x, t-x)\). It is known that the Brownian sheet and fractional Brownian sheets satisfy the sectorial LND property [30, 49].

Corollary 3.3

When \(N = 1\), (3.3) becomes

$$\begin{aligned} \begin{aligned}&\mathrm {Var}(u_1(t, x)|u_1(t^1, x^1), \dots , u_1(t^n, x^n)) \\&\qquad \ge C \Big ( \min _{1 \le j \le n}|(t+x) - (t^j+x^j)|^{2\alpha } + \min _{1\le j \le n} |(t-x) - (t^j-x^j)|^{2\alpha } \Big ). \end{aligned} \end{aligned}$$

Moreover, \(u(t, x)\) satisfies the strong LND property in one variable (t or x) while the other variable is held fixed.

Corollary 3.4

Fix \(x_0 \in {\mathbb R}^N\). For any \(0< a < \infty \), there exist constants \(C > 0\) and \(r_0 > 0\) such that for all integers \(n \ge 1\), for all \(t, t^1, \dots , t^n \in [a, \infty )\) with \(\max _j |t-t^j| \le r_0\), we have

$$\begin{aligned} \mathrm {Var}(u_1(t, x_0) | u_1(t^1, x_0), \dots , u_1(t^n, x_0)) \ge C \min _{1 \le j \le n} |t-t^j|^{2\alpha }. \end{aligned}$$

Proposition 3.5

Fix \(t_0 > 0\). Then \(\{u_1(t_0, x) : x \in {\mathbb R}^N \}\) is a stationary Gaussian random field with a spectral density \(f_{t_0}(\xi ) = N_{t_0}(\xi ) h_\beta (\xi )\), \(\xi \in {\mathbb R}^N\), where

$$\begin{aligned} N_{t_0}(\xi ) = C |\xi |^{-2} \int _{\mathbb R}\bigg | \frac{e^{-it_0 \tau } - e^{it_0|\xi |}}{\tau +|\xi |} - \frac{e^{-it_0\tau } - e^{-it_0|\xi |}}{\tau -|\xi |} \bigg |^2 |\tau |^{1-2H} \, d\tau .\end{aligned}$$

Moreover, for any \(0< M < \infty \), there exists a constant \(C > 0\) such that for all integers \(n \ge 1\), for all \(x, x^1, \dots , x^n \in \{ y \in {\mathbb R}^N : |y| \le M \}\),

$$\begin{aligned} \mathrm {Var}(u_1(t_0, x) | u_1(t_0, x^1), \dots , u_1(t_0, x^n)) \ge C \min _{1 \le j \le n} |x - x^j|^{2\alpha }. \end{aligned}$$
(3.10)

Proof

By (2.4) and (2.5), we have \({\mathbb E}[u_1(t_0, x) u_1(t_0, y)] = \int _{{\mathbb R}^N} e^{-i (x - y) \cdot \xi } N_{t_0}(\xi ) h_\beta (\xi )\, d\xi \), which verifies the first assertion. To prove (3.10), use (2.4), (2.1) and (2.5) to get that, for any \(a_1, \dots , a_n \in {\mathbb R}\),

$$\begin{aligned} {\mathbb E}\bigg [ \Big ( u_1(t_0, x) - \sum _{j=1}^n a_j u_1(t_0, x^j) \Big )^2 \bigg ] \ge C \int _{{\mathbb R}^N} \Big | 1 - \sum _{j=1}^n a_j e^{i(x-x^j)\cdot \xi } \Big |^2 N_{t_0}(\xi ) |\xi |^{-\beta } \, d\xi . \end{aligned}$$

By Lemma 6.2 of [2], there exists a constant \(C>0\) depending on \(t_0\) such that for all \(\xi \in {\mathbb R}^N\),

$$\begin{aligned} N_{t_0}(\xi ) \ge \frac{C}{\sqrt{|\xi |^2 + 1}} \int _{\mathbb R}\frac{|\tau |^{1-2H}}{|\tau |^2 + |\xi |^2 + 1} d\tau . \end{aligned}$$

Fix any two nonnegative smooth test functions \(\phi : {\mathbb R}\rightarrow {\mathbb R}\) and \(\psi : {\mathbb R}^N \rightarrow {\mathbb R}\) satisfying the following properties: \(\phi \) is supported on \([-1, 1]\), \(\psi \) is supported on \(\{ \xi \in {\mathbb R}^N : |\xi | \le 1 \}\), and \(\phi (0) = \psi (0) = 1\). Let \(r = \min _j |x - x^j|\), \(\phi _r(\tau ) = r^{-1}\phi (r^{-1}\tau )\), \(\psi _r(\xi ) = r^{-N}\psi (r^{-1}\xi )\) and consider

$$\begin{aligned} I := \iint _{{\mathbb R}\times {\mathbb R}^N} \Big ( 1 - \sum _{j=1}^n a_j e^{i(x-x^j)\cdot \xi } \Big ) \widehat{\phi }_r(\tau ) \widehat{\psi }_r(\xi )\, d\tau \, d\xi . \end{aligned}$$

By Fourier inversion,

$$\begin{aligned} I = (2\pi )^{1+N} \phi _r(0) \Big (\psi _r(0) - \sum _{j=1}^n a_j \psi _r(x-x^j)\Big ) = (2\pi )^{1+N} r^{-1-N}. \end{aligned}$$
(3.11)

On the other hand, by the Cauchy–Schwarz inequality,

$$\begin{aligned} I^2&\le C\, {\mathbb E}\bigg [ \Big ( u_1(t_0, x) - \sum _{j=1}^n a_j u_1(t_0, x^j) \Big )^2 \bigg ]\\&\quad \times \iint _{{\mathbb R}\times {\mathbb R}^N} \sqrt{|\xi |^2 + 1}\, (|\tau |^2 + |\xi |^2 + 1) |\tau |^{2H-1} |\xi |^\beta |\widehat{\phi }(r\tau ) \widehat{\psi }(r\xi )|^2\, d\tau \, d\xi . \end{aligned}$$

By scaling, the double integral is equal to

$$\begin{aligned} r^{-2H-3-N-\beta } \iint _{{\mathbb R}\times {\mathbb R}^N} \sqrt{|\xi |^2 + r^2}\, (|\tau |^2 + |\xi |^2 + r^2) |\tau |^{2H-1} |\xi |^\beta |\widehat{\phi }(\tau ) \widehat{\psi }(\xi )|^2 \, d\tau \, d\xi , \end{aligned}$$

which is \(\le C r^{-2H-3-N-\beta }\) by applying \(r \le 2M\) to the integrand. This and (3.11) imply that

$$\begin{aligned}{\mathbb E}\bigg [ \Big ( u_1(t_0, x) - \sum _{j=1}^n a_j u_1(t_0, x^j) \Big )^2 \bigg ] \ge C r^{2H+1-N+\beta }, \end{aligned}$$

where C does not depend on \(n, x, x^j\) or \(a_j\). This proves (3.10). \(\square \)

A property of the conditional variances \(\mathrm {Var}(u_1(t, x)| u_1(t^1, x^1), \dots , u_1(t^n, x^n))\) is that they are strictly positive whenever the points \((t^j, x^j)\) are all different from \((t, x)\). Indeed, u has the following linear independence property:

Proposition 3.6

For any \(n \ge 2\), for any distinct points \((t^1, x^1), \dots , (t^n, x^n)\) in \((0, \infty ) \times {\mathbb R}^N\), the Gaussian random variables \(u_1(t^1, x^1), \dots , u_1(t^n, x^n)\) are linearly independent.

Proof

Suppose \(a_1, \dots , a_n\) are real numbers such that \(\sum _{j=1}^n a_j u_1(t^j, x^j) = 0\) a.s. Then by (2.4),

$$\begin{aligned} 0 = {\mathbb E}\bigg (\sum _{j=1}^n a_j u_1(t^j, x^j) \bigg )^2 = \int _{\mathbb R}d\tau \int _{{\mathbb R}^N} d\xi \,\bigg | \sum _{j=1}^n a_j \mathscr {F} g_{t^j, x^j}(\tau , \xi ) \bigg |^2 |\tau |^{1-2H} h_\beta (\xi ). \end{aligned}$$

It follows that for all \(\tau \in \mathbb {R}\) and \(\xi \in \mathbb {R}^N\), \(\sum _{j=1}^n a_j \mathscr {F}g_{t^j, x^j}(\tau , \xi ) = 0\), which, by (2.5), implies

$$\begin{aligned} \sum _{j=1}^n b_j e^{-i t^j \tau } + c_1 \tau + c_2 = 0, \end{aligned}$$
(3.12)

where \(b_j = -2 a_j |\xi | e^{-i x^j \cdot \xi }\),

$$\begin{aligned} c_1&= -\sum _{j=1}^n a_j e^{-i x^j \cdot \xi }(e^{it^j|\xi |} - e^{-it^j|\xi |}),\\ c_2&= \sum _{j=1}^n a_j |\xi | e^{-i x^j \cdot \xi }(e^{it^j|\xi |} + e^{-it^j|\xi |}). \end{aligned}$$

We need to show that \(a_j = 0\) for all \(j = 1, \dots , n\). Let \(\hat{t}^1, \dots , \hat{t}^p\) be all distinct values of the \(t^j\)’s. If we fix an arbitrary \(\xi \in \mathbb {R}^N\) and differentiate (3.12) with respect to \(\tau \), we see that for all \(\tau \in {\mathbb R}\),

$$\begin{aligned} \sum _{\ell = 1}^p \bigg (-i\hat{t}^\ell \sum _{j: t^j = \hat{t}^\ell } b_j\bigg ) e^{-i \hat{t}^\ell \tau } + c_1 = 0. \end{aligned}$$

Since the functions \(\{e^{-i \hat{t}^1 \tau }, \dots , e^{-i \hat{t}^p \tau }, 1 \}\) are linearly independent over \(\mathbb {C}\), we have

$$\begin{aligned} -i\hat{t}^\ell \sum _{j: t^j = \hat{t}^\ell } b_j= 0 \end{aligned}$$

for all \(\ell = 1, \dots , p\). Since \(\xi \in {\mathbb R}^N\) is arbitrary, this implies that

$$\begin{aligned} \sum _{j: t^j = \hat{t}^\ell } a_j e^{-ix^j\cdot \xi } = 0 \end{aligned}$$
(3.13)

for all \(\xi \in \mathbb {R}^N\) and all \(\ell = 1, \dots , p\). Since the points \((t^1, x^1), \dots , (t^n, x^n)\) are distinct, for any fixed \(\ell \), the \(x^j\)’s that appear in the sum in (3.13) are distinct from each other. By linear independence of the functions \(e^{-i x^j \cdot \xi }\), we conclude that \(a_j = 0\) for all j. \(\square \)
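The key step in the proof is the linear independence of the complex exponentials \(\tau \mapsto e^{-i \hat{t}^\ell \tau }\) with distinct frequencies. As a purely numerical illustration (not part of the proof, with arbitrarily chosen sample frequencies and evaluation points), a matrix of samples of such exponentials at generic points has full column rank:

```python
import numpy as np

# Numerical illustration: complex exponentials tau -> exp(-i t^j tau) with
# distinct frequencies t^j are linearly independent, so a sample matrix at
# generic points has full column rank.  Frequencies and sample points are
# arbitrary choices for this illustration.
rng = np.random.default_rng(0)
t = np.array([0.3, 1.1, 2.0, 4.7])          # distinct "frequencies" t^j
tau = rng.uniform(-10, 10, size=50)          # generic sample points tau_k
M = np.exp(-1j * np.outer(tau, t))           # M[k, j] = exp(-i t^j tau_k)
assert np.linalg.matrix_rank(M) == len(t)    # full column rank
```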

In fact, using the LND property of \(u(t, x)\), we can obtain a stronger result than the Hölder regularity of the sample functions, namely the exact uniform modulus of continuity:

Theorem 3.7

Assume (3.1). For any compact interval I in \((0, \infty ) \times {\mathbb R}^N\), there exists a constant \(0< C < \infty \) such that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _{\begin{array}{c} (t, x), (s, y) \in I,\\ 0 < |t-s|+|x-y| \le \varepsilon \end{array}} \frac{|u(t, x) - u(s, y)|}{{(|t-s|+|x-y|)}^\alpha \sqrt{\log \big [1+(|t-s|+|x-y|)^{-1}\big ]}} = C \quad \text {a.s.} \end{aligned}$$
(3.14)

Proof

Using the Karhunen–Loève expansion of \(u(t, x)\) and Kolmogorov’s zero–one law, we can show that the limit (3.14) holds for some constant \(0 \le C \le \infty \) (cf. Lemma 7.1.1 of [35]). Then, from (3.2), we can use the standard metric entropy result for Gaussian modulus of continuity [24] to prove that this limit is finite, and use Proposition 3.2 to prove that it is also strictly positive. The proof is similar to that of Theorem 3.1 of [34], so we omit the details. \(\square \)

4 Existence of Jointly Continuous Local Times

The objective of this section is to establish the existence of jointly continuous local times for the solution of (1.1). Let us first recall a necessary and sufficient condition for the existence of square-integrable local times for general Gaussian random fields based on the Fourier analytic approach of Berman; see [7, 27, p.36].

Let \(X = \{X(z) : z \in T\}\) be an \({\mathbb R}^d\)-valued Gaussian random field on a compact interval \(T \subset {\mathbb R}^k\). The Fourier transform (or characteristic function) of the occupation measure \(\nu _T\) of X is

$$\begin{aligned} \hat{\nu }_T(\xi ) = \int _{{\mathbb R}^d} e^{i\xi \cdot v} \nu _T(dv) = \int _T e^{i\xi \cdot X(z)} \,dz. \end{aligned}$$

By the Plancherel theorem, a necessary and sufficient condition for X to have a square-integrable local time on T, namely, \(L(\cdot , T) \in L^2(\lambda _d \times {\mathbb P})\), is

$$\begin{aligned} \int _{{\mathbb R}^d} \int _T \int _T {\mathbb E}[e^{i\xi \cdot (X(z) - X(z'))}] \,dz\,dz'\,d\xi < \infty . \end{aligned}$$
(4.1)

The integral in (4.1) above is equal to \({\mathbb E}\int _{{\mathbb R}^d} |\hat{\nu }_T(\xi )|^2 d\xi \). In particular, when (4.1) holds, a version of the local time can be obtained by the inverse \(L^2\)-Fourier transform of \(\hat{\nu }_T\):

$$\begin{aligned} L(v, T) \overset{L^2}{=} \lim _{M \rightarrow \infty } (2\pi )^{-d} \int _{[-M, M]^d} e^{-i\xi \cdot v} \int _T e^{i\xi \cdot X(z)} dz\,d\xi . \end{aligned}$$
(4.2)
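The inversion formula (4.2) can be sanity-checked on a deterministic toy example (an assumption for illustration only): for \(X(z) = z\) on \(T = [0, 1]\) with \(d = 1\), the occupation measure is Lebesgue measure on [0, 1], so the local time is the indicator \(L(v, T) = 1_{[0,1]}(v)\). A minimal numerical sketch of the truncated inverse Fourier integral recovers this:

```python
import numpy as np

# Toy check of (4.2): X(z) = z on T = [0, 1], so L(v, T) = 1_{[0,1]}(v).
# Here hat nu_T(xi) = int_0^1 exp(i xi z) dz = (exp(i xi) - 1)/(i xi),
# evaluated in closed form; the truncation level is M = 200.
xi = np.linspace(-200.0, 200.0, 40001)
dxi = xi[1] - xi[0]
nu_hat = np.ones_like(xi, dtype=complex)
nz = xi != 0
nu_hat[nz] = (np.exp(1j * xi[nz]) - 1.0) / (1j * xi[nz])

def local_time(v):
    # (2 pi)^{-1} int_{-M}^{M} exp(-i xi v) hat nu_T(xi) d xi, cf. (4.2)
    return (np.exp(-1j * xi * v) * nu_hat).sum().real * dxi / (2 * np.pi)

assert abs(local_time(0.5) - 1.0) < 0.05   # v inside [0, 1]: L = 1
assert abs(local_time(2.0)) < 0.05         # v outside [0, 1]: L = 0
```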

There are several ways to consider the local times of the stochastic wave equation (1.1). The solution \(u(t, x)\) can be regarded as a process in t, in x, or in \((t, x)\). Using (4.1) and (3.2), we can easily derive the following necessary and sufficient conditions for the existence of square-integrable local times for u, for each of the three cases.

Theorem 4.1

Assume (3.1). Let \(T_1 \subset (0, \infty )\) and \(T_2 \subset {\mathbb R}^N\) be compact intervals and \(T = T_1 \times T_2\).

  1. (i)

    For any fixed \(x_0 \in {\mathbb R}^N\), \(\{ u(t, x_0) : t \in T_1\}\) has a square-integrable local time \(L^{x_0}(v, T_1)\) on \(T_1\) if and only if \(\alpha d < 1\).

  2. (ii)

    For any fixed \(t_0 > 0\), \(\{ u(t_0, x) : x \in T_2\}\) has a square-integrable local time \(L_{t_0}(v, T_2)\) on \(T_2\) if and only if \(\alpha d < N\).

  3. (iii)

    \(\{ u(t, x) : (t, x) \in T\}\) has a square-integrable local time \(L(v, T)\) on T if and only if \(\alpha d < 1+N\).

By Corollary 3.4 and Proposition 3.5, u satisfies the LND property (in the sense of Berman or Pitt) in one variable t or x when the other variable is held fixed. Therefore, if the conditions in (i) and (ii) above hold, then the joint continuity and Hölder conditions of the local times follow from the standard results of [11, 27, 43]. For case (iii), when \(N = 1\), \(u(t, x)\) satisfies sectorial LND by Corollary 3.3, so the results of [50] can be applied; otherwise, u satisfies a different type of strong LND which takes an integral form by Proposition 3.2, so the standard results of [11, 27, 43, 50, 51] cannot be directly applied. It can be seen from (2.6) that if \(\alpha d < 1\), then

$$\begin{aligned} L(v, T) = \int _{T_2} L^x(v, T_1)\, dx \quad \text {a.e. } v, \end{aligned}$$

and if \(\alpha d < N\), then

$$\begin{aligned} L(v, T) = \int _{T_1} L_t(v, T_2)\, dt \quad \text {a.e. } v. \end{aligned}$$

While these relations may allow us to deduce the regularity of \(L(v, T)\) from that of \(L_t(v, T_2)\) or \(L^x(v, T_1)\), they are not available when \(N \le \alpha d < 1+N\).

The main result of this section is Theorem 4.8, which establishes joint continuity of the local times of u, particularly for case (iii) above. Our approach is to directly exploit the spherical LND property in Proposition 3.2 to obtain moment estimates for the local times.

Lemma 4.2

Let \(p > 0\) and T be a compact interval in \((0, \infty ) \times {\mathbb R}^N\). If \(\alpha p < 1+N\), then there exists a constant \(C < \infty \) such that for all intervals I in T, \(n \ge 1\) and \((t^1, x^1), \dots , (t^n, x^n) \in I\),

$$\begin{aligned}&\int _{I} dt \,dx \left[ \int _{\mathbb {S}^{N-1}} \min _{1\le i \le n} |(t + x\cdot w) - (t^i + x^i \cdot w)|^{2\alpha } \sigma (dw) \right] ^{-\frac{p}{2}}\\&\quad \le C n^{\alpha p} [\lambda _{1+N}(I)]^{1-\frac{\alpha p}{1+N}}. \end{aligned}$$

Proof

Fix \((t^1, x^1), \dots , (t^n, x^n) \in I\). Let \(\delta > 0\) be a small constant to be determined. For \(\ell = 1, \dots , N\), let \(e_\ell \) denote the unit vector in \({\mathbb R}^N\) whose \(\ell \)-th entry is 1 and all other entries are 0. Let \(e_0 = -e_1\). Also, let \(S(e_\ell , \delta ) = \{ w \in \mathbb {S}^{N-1} : |w - e_\ell | \le \delta \}\). Suppose \(\delta \) is small enough so that \(S(e_0, \delta ), \dots , S(e_N, \delta )\) are disjoint. For each \(0 \le \ell \le N\), fix a rotation matrix \(R_\ell \) such that \(R_\ell e_1 = e_\ell \) and let \(w_\ell = R_\ell w\). Then

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {S}^{N-1}} \min _{1\le i \le n} |(t + x\cdot w) - (t^i + x^i \cdot w)|^{2\alpha } \sigma (dw)\\&\quad \ge \sum _{\ell =0}^{N} \int _{S(e_\ell , \delta )} \min _{1\le i \le n} |(t + x\cdot w) - (t^i + x^i \cdot w)|^{2\alpha } \sigma (dw)\\&\quad = \int _{S(e_1, \delta )} \sum _{\ell =0}^{N} \min _{1\le i \le n} |(t + x\cdot w_\ell ) - (t^i + x^i \cdot w_\ell )|^{2\alpha } \sigma (dw). \end{aligned} \end{aligned}$$

Let \(M = \sigma (S(e_1, \delta ))\). Since \(s \mapsto s^{-p/2}\) is a convex function on \({\mathbb R}_+\), by Jensen’s inequality,

$$\begin{aligned} \begin{aligned}&\int _I dt\,dx \left[ \int _{\mathbb {S}^{N-1}} \min _{1\le i \le n} |(t + x\cdot w) - (t^i + x^i \cdot w)|^{2\alpha } \sigma (dw) \right] ^{-\frac{p}{2}}\\&\quad \le M^{-p/2} \int _I dt\, dx \Bigg [ \int _{S(e_1, \delta )} \sum _{\ell =0}^N \min _{1\le i \le n} |(t + x\cdot w_\ell ) - (t^i + x^i \cdot w_\ell )|^{2\alpha } \frac{\sigma (dw)}{M}\Bigg ]^{-\frac{p}{2}}\\&\quad \le M^{-p/2} \int _I dt\,dx \int _{S(e_1, \delta )} \Bigg [ \sum _{\ell =0}^N \min _{1\le i \le n} |(t + x\cdot w_\ell ) - (t^i + x^i \cdot w_\ell )|^{2\alpha }\Bigg ]^{-\frac{p}{2}} \frac{\sigma (dw)}{M}. \end{aligned} \end{aligned}$$

By using the inequality \((\sum _{\ell =0}^N |z_\ell |)^\alpha \le \sum _{\ell =0}^N |z_\ell |^\alpha \) for \(0< \alpha < 1\), and Fubini’s theorem, this is

$$\begin{aligned} \le C \int _{S(e_1, \delta )} \sigma (dw) \int _I dt\,dx \Bigg [ \sum _{\ell =0}^N \min _{1\le i \le n} |(t + x\cdot w_\ell ) - (t^i + x^i \cdot w_\ell )|^{2}\Bigg ]^{-\frac{\alpha p}{2}}. \end{aligned}$$
(4.3)

For each \(w \in S(e_1, \delta )\), we estimate the integral over I using the linear transformation from \({\mathbb R}^{1+N}\) to itself

$$\begin{aligned}f_w : (t, x) \mapsto y = (y_0, \dots , y_N)\end{aligned}$$

defined by \(y_\ell = t + x \cdot w_\ell \) for \(\ell = 0, \dots , N\). Write \(w_\ell = (w_{\ell ,1}, \dots , w_{\ell ,N})\) and denote the Jacobian by

$$\begin{aligned} J_w = \det Df_w = \det \begin{pmatrix} 1 &{} w_{0,1} &{} \cdots &{} w_{0, N}\\ 1 &{} w_{1,1} &{} \cdots &{} w_{1, N}\\ \vdots &{} &{} \ddots &{} \\ 1 &{} w_{N, 1} &{} \cdots &{} w_{N, N} \end{pmatrix}. \end{aligned}$$

Since \(w \mapsto J_w\) is continuous and \(J_{e_1} = 2\), we can choose and fix a small enough constant \(0< \delta < 1\) such that

$$\begin{aligned} 1 \le J_w \le 3 \quad \text {for all } w \in S(e_1, \delta ). \end{aligned}$$
(4.4)

Fix \(w \in S(e_1, \delta )\) and let \(y^i_\ell = y^i_\ell (w) = t^i + x^i \cdot w_\ell \). Then, under the transformation,

$$\begin{aligned} \begin{aligned}&\int _I dt\,dx \Bigg [ \sum _{\ell =0}^N \min _{1\le i \le n} |(t + x\cdot w_\ell ) - (t^i + x^i \cdot w_\ell )|^2\Bigg ]^{-\frac{\alpha p}{2}}\\&\quad \le C \int _{f_w(I)} \frac{dy}{{\big (\sum _{\ell =0}^N \min \limits _{1 \le i\le n} |y_\ell - y^i_\ell |^2\big )}^{\frac{\alpha p}{2}}}. \end{aligned} \end{aligned}$$

Consider the Cartesian product \(Z = \prod _{\ell =0}^N \{ y^1_\ell , \dots , y^{n}_\ell \}\). This set consists of at most \(n^{1+N}\) different points in \({\mathbb R}^{1+N}\). For each \(z = (z_0, \dots , z_N) \in Z\), define

$$\begin{aligned} \Gamma _z = \Big \{ y \in f_w(I) : |y_\ell - z_\ell | = \min _{1 \le i \le n} |y_\ell - y^i_\ell | \text { for all } \ell = 0, \dots , N \Big \}. \end{aligned}$$

Then \(\bigcup _{z \in Z} \Gamma _z = f_w(I)\) and the interiors of \(\Gamma _z\) are non-overlapping, so that

$$\begin{aligned} \begin{aligned} \int _{f_w(I)} \frac{dy}{{\big (\sum _{\ell =0}^N \min \limits _{1 \le i\le n} |y_\ell - y^i_\ell |^2\big )}^{\frac{\alpha p}{2}}}&= \sum _{z \in Z} \int _{\Gamma _z} \frac{dy}{|y-z|^{\alpha p}}. \end{aligned} \end{aligned}$$

For each \(z \in Z\), we compute the integral over \(\Gamma _z\) using polar coordinates \(y = z + \rho \theta \). Note that \(f_w(I)\) is a convex set in \({\mathbb R}^{1+N}\), and so is \(\Gamma _z\). Thus, for each \(\theta \in \mathbb {S}^N\) the variable \(\rho \) takes values between two nonnegative numbers \(\rho _z(\theta ) \le \tilde{\rho }_z(\theta )\). Let \(\sigma (d\theta )\) be the surface measure on \(\mathbb {S}^N\). Then

$$\begin{aligned} \begin{aligned} \sum _{z \in Z} \int _{\Gamma _z} \frac{dy}{|y-z|^{\alpha p}}&= \sum _{z \in Z} \int _{\mathbb {S}^{N}} \sigma (d\theta ) \int _{\rho _z(\theta )}^{\tilde{\rho }_z(\theta )} \rho ^{N-\alpha p} d\rho \\&= \frac{1}{1+N-\alpha p} \sum _{z \in Z} \int _{\mathbb {S}^{N}} [\tilde{\rho }_z(\theta )^{1+N-\alpha p} - \rho _z(\theta )^{1+N-\alpha p}] \sigma (d\theta )\\&\le \frac{1}{1+N-\alpha p} \sum _{z \in Z} \int _{\mathbb {S}^{N}} {[\tilde{\rho }_z(\theta )^{1+N} - \rho _z(\theta )^{1+N}]}^{1-\frac{\alpha p}{1+N}} \sigma (d\theta ). \end{aligned} \end{aligned}$$

The last inequality follows from \(b^q - a^q \le (b-a)^q\) for \(0 \le a \le b\) and \(0< q < 1\), which can be verified easily. Since the Lebesgue measure of \(\Gamma _z\) is

$$\begin{aligned} \lambda _{1+N}(\Gamma _z) = \frac{1}{1+N} \int _{\mathbb {S}^{N}}[\tilde{\rho }_z(\theta )^{1+N} - \rho _z(\theta )^{1+N}] \sigma (d\theta ) \end{aligned}$$

and the function \(s \mapsto s^{1-\frac{\alpha p}{1+N}}\) is concave on \({\mathbb R}_+\), we can use Jensen’s inequality to get that

$$\begin{aligned} \begin{aligned}&\sum _{z \in Z} \int _{\mathbb {S}^{N}} [\tilde{\rho }_z(\theta )^{1+N} - \rho _z(\theta )^{1+N}]^{1- \frac{\alpha p}{1+N}} \sigma (d\theta )\\&\quad \le \sum _{z \in Z} \bigg (\int _{\mathbb {S}^{N}} [\tilde{\rho }_z(\theta )^{1+N} - \rho _z(\theta )^{1+N}] \sigma (d\theta ) \bigg )^{1-\frac{\alpha p}{1+N}}\\&\quad = {(1+N)}^{1-\frac{\alpha p}{1+N}} \sum _{z \in Z} \big (\lambda _{1+N}(\Gamma _z)\big )^{1-\frac{\alpha p}{1+N}}. \end{aligned} \end{aligned}$$

Let |Z| denote the cardinality of Z. Then by Jensen’s inequality again, this is

$$\begin{aligned} \begin{aligned}&= {(1+N)}^{1-\frac{\alpha p}{1+N}} |Z| \cdot \frac{1}{|Z|}\sum _{z \in Z} \big (\lambda _{1+N}(\Gamma _z)\big )^{1-\frac{\alpha p}{1+N}}\\&\quad \le C|Z| \bigg ( \frac{1}{|Z|}\sum _{z \in Z} \lambda _{1+N}(\Gamma _z) \bigg )^{1-\frac{\alpha p}{1+N}}\\&\quad = C{|Z|}^{\frac{\alpha p}{1+N}} {[\lambda _{1+N}(f_w(I))]}^{1-\frac{\alpha p}{1+N}}. \end{aligned} \end{aligned}$$

Since \(|Z| \le n^{1+N}\) and \(\lambda _{1+N}(f_w(I)) \le C \lambda _{1+N}(I)\) by (4.4), we deduce that

$$\begin{aligned} \int _I dt\,dx \Bigg [ \sum _{\ell =0}^N \min _{1\le i \le n} |(t+ x\cdot w_\ell ) - (t^i + x^i \cdot w_\ell )|^{2\alpha }\Bigg ]^{-p/2} \le C n^{\alpha p} {[\lambda _{1+N}(I)]}^{1-\frac{\alpha p}{1+N}}, \end{aligned}$$

where C is a constant independent of \(w \in S(e_1, \delta )\). Then put this back into (4.3) to complete the proof. \(\square \)
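The two elementary inequalities used in the proof, \((\sum _{\ell } |z_\ell |)^\alpha \le \sum _{\ell } |z_\ell |^\alpha \) for \(0< \alpha < 1\) and \(b^q - a^q \le (b-a)^q\) for \(0 \le a \le b\), \(0< q < 1\), can be spot-checked numerically (a sanity check over random samples, not a proof):

```python
import numpy as np

# Random spot-check of the two elementary inequalities from the proof of
# Lemma 4.2; sample sizes and ranges are arbitrary choices.
rng = np.random.default_rng(1)
for _ in range(1000):
    alpha = rng.uniform(0.01, 0.99)          # 0 < alpha < 1
    z = rng.uniform(0, 10, size=5)
    # subadditivity: (sum |z_l|)^alpha <= sum |z_l|^alpha
    assert z.sum() ** alpha <= (z ** alpha).sum() + 1e-12
    q = rng.uniform(0.01, 0.99)              # 0 < q < 1
    a, b = np.sort(rng.uniform(0, 10, size=2))
    # b^q - a^q <= (b - a)^q for 0 <= a <= b
    assert b ** q - a ** q <= (b - a) ** q + 1e-12
```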

The proof of Lemma 4.2 also yields the following result.

Lemma 4.3

Let T be a compact interval in \({\mathbb R}^k\). Let \(p > 0\) be such that \(\alpha p < k\). Then there exists a finite constant C such that for all convex subsets F of T, for all \(n \ge 1\) and \(y^1, \dots , y^n \in F\),

$$\begin{aligned} \int _F \frac{dy}{\big ( \sum _{\ell =1}^k \min \limits _{1\le i \le n} |y_\ell - y^i_\ell |^2 \big )^{\frac{\alpha p}{2}}} \le C n^{\alpha p} [\lambda _k(F)]^{1-\frac{\alpha p}{k}}. \end{aligned}$$

For any mean-zero Gaussian vector \((X_1, \dots , X_n)\), the following formula can be easily verified:

$$\begin{aligned} \det \mathrm {Cov}(X_1, \dots , X_n) = \mathrm {Var}(X_1) \prod _{j=2}^n \mathrm {Var}(X_j| X_1, \dots , X_{j-1}), \end{aligned}$$
(4.5)

where \(\det \mathrm {Cov}(X_1, \dots , X_n)\) denotes the determinant of the covariance matrix of \((X_1, \dots , X_n)\).
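Formula (4.5) can be verified numerically via the Cholesky factorization: if \(L\) is the Cholesky factor of the covariance matrix \(\Sigma \), then \(\mathrm {Var}(X_j| X_1, \dots , X_{j-1}) = L_{jj}^2\) (a Schur-complement identity), so the product of the conditional variances recovers \(\det \Sigma \). A minimal sketch with an arbitrary positive-definite matrix:

```python
import numpy as np

# Numerical check of (4.5): the diagonal of the Cholesky factor of Sigma
# gives the sequential conditional standard deviations, and the product of
# their squares equals det(Sigma).  Sigma is an arbitrary PD example.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
Sigma = A @ A.T + 6 * np.eye(6)          # a positive-definite covariance
L = np.linalg.cholesky(Sigma)
cond_vars = np.diag(L) ** 2              # Var(X_1), Var(X_2|X_1), ...
assert np.isclose(cond_vars.prod(), np.linalg.det(Sigma))
# The first factor is the unconditional variance Var(X_1):
assert np.isclose(cond_vars[0], Sigma[0, 0])
```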

Lemma 4.4

Let I be a compact interval in \({\mathbb R}\) and \(t^0 = 0\).

  1. (i)

    If \(I \subset (0, \infty )\) and B(t) is a fractional Brownian motion with Hurst index \(0< \alpha < 1\), then there exist constants \(0< C_1 \le C_2 < \infty \) such that for any \(n \ge 1\), for any \(t, t^1, \dots , t^n \in I\),

    $$\begin{aligned} C_1 \min _{0\le i \le n} |t - t^i|^{2\alpha } \le \mathrm {Var}(B(t)|B(t^1), \dots , B(t^n)) \le C_2 \min _{0\le i \le n} |t - t^i|^{2\alpha }. \end{aligned}$$
  2. (ii)

    There exists a constant \(C > 0\) such that for any \(n \ge 1\), for any \(t^1, \dots , t^n \in I\), for any permutation \(\pi \) on \(\{1, \dots , n\}\),

    $$\begin{aligned} \prod _{j=2}^n \min _{1 \le i \le j-1}|t^{\pi (j)} - t^{\pi (i)}| \ge C^n \prod _{j=2}^n \min _{1 \le i \le j-1}|t^j - t^i|. \end{aligned}$$

Proof

(i) The first inequality is due to the strong LND property of the fractional Brownian motion [43, Lemma 7.1]; the second inequality holds because the conditional variance is \(\le \mathrm {Var}(B(t)) = |t - t^0|^{2\alpha }\) (recall that \(t^0 = 0\) and \(B(0) = 0\)) and is \(\le \mathrm {Var}(B(t) - B(t^i)) = |t - t^i|^{2\alpha }\) for every \(1 \le i \le n\).

(ii) Clearly, both sides of the inequality are translation invariant, so by shifting we may assume that \(I \subset (0, \infty )\) and \(t > \mathrm {diam}(I)\) for all \(t \in I\), so that the point \(t^0 = 0\) never attains the minimum in part (i). Apply part (i) with \(\alpha = 1/2\), so that B is a standard Brownian motion. Since \(\det \mathrm {Cov}(B(t^{\pi (1)}), \dots , B(t^{\pi (n)})) = \det \mathrm {Cov}(B(t^1), \dots , B(t^n))\), the result follows from part (i) of this lemma and the formula (4.5). \(\square \)
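For Brownian motion (\(\alpha = 1/2\)), the sandwich in Lemma 4.4(i) can be checked numerically; for the sample points below it holds with the explicit constants \(C_1 = 1/2\), \(C_2 = 1\) (these explicit values are an assumption for this illustration — the lemma only asserts the existence of \(C_1, C_2\)):

```python
import numpy as np

# Conditional variance of B(t) given B(t^1), ..., B(t^n) for standard
# Brownian motion, computed from Cov(B(s), B(u)) = min(s, u).
def cond_var_bm(t, ts):
    ts = np.asarray(ts, dtype=float)
    Sig = np.minimum.outer(ts, ts)       # covariance of the conditioning
    c = np.minimum(ts, t)                # cross-covariances with B(t)
    return t - c @ np.linalg.solve(Sig, c)

ts = [1.0, 2.0, 3.0]
for t in [0.4, 1.6, 2.3, 3.5]:
    # t^0 = 0 enters the minimum, since B(0) = 0 is known
    m = min(abs(t - s) for s in [0.0] + ts)
    cv = cond_var_bm(t, ts)
    assert 0.5 * m - 1e-9 <= cv <= m + 1e-9   # C_1 = 1/2, C_2 = 1 here
```

For instance, for \(t = 2.3\) the conditional variance is the Brownian-bridge value \((0.3)(0.7)/1 = 0.21\), which indeed lies between \(0.5 \times 0.3\) and \(0.3\).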

Lemma 4.5

[19, Lemma 2] Let \(Z_1, \dots , Z_n\) be mean-zero Gaussian random variables that are linearly independent. Let \(g: {\mathbb R}\rightarrow {\mathbb R}\) be a measurable function such that \(\int _{\mathbb R}g(x) e^{-\varepsilon x^2} dx < \infty \) for every \(\varepsilon > 0\). Then

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb R}^n} g(\xi _1) \exp \bigg [ {-\frac{1}{2} \mathrm {Var}\Big ( \sum _{j=1}^n \xi _j Z_j\Big )} \bigg ] d\xi _1 \cdots d\xi _n\\&\quad = \frac{(2\pi )^{(n-1)/2}}{[\det \mathrm {Cov}(Z_1, \dots , Z_n)]^{1/2}} \int _{\mathbb R}g\Big (\frac{x}{V_1}\Big ) \,e^{-x^2/2} \, dx, \end{aligned} \end{aligned}$$

where \(V_1^2 = \mathrm {Var}(Z_1|Z_2, \dots , Z_n)\).
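The identity of Lemma 4.5 can be verified numerically for \(n = 2\) with \(g(x) = x^2\) and a concrete covariance matrix (both arbitrary choices for this grid-based check):

```python
import numpy as np

# Grid check of Lemma 4.5, n = 2, g(x) = x^2, Sigma an arbitrary example.
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
xi = np.linspace(-8.0, 8.0, 801)
h = xi[1] - xi[0]
X1, X2 = np.meshgrid(xi, xi, indexing="ij")
# Var(xi_1 Z_1 + xi_2 Z_2) = xi^T Sigma xi
quad = Sigma[0, 0]*X1**2 + 2*Sigma[0, 1]*X1*X2 + Sigma[1, 1]*X2**2
lhs = (X1**2 * np.exp(-0.5 * quad)).sum() * h * h

V1_sq = Sigma[0, 0] - Sigma[0, 1]**2 / Sigma[1, 1]   # Var(Z_1 | Z_2)
rhs = (np.sqrt(2*np.pi) / np.sqrt(np.linalg.det(Sigma))) * \
      ((xi / np.sqrt(V1_sq))**2 * np.exp(-0.5 * xi**2)).sum() * h

assert abs(lhs - rhs) / rhs < 1e-3
```

Both sides equal \(2\pi (\det \Sigma )^{-3/2}\) here, since \(V_1^2 = \det \Sigma / \Sigma _{22}\) and \((\Sigma ^{-1})_{11} = 1/V_1^2\).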

Lemma 4.6

Let T be a compact interval in \((0, \infty ) \times {\mathbb R}^N\). Let \(q_{j,k} \ge 0\) and \(q > 0\) be such that \(\alpha (d+2q) < 1+N\) and \(\sum _{k=1}^d q_{j,k} = q\) for each j. For \(\bar{z} = (z^1, \dots , z^n) \in T^n\), let

$$\begin{aligned} J(\bar{z}) = \int _{{\mathbb R}^{nd}} \Big (\prod _{j=1}^n \prod _{k=1}^d {|\xi _k^j|}^{q_{j,k}}\Big ) {\mathbb E}\big ( e^{i\sum _{j=1}^n \sum _{k=1}^d \xi _k^j u_k(z^j)} \big ) \, d\bar{\xi }, \end{aligned}$$

where \(\bar{\xi }= (\xi ^1_1, \dots , \xi ^n_d)\). Then there exist constants \(C< \infty \) and \(r_0 > 0\) such that the following hold for all \(n \ge 2\):

  1. (i)

    For all compact intervals \(I \subset T\) with side lengths \(\le r_0\),

    $$\begin{aligned} \int _{I^n} J(\bar{z}) \, d\bar{z} \le C^n {(n!)}^{\alpha d + (\frac{1}{2} +2\alpha )q} {[\lambda _{1+N}(I)]}^{n(1 - \frac{\alpha (d+2q)}{1+N})\frac{d}{d+2q}}. \end{aligned}$$
    (4.6)
  2. (ii)

    If, in addition, I has side lengths \(\le r\) with \(0 < r \le r_0\), then

    $$\begin{aligned} \int _{I^n} J(\bar{z}) \, d\bar{z} \le C^n {(n!)}^{\alpha d + (\frac{1}{2}+\alpha )q} {r}^{n(1+N - \alpha (d+q))}. \end{aligned}$$
    (4.7)

Proof

(i). Let \(I \subset T\) be a compact interval with side lengths \(\le r_0\). By the fact that \(u_1, \dots , u_d\) are i.i.d. and Gaussian, and by the generalized Hölder inequality,

$$\begin{aligned} \begin{aligned} J(\bar{z})&= \prod _{k=1}^d \int _{{\mathbb R}^n} \prod _{j=1}^n {|\xi _k^j|}^{q_{j,k}} \exp \bigg [ -\frac{1}{2} \mathrm {Var}\Big (\sum _{j=1}^n \xi _k^j u_1(z^j)\Big ) \bigg ] d\bar{\xi }_k\\&\le \prod _{k=1}^d \prod _{j=1}^n \bigg \{ \int _{{\mathbb R}^n} {|\xi _k^j|}^{nq_{j,k}} \exp \bigg [ -\frac{1}{2} \mathrm {Var}\Big (\sum _{j=1}^n \xi _k^j u_1(z^j)\Big ) \bigg ] d\bar{\xi }_k\bigg \}^{\frac{1}{n}}, \end{aligned} \end{aligned}$$

where \(\bar{\xi }_k = (\xi ^1_k, \dots , \xi ^n_k) \in {\mathbb R}^n\). It is enough to consider points \(z^1, \dots , z^n \in I\) that are distinct from each other since the set of such points has full Lebesgue measure in \(I^n\). Then \(u_1(z^1), \dots , u_1(z^n)\) are linearly independent by Proposition 3.6. By Lemma 4.5 and Stirling’s formula, \(J(\bar{z})\) is bounded by

$$\begin{aligned} \nonumber&\quad C^n\prod _{k=1}^d \prod _{j=1}^n \Big \{{[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{-\frac{1}{2}} {[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i\ne j )]}^{-\frac{nq_{j,k}}{2}} \Gamma \Big (\frac{nq_{j,k}+1}{2}\Big )\Big \}^{\frac{1}{n}}\nonumber \\&\quad \le C^n {(n!)}^{\frac{q}{2}} {[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{-\frac{d}{2}} \, \prod _{j=1}^n{[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i \ne j)]}^{-\frac{q}{2}}. \end{aligned}$$
(4.8)

Define \(e_0, \dots , e_N, w_0, \dots , w_N\) and \(\delta \) as in the proof of Lemma 4.2. By (4.5) and Proposition 3.2, for \(r_0\) small enough,

$$\begin{aligned}&{[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{\frac{d}{2}} \prod _{j=1}^n {[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i \ne j)]}^{\frac{q}{2}}\\&\ge C^n \prod _{j=1}^n \bigg [ \int _{\mathbb {S}^{N-1}}r_j(w)^{2\alpha } \sigma (dw) \bigg ]^{\frac{d}{2}} \prod _{j=1}^n \bigg [\int _{\mathbb {S}^{N-1}} \tilde{r}_j(w)^{2\alpha } \sigma (dw) \bigg ]^{\frac{q}{2}}\\&\ge C^n \prod _{j=1}^n \bigg [ \int _{S(e_1, \delta )} \sum _{\ell =0}^N r_j(w_\ell )^{2\alpha } \sigma (dw) \bigg ]^{\frac{d}{2}} \prod _{j=1}^n \bigg [\int _{S(e_1, \delta )} \sum _{\ell =0}^N\tilde{r}_j(w_\ell )^{2\alpha } \sigma (dw) \bigg ]^{\frac{q}{2}}, \end{aligned}$$

where \(r_1(w) \equiv 1\),

$$\begin{aligned} r_j(w)&= \min _{1 \le i \le j-1} |(t^j + x^j \cdot w) - (t^i + x^i \cdot w)|, \quad 2 \le j \le n,\\ \tilde{r}_j(w)&= \min _{1 \le i \le n, \,i \ne j} |(t^j + x^j \cdot w) - (t^i + x^i \cdot w)|, \quad 1\le j \le n. \end{aligned}$$

Then, by the generalized Hölder inequality,

$$\begin{aligned}&{[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{\frac{d}{2}} \prod _{j=1}^n {[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i \ne j)]}^{\frac{q}{2}}\\&\quad \ge C^n \bigg [\int _{S(e_1, \delta )} \prod _{j=1}^n \Big (\sum _{\ell =0}^N r_j(w_\ell )^{2\alpha }\Big )^{\frac{d}{2m}} \Big ( \sum _{\ell =0}^N \tilde{r}_j(w_\ell )^{2\alpha }\Big )^{\frac{q}{2m}} \sigma (dw) \bigg ]^m, \end{aligned}$$

where \(m = \frac{n(d+q)}{2}\). Recall that \(\delta \) is a constant and \(M = \sigma (S(e_1, \delta ))\). Then, by Jensen’s inequality for the convex function \(x \mapsto x^{-m}\) on \({\mathbb R}_+\), we have

$$\begin{aligned} \begin{aligned}&{[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{-\frac{d}{2}} \prod _{j=1}^n {[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i \ne j)]}^{-\frac{q}{2}}\\&\le C^n M^{-m-1} \int _{S(e_1, \delta )} \prod _{j=1}^n\bigg [ \Big ( \sum _{\ell =0}^N r_j(w_\ell )^{2\alpha } \Big )^{-\frac{d}{2}} \Big ( \sum _{\ell =0}^N \tilde{r}_j(w_\ell )^{2\alpha } \Big )^{-\frac{q}{2}}\bigg ] \sigma (dw)\\&\le C^n \int _{S(e_1, \delta )} \prod _{j=1}^n\bigg [ \Big ( \sum _{\ell =0}^N r_j(w_\ell )^2\Big )^{-\frac{\alpha d}{2}} \Big (\sum _{\ell =0}^N \tilde{r}_j(w_\ell )^2\Big )^{-\frac{\alpha q}{2}} \bigg ] \sigma (dw). \end{aligned} \end{aligned}$$
(4.9)

Recall the transformation \(f_w: z = (t, x) \mapsto y = (y_0, \dots , y_N)\) defined by \(y_\ell = t + x \cdot w_\ell \) and that it satisfies (4.4). To estimate the integral of \(J(\bar{z})\) over \(I^n\), first use (4.8) and (4.9). Then, by interchanging the order of integration and using the transformation, followed by Hölder’s inequality with exponents \(\frac{d+2q}{d}\) and \(\frac{d+2q}{2q}\), we get that

$$\begin{aligned}&\int _{I^n} J(\bar{z}) \, d\bar{z} \le C^n {(n!)}^{\frac{q}{2}} \int _{S(e_1, \delta )} \sigma (dw)\\&\quad \int _{{[f_w(I)]}^n} \frac{d y^1 \cdots d y^n}{\prod _{j=1}^n\big [ \big ( \sum _{\ell =0}^N \min \limits _{1 \le i \le j-1} |y^j_\ell - y^i_\ell |^2\big )^{\frac{\alpha d}{2}}\, \big (\sum _{\ell =0}^N\min \limits _{i:\,i \ne j}|y^j_\ell - y^i_\ell |^2\big )^{\frac{\alpha q}{2}} \big ]}\\&\quad \le C^n {(n!)}^{\frac{q}{2}} \int _{S(e_1, \delta )} A_1(w) A_2(w) \sigma (dw), \end{aligned}$$

where \(y^j_\ell = t^j + x^j \cdot w_\ell \),

$$\begin{aligned} A_1(w)= & {} \Bigg \{ \int _{{[f_w(I)]}^n} \frac{dy^1\cdots dy^n}{\prod _{j=1}^n \big ( \sum _{\ell =0}^N \min \limits _{1 \le i \le j-1} |y^j_\ell - y^i_\ell |^2\big )^{\frac{\alpha (d+2q)}{2}}}\Bigg \}^{\frac{d}{d+2q}},\\ A_2(w)= & {} \Bigg \{ \int _{{[f_w(I)]}^n} \frac{dy^1\cdots dy^n}{\prod _{j=1}^n \big ( \sum _{\ell =0}^N \min \limits _{i:\, i \ne j} |y^j_\ell - y^i_\ell |^2\big )^{\frac{\alpha (d+2q)}{4}}}\Bigg \}^{\frac{2q}{d+2q}}. \end{aligned}$$

Now, we need the assumption that \(\alpha (d+2q) < 1+N\), and recall that (4.4) implies \(\lambda _{1+N}(f_w(I)) \le C\lambda _{1+N}(I)\) for all \(w \in S(e_1, \delta )\). Then by Lemma 4.3, we have

$$\begin{aligned} A_1(w) \le C^n (n!)^{\alpha d} {[\lambda _{1+N}(I)]}^{n(1 - \frac{\alpha (d+2q)}{1+N})\frac{d}{d+2q}}. \end{aligned}$$

For \(A_2\), we first use the AM–GM inequality to get

$$\begin{aligned} A_2(w)&\le \Bigg \{ \int _{{[f_w(I)]}^n} \frac{dy^1 \cdots dy^n}{\prod _{j=1}^n \prod _{\ell =0}^N\min \limits _{i:\, i \ne j}|y_\ell ^j - y_\ell ^i|^{\frac{\alpha (d+2q)}{2(1+N)}}} \Bigg \}^{\frac{2q}{d+2q}}. \end{aligned}$$

Since I has side lengths \(\le r_0\), we can see from the definition of \(y^j_\ell \) that each \(y^j = (y^j_0, \dots , y^j_N)\) is contained in \(\prod _{\ell =0}^N \tilde{I}_\ell \), where each \(\tilde{I}_\ell \) is an interval in \({\mathbb R}\) of length \(\le (1+N)r_0\). From this, we get

$$\begin{aligned}&\le \prod _{\ell =0}^N \Bigg \{ \int _{{(\tilde{I}_\ell )}^n} \frac{dy^1_\ell \cdots dy^n_\ell }{\prod _{j=1}^n \min \limits _{i:\, i \ne j}|y_\ell ^j - y_\ell ^i|^{\frac{\alpha (d+2q)}{2(1+N)}}} \Bigg \}^{\frac{2q}{d+2q}}. \end{aligned}$$

Fix \(\ell \). For each \((y^1_\ell , \dots , y^n_\ell ) \in (\tilde{I}_\ell )^n\), let \(\pi \) be a permutation such that \(y^{\pi (1)}_\ell \le \cdots \le y^{\pi (n)}_\ell \), and note that the \(y^j_\ell \) are all bounded. For convenience, set \(y_\ell ^{\pi (0)} = y_\ell ^{\pi (n+1)} = 0\). It follows that

$$\begin{aligned} \prod _{j=1}^n \min \limits _{i:\, i \ne j}|y_\ell ^j - y_\ell ^i|&= \prod _{j=1}^n \min \limits _{i:\, i \ne j}|y_\ell ^{\pi (j)} - y_\ell ^{\pi (i)}|\\&\ge \prod _{j=1}^n \min \{ |y_\ell ^{\pi (j)} - y_\ell ^{\pi (j-1)}|, |y_\ell ^{\pi (j)} - y_\ell ^{\pi (j+1)}|\}\\&\ge C^n \prod _{j=1}^n \big (|y_\ell ^{\pi (j)} - y_\ell ^{\pi (j-1)}|\cdot |y_\ell ^{\pi (j)} - y_\ell ^{\pi (j+1)}|\big )\\&\ge C^n \prod _{j=2}^n |y^{\pi (j)}_\ell - y^{\pi (j-1)}_\ell |^2\\&\ge C^n \prod _{j=2}^n \min _{1 \le i \le j-1}|y^j_\ell - y^i_\ell |^2. \end{aligned}$$

The last inequality follows from Lemma 4.4(ii). Then, by Lemma 4.3,

$$\begin{aligned} \begin{aligned} A_2(w)&\le C^n \prod _{\ell =0}^N \Bigg \{ \int _{{(\tilde{I}_\ell )}^n} \frac{dy^1_\ell \cdots dy^n_\ell }{\prod _{j=2}^n \min \limits _{1 \le i \le j-1}|y_\ell ^j - y_\ell ^i|^{\frac{\alpha (d+2q)}{1+N}}} \Bigg \}^{\frac{2q}{d+2q}} \le C^n (n!)^{2\alpha q}. \end{aligned} \end{aligned}$$

The constant C does not depend on \(w \in S(e_1, \delta )\). This leads to (4.6).

To prove (ii), suppose that I has side lengths \(\le r\), where \(r \le r_0\). Again, by (4.5), Proposition 3.2 and the generalized Hölder inequality, for \(r_0\) small enough,

$$\begin{aligned} \begin{aligned}&{[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{\frac{d}{2}} \prod _{j=1}^n{[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i \ne j)]}^{\frac{q}{2}}\\&\quad \ge C^n \prod _{j=1}^n \bigg [\int _{\mathbb {S}^{N-1}} r_j(w)^{2\alpha } \sigma (dw) \bigg ]^{\frac{d}{2}} \prod _{j=1}^n \bigg [\int _{\mathbb {S}^{N-1}} \tilde{r}_j(w)^{2\alpha } \sigma (dw) \bigg ]^{\frac{q}{2}}\\&\quad \ge C^n \bigg [ \int _{\mathbb {S}^{N-1}} \prod _{j=1}^n \Big ({r_j(w)}^{\frac{\alpha d}{m}}\, {\tilde{r}_j(w)}^{\frac{\alpha q}{m}} \Big )\, \sigma (dw) \bigg ]^m\\&\quad \ge C^n \bigg [ \int _{S(e_1, \delta )}\sum _{\ell =0}^N \prod _{j=1}^n \Big ({r_j(w_\ell )}^{\frac{\alpha d}{m}}\, {\tilde{r}_j(w_\ell )}^{\frac{\alpha q}{m}} \Big )\, \sigma (dw) \bigg ]^m,\\ \end{aligned} \end{aligned}$$

where \(m = \frac{n(d+q)}{2}\), \(r_j\) and \(\tilde{r}_j\) are defined as before. Define the variables \(y^j_\ell = t^j + x^j \cdot w_\ell \) as before. Then, by the AM–GM inequality and Jensen’s inequality,

$$\begin{aligned}&{[\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]}^{-\frac{d}{2}} \, \prod _{j=1}^n{[\mathrm {Var}(u_1(z^j)|u_1(z^i) : i \ne j)]}^{-\frac{q}{2}}\\&\quad \le C^n \bigg [ \int _{S(e_1, \delta )} \prod _{\ell =0}^N \prod _{j=1}^n \Big ({r_j(w_\ell )}^{\frac{\alpha d}{m(1+N)}}\, {\tilde{r}_j(w_\ell )}^{\frac{\alpha q}{m(1+N)}} \Big ) \sigma (dw) \bigg ]^{-m}\\&\quad \le C^n \int _{S(e_1, \delta )} \prod _{\ell =0}^N \prod _{j=1}^n \Big ({r_j(w_\ell )}^{-\frac{\alpha d}{1+N}}\, {\tilde{r}_j(w_\ell )}^{-\frac{\alpha q}{1+N}} \Big ) \sigma (dw). \end{aligned}$$

Since I has side lengths \(\le r\), each \(y^j_\ell \) is contained in an interval \(\tilde{I}_\ell \subset {\mathbb R}\) of length \(\le (1+N)r\). Then, using (4.8) and the transformation \(f_w: z \mapsto y\), we have

$$\begin{aligned} \begin{aligned} \int _{I^n} J(\bar{z})\, d\bar{z}&\le C^n {(n!)}^{\frac{q}{2}} \int _{S(e_1, \delta )} \sigma (dw) \\&\quad \times \prod _{\ell =0}^N \int _{{(\tilde{I}_\ell )}^n} \frac{dy^1_\ell \cdots dy^n_\ell }{\prod _{j=2}^n \min \limits _{1 \le i \le j-1}|y^j_\ell - y^i_\ell |^{\frac{\alpha d}{1+N}} \prod _{j=1}^n\min \limits _{i:\, i \ne j}|y^j_\ell - y^i_\ell |^{\frac{\alpha q}{1+N}}}. \end{aligned} \end{aligned}$$
(4.10)

Fix \(\ell \) and consider the integral over \((\tilde{I}_\ell )^n\). For \((y^1_\ell , \dots , y^n_\ell ) \in (\tilde{I}_\ell )^n\), let \(\pi \) be a permutation such that \(y^{\pi (1)}_\ell \le \dots \le y^{\pi (n)}_\ell \). Then, by Lemma 4.4(ii),

$$\begin{aligned}&\prod _{j=2}^n \min \limits _{1 \le i \le j-1}{|y^j_\ell - y^i_\ell |}^{\frac{\alpha d}{1+N}} \prod _{j=1}^n \min \limits _{i:\,i \ne j}{|y^j_\ell - y^i_\ell |}^{\frac{\alpha q}{1+N}}\\&\quad \ge C^n \prod _{j=2}^n \min \limits _{1 \le i \le j-1}{|y^{\pi (j)}_\ell - y^{\pi (i)}_\ell |}^{\frac{\alpha d}{1+N}} \prod _{j=1}^n \min \limits _{i:\,i \ne j}{|y^{\pi (j)}_\ell - y^{\pi (i)}_\ell |}^{\frac{\alpha q}{1+N}} \\&\quad \ge C^n \prod _{j=1}^n \Big ( {|y^{\pi (j)}_\ell - y^{\pi (j-1)}_\ell |}^{\frac{\alpha d}{1+N}} {|y^{\pi (j)}_\ell - y^{\pi (j-1)}_\ell |}^{\frac{\alpha q}{1+N}\theta _j} {|y^{\pi (j)}_\ell - y^{\pi (j+1)}_\ell |}^{\frac{\alpha q}{1+N}(1-\theta _j)}\Big ) \end{aligned}$$

for some \(\theta = (\theta _1, \dots , \theta _n) \in \{0, 1\}^n\) with \(\theta _1 = 0\) and \(\theta _n = 1\). Denote \(\theta '_j = 1-\theta _j\). By Lemma 4.4(ii) again, we get

$$\begin{aligned}&\ge C^n \prod _{j=2}^n |y^{\pi (j)}_\ell - y^{\pi (j-1)}_\ell |^{\frac{\alpha }{1+N}(d + q(\theta _j+\theta '_{j-1}))}\\&\ge C^n \prod _{j=2}^n\min _{1 \le i \le j-1} {|y^j_\ell - y^i_\ell |}^{\frac{\alpha }{1+N}(d + q(\theta _j+\theta '_{j-1}))}. \end{aligned}$$

Hence, for each \(\ell \), the integral over \((\tilde{I}_\ell )^n\) in (4.10) is bounded by

$$\begin{aligned}&C^n \sum _{\theta } \int _{{(\tilde{I}_\ell )}^n}\frac{dy^1_\ell \cdots dy^n_\ell }{\prod _{j=2}^n\min \limits _{1 \le i \le j-1} {|y^j_\ell - y^i_\ell |}^{\frac{\alpha }{1+N}(d + q(\theta _j +\theta '_{j-1}))}}. \end{aligned}$$

The sum runs over all \(\theta \in \{0, 1\}^n\) with \(\theta _1 = 0\) and \(\theta _n = 1\), containing \(< 2^n\) summands. Note that \(\frac{\alpha }{1+N}(d+q (\theta _j +\theta '_{j-1})) \le \frac{\alpha }{1+N} (d+2q) < 1\). By Lemma 4.3 and the relation \(\theta _j + \theta '_j = 1\), we get

$$\begin{aligned}&\le C^n \sum _\theta \prod _{j=1}^n \Big (j^{\frac{\alpha }{1+N}(d+q(\theta _j + \theta '_{j-1}))} r^{1-\frac{\alpha }{1+N}(d+q(\theta _j+\theta '_{j-1}))}\Big )\\&\le C^n r^{n(1-\frac{\alpha (d+q)}{1+N})} \sum _\theta \prod _{j=1}^n j^{\frac{\alpha }{1+N}(d+q\theta _j)} \prod _{j=1}^n (2j)^{\frac{\alpha }{1+N}q\theta '_j}\\&\le C^n (n!)^{\frac{\alpha (d + q)}{1+N}} r^{n(1-\frac{\alpha (d+q)}{1+N})} \end{aligned}$$

(we have set \(\theta '_0 = 0\) in the above). Finally, put this back into (4.10) to conclude (4.7). \(\square \)

We can use the above lemmas to get moment estimates for the local times. Recall the following formulas for the local times \(L(v, T)\) of an \({\mathbb R}^d\)-valued random field \(\{ X(z) : z \in T\}\), which can be found in [27, §25]: for any even number \(n \ge 2\), for any \(v, \tilde{v} \in {\mathbb R}^d\),

$$\begin{aligned}&{\mathbb E}[L(v, T)^n] = (2\pi )^{-nd} \int _{T^n} d\bar{z} \int _{{\mathbb R}^{nd}} d\bar{\xi }\, e^{- i \sum _{j=1}^n \xi ^j \cdot v} \,{\mathbb E}\big (e^{i\sum _{j=1}^n \xi ^j \cdot X(z^j)}\big ), \end{aligned}$$
(4.11)
$$\begin{aligned}&{\mathbb E}[(L(v, T) - L(\tilde{v}, T))^n]\nonumber \\&\quad = (2\pi )^{-nd} \int _{T^n} d\bar{z} \int _{{\mathbb R}^{nd}} d\bar{\xi }\, \prod _{j=1}^n\big (e^{- i \xi ^j \cdot v} - e^{- i \xi ^j \cdot \tilde{v}}\big ) \,{\mathbb E}\big (e^{i\sum _{j=1}^n \xi ^j \cdot X(z^j)}\big ). \end{aligned}$$
(4.12)
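These formulas can be seen, at least formally, from the Fourier representation of the local time:

$$\begin{aligned} L(v, T) = (2\pi )^{-d} \int _{{\mathbb R}^d} e^{-i \xi \cdot v} \int _T e^{i \xi \cdot X(z)}\, dz\, d\xi ; \end{aligned}$$

raising this to the nth power, taking expectations and applying Fubini’s theorem gives (4.11), and (4.12) follows in the same way from the difference of two such representations.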

Proposition 4.7

Assume (3.1) and \(\alpha d < 1+N\). Let T be a compact interval in \((0, \infty ) \times {\mathbb R}^N\) and \(L(v, T)\) be the local time of \(\{u(t, x) : (t, x) \in T\}\). Then the following statements hold for some constant \(r_0 > 0\):

  1. (i)

    There exists a constant C such that for all intervals I in T with side lengths \(\le r_0\), for all \(v \in {\mathbb R}^d\), for all even numbers \(n \ge 2\),

    $$\begin{aligned} {\mathbb E}[L(v, I)^n] \le C^n {(n!)}^{\alpha d} {[\lambda _{1+N}(I)]}^{n(1 - \frac{\alpha d}{1+N})}. \end{aligned}$$
    (4.13)
  2. (ii)

    For any \(0< \gamma <\min \{\frac{1}{2} (\frac{1+N}{\alpha } - d), 1\}\), there exists a constant C such that for all intervals I in T with side lengths \(\le r\), where \(0 < r \le r_0\), for all \(v, \tilde{v} \in {\mathbb R}^d\), for all even numbers \(n \ge 2\),

    $$\begin{aligned} {\mathbb E}[(L(v, I) - L(\tilde{v}, I))^n] \le C^n |v-\tilde{v}|^{n\gamma } {(n!)}^{\alpha d + (\frac{1}{2} +\alpha )\gamma } {r}^{n(1+N-\alpha (d + \gamma ))}. \end{aligned}$$
    (4.14)

Proof

(i). Write \(z = (t, x)\). By (4.11),

$$\begin{aligned} \begin{aligned} {\mathbb E}[L(v, I)^n]&\le (2\pi )^{-nd} \int _{I^n} \int _{{\mathbb R}^{nd}} {\mathbb E}[e^{i\sum _{j=1}^n \xi ^j \cdot u(z^j)}] \,d\bar{\xi }\,d\bar{z}\\&= (2\pi )^{-nd/2} \int _{I^n} [\det \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))]^{-d/2} \, dz^1 \cdots dz^n. \end{aligned} \end{aligned}$$

By (4.5) and Proposition 3.2, for \(r_0\) sufficiently small, this is

$$\begin{aligned} \le C^n \int _{I^n} \prod _{j=2}^n \bigg [\int _{\mathbb {S}^{N-1}} \min _{1 \le i \le j-1} |(t^j+ x^j \cdot w) - (t^i + x^i\cdot w)|^{2\alpha }\sigma (dw)\bigg ]^{-d/2} \, dz^1 \cdots dz^n. \end{aligned}$$

Then, integrate in the order \(dz^n, dz^{n-1}, \dots , dz^1\) and apply Lemma 4.2(i) repeatedly to get (4.13).
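The equality in the second line of the first display of this proof is the standard Gaussian characteristic-function computation, sketched here under the assumption that \(\Sigma = \mathrm {Cov}(u_1(z^1), \dots , u_1(z^n))\) is nonsingular: since the components \(u_1, \dots , u_d\) are i.i.d.,

$$\begin{aligned} \int _{{\mathbb R}^{nd}} {\mathbb E}\big [e^{i\sum _{j=1}^n \xi ^j \cdot u(z^j)}\big ]\, d\bar{\xi } = \prod _{k=1}^d \int _{{\mathbb R}^{n}} e^{-\frac{1}{2} \eta ^\top \Sigma \eta }\, d\eta = (2\pi )^{nd/2}\, [\det \Sigma ]^{-d/2}, \end{aligned}$$

where, for each k, the variable \(\eta = (\xi ^1_k, \dots , \xi ^n_k)\) collects the kth coordinates of \(\xi ^1, \dots , \xi ^n\).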

(ii). By (4.12),

$$\begin{aligned} \begin{aligned}&{\mathbb E}[(L(v, I) - L(\tilde{v}, I))^n] \\&\quad \le (2\pi )^{-nd} \int _{I^n} d\bar{z} \int _{{\mathbb R}^{nd}} d\bar{\xi }\, \prod _{j=1}^n \big |e^{-i\xi ^j \cdot v} - e^{-i\xi ^j \cdot \tilde{v}}\big | \, {\mathbb E}\big (e^{i\sum _{j=1}^n \xi ^j \cdot u(z^j)} \big ). \end{aligned} \end{aligned}$$

For any \(0< \gamma < 1\), we have the inequality \(|e^{-ix} - e^{-iy}| \le 2 |x-y|^\gamma \), which implies that

$$\begin{aligned} \prod _{j=1}^n \big |e^{-i\xi ^j \cdot v} - e^{-i\xi ^j \cdot \tilde{v}}\big | \le 2^n |v - \tilde{v}|^{n\gamma } \sum _{(k_1, \dots , k_n)} \prod _{j=1}^n {\big |\xi ^j_{k_j}\big |}^\gamma , \end{aligned}$$

where the sum is taken over all \((k_1, \dots , k_n) \in \{1, \dots , d\}^n\). Thus,

$$\begin{aligned} \begin{aligned}&{\mathbb E}[(L(v, I) - L(\tilde{v}, I))^n] \\&\quad \le C^n |v - \tilde{v}|^{n\gamma } \sum _{(k_1, \dots , k_n)} \int _{I^n} d\bar{z} \int _{{\mathbb R}^{nd}} d\bar{\xi }\, \Big (\prod _{j=1}^n {|\xi ^j_{k_j}|}^\gamma \Big ) \, {\mathbb E}\big (e^{i\sum _{j=1}^n \xi ^j \cdot u(z^j)} \big ). \end{aligned} \end{aligned}$$

Let \(\gamma \) satisfy \(\alpha (d+ 2\gamma ) < 1+N\). Then we can derive (4.14) using Lemma 4.6(ii) with \(q_{j, k} = \gamma \) if \(k = k_j\), and \(q_{j, k} = 0\) otherwise. \(\square \)
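The elementary inequality used in part (ii) follows by interpolation: for real x, y and \(0< \gamma < 1\),

$$\begin{aligned} |e^{-ix} - e^{-iy}| = |1 - e^{-i(y-x)}| \le \min (2, |x-y|) \le 2|x-y|^\gamma , \end{aligned}$$

since \(\min (2, t) \le 2t^\gamma \) for all \(t \ge 0\). The sum over \((k_1, \dots , k_n)\) then arises from the subadditivity bound \(|\xi ^j \cdot (v - \tilde{v})|^\gamma \le |v - \tilde{v}|^\gamma \big (\sum _{k=1}^d |\xi ^j_k|\big )^\gamma \le |v - \tilde{v}|^\gamma \sum _{k=1}^d {|\xi ^j_k|}^\gamma \) and expanding the product over j.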

We now conclude the main result of this section.

Theorem 4.8

Assume (3.1).

  1. (i)

    If \(\alpha d < 1\), then for any fixed \(x_0 \in {\mathbb R}^N\), \(\{ u(t, x_0) : t \in T_1 \}\) has a jointly continuous local time on any compact interval \(T_1\) in \((0, \infty )\).

  2. (ii)

    If \(\alpha d < N\), then for any fixed \(t_0 > 0\), \(\{ u(t_0, x) : x \in T_2 \}\) has a jointly continuous local time on any compact interval \(T_2\) in \({\mathbb R}^N\).

  3. (iii)

    If \(\alpha d < 1+N\), then \(\{ u(t, x) : (t, x) \in T\}\) has a jointly continuous local time on any compact interval T in \((0, \infty ) \times {\mathbb R}^N\).

Proof

(i) and (ii). By Corollary 3.4 and Proposition 3.5, the processes \(t \mapsto u(t, x_0)\) and \(x \mapsto u(t_0, x)\) satisfy the LND property in the sense of Berman or Pitt, so the joint continuity of their local times follows from the results of [11, 27, 43]; see also [53].

(iii). By Theorem 4.1, \(\{u(t, x) : (t, x) \in T\}\) has a square-integrable local time on T. Denote this local time by L(vT). In particular, a.s., for all \(B \in \mathscr {B}({\mathbb R}^d)\) and all \(S \in \mathscr {B}(T)\),

$$\begin{aligned} \lambda _{1+N}\{ (t, x) \in S : u(t, x) \in B \} = \int _B L(v, S)\, dv. \end{aligned}$$
(4.15)

We need to find a version \(L^*\) of the local time that is jointly continuous. Let \(Q_z = (-\infty , z] \cap T\) for \(z \in T\). In what follows, we can assume that T has side lengths \(\le r_0\) so that Proposition 4.7 applies, because it is enough to prove existence of jointly continuous local times on sufficiently small subintervals of T. For all even numbers \(n \ge 2\), for all \(v, \tilde{v}\in {\mathbb R}^d\), for all \(z, \tilde{z} \in T\), we have

$$\begin{aligned} {\mathbb E}[(L(v, Q_z) - L(\tilde{v}, Q_{\tilde{z}}))^n]\le & {} 2^{n-1}\{ {\mathbb E}[(L(v, Q_z) - L(v, Q_{\tilde{z}}))^n] \\&+ {\mathbb E}[(L(v, Q_{\tilde{z}}) - L(\tilde{v}, Q_{\tilde{z}}))^n] \}. \end{aligned}$$

For the first term, the difference \(L(v, Q_z) - L(v, Q_{\tilde{z}})\) can be written as a finite sum of terms (the number of which depends only on N) of the form \(L(v, I_j)\), where each \(I_j\) is a subinterval of T with at least one side length \(\le |z - \tilde{z}|\), so we can use Proposition 4.7(i) to bound this term. Also, we can bound the second term by Proposition 4.7(ii). Hence, we have

$$\begin{aligned} {\mathbb E}[(L(v, Q_z) - L(\tilde{v}, Q_{\tilde{z}}))^n] \le C_n (|v - \tilde{v}|^\gamma + |z - \tilde{z}|^\delta )^n \end{aligned}$$

with constants \(0< \gamma < \min \{\frac{1}{2} (\frac{1+N}{\alpha }-d), 1\}\) and \(\delta = 1-\frac{\alpha d}{1+N}\). Then, by a multiparameter version of Kolmogorov’s continuity theorem ([14, Proposition 4.2] or [32, Theorem 1.4.1]), we can obtain a process \(\{ L^*(v, Q_z) : v \in {\mathbb R}^d, z \in T \}\) such that \((v, z) \mapsto L^*(v, Q_z)\) is jointly continuous (moreover, locally Hölder continuous of order \(< \gamma \) in v, and of order \(< \delta \) in z) and \({\mathbb P}\{ L^*(v, Q_z) = L(v, Q_z) \} = 1\) for every \(v \in {\mathbb R}^d\) and \(z \in T\). To verify that this version \(L^*\) is still a local time, note that, for each \(z \in T\), by Fubini’s theorem,

$$\begin{aligned} \int _\Omega \lambda _d\{ v : L^*(v, Q_z) \ne L(v, Q_z) \} \,d{\mathbb P}= \int _{{\mathbb R}^d} {\mathbb P}\{ \omega : L^*(v, Q_z) \ne L(v, Q_z)\} \, dv= 0. \end{aligned}$$

Then, there is a single event of probability 1 on which for all rational \(z \in T\) simultaneously, we have \(L^*(v, Q_z) = L(v, Q_z)\) a.e. v. This and (4.15) imply that \(L^*\) satisfies

$$\begin{aligned} \lambda _{1+N}\{ (t, x) \in Q_z : u(t, x) \in B \} = \int _B L^*(v, Q_z)\, dv \quad \text {a.s.} \end{aligned}$$

for all rational \(z \in T\), and hence for all \(z \in T\) by the continuity of \(z \mapsto L^*(v, Q_z)\). This proves that \(L^*\) is a local time and finishes the proof. \(\square \)
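Regarding the application of the multiparameter Kolmogorov criterion above: since the moment bound holds for every even \(n \ge 2\) (with a constant depending on n), we may pick n so large that \(n\gamma > d\) and \(n\delta > 1+N\), using

$$\begin{aligned} (|v - \tilde{v}|^\gamma + |z - \tilde{z}|^\delta )^n \le 2^{n-1} \big (|v - \tilde{v}|^{n\gamma } + |z - \tilde{z}|^{n\delta }\big ), \end{aligned}$$

which is the form of the moment condition required for a continuous modification in the \((d+1+N)\)-dimensional parameter \((v, z)\).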

5 Regularity of the Local Times

In this section, we investigate the regularity of the local times of \(u(t, x)\). As we have seen, the processes \(t \mapsto u(t, x)\) and \(x \mapsto u(t, x)\) satisfy the strong LND property, so the results of Xiao [51, 52] can be applied to obtain moment estimates, Hölder conditions and moduli of continuity for the respective local times. In the following, we will treat the local times of u regarded as the random field \((t, x) \mapsto u(t, x)\). First, we study the differentiability of the local time \(L(v, T)\) in v and the Hölder regularity of its derivatives. Then, we give a result on the local and uniform moduli of continuity of L in the set variable T and discuss their implications for sample function oscillations.

Conditions for local times of Gaussian random fields to have square-integrable partial derivatives have been given in [8] and [27, §28]. In Theorem 5.1 below, we obtain conditions for the existence of continuous partial derivatives in v. We make use of the Fourier representation (4.2) and the estimates in Lemma 4.6 above, which have been established using the spherical integral form of strong LND, to provide sufficient conditions for the local times of u to have partial derivatives (up to certain order) that are jointly continuous and Hölder continuous. We employ the Fourier analytic approach of [25] to prove this result.

Theorem 5.1

Assume (3.1). Let T be a compact interval in \((0, \infty ) \times {\mathbb R}^N\). If \(\alpha (d + 2K) < 1+N\) for some integer \(K \ge 1\), then \(u = \{u(t, x) : (t, x) \in T\}\) has a local time \(L(v, T)\) such that all of its partial derivatives

$$\begin{aligned} \partial ^p L(v, T) = \frac{\partial ^{p_1} \cdots \partial ^{p_d}}{\partial v_1^{p_1} \cdots \partial v_d^{p_d}} L(v, T)\end{aligned}$$

of order \(|p| \le K\) exist and are a.s. jointly continuous and locally Hölder continuous in v of any exponent \(\gamma < \min \{\frac{1}{2} (\frac{1+N}{\alpha } - d - 2|p|), 1\}\).

Proof

By Theorem 4.8, u has a jointly continuous local time \(L(\cdot , I)\) on any interval \(I \subset T\). Also, according to the paragraph preceding Theorem 4.1, \(L(\cdot , I)\) is a.e. equal to the inverse \(L^2\)-Fourier transform of \(\hat{\nu }_I\), which can be expressed as the limit of \(L_M(\cdot , I)\) in \(L^2({\mathbb R}^d)\) as \(M \rightarrow \infty \), where

$$\begin{aligned} L_M(v, I) = (2\pi )^{-d} \int _{[-M, M]^d} e^{-i\xi \cdot v} \int _I e^{i\xi \cdot u(z)} dz\,d\xi , \quad v \in {\mathbb R}^d. \end{aligned}$$

For any multi-index \(p = (p_1, \dots , p_d)\) with \(|p| = p_1 + \dots + p_d \le K\),

$$\begin{aligned} \partial ^p L_M(v, I) = (2\pi )^{-d} \int _{[-M, M]^d} \prod _{k=1}^d (-i\xi _k)^{p_k} e^{-i\xi \cdot v} \int _I e^{i\xi \cdot u(z)} dz\,d\xi . \end{aligned}$$

Let \(n = 2m > 0\) be an even number. We are going to show that, for small subintervals I of T, \(\partial ^p L(v, I)\) exists and

$$\begin{aligned} \sup _{v \in {\mathbb R}^d} {\mathbb E}(|\partial ^p L_M(v, I) - \partial ^p L(v, I)|^{2m}) \rightarrow 0 \quad \text {as } M \rightarrow \infty . \end{aligned}$$
(5.1)

Indeed, note that \(\partial ^p L_M(v, I)\) is real-valued, and for \(0 \le M < M'\),

$$\begin{aligned} \begin{aligned}&{\mathbb E}((\partial ^p L_{M'}(v, I) - \partial ^p L_M(v, I))^{2m})\\&\quad =(2\pi )^{-2md}\, {\mathbb E}\int _{([-M', M']^d\setminus [-M, M]^d)^{2m}} \prod _{j=1}^{2m} \bigg (\prod _{k=1}^d (-i\xi ^j_k)^{p_k} \bigg )e^{-i\xi ^j \cdot v} \int _{I^{2m}} e^{i\sum _{j=1}^{2m}\xi ^j \cdot u(z^j)} \,d\bar{z}\, d\bar{\xi }\\&\quad \le (2\pi )^{-2md}\, \int _{([-M', M']^d\setminus [-M, M]^d)^{2m}} \int _{I^{2m}} \bigg (\prod _{j=1}^{2m} \prod _{k=1}^d {|\xi ^j_k|}^{p_k}\bigg ) {\mathbb E}\big (e^{i\sum _{j=1}^{2m}\xi ^j \cdot u(z^j)}\big ) \, d\bar{z} \, d\bar{\xi }, \end{aligned} \end{aligned}$$

uniformly in v. By Lemma 4.6 with \(q_{j,k} = p_k\) and \(q = |p|\),

$$\begin{aligned} \int _{{\mathbb R}^{2md}} \int _{I^{2m}} \bigg (\prod _{j=1}^{2m} \prod _{k=1}^d {|\xi ^j_k|}^{p_k} \bigg ) {\mathbb E}\big (e^{i\sum _{j=1}^{2m}\xi ^j \cdot u(z^j)}\big )\, d\bar{z} \, d\bar{\xi }< \infty . \end{aligned}$$
(5.2)

Then, by the dominated convergence theorem, as \(M \rightarrow \infty \), \(\partial ^p L_M(v, I)\) converges to a limit, denoted by \(X_p(v, I)\), in \(L^{2m}({\mathbb P})\), uniformly in v. In particular, we can extract a subsequence \(M_j \rightarrow \infty \) such that

$$\begin{aligned} \sup _{v \in {\mathbb R}^d} {\mathbb E}(|\partial ^pL_{M_j}(v, I) - \partial ^p L_{M_{j-1}}(v, I)|) \le 2^{-j}.\end{aligned}$$

For each compact set \(F \subset {\mathbb R}^d\), by Fubini’s theorem,

$$\begin{aligned} {\mathbb E}\int _{F} \sum _{j=2}^\infty |\partial ^p L_{M_j}(v, I) - \partial ^p L_{M_{j-1}}(v, I)|\, dv \le \sum _{j=2}^\infty 2^{-j} \lambda _d(F) < \infty .\end{aligned}$$

Hence, by taking a sequence of compact sets \(F_i \uparrow {\mathbb R}^d\), we can find a single event of probability 1 on which \(\sum _{j=2}^\infty |\partial ^p L_{M_j}(v, I) - \partial ^p L_{M_{j-1}}(v, I)|\) is locally integrable in v, so that \(\partial ^p L_{M_j}(v, I) \rightarrow X_p(v, I)\) in \(L^1_{\mathrm {loc}}({\mathbb R}^d)\) a.s. Also, we know that \(\int |L_M(v, I) - L(v, I)|^2 dv \rightarrow 0\) a.s. These imply that, on an event of probability 1, for all smooth test functions \(\phi (v)\) with compact support,

$$\begin{aligned} \begin{aligned} \int _{{\mathbb R}^d} X_p(v, I)\, \phi (v)\, dv&= \lim _{j \rightarrow \infty }\int _{{\mathbb R}^d} \partial ^p L_{M_j}(v, I)\, \phi (v)\, dv\\&= \lim _{j \rightarrow \infty } (-1)^{|p|} \int _{{\mathbb R}^d} L_{M_j}(v, I)\, \partial ^p \phi (v)\, dv\\&= (-1)^{|p|} \int _{{\mathbb R}^d} L(v, I)\, \partial ^p \phi (v)\, dv. \end{aligned} \end{aligned}$$

This proves, for each \(|p|\le K\), the existence of \(\partial ^p L(v, I) (= X_p(v, I))\) as a weak derivative of \(L(v, I)\), which satisfies (5.1). The existence of (jointly) continuous derivatives and their Hölder continuity will follow from an application of Kolmogorov’s continuity theorem as in Theorem 4.8 once we show that there is some constant \(0< \delta < 1\) such that for all sufficiently small intervals \(I \subset T\),

$$\begin{aligned} {\mathbb E}(|\partial ^p L(v, I)|^n)&\le C_n {[\lambda _{1+N}(I)]}^{n\delta }, \end{aligned}$$
(5.3)

and for any constant \(0< \gamma < \min \{ \frac{1}{2} (\frac{1+N}{\alpha } - d - 2|p|), 1\}\),

$$\begin{aligned} {\mathbb E}(|\partial ^p L(v, I) - \partial ^p L(\tilde{v}, I)|^n)&\le C'_n |v - \tilde{v}|^{n\gamma }, \end{aligned}$$
(5.4)

where \(C_n\) and \(C'_n\) are constants depending on \(n = 2m\) (and also on \(\gamma \) for \(C'_n\)), but not on \(v, \tilde{v}\) or I. In fact, (5.3) follows from

$$\begin{aligned} \begin{aligned} {\mathbb E}(|\partial ^p L(v, I)|^{2m})&= \lim _{M \rightarrow \infty } {\mathbb E}((\partial ^p L_M(v, I))^{2m})\\&\le C^{2m} \int _{{\mathbb R}^{2md}} \int _{I^{2m}} \bigg (\prod _{j=1}^{2m} \prod _{k=1}^d {|\xi ^j_k|}^{p_k}\bigg ) {\mathbb E}\big (e^{i\sum _{j=1}^{2m}\xi ^j \cdot u(z^j)}\big )\, d\bar{z} \, d\bar{\xi }\end{aligned} \end{aligned}$$

and Lemma 4.6(i) with \(q_{j, k} = p_k\) and \(q = |p|\), which yields \(\delta = (1 - \frac{\alpha (d+2|p|)}{1+N})\frac{d}{d+2|p|}\). As for (5.4), use the inequality \(|e^{-ix} - e^{-iy}| \le 2|x-y|^\gamma \) for \(0< \gamma < 1\) to get that

$$\begin{aligned} \begin{aligned}&{\mathbb E}((\partial ^p L(v, I) - \partial ^p L(\tilde{v}, I))^{2m})\\&\quad \le C^{2m} \int _{{\mathbb R}^{2md}} \int _{I^{2m}} \bigg ( \prod _{j=1}^{2m} \prod _{k=1}^d |\xi ^j_k|^{p_k}\bigg ) \bigg (\prod _{j=1}^{2m}\big |e^{-i\xi ^j\cdot v} - e^{-i\xi ^j\cdot \tilde{v}}\big |\bigg ){\mathbb E}\big (e^{i\sum _{j=1}^{2m} \xi ^j\cdot u(z^j)}\big ) \, d\bar{z}\, d\bar{\xi }\\&\quad \le C^{2m} |v - \tilde{v}|^{2m\gamma } \sum _{(k_1, \dots , k_{2m})} \int _{{\mathbb R}^{2md}} \int _{I^{2m}} \bigg ( \prod _{j=1}^{2m}\prod _{k=1}^d |\xi ^j_k|^{p_k} \bigg ) \bigg ( \prod _{j=1}^{2m} |\xi ^j_{k_j}|^{\gamma } \bigg ) {\mathbb E}\big (e^{i\sum _{j=1}^{2m} \xi ^j\cdot u(z^j)}\big ) \, d\bar{z}\, d\bar{\xi }, \end{aligned} \end{aligned}$$

where the sum is taken over all \((k_1, \dots , k_{2m}) \in \{1, \dots , d\}^{2m}\). Then, estimate the integral term using Lemma 4.6 with \(q_{j, k} = p_k + \gamma \) if \(k = k_j\), and \(q_{j, k} = p_k\) otherwise, to finish the proof. \(\square \)

Next, we study the regularity of the local time in the set variable. The following definition can be found in [1, p.227]. Let \(0 < \gamma \le 1\). We say that the local time L satisfies a uniform Hölder condition of order \(\gamma \) in the set variable if there exists \(C< \infty \) such that for all \(v \in {\mathbb R}^d\) and all cubes \(I \subset T\) with a sufficiently small side length, we have

$$\begin{aligned} L(v, I) \le C [\lambda (I)]^\gamma , \end{aligned}$$

where \(\lambda (I)\) is the Lebesgue measure of the cube I. Hölder conditions of the local times of random fields contain rich information about irregularity properties of the sample paths; see [1, 10, 27].

In fact, we are going to present a result (Theorem 5.3) which provides not only a Hölder condition but also the moduli of continuity of the local times in the set variable. To establish this result, we need the following lemma.

Lemma 5.2

Assume (3.1) and \(\alpha d < 1+N\). Let T be a compact interval in \((0, \infty )\times {\mathbb R}^N\). Then the following hold for some constant \(r_0 > 0\):

  1. (i)

    For any \(b > 0\), there exists a finite constant c such that for all \((t, x) \in T \cup \{ 0 \}\) and intervals I in T with side lengths \(= r \le r_0\), for all \(v \in {\mathbb R}^d\) and \(A > 1\),

    $$\begin{aligned} {\mathbb P}\Big \{ L(v + u(t, x), I) \ge c \, r^{1+N-\alpha d} A^{\alpha d} \Big \} \le \exp (-bA). \end{aligned}$$
    (5.5)
  2. (ii)

    For any \(b > 0\) and \(0< \gamma < \min \{\frac{1}{2} (\frac{1+N}{\alpha } - d), 1\}\), there exists a finite constant c such that for all \((t, x) \in T \cup \{ 0 \}\), for all intervals I in T with side lengths \(= r \le r_0\), for all \(v, \tilde{v} \in {\mathbb R}^d\) and \(A > 1\),

    $$\begin{aligned} \begin{aligned}&{\mathbb P}\Big \{ |L(v + u(t, x), I) - L(\tilde{v} + u(t, x), I)|\\&\quad \ge c\, |v - \tilde{v}|^\gamma r^{1+N-\alpha d - \alpha \gamma } A^{\alpha d + (\frac{1}{2} +\alpha )\gamma } \Big \} \le \exp (-bA). \end{aligned} \end{aligned}$$
    (5.6)

Proof

For \((t, x) = 0\), note that \(u(t, x) = 0\). In this case, (i) and (ii) can be proved by the moment estimates in Proposition 4.7 with the use of Chebyshev’s inequality and Stirling’s formula. For \((t, x) = (t^0, x^0) \in T\), consider the process \(\tilde{u} = \{ \tilde{u}(s, y) := u(s, y) - u(t^0, x^0) : (s, y) \in T\}\), so that the local time \(\tilde{L}\) of \(\tilde{u}\) exists and satisfies \(\tilde{L}(v, I) = L(v + u(t^0, x^0), I)\). By Proposition 3.2, the conditional variance of \(\tilde{u}\) satisfies

$$\begin{aligned}&\mathrm {Var}(\tilde{u}(t, x)|\tilde{u}(t^1, x^1), \dots , \tilde{u}(t^{n}, x^{n}))\\&\quad \ge \mathrm {Var}(u(t, x)|u(t^0, x^0), u(t^1, x^1), \dots , u(t^{n}, x^{n}))\\&\quad \ge C \int _{\mathbb {S}^{N-1}}\, \min _{0\le i \le n} |(t-t^i) + (x - x^i) \cdot w|^{2\alpha } \, \sigma (dw). \end{aligned}$$

With slight modifications, the proof of Proposition 4.7 can be carried over to \(\tilde{u}\) and \(\tilde{L}\) to yield

$$\begin{aligned}&{\mathbb E}[L(v + u(t, x), I)^n] \le C^n (n!)^{\alpha d} r^{n(1+N-\alpha d)},\\&{\mathbb E}[(L(v + u(t, x), I) - L(\tilde{v} + u(t, x), I))^n] \le C^n |v - \tilde{v}|^{n\gamma } (n!)^{\alpha d + (\frac{1}{2} +\alpha )\gamma } r^{n(1+N - \alpha d - \alpha \gamma )} \end{aligned}$$

for \(\gamma < \min \{\frac{1}{2} (\frac{1+N}{\alpha }-d), 1\}\) and \(n\ge 2\) even. These estimates imply (i) and (ii) as in the first part of the proof. \(\square \)
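To indicate how the moment estimates yield, e.g., (5.5): by Markov’s inequality, for any even n,

$$\begin{aligned} {\mathbb P}\Big \{ L(v + u(t, x), I) \ge c\, r^{1+N-\alpha d} A^{\alpha d} \Big \} \le \frac{C^n (n!)^{\alpha d}\, r^{n(1+N-\alpha d)}}{c^n\, r^{n(1+N-\alpha d)} A^{n\alpha d}} = \Big (\frac{C}{c}\Big )^n \frac{(n!)^{\alpha d}}{A^{n\alpha d}}. \end{aligned}$$

Taking n to be the largest even integer \(\le A\) (for A large) and using \(n! \le n^n \le A^n\), the right-hand side is at most \((C/c)^n \le \exp (-bA)\) once c is chosen large enough depending on b; for \(1 < A \le 2\), the case \(n = 2\) already gives a bound \(\le \exp (-bA)\) after further enlarging c.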

From this, we can deduce the local and uniform moduli of continuity of the local times in the set variable. Let \(I_r(t, x) = [t-r, t+r] \times \prod _{\ell =1}^N [x_\ell - r, x_\ell +r]\), and let \(\mathscr {I}(T, r)\) denote the set of all intervals \(I_\rho (t, x)\) in T with \(\rho \le r\).

Theorem 5.3

Assume (3.1) and \(\alpha d < 1+N\). For any compact interval T in \((0, \infty ) \times {\mathbb R}^N\), there exists a finite constant \(C_1\) such that for any fixed \((t, x) \in T\),

$$\begin{aligned} \limsup _{r \rightarrow 0} \sup _{v \in {\mathbb R}^d}\,\frac{L(v, I_r(t, x))}{r^{1+N - \alpha d} (\log \log (1/r))^{\alpha d}} \le C_1 \quad \text {a.s.} \end{aligned}$$
(5.7)

Moreover, there exists a finite constant \(C_2\) such that

$$\begin{aligned} \limsup _{r \rightarrow 0} \sup _{I \in \mathscr {I}(T, r)} \sup _{v \in {\mathbb R}^d}\,\frac{L(v, I)}{[\lambda _{1+N}(I)]^{1- \frac{\alpha d}{1+N}} {(\log (1/\lambda _{1+N}(I)))}^{\alpha d}} \le C_2 \quad \text {a.s.} \end{aligned}$$
(5.8)

Hence, L satisfies a uniform Hölder condition of order \(\gamma \) in the set variable for any \(0< \gamma < 1-\frac{\alpha d}{1+N}\).

Remark 5.4

By the strong LND property and the results of Xiao [51, 52], the local and uniform moduli of continuity for the local times of \(t \mapsto u(t, x_0)\) and \(x \mapsto u(t_0, x)\) are as follows in comparison to the above theorem. For \(t \mapsto u(t, x_0)\) and \(\alpha d < 1\), those are

$$\begin{aligned} r^{1-\alpha d}(\log \log (1/r))^{\alpha d} \quad \text {and} \quad [\lambda _1(I)]^{1-\alpha d}(\log (1/\lambda _1(I)))^{\alpha d}.\end{aligned}$$

For \(x \mapsto u(t_0, x)\) and \(\alpha d < N\), those are

$$\begin{aligned} r^{N-\alpha d}(\log \log (1/r))^{\frac{\alpha d}{N}} \quad \text {and} \quad [\lambda _N(I)]^{1-\frac{\alpha d}{N}}(\log (1/\lambda _N(I)))^{\frac{\alpha d}{N}}.\end{aligned}$$

Proof of Theorem 5.3

The proof of this theorem is based on Lemma 5.2 and a chaining argument as in [25, 51], with some changes necessary for our setting.

Let \(L^*(I) = \sup \{ L(v, I) : v \in {\mathbb R}^d\}\) and \(\varphi (r) = r^{1+N - \alpha d} (\log \log (1/r))^{\alpha d}\). To prove (5.7), it suffices to prove that for any fixed \((t, x) \in T\),

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{L^*(J_n)}{\varphi (2^{-n})} \le C_1 \quad \text {a.s.}, \end{aligned}$$
(5.9)

where \(J_n = I_{2^{-n}}(t, x)\). The proof of (5.9) is divided into four steps.

Step 1. By Lemma 2.1 of Talagrand [46], there exists a constant \(c_0< \infty \) such that, setting \(\delta _n = c_0\, 2^{-n\alpha } \sqrt{\log n}\), for all n large,

$$\begin{aligned} {\mathbb P}\bigg \{ \sup _{(s, y) \in J_n} |u(s, y) - u(t, x)| \ge \delta _n \bigg \} \le n^{-2}, \end{aligned}$$

which, by the Borel–Cantelli lemma, implies that with probability 1, for all n large,

$$\begin{aligned} \sup _{(s, y) \in J_n} |u(s, y) - u(t, x)| \le \delta _n. \end{aligned}$$
(5.10)

Step 2. Let \(\theta _n = 2^{-n\alpha } (\log n)^{-\frac{1}{2} -\alpha }\) and

$$\begin{aligned} G_n = \Big \{ v \in {\mathbb R}^d: |v| \le \delta _n \text { and } v = \theta _n p \text { for some } p \in {\mathbb Z}^d \Big \}. \end{aligned}$$

The cardinality of \(G_n\) is \(\le C (\log n)^{(1 +\alpha )d}\), since \(\delta _n/\theta _n = c_0 (\log n)^{1+\alpha }\). By Lemma 5.2(i) with \(b = 2\), we can find a finite constant c such that for all n large

$$\begin{aligned} {\mathbb P}\bigg \{ \max _{v \in G_n} L(v+u(t, x), J_n) \ge c\, \varphi (2^{-n}) \bigg \} \le C (\log n)^{(1 +\alpha )d}\, n^{-2}. \end{aligned}$$

It follows from the Borel–Cantelli lemma that with probability 1, for all n large,

$$\begin{aligned} \max _{v \in G_n} L(v + u(t, x), J_n) \le c\, \varphi (2^{-n}). \end{aligned}$$
(5.11)

Step 3. For \(n, k \ge 1\) and \(v \in G_n\), define

$$\begin{aligned} F(n, k, v) = \Big \{ y \in {\mathbb R}^d : y = v + \theta _n \sum _{j=1}^k \varepsilon _j 2^{-j}, \varepsilon _j \in \{0, 1\}^d \text { for } 1\le j \le k \Big \}. \end{aligned}$$

We say that a pair \(y_1, y_2 \in F(n, k, v)\) is linked if \(y_1 - y_2 = \theta _n \varepsilon 2^{-k}\) for some \(\varepsilon \in \{0, 1\}^d\). Fix any \(0< \gamma < \min \{\frac{1}{2} (\frac{1+N}{\alpha } - d), 1\}\). Consider the event \(A_n\) defined by

$$\begin{aligned} \begin{aligned} A_n = \bigcup _{v \in G_n} \bigcup _{k \ge 1} \bigcup _{y_1\sim y_2} \Big \{&|L(y_1 + u(t, x), J_n) - L(y_2 + u(t, x), J_n)|\\&\quad \ge c\, 2^{-n(1+N - \alpha d - \alpha \gamma )} |y_1 - y_2|^{\gamma } (k\log n)^{\alpha d + (\frac{1}{2}+\alpha ) \gamma } \Big \}, \end{aligned} \end{aligned}$$

where \(\bigcup _{y_1\sim y_2}\) denotes the union over all linked pairs \(y_1, y_2 \in F(n, k, v)\). There are at most \(2^{kd} 3^d\) linked pairs in \(F(n, k, v)\). Then by Lemma 5.2(ii) with \(b = 2\), for n large,

$$\begin{aligned} \begin{aligned} {\mathbb P}(A_n)&\le C (\log n)^{(1+\alpha )d} \sum _{k=1}^\infty 2^{kd} \exp (-2k\log n)\\&= C (\log n)^{(1+\alpha )d} \frac{2^d n^{-2}}{1-2^d n^{-2}}, \end{aligned} \end{aligned}$$

so \(\sum _{n=1}^\infty {\mathbb P}(A_n) < \infty \). By the Borel–Cantelli lemma, a.s. \(A_n\) occurs at most finitely many times.

Step 4. We proceed with the chaining argument in [25, 51]. For \(y \in {\mathbb R}^d\) with \(|y| \le \delta _n\), we can represent y as the limit of \((y_k)\), where

$$\begin{aligned} y_k = v + \theta _n \sum _{j=1}^k \varepsilon _j 2^{-j}, \end{aligned}$$

\(y_0 := v \in G_n\) and \(\varepsilon _j \in \{0, 1\}^d\) for \(j = 1, \dots , k\). Since \(L(v, J_n)\) is continuous in v, we see that on the event \(A_n^c\),

$$\begin{aligned} \begin{aligned}&|L(y + u(t, x), J_n) - L(v + u(t, x), J_n)|\\&\le \sum _{k=1}^\infty |L(y_k + u(t, x), J_n) - L(y_{k-1} + u(t, x), J_n)|\\&\le \sum _{k=1}^\infty c\, 2^{-n(1+N-\alpha d - \alpha \gamma )} {(\theta _n 2^{-k})}^\gamma {(k\log n)}^{\alpha d + (\frac{1}{2}+\alpha )\gamma }\\&\le c \, 2^{-n(1+N-\alpha d)} (\log n)^{\alpha d} \sum _{k=1}^\infty k^{\alpha d + (\frac{1}{2}+\alpha )\gamma } 2^{-k\gamma }\\&\le C \varphi (2^{-n}). \end{aligned} \end{aligned}$$
(5.12)

Now, (5.11) and (5.12) imply that a.s. for all n large,

$$\begin{aligned} \sup _{|y| \le \delta _n} L(y + u(t, x), J_n) \le C \varphi (2^{-n}). \end{aligned}$$

Since \(L(\cdot , J_n)\) is supported on \(\overline{u(J_n)}\), this, together with (5.10), implies (5.9) and hence (5.7).

The proof of (5.8) is similar. Let \(\Phi (r) = r^{1- \frac{\alpha d}{1+N}}(\log (1/r))^{\alpha d}\) and \(\mathscr {D}_n\) denote the collection of dyadic cubes \(\prod _{j=1}^{1+N}[i_j2^{-n}, (i_j+1)2^{-n}]\) that intersect with T, where \(i_j \in {\mathbb Z}\). Note that any interval \(I = I_r(t, x)\) with \(r \le 2^{-n}\) can be covered by at most \(8^{1+N}\) dyadic intervals \(D_i\) of side length \(\le r\) such that each \(D_i\) is in \(\bigcup _{m \ge n} \mathscr {D}_m\) and has Lebesgue measure \(\lambda _{1+N}(D_i) \le \lambda _{1+N}(I)\). Then

$$\begin{aligned}L^*(I) \le 8^{1+N} \sup _{m \ge n} \sup _{D \in \mathscr {D}_m} L^*(D).\end{aligned}$$

Also, \(\Phi (r)\) is increasing for \(r > 0\) small, so it suffices to prove that

$$\begin{aligned} \limsup _{n \rightarrow \infty } \max _{D \in \mathscr {D}_n} \frac{L^*(D)}{\Phi (\lambda _{1+N}(D))} \le C_2 \quad \text {a.s.} \end{aligned}$$
(5.13)

To this end, define \(\theta _n = 2^{-n\alpha }(\log 2^n)^{-\frac{1}{2} -\alpha }\) and

$$\begin{aligned} G_n = \Big \{ v \in {\mathbb R}^d : |v| \le n \text { and } v = \theta _n p \text { for some } p \in {\mathbb Z}^d \Big \}. \end{aligned}$$

By Lemma 5.2(i), we can find a large enough C so that for all n large,

$$\begin{aligned} \max _{D \in \mathscr {D}_n} \max _{v \in G_n} L(v, D) \le C \Phi (\lambda _{1+N}(D)) \quad \text {a.s.} \end{aligned}$$
(5.14)

Define \(F(n, k, v)\) as in the proof of (5.7) above, and similarly, let

$$\begin{aligned} A_n = \bigcup _{D \in \mathscr {D}_n} \bigcup _{v \in G_n} \bigcup _{k \ge 1} \bigcup _{y_1\sim y_2} \Big \{&|L(y_1, D) - L(y_2, D)| \\&\quad \ge C 2^{-n(1+N - \alpha d - \alpha \gamma )}|y_1 - y_2|^\gamma (k \log 2^n)^{\alpha d + (\frac{1}{2}+\alpha ) \gamma }\Big \}. \end{aligned}$$

Then by Lemma 5.2(ii), a.s. \(A_n\) occurs at most finitely many times. Since \(u(t, x)\) is continuous, there exists \(n_0 = n_0(\omega )\) such that \(\sup _{(t, x) \in T} |u(t, x)| \le n_0\) a.s. If \(|y| \le n\), then by the chaining argument as before, we can deduce that on the event \(A_n^c\),

$$\begin{aligned} |L(y, D) - L(v, D)|&\le C 2^{-n(1+N-\alpha d)} (\log 2^n)^{\alpha d}\\&\le C \Phi (\lambda _{1+N}(D)). \end{aligned}$$

Then by (5.14), we see that a.s. for all n large,

$$\begin{aligned} \max _{D \in \mathscr {D}_n} \sup _{|y|\le n} L(y, D) \le C \Phi (\lambda _{1+N}(D)). \end{aligned}$$
(5.15)

If \(|y| > n_0\), then \(y \not \in \overline{u(T)}\), thus \(L(y, D) = 0\). This together with (5.15) implies (5.13), and hence completes the proof of (5.8). \(\square \)

As discussed in [25], the moduli of continuity of the local times are closely related to the degree of oscillations of the sample functions. The former leads to lower envelopes for the oscillations. A consequence of this is that the solution is nowhere differentiable.

Theorem 5.5

Assume (3.1). Let T be a compact interval in \((0, \infty ) \times {\mathbb R}^N\). Then there exist positive constants \(C_3\) and \(C_4\) such that for each fixed \((t, x) \in T\),

$$\begin{aligned} \liminf _{r \rightarrow 0} \sup _{(s, y) \in I_r(t, x)} \frac{|u(s, y) - u(t, x)|}{r^\alpha (\log \log (1/r))^{-\alpha }} \ge C_3 \quad \text {a.s.} \end{aligned}$$
(5.16)

and

$$\begin{aligned} \liminf _{r \rightarrow 0} \inf _{(t, x) \in T} \sup _{(s, y) \in I_r(t, x)} \frac{|u(s, y) - u(t, x)|}{r^\alpha (\log (1/r))^{-\alpha }} \ge C_4 \quad \text {a.s.} \end{aligned}$$
(5.17)

In particular, (5.17) implies that \((t, x) \mapsto u(t, x)\) is a.s. nowhere differentiable on \((0, \infty ) \times {\mathbb R}^N\).

Proof of Theorem 5.5

Since u has i.i.d. components, it suffices to prove the result for \(d = 1\). Then \(\alpha d = \alpha < 1+N\), so that \(u(t, x)\) has a jointly continuous local time \(L(v, T)\), \(v \in {\mathbb R}\). Fix \((t, x) \in T\) and \(r > 0\). Since \(L(v, I_r(t, x)) = 0\) for \(v \not \in \overline{u(I_r(t, x))}\), we have

$$\begin{aligned} \begin{aligned} \lambda _{1+N}(I_r(t, x))&= \int _{\, \overline{u(I_r(t, x))}} L(v, I_r(t, x))\, dv\\&\le 2 \sup _{(s, y) \in I_r(t, x)} |u(s, y) - u(t, x)| \times \sup _{v \in {\mathbb R}}L(v, I_r(t, x)) . \end{aligned} \end{aligned}$$
(5.18)

Therefore, (5.16) follows from (5.18) and (5.7), while (5.17) follows from (5.18) and (5.8). \(\square \)
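To spell out the first implication (recall \(d = 1\), so \(\alpha d = \alpha \)): by (5.7), a.s. \(\sup _{v} L(v, I_r(t, x)) \le 2C_1 r^{1+N-\alpha } (\log \log (1/r))^{\alpha }\) for all r small, and \(\lambda _{1+N}(I_r(t, x)) = (2r)^{1+N}\), so rearranging (5.18) gives

$$\begin{aligned} \sup _{(s, y) \in I_r(t, x)} |u(s, y) - u(t, x)| \ge \frac{(2r)^{1+N}}{2 \sup _{v \in {\mathbb R}} L(v, I_r(t, x))} \ge \frac{2^{N-1}}{C_1}\, r^{\alpha } (\log \log (1/r))^{-\alpha }, \end{aligned}$$

which yields (5.16) with \(C_3 = 2^{N-1}/C_1\); the uniform statement (5.17) follows in the same way from (5.8).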

The corresponding results for \(t \mapsto u(t, x_0)\) and \(x \mapsto u(t_0, x)\) are:

$$\begin{aligned}\liminf _{r \rightarrow 0} \sup _{s \in I_r(t)} \frac{|u(s, x_0) - u(t, x_0)|}{r^\alpha (\log \log (1/r))^{-\alpha }} \ge C, \quad \liminf _{r \rightarrow 0} \inf _{t \in T_1} \sup _{s \in I_r(t)} \frac{|u(s, x_0) - u(t, x_0)|}{r^\alpha (\log (1/r))^{-\alpha }} \ge C, \end{aligned}$$
$$\begin{aligned}\liminf _{r \rightarrow 0} \sup _{y \in I_r(x)} \frac{|u(t_0, y) - u(t_0, x)|}{r^\alpha (\log \log (1/r))^{-\alpha /N}} \ge C, \quad \liminf _{r \rightarrow 0} \inf _{x \in T_2} \sup _{y \in I_r(x)} \frac{|u(t_0, y) - u(t_0, x)|}{r^\alpha (\log (1/r))^{-\alpha /N}} \ge C, \end{aligned}$$

which follow from the strong LND property and the results of Xiao [51, 52]. In particular, this indicates that (5.16) is not sharp when \(N \ge 2\). It would be interesting to derive sharp envelopes for (5.16) and (5.17) so that the \(\liminf \)s are positive and finite. In fact, when \(N=1\) and \(\dot{W}(t, x)\) is white in time (\(H=1/2\)) and has spatial covariance given by the Riesz kernel, the following Chung-type law of the iterated logarithm has been proved in [33]:

$$\begin{aligned} \liminf _{r \rightarrow 0} \sup _{(s, y) \in I_r(t, x)} \frac{|u(s, y) - u(t, x)|}{r^\alpha (\log \log (1/r))^{-\alpha }} = C \quad \text {a.s.}\end{aligned}$$

where C is a positive finite constant.