1 Motivation and summary

The formal equation discussed in this paper is

$$\begin{aligned} \frac{\partial }{\partial t}Y \,=\, \frac{\partial ^2}{\partial u^2}Y +{\gamma }\,\frac{\partial }{\partial u}(Y^2) +\sqrt{2}\,\frac{\partial ^3 B}{\partial t\partial u^2} \end{aligned}$$
(1.1)

where \({\gamma }\) is a real-valued parameter and \(B\) stands for a Brownian sheet, so that \(\frac{\partial ^3 B}{\partial t\partial u^2}\) can be interpreted as the spatial derivative of a space-time white noise driving force. The potential solutions \(Y\) to this equation, first constructed in [4], take values in the space \(D([0,T];{\fancyscript{D}}^\prime (\mathbb{R }))\) of all càdlàg functions mapping \([0,T]\) into the space of Schwartz distributions \({\fancyscript{D}}^\prime (\mathbb{R })\). The problem therefore arises of giving meaning to the non-linear term \(\frac{\partial }{\partial u}(Y^2)\), and this is what is meant by stating a rigorous equation in this paper.

Equation (1.1) is the equation that the spatial derivative of a solution of the KPZ equation for growing interfaces would formally satisfy, and the main result in [4] is actually an approximation scheme for the KPZ equation. The limiting field of this approximation scheme equals the Cole–Hopf transform of another process, and it has come to be called the Cole–Hopf solution of the KPZ equation. Taking the spatial derivative of the KPZ equation turns it into a conservative system with an invariant state, which is why (1.1) is also called the conservative KPZ equation.

There has been a recent breakthrough in the theory of solutions to the KPZ equation, see [8], and the reader is referred to this work and the references therein for a good account of the progress made over the past few years in the understanding of the KPZ equation. But, as in [4], the main focus in [8] is on an approximation scheme, and it is not shown that the limiting field, which again equals the Cole–Hopf solution, is the solution of a well-defined equation.

Since \(Y_t\in {\fancyscript{D}}^\prime (\mathbb{R })\) for fixed \(t\), the canonical definition of the ill-posed term \(\frac{\partial }{\partial u}(Y_t^2)\) would be a limit of type \(\frac{\partial }{\partial u}[(Y_t\star J_N)^2],\,N\rightarrow \infty \), using a mollifier \(J\in {\fancyscript{D}}(\mathbb{R })\) to approximate the identity. Here \(Y_t\star J_N\) denotes the convolution of the generalized function \(Y_t\) and the smooth function \(J_N(u)=NJ(Nu),\,u\in \mathbb{R }\).
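To make the mollification concrete, the following minimal Python sketch (purely illustrative: the bump function, the bounded grid and the white-noise stand-in for a sample of \(Y_t\) are our own assumptions, not part of the construction in [4]) computes \(Y_t\star J_N\) by discrete convolution:

```python
import numpy as np

du = 1e-3
u = np.arange(-5.0, 5.0, du)                  # bounded grid standing in for R

def J(v):
    """A standard even bump-function mollifier with integral one."""
    out = np.zeros_like(v)
    inside = np.abs(v) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - v[inside] ** 2))
    return out / 0.443994                     # approx. int of exp(-1/(1-v^2)) on (-1, 1)

def mollify(Y_vals, N):
    """Approximate (Y_t * J_N)(u), J_N(u) = N J(N u), by discrete convolution."""
    v = np.arange(-1.0 / N, 1.0 / N + du, du) # support of J_N
    return np.convolve(Y_vals, N * J(N * v), mode="same") * du

# Stand-in for a sample of Y_t: white noise on the grid, i.e. independent
# N(0, 1/du) values, so that sum_i G(u_i) * Y_vals[i] * du mimics Y_t(G).
rng = np.random.default_rng(0)
Y_vals = rng.normal(0.0, 1.0 / np.sqrt(du), size=u.size)

smooth = mollify(Y_vals, N=10)
```

Squaring `smooth` then mimics the pointwise product \((Y_t\star J_N)^2\), whose spatial derivative is the candidate for \(\frac{\partial }{\partial u}(Y_t^2)\).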

It turned out that, even in the case where \(Y_t\) is stationary, it is hard to make sense of such a limit in an appropriate space. The author only managed to obtain convergence in a rather artificial space of so-called generalized random variables, which made it all but impossible to understand (1.1) as a PDE, and the notion of solution was based on a generalized martingale problem (see [1]). It even remains to be shown that \(Y\) is indeed a solution of this generalized martingale problem.

The difficulty seems to be that, as far as we know, there is no control of moments higher than two. Very good, perhaps optimal, second-order moment estimates for \(Y_t(G)\) in the case where \(Y_t\) is stationary can be found in [5], but the authors themselves remark that their method cannot be applied to moments of higher order.

On the other hand, the convergence of the time integrals \(\int _r^t\frac{\partial }{\partial u}[(Y_s\star J_N)^2]\,\mathrm{d}s,\,N\rightarrow \infty \), \(r\le t\) fixed, is much more regular, and the notion of solution to (1.1) introduced in [7] is based on the existence of such a limit.

However, in [7] it is not explained how \(\frac{\partial }{\partial u}(Y^2)\) should be understood for a given \(Y\in D([0,T];{\fancyscript{D}}^\prime (\mathbb{R }))\). Instead, after first establishing very useful estimates, the authors of [7] conclude that

$$\begin{aligned} -\lim _{N\rightarrow \infty }\int _r^t\int _\mathbb{R }(Y_s\star J_N)^2(u) \frac{G(u+1/N)-G(u)}{1/N}\,\mathrm{d}u\mathrm{d}s \quad \text{exists in mean square} \end{aligned}$$
(1.2)

for every \(r\le t\) and every test function \(G\) in the Schwartz space \({\fancyscript{S}}(\mathbb{R })\). If \(\frac{\partial }{\partial u}(Y^2)\) is defined by a limit for every \(Y\in D([0,T];{\fancyscript{D}}^\prime (\mathbb{R }))\), then verifying Eq. (1.1) for a possible solution requires a further exchange of limits, and this has not been accomplished in [7].

The main message from [1] is that interchanging \(\lim _{N\rightarrow \infty }\) and the time integration in (1.2) leads to severe complications. So one wants to define

$$\begin{aligned} \left\langle \mathbf{1}_{[r,t]}\otimes G\,,\, \frac{\partial }{\partial u}(Y^2)\right\rangle \quad \text{ by }\quad -\lim _{N\rightarrow \infty } \int _r^t\int _\mathbb{R }G^\prime (u)\,(Y_s\star J_N)^2(u)\,\mathrm{d}u\mathrm{d}s, \end{aligned}$$

thinking of \(\mathbf{1}_{[r,t]}\otimes G\) as a test function and of \(\langle \,\,,\,\rangle \) as a dual pairing, which suggests the idea of explaining \(\frac{\partial }{\partial u}(Y^2)\) as an element of \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\). Indeed, if

$$\begin{aligned} \lim _{N\rightarrow \infty }\int _0^T \int _\mathbb{R }\frac{\partial }{\partial u}\phi (t,u)\,(Y_t\star J_{N})^2(u)\,\mathrm{d}u\mathrm{d}t \quad \text{exists for all}\; \phi \in {\fancyscript{D}}((0,T)\times \mathbb{R }) \end{aligned}$$

then this defines an element in \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\). Of course, the above limit does not exist for all \(Y\in D([0,T];{\fancyscript{D}}^\prime (\mathbb{R }))\), and limits along subsequences can differ depending on \(\phi \). So the definition of \(\frac{\partial }{\partial u}(Y^2)\) justified in this paper requires finding a suitable subsequence \((N_k)_{k=1}^\infty \), which is used to split \(D([0,T];{\fancyscript{D}}^\prime (\mathbb{R }))\) into two sets \({\fancyscript{N}}_{div}\cup {\fancyscript{N}}_{div}^c\) where

$$\begin{aligned}&{\fancyscript{N}}_{div}\,\mathop {=}\limits ^{\text{ def }}\, \left\{ Y\in D([0,T];{\fancyscript{D}}^\prime (\mathbb{R })): \lim _{k\rightarrow \infty }\int _0^T\,\int _\mathbb{R }\frac{\partial }{\partial u}\phi (t,u) \,(Y_t\star J_{N_k})^2(u)\,\mathrm{d}u\mathrm{d}t \right. \nonumber \\&\qquad \qquad \qquad \left. \text{does not exist for some } \phi \in {\fancyscript{D}}((0,T)\times \mathbb{R }) \right\} . \end{aligned}$$
(1.3)

Defining \(\frac{\partial }{\partial u}(Y^2)\in {\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\) for every \(\phi \in {\fancyscript{D}}((0,T)\times \mathbb{R })\) by

$$\begin{aligned} \langle \phi , \frac{\partial }{\partial u}(Y^2)\rangle \,\mathop {\!=\!}\limits ^{\text{ def }}\left\{ \begin{array}{lcl} 0&{}:&{}Y\!\in \!{\fancyscript{N}}_{div}\\ -\lim \nolimits _{k\rightarrow \infty }\int _0^T\int _\mathbb{R }\frac{\partial }{\partial u}\phi (t,u) \,(Y_t\star J_{N_k})^2(u)\,\mathrm{d}u\mathrm{d}t&{}:&{}Y\!\in \!{\fancyscript{N}}_{div}^c \end{array}\right. \end{aligned}$$
(1.4)

turns Eq. (1.1) into a classical SPDE, and it will be shown in this paper that

$$\begin{aligned} \left\langle \phi , \frac{\partial }{\partial t}Y -\frac{\partial ^2}{\partial u^2}Y -{\gamma }\,\frac{\partial }{\partial u}(Y^2) -\sqrt{2}\,\frac{\partial ^3 B}{\partial t\partial u^2}\right\rangle =0 \quad \text{for all}\; \phi \in {\fancyscript{D}}((0,T)\times \mathbb{R })\quad \text{a.s.} \end{aligned}$$
(1.5)

for the stationary (potential) solution \(Y\) constructed in [4] and some Brownian sheet \(B\) both given on the same probability space.

Notice that the limits defining \(\frac{\partial }{\partial u}(Y^2)\) in the case where \(Y\in {\fancyscript{N}}_{div}^c\) could depend on the choice of the mollifier \(J\). But, when verifying (1.5) for a fixed \(\gamma \) in this paper, a subset \(\Omega _\gamma \subseteq {\fancyscript{N}}_{div}^c\) is constructed such that (1.5) holds for all \(Y\in \Omega _\gamma \) and \(\frac{\partial }{\partial u}(Y^2)\) given by (1.4) on \(\Omega _\gamma \) is the same for all even mollifiers \(J\).

If \(Y^\varepsilon \) approximates \(Y\), then the standard method for showing that \(Y\) satisfies (1.5) with \(\frac{\partial }{\partial u}(Y^2)\) defined by (1.4) would be:

$$\begin{aligned} \text{control}\quad \mathbf{E}_\varepsilon \left[ \int _0^T \int _\mathbb{R }\,\frac{\partial }{\partial u}\phi (s,u)\, (Y_s^\varepsilon \star J_N)^2(u)\,\mathrm{d}u\mathrm{d}s \right] ^2 \quad \text{in}\quad \varepsilon ,N,\phi . \end{aligned}$$
(1.6)

A good control of this type was obtained in [7] for \(\phi =\mathbf{1}_{[r,t]}\otimes G\), using the density fluctuations \(Y^\varepsilon \) in \(\sqrt{\varepsilon }\)-asymmetric exclusion as the approximation scheme. However, sharp bounds on the spectral gap of the symmetric exclusion processes restricted to finite boxes were required.

In this paper it is demonstrated that (1.6) can be based on the weaker estimates obtained in [3, Lemma 3.3]. Using these weaker estimates makes it more difficult to verify that \(Y\) satisfies (1.5). But the proof of Proposition 2.5, which is the main achievement of this paper, presents a method for overcoming this difficulty. Having a method based on weaker estimates might be beneficial when it comes to similar problems with other highly singular SPDEs.

Finally it should be mentioned that the estimates used in this paper, just as the estimates found in [7], are only justified in the case where \(Y\) is the spatial derivative of the Cole–Hopf solution starting from Gaussian white noise on \(\mathbb{R }\), which is a stationary state. In this case, in particular since this invariant state is Gaussian, the state space of \(Y\) can be relaxed to \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\), with \({\fancyscript{S}}^\prime (\mathbb{R })\) being the space of tempered distributions (see Remark 2.2(i)). But, in the non-stationary case, the growth conditions implied by the theorems in [4] would not allow for \({\fancyscript{S}}^\prime (\mathbb{R })\) without further analysis. As a consequence, \(\frac{\partial }{\partial u}(Y^2)\) was defined to be an element of \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\) to leave room for non-stationary solutions.

It remains an open problem to show that the spatial derivative of the Cole–Hopf solution starting from initial conditions other than Gaussian white noise on \(\mathbb{R }\) satisfies (1.5).

2 Notation and results

The approximation scheme for the conservative KPZ equation used in this paper goes back to [4]. It is based on \(\sqrt{\varepsilon }\)-asymmetric exclusion processes and will be briefly explained in what follows. The reader is referred to [10] for the underlying theory of exclusion processes.

Fix \(\gamma \not =0\) and consider a scaling parameter \(\varepsilon >0\) small enough such that \(\sqrt{\varepsilon }\gamma \in [-1,1]\). Denote by \((\Omega ,\mathcal{F },\mathbf{P}^\varepsilon _{\!\!\eta },\eta \in \{0,1\}^\mathbb{Z }, (\eta _t)_{t\ge 0})\) the strong Markov Feller process whose generator \(L_\varepsilon \) acts on local functions \(f:\{0,1\}^\mathbb{Z }\rightarrow \mathbb{R }\) as

$$\begin{aligned} L_\varepsilon f(\eta )&= \sum _{x\in \mathbb{Z }}\left( (1+\sqrt{\varepsilon }\gamma )\,\eta (x)(1-\eta (x+1)) [f(\eta ^{x,x+1})-f(\eta )]\right. \nonumber \\&\left. +(1-\sqrt{\varepsilon }\gamma )\,\eta (x)(1-\eta (x-1)) [f(\eta ^{x,x-1})-f(\eta )] \right) \end{aligned}$$
(2.1)

where \(\eta ^{x,y}\) is standard notation for the operation which exchanges the ‘spins’ at \(x\) and \(y\).
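Although the rigorous construction proceeds through the generator (2.1), the dynamics are easy to simulate. Here is a schematic Gillespie-type sketch in Python on a ring of \(L\) sites standing in for \(\mathbb{Z }\) (the finite ring, the function names and all parameter values are our illustrative assumptions, not from [4] or [10]):

```python
import numpy as np

def simulate(eta, gamma, eps, t_max, rng):
    """Continuous-time simulation of the dynamics generated by (2.1) on a
    ring of len(eta) sites; assumes a configuration with allowed jumps."""
    p_right = 1.0 + np.sqrt(eps) * gamma     # rate of a particle jump x -> x+1
    p_left = 1.0 - np.sqrt(eps) * gamma      # rate of a particle jump x -> x-1
    L, t = len(eta), 0.0
    while True:
        right = p_right * eta * (1 - np.roll(eta, -1))   # allowed jumps x -> x+1
        left = p_left * eta * (1 - np.roll(eta, 1))      # allowed jumps x -> x-1
        rates = np.concatenate([right, left])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_max:
            return eta
        k = rng.choice(2 * L, p=rates / total)
        x = k % L
        y = (x + 1) % L if k < L else (x - 1) % L
        eta[x], eta[y] = eta[y], eta[x]      # exchange the 'spins' at x and y

rng = np.random.default_rng(1)
eps, gamma, L = 0.01, 1.0, 200
eta0 = rng.integers(0, 2, size=L)            # Bernoulli(1/2) initial condition
eta_t = simulate(eta0.copy(), gamma, eps, t_max=1.0, rng=rng)
```

For \(\sqrt{\varepsilon }\gamma >0\) the jump rates are tilted to the right; this weak asymmetry is the microscopic source of the non-linear term in (1.1).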

Denote by \(\nu _{1/2}\) the Bernoulli product measure on \(\{0,1\}^{\mathbb{Z }}\) satisfying \(\nu _{1/2}(\eta (x)=1)=1/2\) for all \(x\in \mathbb{Z }\). Define

$$\begin{aligned} \mathbf{P}_{\varepsilon }\,=\,\int \mathbf{P}^\varepsilon _{\!\!\eta }\,\mathrm{d}\nu _{1/2}(\eta ), \quad \xi _t(x)\,=\,\frac{\eta _t(x)-1/2}{\sqrt{1/4}},\;t\ge 0,\,x\in \mathbb{Z }, \end{aligned}$$

and notice that the process \((\xi _t)_{t\ge 0}\) is a mean-zero stationary process on \((\Omega ,\mathcal{F },\mathbf{P}_{\varepsilon })\) which takes values in \(\{-1,1\}^{\mathbb{Z }}\).

Denote by \(\delta _{\varepsilon x}\) the Dirac measure concentrated at the macroscopic point \(\varepsilon x\) and define by

$$\begin{aligned} Y_t^\varepsilon \,=\,\sqrt{\varepsilon } \sum _{x\in \mathbb{Z }}\xi _{t\varepsilon ^{-2}}(x) \delta _{\varepsilon x},\quad t\ge 0, \end{aligned}$$

the measure-valued density fluctuation field. Fix a finite time horizon \(T\) and regard \(Y^\varepsilon =(Y_t^\varepsilon )_{t\in [0,T]}\) as a random variable taking values in the space \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) of all càdlàg functions mapping \([0,T]\) into the space of tempered distributions \({\fancyscript{S}}^\prime (\mathbb{R })\). Equip \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) with the Skorokhod topology \(J_1\) and let \(Y\) denote both an element of and the identity map on \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\). So \(Y=(Y_t)_{t\in [0,T]}\) plays the role of the coordinate process on \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\), and it is evident that the topological \(\sigma \)-algebra on \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) is equal to \(\mathcal{F }^Y_T=\sigma (\{Y_t(G):t\le T,G\in {\fancyscript{S}}(\mathbb{R })\})\).
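Continuing the simulation sketch above (same caveats; in particular, running the microscopic process up to the diffusive time \(t\varepsilon ^{-2}\) is left to the caller), the pairing of the fluctuation field with a test function reduces to a weighted sum over sites:

```python
import numpy as np

def fluctuation_field(eta, eps, G, x0=0):
    """Y_t^eps(G) = sqrt(eps) * sum_x xi(x) G(eps * x), with the
    configuration eta given on the sites x0, x0 + 1, ... of Z."""
    xi = 2.0 * eta - 1.0                  # xi(x) = (eta(x) - 1/2) / sqrt(1/4)
    x = x0 + np.arange(len(eta))
    return np.sqrt(eps) * np.sum(xi * G(eps * x))

# e.g., with eta_t and eps from the sketch above:
# y = fluctuation_field(eta_t, eps, lambda v: np.exp(-v ** 2), x0=-100)
```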

Theorem 2.1

[4, Th.B.1 & Prop.B.2] Let \(\hat{\mathbf{P}}_{\varepsilon }\) denote the push-forward of \(\mathbf{P}_{\varepsilon }\) under the map \(Y^\varepsilon \). Then, as \(\varepsilon \downarrow 0\), the probability measures \(\hat{\mathbf{P}}_{\varepsilon }\) converge weakly to a probability measure on \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) which is denoted by \(\mathbf{P}_{{\gamma }}\) in what follows. The measure \(\mathbf{P}_{{\gamma }}\) has the following properties:

  1. (i)

    the support of \(\mathbf{P}_{{\gamma }}\) is a subset of \(C([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\);

  2. (ii)

    the process \(Y\) is stationary under \(\mathbf{P}_{{\gamma }}\) satisfying \(Y_t\sim \mu ,\,t\in [0,T]\), where \(\mu \) is the mean-zero Gaussian white noise measure with covariance \(\mathbf{E}_{{\gamma }}Y_t(G)Y_t(H)=\int _\mathbb{R }GH\,\mathrm{d}u\);

  3. (iii)

    \(\mathbf{P}_{{\gamma }}\) is equal to the law of the spatial derivative of the so-called Cole–Hopf solution of the KPZ equation for growing interfaces starting from a two-sided Brownian motion.

Remark 2.2

  1. (i)

    The space used in Th.B.1 of [4] is \(D([0,T];{\fancyscript{D}}^\prime (\mathbb{R }))\). But this can be relaxed to \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) because \(\nu _{1/2}\) is the initial condition of \((\eta _t)_{t\ge 0}\). Indeed, this implies that condition (2.13) on page 578 in [4] is satisfied for \(m\equiv 0\) and one can rule out that the functions \(f_X\) used in the proof of Th.B.1 have exponential growth.

  2. (ii)

This result in [4] is stronger than the tightness of \(\{\hat{\mathbf{P}}_{\varepsilon },\,\varepsilon >0\}\) shown in [7], as tightness would only give weak convergence along certain subsequences \(\varepsilon _k,\,\varepsilon _k\downarrow 0\), with possibly different limit points. The identification of all limit points is a consequence of the Cole–Hopf transform for discrete systems applied in [4].

Definition 2.3

The coordinate process \(Y\) on the probability space \((D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })), \mathcal{F }_T^Y,\mathbf{P}_{{\gamma }})\) is called the Cole–Hopf solution of the conservative KPZ equation (1.1).

The following two results, whose proofs will be given in the next section, form the basis for the method of verification used in this paper to show that \(Y\) solves Eq. (1.1) in the sense of (1.5), where \(\frac{\partial }{\partial u}(Y^2)\) is defined by (1.4). Notice that, for technical reasons, the mollifier \(J\in {\fancyscript{D}}(\mathbb{R })\) defining \(J_N\) by \(u\mapsto NJ(Nu),\,N\ge 1\), should be taken to be even.

Lemma 2.4

Fix \(G\in {\fancyscript{S}}(\mathbb{R })\). Then

$$\begin{aligned}&\int _0^T\mathrm{d}t\,\mathbf{E}_{{\gamma }}\left[ \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( (Y_{s}\star J_{\tilde{N}})^2(u) - (Y_s\star J_{N})^2(u)\right) \,\mathrm{d}u\mathrm{d}s\right] ^2\\&\quad \le e^TC_{J}\,N^{-1/3} \sum _{m=1}^3\sup _{u} |(1+u^2)\frac{\partial ^m}{\partial u^m}G(u)|^2 \end{aligned}$$

for all \(\tilde{N}\ge N\ge 1\), where \(C_{J}\) is a constant which depends only on the choice of the mollifier \(J\).

This lemma is proven using the estimates obtained in [3, Lemma 3.3] by applying a resolvent-type method. It only gives a bound on the \((\ell \otimes \mathbf{P}_{{\gamma }})\)-average of the square of the functional

$$\begin{aligned} (t,Y)\mapsto \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( (Y_{s}\star J_{\tilde{N}})^2(u) - (Y_s\star J_{N})^2(u)\right) \mathrm{d}u\mathrm{d}s \end{aligned}$$

where \(\ell \) denotes the Lebesgue measure on \([0,T]\). The main disadvantage of using an \(L^2(\ell \otimes \mathbf{P}_{{\gamma }})\)-estimate of the above functional is that it complicates the method of identifying the Brownian sheet in (1.5). The next proposition deals with each single step of this method in detail. Its proof is also based solely on [3, Lemma 3.3]. This means that fairly weak \(L^2(\ell \otimes \mathbf{P})\) a priori estimates are still good enough for solving singular SPDEs.

Define the map

$$\begin{aligned} {\mathfrak{M }}_N: D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\rightarrow D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })) \end{aligned}$$

by

$$\begin{aligned} \mathfrak{M }_N(Y)_t^G\,=\, Y_t(G)-Y_0(G) -\int _0^t\,Y_s(G^{\prime \prime })\,\mathrm{d}s \,+{\gamma } \int _{0}^{t}\int _\mathbb{R }G^\prime (u) (Y_s\star J_{N})^2(u) \,\mathrm{d}u\mathrm{d}s. \end{aligned}$$
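For orientation, a discretized evaluation of \(\mathfrak{M }_N(Y)_t^G\) might look as follows (a sketch only: the trajectory is assumed to be stored on the space-time grid of the mollifier sketch in Sect. 1, whose function `mollify` is reused, and all quadrature choices are ours):

```python
import numpy as np

def frak_M_N(Y_traj, t_idx, G, Gp, Gpp, u, dt, du, gamma, N, mollify):
    """Discretized M_N(Y)_t^G with Y_traj[i, j] ~ Y_{s_i} at grid point u[j];
    G, Gp, Gpp are the test function and its first two derivatives."""
    pairing = lambda Y_vals, H: np.sum(Y_vals * H(u)) * du    # <Y_s, H>
    out = pairing(Y_traj[t_idx], G) - pairing(Y_traj[0], G)
    out -= dt * sum(pairing(Y_traj[i], Gpp) for i in range(t_idx))
    out += gamma * dt * sum(
        np.sum(Gp(u) * mollify(Y_traj[i], N) ** 2) * du for i in range(t_idx)
    )
    return out
```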

Applying Lemma 2.4 gives that, for every \(G\in {\fancyscript{S}}(\mathbb{R })\), there exists a \(\mathcal{B }([0,T])\otimes \mathcal{F }^Y_T\)-measurable process

$$\begin{aligned} \tilde{M}^G: [0,T]\times D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\rightarrow \mathbb{R }\end{aligned}$$

such that

$$\begin{aligned} \int _0^T\mathrm{d}t\,\mathbf{E}_{{\gamma }}\left[ \tilde{M}^G_t-\mathfrak{M }_N(Y)_t^G\right] ^2 \,\rightarrow \,0,\quad N\rightarrow \infty . \end{aligned}$$
(2.2)

Denote by \(\mathbb{F }\) the filtration \((\mathcal{F }_t)_{t\in [0,T]}\) with \(\mathcal{F }_t= \sigma (\{Y_s(G):s\le t,G\in {\fancyscript{S}}(\mathbb{R })\}\cup \mathcal{N })\) where \(\mathcal{N }\) is the collection of all \(\mathbf{P}_{{\gamma }}\)-null sets in \(\mathcal{F }^Y_T\).

Proposition 2.5

  1. (i)

For every \(G\in {\fancyscript{S}}(\mathbb{R })\), there exists an \(\mathbb{F }\)-adapted process \(M^G=(M^G_t)_{t\in [0,T]}\) on \((D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })),\mathcal{F }^Y_T,\mathbf{P}_{{\gamma }})\) which is a continuous version of \(\tilde{M}^G\) in the following sense: there is a measurable subset \(\mathcal{T }_G\subseteq [0,T]\) with \(\ell (\mathcal{T }_G)=T\) such that \(\tilde{M}_t^G=M^G_t\) a.s. for all \(t\in \mathcal{T }_G\). For every positive \(T^\prime <T\), when restricted to \([0,T^\prime ]\), the process \(M^G\) is a square integrable \(\mathbb{F }\)-martingale.

  2. (ii)

    For every \(G\in {\fancyscript{S}}(\mathbb{R })\), the process \(M^G=(M^G_t)_{t\in [0,T]}\) is an \(\mathbb{F }\)-Brownian motion with variance \(2\Vert G^\prime \Vert ^2_2\) on the probability space \((D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })),\mathcal{F }^Y_T,\mathbf{P}_{{\gamma }})\).

  3. (iii)

    It holds that

    $$\begin{aligned} M_t^{a_1 G_1+a_2 G_2}\,=\,a_1 M_t^{G_1}+a_2 M_t^{G_2} \quad \text{ a.s. } \end{aligned}$$

    for every \(t\in [0,T],\,a_1,a_2\in \mathbb{R }\) and \(G_1,G_2\in {\fancyscript{S}}(\mathbb{R })\).

  4. (iv)

    The process \(M_t^G\) indexed by \(t\in [0,T]\) and \(G\in {\fancyscript{S}}(\mathbb{R })\) is a centred Gaussian process on \((D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })),\mathcal{F }^Y_T,\mathbf{P}_{{\gamma }})\) with covariance

    $$\begin{aligned} \mathbf{E}_{{\gamma }}M_{t_1}^{G_1}M_{t_2}^{G_2}\,=\, 2(t_1\wedge t_2)\int _\mathbb{R }G_1^\prime (u)G_2^\prime (u)\,\mathrm{d}u \end{aligned}$$

    hence there is a Brownian sheet \(B(t,u),\,t\!\in \![0,T],\,u\!\in \!\mathbb{R }\), on \((D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })), \mathcal{F }^Y_T,\mathbf{P}_{{\gamma }})\) such that

    $$\begin{aligned} M_t^G\,=\,\sqrt{2} \int _\mathbb{R }B(t,u)G^{\prime \prime }(u)\,\mathrm{d}u \quad \text{ a.s. } \end{aligned}$$

    for every \(t\in [0,T]\) and \(G\in {\fancyscript{S}}(\mathbb{R })\).

In what follows let \(M=(M_t)_{t\in [0,T]}\) denote the continuous \({\fancyscript{S}}^\prime (\mathbb{R })\)-valued process defined by

$$\begin{aligned} M_t(G)\,\mathop {=}\limits ^{\text{ def }}\,\sqrt{2} \int _\mathbb{R }B(t,u)G^{\prime \prime }(u)\,\mathrm{d}u, \quad t\in [0,T],\,G\in {\fancyscript{S}}(\mathbb{R }). \end{aligned}$$
(2.3)
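Since \(M\) is explicit, it can be sampled directly. The following sketch (grid sizes, the window \([-L,L]\) and the test function are our illustrative choices) draws a Brownian sheet by cumulatively summing independent Gaussian increments and integrates it against \(G^{\prime \prime }\) as in (2.3):

```python
import numpy as np

rng = np.random.default_rng(2)
T, L, nt, nu = 1.0, 8.0, 400, 1600
dt, du = T / nt, 2 * L / nu
u = -L + (np.arange(nu) + 0.5) * du
incr = rng.normal(0.0, np.sqrt(dt * du), size=(nt, nu))
B = incr.cumsum(axis=0).cumsum(axis=1)    # B[i, j] ~ B(t_{i+1}, u_j), anchored at u = -L

def M(t_idx, Gpp_vals):
    """M_t(G) = sqrt(2) * int B(t, u) G''(u) du, truncated to [-L, L].
    Since int G'' du = 0, anchoring the sheet at u = -L does not change
    the law of M_t(G)."""
    return np.sqrt(2.0) * np.sum(B[t_idx] * Gpp_vals) * du

Gpp = (4 * u ** 2 - 2) * np.exp(-u ** 2)  # G''(u) for the stand-in G(u) = exp(-u^2)
sample = M(nt - 1, Gpp)
# Over many samples, Var M_T(G) is close to 2 T ||G'||_2^2, cf. Proposition 2.5(ii).
```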

Remark that, by the Schwartz kernel theorem, \(M\) and \(Y\) can also be considered as random variables taking values in \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\) such that

$$\begin{aligned}&\int _0^T\mathrm{d}t\,g^\prime (t) \left[ -Y_t(G)+Y_0(G) +\int _0^tY_s(G^{\prime \prime })\,\mathrm{d}s +M_t(G) \right] \\&\qquad = \left\langle g\otimes G\,, \frac{\partial }{\partial t}Y-\frac{\partial ^2}{\partial u^2}Y -\frac{\partial }{\partial t}M\right\rangle \end{aligned}$$

for all \(g\in {\fancyscript{D}}((0,T)),\,G\in {\fancyscript{D}}(\mathbb{R })\) where \(\langle \cdot \,,\cdot \rangle \) denotes the dual pairing between \({\fancyscript{D}}((0,T)\times \mathbb{R })\) and \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\). Notice that the last equality can be extended to hold for all \(g\in C^1([0,T])\) with \(g(T)=0\) and \(G\in {\fancyscript{S}}(\mathbb{R })\). Then it is an easy consequence of Lemma 2.4, (2.2) and the Cauchy–Schwarz inequality that

$$\begin{aligned}&\mathbf{E}_{{\gamma }}\left| -\!\int _0^T\,\int _\mathbb{R }g(t)G^\prime (u)\,(Y_t\star J_{N})^2(u)\,\mathrm{d}u\mathrm{d}t -\left\langle g\otimes G\,, \frac{\partial }{\partial t}Y-\frac{\partial ^2}{\partial u^2}Y -\frac{\partial }{\partial t}M\right\rangle /{\gamma } \right| ^2\nonumber \\&\quad \le 2e^TC_{J}\,N^{-1/3}\, \Vert g^\prime \Vert _{L^2[0,T]}^2 \sum _{m=1}^3\sup _{u} |(1+u^2)\frac{\partial ^m}{\partial u^m}G(u)|^2 ,\quad N\ge 1, \end{aligned}$$
(2.4)

for all \(g\in C^1([0,T])\) with \(g(T)=0\) and \(G\in {\fancyscript{S}}(\mathbb{R })\).

The next step consists in finding a subsequence \((N_k)_{k=1}^\infty \) and a subset \(\Omega _\gamma \in \mathcal{F }_T^Y\) of measure \(\mathbf{P}_{{\gamma }}(\Omega _\gamma )=1\) such that \(\Omega _\gamma \subseteq {\fancyscript{N}}_{div}^c\) where \({\fancyscript{N}}_{div}\) is defined by (1.3). The ultimate goal would of course be a subsequence \((N_k)_{k=1}^\infty \) which is the same for all \(\gamma \not =0\).

For this purpose it turns out to be useful to think of the function

$$\begin{aligned} (t,u)\mapsto (Y_t\star J_N)^2(u) \quad \text{ where }\quad Y\in D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })) \end{aligned}$$

as a regular distribution in \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\). This regular distribution is denoted by \((Y\star _2 J_N)^2\) in what follows. Notice the notation \(\star _2\), which emphasises that the convolution acts only on the space component of \(Y\).

Then the idea is to construct a Banach space \((\mathcal{E },|\!|\!|\cdot |\!|\!|)\) satisfying \({\fancyscript{D}}((0,T)\times \mathbb{R })\subseteq \mathcal{E }^\prime \subseteq L^2([0,T]\times \mathbb{R })\subseteq \mathcal{E }\) such that

$$\begin{aligned} \mathbf{E}_{{\gamma }}|\!|\!|\frac{\partial }{\partial u}(Y\star _2 J_N)^2\!-\! \left( \frac{\partial }{\partial t}Y\!-\!\frac{\partial ^2}{\partial u^2}Y \!-\!\frac{\partial }{\partial t}M\right) /{\gamma }|\!|\!|^2 \,\le \,const/N^\alpha , \quad N\ge 1,\qquad \quad \end{aligned}$$
(2.5)

for some \(\alpha >0\).

Remark 2.6

  • Suppose for now that (2.5) can be achieved by finding \(|\!|\!|\cdot |\!|\!|,\alpha \) and \(const\) where the latter might depend on \(T,J\) and \(\gamma \). Choosing \((N_k)_{k=1}^\infty \) to be

    $$\begin{aligned} N_k\,=\left\{ \begin{array}{lcc} k^{\tilde{\alpha }}\quad \text{ for } \text{ some }\,\, \tilde{\alpha }>1/\alpha &{}:&{}\alpha \le 1\\ k\quad &{}:&{}\alpha >1 \end{array}\right. \end{aligned}$$

would then yield, by Chebyshev's inequality and since \(\sum _{k=1}^\infty N_k^{-\alpha }<\infty \) for this choice,

    $$\begin{aligned} \sum _{k=1}^\infty \mathbf{P}_{{\gamma }}\left( \left\{ |\!|\!|\frac{\partial }{\partial u}(Y\star _2 J_{N_k})^2\!-\! \left( \frac{\partial }{\partial t}Y\!-\!\frac{\partial ^2}{\partial u^2}Y \!-\!\frac{\partial }{\partial t}M\right) /{\gamma }|\!|\!| \!\ge \!\delta \right\} \right) \!<\!\infty , \quad \forall \,\delta >0, \end{aligned}$$

hence, by the Borel–Cantelli lemma,

    $$\begin{aligned} \left| \!\left| \!\left| \frac{\partial }{\partial u}(Y\star _2 J_{N_k})^2- \left( \frac{\partial }{\partial t}Y-\frac{\partial ^2}{\partial u^2}Y -\frac{\partial }{\partial t}M\right) /{\gamma }\right| \!\right| \!\right| \longrightarrow 0, \quad k\rightarrow \infty , \end{aligned}$$

for all \(Y\in \Omega _{{\gamma }}\), for some \(\Omega _{{\gamma }}\in \mathcal{F }^Y_T\) with \(\mathbf{P}_{{\gamma }}(\Omega _{{\gamma }})=1\). Since norm convergence in \(\mathcal{E }\) implies weak convergence in \({\fancyscript{D}}^\prime ((0,T)\times \mathbb{R })\), one would obtain that

    $$\begin{aligned} \left\langle \phi , \frac{\partial }{\partial t}Y-\frac{\partial ^2}{\partial u^2}Y -\frac{\partial }{\partial t}M\right\rangle /{\gamma }&= \lim _{k\rightarrow \infty }\left\langle \phi , \frac{\partial }{\partial u}(Y\star _2 J_{N_k})^2\right\rangle \\&= -\lim _{k\rightarrow \infty }\int _0^T\,\int _\mathbb{R }\frac{\partial }{\partial u}\phi (t,u) \,(Y_t \star J_{N_k})^2(u)\,\mathrm{d}u\mathrm{d}t \end{aligned}$$

for all \(\phi \in {\fancyscript{D}}((0,T)\times \mathbb{R })\) and \(Y\in \Omega _{{\gamma }}\), which obviously means \(\Omega _\gamma \subseteq {\fancyscript{N}}_{div}^c\), where the chosen subsequence \((N_k)_{k=1}^\infty \) would indeed be the same for all \(\gamma \not =0\). Notice that \(\langle \phi \,, \frac{\partial }{\partial t}Y-\frac{\partial ^2}{\partial u^2}Y -\frac{\partial }{\partial t}M\rangle /{\gamma }\) does not depend on the choice of \(J\), so that \(\frac{\partial }{\partial u}(Y^2)\) given by (1.4) on \(\Omega _\gamma \) would be the same for all even mollifiers \(J\). Furthermore, the equality in (1.5) would also be true for all \(\phi \in {\fancyscript{D}}((0,T)\times \mathbb{R })\) and all \(Y\in \Omega _\gamma \) because \(\partial {M}/\partial {t}= \sqrt{2}\,{\partial ^3 B}/({\partial t}\,{\partial u^2})\) by (2.3).

So it remains to justify (2.5). Of course, one wants to use the bounds given by the right-hand side of (2.4) to construct the Banach space \((\mathcal{E },|\!|\!|\cdot |\!|\!|)\), but some care is needed to ensure that \({\fancyscript{D}}((0,T)\times \mathbb{R })\subseteq \mathcal{E }^\prime \). A straightforward approach to this problem is to use a so-called negative-order Sobolev space, which is introduced next.

First observe that

$$\begin{aligned} \!\!\!\!\!\sup _{u\in \mathbb{R }}|(1\!+\!u^2)H(u)|^2 \!\le \! 4\,\Vert (1+u^2)H\Vert ^2_{L^2(\mathbb{R })}+ 2\,\Vert (1+u^2)H\Vert _{L^2(\mathbb{R })}\Vert (1+\!u^2)H^\prime \Vert _{L^2(\mathbb{R })}\nonumber \\ \end{aligned}$$
(2.6)

for any test function \(H\in {\fancyscript{S}}(\mathbb{R })\). Now let \((g_m)_{m=1}^\infty \) be the eigenbasis of the one-dimensional Laplacian on \([0,T]\) with Dirichlet boundary conditions and let \((G_n)_{n=1}^\infty \) be the collection of Hermite functions. Then \((g_m\otimes G_n)_{m,n}\) forms an orthonormal basis in \(L^2([0,T]\times \mathbb{R })\) and it follows from (2.4) and (2.6) that

$$\begin{aligned} \mathbf{E}_{{\gamma }}\left| \left\langle g_m\otimes G_n\,, \frac{\partial }{\partial u}(Y\star _2 J_{N})^2\!-\! \left( \frac{\partial }{\partial t}Y\!-\!\frac{\partial ^2}{\partial u^2}Y \!-\!\frac{\partial }{\partial t}M\right) /{\gamma }\right\rangle \right| ^2 \!\!\le \! {const}\cdot m^2 n^6/N^{1/3}\nonumber \\ \end{aligned}$$
(2.7)

where \(const\) does not depend on the choice of \(m\) and \(n\). Of course the factor \(m^2\) goes back to the eigenvalue associated with \(g_m\) and, using the combinatorial properties of the Hermite functions, \(O(n^6)\) is a quite crude estimate of the norms of \(H\) and its derivative in (2.6) when \(H=G_n^\prime ,G_n^{\prime \prime },G_n^{\prime \prime \prime }\). So an appropriate choice of the Banach space \(\mathcal{E }\) is the completion of \({\fancyscript{D}}((0,T)\times \mathbb{R })\) with respect to the norm \(|\!|\!|\cdot |\!|\!|\) given by

$$\begin{aligned} |\!|\!|\phi |\!|\!|^2\,=\, \sum _{m,n}\left[ (m^3+n^3)m^2 n^6\right] ^{-1}\, \langle g_m\otimes G_n, \phi \rangle ^2. \end{aligned}$$

Notice that \({\fancyscript{D}}((0,T)\times \mathbb{R })\subseteq \mathcal{E }^\prime \) is a standard consequence of choosing \((g_m)_{m=1}^\infty \) and \((G_n)_{n=1}^\infty \) as above.
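As a concrete illustration of this norm, the following sketch builds the two bases and evaluates a truncation of \(|\!|\!|\cdot |\!|\!|\) by quadrature (the truncation ranges, grids and the convention that \(G_n\) denotes the \((n-1)\)-st classical Hermite function are our own assumptions):

```python
import numpy as np

T, L, nt, nu, M, NH = 1.0, 10.0, 400, 1000, 20, 20
dt, du = T / nt, 2 * L / nu
t = (np.arange(nt) + 0.5) * dt              # midpoint grid on (0, T)
u = -L + (np.arange(nu) + 0.5) * du         # midpoint grid on (-L, L)

# Dirichlet eigenbasis on [0, T]: g_m(t) = sqrt(2/T) * sin(m * pi * t / T).
g = np.array([np.sqrt(2 / T) * np.sin(m * np.pi * t / T) for m in range(1, M + 1)])

# Hermite functions via the standard three-term recurrence.
H = np.empty((NH, nu))
H[0] = np.pi ** -0.25 * np.exp(-u ** 2 / 2)
H[1] = np.sqrt(2.0) * u * H[0]
for n in range(1, NH - 1):
    H[n + 1] = np.sqrt(2.0 / (n + 1)) * u * H[n] - np.sqrt(n / (n + 1)) * H[n - 1]

def triple_norm_sq(phi_vals):
    """Truncated |||phi|||^2 = sum_{m,n} [(m^3+n^3) m^2 n^6]^{-1} <g_m x G_n, phi>^2
    for phi sampled as phi_vals[i, j] = phi(t[i], u[j])."""
    coeff = (g @ phi_vals @ H.T) * dt * du  # coeff[m-1, n-1] = <g_m x G_n, phi>
    m = np.arange(1, M + 1)[:, None]
    n = np.arange(1, NH + 1)[None, :]
    weight = 1.0 / ((m ** 3 + n ** 3) * m ** 2 * n ** 6)
    return float(np.sum(weight * coeff ** 2))
```

The weights are chosen so that the bound (2.7), summed over \(m,n\), stays finite and of order \(N^{-1/3}\) (note \(\sum _{m,n}(m^3+n^3)^{-1}<\infty \)), which is exactly (2.5).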

Using this Banach space and applying (2.7) to calculate \(\mathbf{E}_{{\gamma }}|\!|\!|\frac{\partial }{\partial u}(Y\star _2 J_N)^2- (\frac{\partial }{\partial t}Y-\frac{\partial ^2}{\partial u^2}Y -\frac{\partial }{\partial t}M)/{\gamma }|\!|\!|^2\) results in (2.5) with \(\alpha =1/3\), hence Remark 2.6 proves the following theorem.

Theorem 2.7

  1. (i)

    There exists a subsequence \((N_k)_{k=1}^\infty \) such that for every \({\gamma }\not =0\) there is a set \(\Omega _{{\gamma }}\in \mathcal{F }^Y_T\) with \(\mathbf{P}_{{\gamma }}(\Omega _{{\gamma }})=1\) such that \(\Omega _\gamma \subseteq {\fancyscript{N}}_{div}^c\) where \({\fancyscript{N}}_{div}\) is defined by (1.3) and \(\frac{\partial }{\partial u}(Y^2)\) given by (1.4) on \(\Omega _\gamma \) is the same for all even mollifiers \(J\).

  2. (ii)

There exists a Brownian sheet \(B(t,u),\,t\in [0,T],\,u\in \mathbb{R }\), on \((D([0,T];{\fancyscript{D}}^\prime (\mathbb{R })),\) \(\mathcal{F }^Y_T,\mathbf{P}_{{\gamma }})\) such that the coordinate process \(Y\) solves Eq. (1.1) in the sense of (1.5).

Remark 2.8

  1. (i)

The choice of the subsequence used in the definition (1.4) of \(\frac{\partial }{\partial u}(Y^2)\) depends on the power \(\alpha \) needed to establish (2.5). The power \(\alpha =1/3\) used in this paper goes back to [2]. The results in [7] suggest that \(\alpha =1/2\) is possible. However, for the purpose of giving rigorous sense to Eq. (1.1), the choice of an optimal subsequence is not intrinsic, and so the author used what he had proved himself in [2]. But, in the light of the new techniques applied in [8], he would like to conjecture the following: Eq. (1.1) holds true in the sense of (1.5) using \(N_k=k\) in the definition (1.4) of \(\frac{\partial }{\partial u}(Y^2)\).

  2. (ii)

    It is a consequence of Theorem 2.7(i) that \(\bigcup _{{\gamma }\not =0}\Omega _{{\gamma }} \subseteq {\fancyscript{N}}_{div}^c\). But, as shown in [4], each measure \(\mathbf{P}_{{\gamma }}\) is related to the solution of a corresponding stochastic heat equation

    $$\begin{aligned} \frac{\partial }{\partial t}Z \,=\, \frac{\partial ^2}{\partial u^2}Z +\gamma \sqrt{2}\,Z\,\frac{\partial ^2 B}{\partial t\partial u}, \quad \gamma \not =0, \end{aligned}$$

through the Cole–Hopf transform, and the change of the diffusion coefficient with \(\gamma \) indicates that all measures \(\mathbf{P}_{{\gamma }},\,\gamma \not =0\), are mutually singular. Thus, the set \(\bigcup _{{\gamma }\not =0}\Omega _{{\gamma }}\) is not too small since \(\mathbf{P}_{{\gamma }}(\Omega _{{\gamma }})=1\) for all \(\gamma \not =0\).

3 Proofs

This section contains the proofs of Lemma 2.4 and Proposition 2.5, but first, further notation and auxiliary results need to be provided.

Fix \(\varepsilon >0\) small enough such that \(\sqrt{\varepsilon }\gamma \in [-1,1]\), fix a test function \(G\in {\fancyscript{S}}(\mathbb{R })\) and denote by \(\Vert \cdot \Vert _p\) the norm in \(L^p(\mathbb{R }),\,1\le p\le \infty \). Then

$$\begin{aligned} M^{G,\varepsilon }_t\,=\,Y_t^\varepsilon (G)-Y_0^\varepsilon (G)- \int _0^t \varepsilon ^{-2}L_\varepsilon Y_s^\varepsilon (G) \,\mathrm{d}s,\;t\ge 0, \end{aligned}$$

is a martingale on \((\Omega ,\mathcal{F },\mathbf{P}_{\varepsilon })\) by standard theory on strong Markov processes and

$$\begin{aligned} \int _0^t \varepsilon ^{-2}L_\varepsilon Y_s^\varepsilon (G)\,\mathrm{d}s \,=\int _0^t\varepsilon ^{-\frac{3}{2}} \sum _{x\in \mathbb{Z }}G(\varepsilon x) L_\varepsilon \xi _{s\varepsilon ^{-2}}(x)\,\mathrm{d}s,\;t\ge 0, \end{aligned}$$
(3.1)

where

$$\begin{aligned} \begin{array}{rcl} \displaystyle L_\varepsilon \xi _{s\varepsilon ^{-2}}(x) &{}=&{}\left[ \left( \xi _{s\varepsilon ^{-2}}(x-1)-2 \xi _{s\varepsilon ^{-2}}(x) +\xi _{s\varepsilon ^{-2}}(x+1) \right) \right. \\ &{}&{}\displaystyle \left. +\,\sqrt{\varepsilon }\gamma \left( \xi _{s\varepsilon ^{-2}} (x)\xi _{s\varepsilon ^{-2}}(x+1) -\xi _{s\varepsilon ^{-2}}(x-1)\xi _{s\varepsilon ^{-2}}(x)\right) \right] \end{array} \end{aligned}$$
(3.2)

follows from (2.1). Substituting (3.2) into (3.1), performing a summation by parts and approximating by Taylor expansion implies

$$\begin{aligned} \int _0^t \varepsilon ^{-2}L_\varepsilon Y_s^\varepsilon (G)\,\mathrm{d}s&= \int _0^t Y_s^\varepsilon (G^{\prime \prime })\,\mathrm{d}s \,-\,\gamma \int _0^t \sum _{x\in \mathbb{Z }}G^\prime (\varepsilon x)\, \xi _{s\varepsilon ^{-2}}(x)\xi _{s\varepsilon ^{-2}}(x+1)\,\mathrm{d}s\\&+\frac{\gamma \varepsilon }{2}\int _0^t \sum _{x\in \mathbb{Z }}G^{\prime \prime }(\varepsilon x)\, \xi _{s\varepsilon ^{-2}}(x)\xi _{s\varepsilon ^{-2}}(x+1)\,\mathrm{d}s \,+R^G_\varepsilon (t) \end{aligned}$$

with

$$\begin{aligned} |R^G_\varepsilon (t)|\,\le \, \sqrt{\varepsilon }\,\frac{1}{6} (2+\sqrt{\varepsilon }\gamma ) (\pi +2\varepsilon ) \Vert (1+u^2)G^{\prime \prime \prime }\Vert _\infty \cdot t, \quad t\ge 0, \end{aligned}$$
(3.3)

where \(\pi +2\varepsilon \) is an upper bound on the discretization of the integral \(\int _\mathbb{R }(1+u^2)^{-1}\mathrm{d}u\) in this context. Now, for notational purposes, define

$$\begin{aligned} R_{\varepsilon ,N}^{G^\prime ,0}(t) \,\mathop {=}\limits ^{\text{ def }}\, \frac{\varepsilon }{2}\int _0^{t} \sum _{x\in \mathbb{Z }}G^{\prime \prime }(\varepsilon x) \xi _{s\varepsilon ^{-2}}(x)\xi _{s\varepsilon ^{-2}}(x+1)\,\mathrm{d}s, \quad t\ge 0, \end{aligned}$$
(3.4)

although the right-hand side does not depend on \(N\) and includes \(G^{\prime \prime }\) instead of \(G^\prime \). Using this notation leads to the decomposition

$$\begin{aligned} \begin{array}{rcl} \displaystyle M^{G,\varepsilon }_t+\,R^G_\varepsilon (t) +\gamma R_{\varepsilon ,N}^{G^\prime ,0}(t) &{}=&{}\displaystyle Y_t^\varepsilon (G)-Y_0^\varepsilon (G) -\int _0^t Y_s^\varepsilon (G^{\prime \prime })\,\mathrm{d}s\\ &{}&{}+\displaystyle {\gamma }\int _0^t \sum _{x\in \mathbb{Z }}G^{\prime }(\varepsilon x) \xi _{s\varepsilon ^{-2}}(x)\xi _{s\varepsilon ^{-2}}(x+1)\,\mathrm{d}s \end{array} \end{aligned}$$
(3.5)

for all \(t\ge 0\).

It turns out to be useful to rewrite the difference below as follows

$$\begin{aligned} \int _{0}^{t}\int _\mathbb{R }G^\prime (u) (Y_s^\varepsilon \star J_{N})^2(u) \,\mathrm{d}u\mathrm{d}s \,-\, \int _0^t \sum _{x\in \mathbb{Z }}G^\prime (\varepsilon x) \xi _{s\varepsilon ^{-2}}(x)\xi _{s\varepsilon ^{-2}}(x+1)\,\mathrm{d}s \,= \sum _{i=1}^4 R_{\varepsilon ,N}^{G^\prime ,i}(t)\nonumber \\ \end{aligned}$$
(3.6)

where

$$\begin{aligned} R_{\varepsilon ,N}^{G^\prime ,i}(t) \,\mathop {=}\limits ^{\text{ def }} \int _0^{t} V_{\varepsilon ,N}^{G^\prime ,i}(\xi _{s\varepsilon ^{-2}})\,\mathrm{d}s, \quad t\ge 0,\quad i=1,2,3,4, \end{aligned}$$
(3.7)

are given by

$$\begin{aligned} V_{\varepsilon ,N}^{G^\prime ,1}(\xi )&= \sum _{x\in \mathbb{Z }} \int _\mathbb{R }[G^\prime (u)-G^\prime (\varepsilon x)] J_N(u-\varepsilon x) \sum _{\tilde{x}\in \mathbb{Z }} \varepsilon J_N(u-\varepsilon \tilde{x})\,\mathrm{d}u\; \xi (x)\xi (\tilde{x}),\\ V_{\varepsilon ,N}^{G^\prime ,2}(\xi )&= \varepsilon \sum _{x\in \mathbb{Z }}G^\prime (\varepsilon x) \int _\mathbb{R }J_N^2(u-\varepsilon x)\,\mathrm{d}u\; \xi (x)[\xi (x)-\xi (x+1)],\\ V_{\varepsilon ,N}^{G^\prime ,3}(\xi )&= \varepsilon \sum _{x\not =\tilde{x}}G^\prime (\varepsilon x) \int _\mathbb{R }J_N(u-\varepsilon x)J_N(u-\varepsilon \tilde{x})\,\mathrm{d}u\; \xi (x)[\xi (\tilde{x})-\xi (x+1)],\\ V_{\varepsilon ,N}^{G^\prime ,4}(\xi )&= \sum _{x\in \mathbb{Z }}G^\prime (\varepsilon x) \int _\mathbb{R }J_N(u-\varepsilon x) \left[ \sum _{\tilde{x}\in \mathbb{Z }}\varepsilon J_N(u-\varepsilon \tilde{x})-1 \right] \!\mathrm{d}u\; \xi (x)\xi (x+1). \end{aligned}$$

Notice that \(\int _\mathbb{R }G^\prime (u)\mathrm{d}u=0\); hence the following lemma can be applied in this context.

Lemma 3.1

[3, Lemma 3.3] Recall (3.7) for the definition of \(R_{\varepsilon ,N}^{G^\prime ,i},\,i=1,2,3,4\). Then

  1. (i)

    \(\int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,1}(t)\right] ^2 \le e^T\tilde{C_{J}} \Big (\frac{\Vert (1+u^2) G^{\prime \prime \prime }\Vert _\infty ^2}{N^2} +\frac{\Vert (1+u^2)G^{\prime \prime }\Vert _\infty ^2}{N} \Big )\)

  2. (ii)

    \(\int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,2}(t)\right] ^2 \le e^T\tilde{C_{J}} \Big (\varepsilon ^2 N^2\,\Vert G^{\prime \prime }\Vert _\infty ^2 +\varepsilon N^2\,\Vert (1+u^2)G^\prime \Vert _\infty ^2 \Big )\)

  3. (iii)

    \(\int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,3}(t)\right] ^2 \le e^T\tilde{C_{J}} \,\frac{\Vert (1+u^2)G^\prime \Vert _\infty ^2}{N^{1/3}}\)

  4. (iv)

    \(\int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,4}(t)\right] ^2 \le T^3\tilde{C_{J}} \,\varepsilon ^2 N^4\Vert G^\prime \Vert _1^2\)

for all \(\varepsilon >0,N\ge 1\), where \(\tilde{C_{J}}\) is a constant which depends only on the choice of the mollifier \(J\).

Remark 3.2

Recall the definition of \(R_{\varepsilon ,N}^{G^\prime ,0}\) given in (3.4) which does not depend on \(N\). Then the rate of convergence

$$\begin{aligned} \int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,0}(t)\right] ^2 \,=\,O(\varepsilon ^2), \quad \varepsilon \downarrow 0,\ \text{uniformly in}\ N, \end{aligned}$$
(3.8)

follows from Remark 1(iii) in [2] by the same method used in the proof of the above lemma in [3].

Proof of Lemma 2.4

Fix \(N\ge 1\), fix \(\delta >0\) and choose \(N_\delta \ge N\) such that

$$\begin{aligned} 8e^T\tilde{C_{J}}\Big ( N_\delta ^{-2}\Vert (1\!+\!u^2)G^{\prime \prime \prime }\Vert _\infty ^2 \!+\!N_\delta ^{-1}\Vert (1\!+\!u^2)G^{\prime \prime }\Vert _\infty ^2 \!+\!N_\delta ^{-1/3}\Vert (1\!+\!u^2)G^\prime \Vert _\infty ^2 \Big ) \,\le \,\delta /4 \end{aligned}$$

where \(\tilde{C_{J}}\) is the constant appearing in Lemma 3.1. Then

$$\begin{aligned}&\int _0^T\mathrm{d}t\,\mathbf{E}_{{\gamma }}\left[ \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( (Y_{s}\star J_{{N_\delta }})^2(u) - (Y_s\star J_{N})^2(u)\right) \,\mathrm{d}u\mathrm{d}s\right] ^2\\&\quad =\int _0^T\mathrm{d}t\, \int _{0}^{t}\int _\mathbb{R }\int _{0}^{t}\int _\mathbb{R }\mathrm{d}s_1\mathrm{d}u_1\mathrm{d}s_2\mathrm{d}u_2\; G^\prime (u_1)\,G^\prime (u_2)\\&\quad \quad \times \!\;\mathbf{E}_{{\gamma }}\! \left( (Y_{s_1}\star J_{{N_\delta }})^2(u_1) \!-\! (Y_{s_1}\star J_{N})^2(u_1)\!\right) \left( (Y_{s_2}\star J_{{N_\delta }})^2(u_2) \!-\! (Y_{s_2}\star J_{N})^2(u_2)\right) \end{aligned}$$

where by Lemma 4.1 in the Appendix

$$\begin{aligned}&\mathbf{E}_{{\gamma }}\! \left( (Y_{s_1}\star J_{{N_\delta }})^2(u_1) - (Y_{s_1}\star J_{N})^2(u_1)\right) \left( (Y_{s_2}\star J_{{N_\delta }})^2(u_2) - (Y_{s_2}\star J_{N})^2(u_2)\right) \\&\quad =\!\,\lim _{\varepsilon \downarrow 0}\, \hat{\mathbf{E}}_{\varepsilon }\! \left( (Y_{s_1}\star J_{{N_\delta }})^2(u_1) \!-\! (Y_{s_1}\star J_{N})^2(u_1)\!\right) \!\left( (Y_{s_2}\star J_{{N_\delta }})^2(u_2) \!-\! (Y_{s_2}\star J_{N})^2(u_2)\right) \end{aligned}$$

such that

$$\begin{aligned}&\left| \hat{\mathbf{E}}_{\varepsilon }\! \left( (Y_{s_1}\star J_{{N_\delta }})^2(u_1) - (Y_{s_1}\star J_{N})^2(u_1)\right) \left( (Y_{s_2}\star J_{{N_\delta }})^2(u_2) - (Y_{s_2}\star J_{N})^2(u_2)\right) \right| ^2\\&\quad \le \hat{f}(\Vert J_N\Vert ^2,\Vert J_{{N_\delta }}\Vert ^2) \end{aligned}$$

for all \(\varepsilon \le 1,\,0\le s_1,s_2\le T\) and \(u_1,u_2\in \mathbb{R }\). Hence, by dominated convergence, it follows that

$$\begin{aligned}&\int _0^T\mathrm{d}t\,\mathbf{E}_{{\gamma }}\left[ \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( (Y_{s}\star J_{{N_\delta }})^2(u) - (Y_s\star J_{N})^2(u)\right) \,\mathrm{d}u\mathrm{d}s\right] ^2\nonumber \\&\quad \!\le \! \frac{\delta }{2}\,+\! \int _0^T\mathrm{d}t\,\hat{\mathbf{E}}_{\varepsilon } \left[ \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( \!(Y_{s}\star J_{{N_\delta }})^2(u) \!- \!(Y_s\star J_{N})^2(u)\!\right) \,\mathrm{d}u\mathrm{d}s\right] ^2 \end{aligned}$$
(3.9)

if \(\varepsilon =\varepsilon _{N,{N_\delta }}>0\) is chosen to be sufficiently small.

Using (3.6), the last summand can be further estimated by

$$\begin{aligned}8\sum _{i=1}^4 \int _0^T\mathrm{d}t\, \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(t) \right] ^2 + 8\sum _{i=1}^4 \int _0^T\mathrm{d}t\, \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_\delta }^{G^\prime ,i}(t) \right] ^2 \end{aligned}$$

where

$$\begin{aligned} \sum _{i=1}^4 \int _0^\infty \mathrm{d}t\, \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(t) \right] ^2&\le e^T\tilde{C_{J}}\left( \! \frac{\Vert (1\!+\!u^2)G^{\prime \prime \prime }\Vert _\infty ^2}{N^2} \!+\!\frac{\Vert (1\!+\!u^2)G^{\prime \prime }\Vert _\infty ^2}{N} \!+\!\frac{\Vert (1\!+\!u^2)G^\prime \Vert _\infty ^2}{N^{1/3}} \!\right) \\&+\!e^T\tilde{C_{J}}\left( \! \varepsilon ^2 N^{2}\Vert G^{\prime \prime }\Vert _\infty ^2 \!+\!\varepsilon N^{2}\Vert (1\!+\!u^2)G^\prime \Vert _\infty ^2 \!+\!\varepsilon ^2 N^{4}\Vert G^\prime \Vert _1^2\!\right) \end{aligned}$$

by Lemma 3.1. Of course, the same inequality holds if \(N\) is replaced by \({N_\delta }\) such that

$$\begin{aligned}\sum _{i=1}^4 \int _0^\infty \mathrm{d}t\, \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_\delta }^{G^\prime ,i}(t) \right] ^2 \,&\le \,\frac{\delta }{32}+ e^T\tilde{C_{J}}\left( \varepsilon ^2 N_\delta ^{2}\Vert G^{\prime \prime }\Vert _\infty ^2 +\varepsilon N_\delta ^{2}\Vert (1+u^2)G^\prime \Vert _\infty ^2\right. \\&\left. +\varepsilon ^2 N_\delta ^{4}\Vert G^\prime \Vert _1^2 \right) \end{aligned}$$

by the choice of \(N_\delta \) at the beginning of this proof. So, choosing \(\varepsilon =\varepsilon _{N,{N_\delta }}\) small enough that both (3.9) and

$$\begin{aligned} 2\cdot 8e^T\tilde{C_{J}}\left( \varepsilon ^2{N_\delta }^{2}\cdot \Vert G^{\prime \prime }\Vert _\infty ^2 +\varepsilon {N_\delta }^{2}\cdot c_G\Vert G^\prime \Vert _\infty ^2 +\varepsilon ^2{N_\delta }^{4}\cdot \Vert G^\prime \Vert _1^2 \right) \,\le \,\delta /4 \end{aligned}$$

hold yields

$$\begin{aligned}&\int _0^T\mathrm{d}t\,\mathbf{E}_{{\gamma }}\left[ \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( (Y_{s}\star J_{{N_\delta }})^2(u) - (Y_s\star J_{N})^2(u)\right) \,\mathrm{d}u\mathrm{d}s\right] ^2\\&\quad \le \delta + 8e^T\tilde{C_{J}}\left( \frac{\Vert (1+u^2)G^{\prime \prime \prime }\Vert _\infty ^2}{N^2} +\frac{\Vert (1+u^2)G^{\prime \prime }\Vert _\infty ^2}{N} +\frac{\Vert (1+u^2)G^\prime \Vert _\infty ^2}{N^{1/3}} \right) \\&\quad \le \delta +8e^T\tilde{C_{J}}\,N^{-1/3} \sum _{m=1}^3\sup _{u} |(1+u^2)\frac{\partial ^m}{\partial u^m}G(u)|^2. \end{aligned}$$

Repeating the above procedure with respect to \(N_\delta \ge \tilde{N}\) gives the same inequality for \(\tilde{N}\). Hence

$$\begin{aligned}&\int _0^T\mathrm{d}t\,\mathbf{E}_{{\gamma }}\left[ \int _{0}^{t}\int _\mathbb{R }G^\prime (u)\, \left( (Y_{s}\star J_{\tilde{N}})^2(u) - (Y_s\star J_{N})^2(u)\right) \,\mathrm{d}u\mathrm{d}s\right] ^2\\&\quad \le 4\delta +32e^T\tilde{C_{J}}\,N^{-1/3} \sum _{m=1}^3\sup _{u} |(1+u^2)\frac{\partial ^m}{\partial u^m}G(u)|^2 \end{aligned}$$

for arbitrary but fixed \(N,\tilde{N}\) with \(\tilde{N}\ge N\) which finally proves the lemma since \(\delta \) can be made arbitrarily small. \(\square \)

Proof of Proposition 2.5(i)

In this proof the notation \(const\) is used whenever a constant is needed; thus \(const\) can take different values depending on the situation.

Fix \(G\in {\fancyscript{S}}(\mathbb{R })\). By (2.2), there exist a subsequence \((N_k)_{k=1}^\infty \) and a measurable subset \(\mathcal{T }_G\subseteq [0,T]\) with \(\ell (\mathcal{T }_G)=T\) such that

$$\begin{aligned} \lim _{k\rightarrow \infty }\mathbf{E}_{{\gamma }}\left[ \tilde{M}_t^G-\mathfrak{M }_{N_k}(Y)_t^G\right] ^2=0 \end{aligned}$$
(3.10)

for all \(t\in \mathcal{T }_G\). For technical reasons assume \(T\notin \mathcal{T }_G\) and let \(\{t_1,t_2,\dots \}\subseteq \mathcal{T }_G\) be a dense subset of \([0,T]\).

First observe that \(\tilde{M}^G_{t_n}\) is \(\mathcal{F }_{t_n}^Y\)-measurable, \(n=1,2,\dots \), and the key is to show the following \(\mathcal{F }_t^Y\)-martingale property

$$\begin{aligned} \mathbf{E}_{{\gamma }}X[\tilde{M}^G_{t_n}-\tilde{M}^G_{t_{n^\prime }}]=0 \end{aligned}$$

for \(t_{n^\prime },t_n\in \{t_1,t_2,\dots \}\) satisfying \(t_{n^\prime }<t_n\) and an arbitrary random variable \(X\) of the form \(X=f(Y_{s_1}(H_1),\dots ,Y_{s_p}(H_p))\) where \(f:\mathbb{R }^p\rightarrow \mathbb{R }\) is a bounded continuous function, \(H_i\in {\fancyscript{S}}(\mathbb{R })\) and \(0\le s_i\le t_{n^\prime },\,1\le i\le p\). Of course, this martingale property is satisfied if there exists \(const>0\) such that

$$\begin{aligned} \left( \mathbf{E}_{{\gamma }}X[\tilde{M}^G_{t_n}-\tilde{M}^G_{t_{n^\prime }}]\right) ^2 \,\le \,const\cdot \delta \quad \text{for all}\ \delta >0. \end{aligned}$$
(3.11)

In order to prove (3.11), fix an arbitrary \(\delta >0\) and remark that Lemma 3.1 implies

$$\begin{aligned} \int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,1}(t)\right] ^2=O(N^{-1}) \quad \text{ and }\quad \int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,3}(t)\right] ^2=O(N^{-1/3}) \end{aligned}$$

uniformly in \(\varepsilon >0\). Hence, for some \(\tau >0\) satisfying \(t_n+2\tau <T\), one can choose \(k\) big enough such that both

$$\begin{aligned} \ell \left( \left\{ t\in [0,T]: \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,1}(t)\right] ^2 \!+\! \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,3}(t)\right] ^2 \ge \delta \right\} \right) \!\le \!\tau /2 \quad \text{ for } \text{ all } \varepsilon >0 \end{aligned}$$
(3.12)

and

$$\begin{aligned} \mathbf{E}_{{\gamma }}\left[ \tilde{M}^G_{t_n}-\mathfrak{M }_{N_k}(Y)_{t_n}^G\right] ^2 + \mathbf{E}_{{\gamma }}\left[ \tilde{M}^G_{t_{n^\prime }}-\mathfrak{M }_{N_k}(Y)_{t_{n^\prime }}^G \right] ^2 \,<\,\delta \end{aligned}$$
(3.13)

hold true. This \(k=k_\delta \) is chosen and fixed for proving (3.11) in what follows.

Of course, applying Cauchy–Schwarz, (3.13) implies

$$\begin{aligned} \left( \mathbf{E}_{{\gamma }}X[\tilde{M}^G_{t_n}-\tilde{M}^G_{t_{n^\prime }}]\right) ^2 \,\le \,const\left\{ \delta + \left( \mathbf{E}_{{\gamma }}X[\mathfrak{M }_{N_k}(Y)_{t_n}^G-\mathfrak{M }_{N_k} (Y)_{t_{n^\prime }}^G]\right) ^2 \right\} . \end{aligned}$$
(3.14)

Now, substituting the definition of \(\mathfrak{M }_{N_k}\), one obtains that

$$\begin{aligned} \left( \mathbf{E}_{{\gamma }}X[\mathfrak{M }_{N_k}(Y)_{t_n}^G-\mathfrak{M }_{N_k} (Y)_{t_{n^\prime }}^G]\right) ^2 = \left( \mathbf{E}_{{\gamma }}X[Y_{t_n}(G)-Y_{t_{n^\prime }}(G)] \!-\!\int _{t_{n^\prime }}^{t_n} \mathbf{E}_{{\gamma }}XY_s(G^{\prime \prime })\,\mathrm{d}s\right. \\ \left. +{\gamma } \int _{t_{n^\prime }}^{t_n}\int _\mathbb{R }G^\prime (u)\, \mathbf{E}_{{\gamma }}X(Y_s\star J_{N_k})^2(u)\,\mathrm{d}u\mathrm{d}s \right) ^{2} \end{aligned}$$

where

$$\begin{aligned} \mathbf{E}_{{\gamma }}X(Y_s\star J_{N_k})^2(u) \,=\,\lim _{\varepsilon \downarrow 0}\, \hat{\mathbf{E}}_{\varepsilon }X(Y_s\star J_{N_k})^2(u) \end{aligned}$$

such that

$$\begin{aligned} |\hat{\mathbf{E}}_{\varepsilon }X(Y_s\star J_{N_k})^2(u)|^2 \,\le \, \sup \nolimits _{x\in \mathbb{R }^p} |f(x)|\, \hat{f}(\Vert J_{N_k}\Vert _2^2) \end{aligned}$$

for all \(\varepsilon \le 1,\,s\in [0,T]\) and \(u\in \mathbb{R }\) by Lemma 4.1 in the Appendix. Here \(f\) is the function defining \(X\) while \(\hat{f}\) corresponds to Lemma 4.1 applied to \((Y_s\star J_{N_k})^2(u)\) and does not depend on \(u\). So

$$\begin{aligned} \int _{t_{n^\prime }}^{t_n}\int _\mathbb{R }G^\prime (u)\, \mathbf{E}_{{\gamma }}X(Y_s\star J_{N_k})^2(u)\,\mathrm{d}u\mathrm{d}s \,=\,\lim _{\varepsilon \downarrow 0} \int _{t_{n^\prime }}^{t_n}\int _\mathbb{R }G^\prime (u)\, \hat{\mathbf{E}}_{\varepsilon }X(Y_s\star J_{N_k})^2(u)\,\mathrm{d}u\mathrm{d}s\end{aligned}$$

by dominated convergence and, as similar estimates can be obtained for the remaining but easier terms, one arrives at

$$\begin{aligned}&\left( \mathbf{E}_{{\gamma }}X[\mathfrak{M }_{N_k}(Y)_{t_n}^G-\mathfrak{M }_{N_k} (Y)_{t_{n^\prime }}^G]\right) ^2\\&\quad =\!\lim _{\varepsilon \downarrow 0} \left( \! \mathbf{E}_\varepsilon X^\varepsilon \!\left[ \!Y_{t_n}^\varepsilon (G)\!-\! Y_{t_{n^\prime }}^\varepsilon (G)\!-\! \int _{t_{n^\prime }}^{t_n}\left\{ \! Y_s^\varepsilon (G^{\prime \prime })- {\gamma }\int _\mathbb{R }G^\prime (u) (Y_s^\varepsilon \star J_{N_k})^2(u)\,\mathrm{d}u\!\right\} \mathrm{d}s \!\right] \!\right) ^{2}\\&\quad =\!\lim _{\varepsilon \downarrow 0} \left( \! \mathbf{E}_\varepsilon X^\varepsilon \!\left[ \!M_{t_n}^{G,\varepsilon }\!-\!M_{t_{n^\prime } }^{G,\varepsilon } \!+\!R_\varepsilon ^G(t_n)\!-\!R_\varepsilon ^G(t_{n^\prime }) \!+\!{\gamma }\sum _{i=0}^4 \left( \!R_{\varepsilon ,N_k}^{G^\prime ,i}(t_n)\!-\! R_{\varepsilon ,N_k}^{G^\prime ,i}(t_{n^\prime })\!\right) \!\right] \right) ^{2} \end{aligned}$$

using (3.5) and (3.6) for the last equality and writing \(X^\varepsilon \) as a substitute for \(f(Y_{s_1}^\varepsilon (H_1),\dots ,Y_{s_p}^\varepsilon (H_p))\). Notice that \(\mathbf{E}_\varepsilon X^\varepsilon [M_{t_n}^{G,\varepsilon }-M_{t_{n^\prime }}^{G,\varepsilon }]\) vanishes by the martingale property. So, if \(\varepsilon _0\) is chosen small enough then

$$\begin{aligned}&\left( \mathbf{E}_{{\gamma }}X[\mathfrak{M }_{N_k}(Y)_{t_n}^G-\mathfrak{M }_{N_k} (Y)_{t_{n^\prime }}^G]\right) ^2 \nonumber \\&\quad \le const\left\{ \delta +\sum _{t\in \{t_n,t_{n^\prime }\}} \left( \mathbf{E}_{\varepsilon _0} \left[ R_{\varepsilon _0}^G(t)\right] ^2+\, \sum _{i=0}^4\mathbf{E}_{\varepsilon _0} \left[ R_{\varepsilon _0,N_k}^{G^\prime ,i}(t)\right] ^2\right) \right\} \end{aligned}$$
(3.15)

by Cauchy–Schwarz. Also, choose \(\varepsilon _0\) small enough such that

$$\begin{aligned} \mathbf{E}_{\varepsilon _0} \left[ R_{\varepsilon _0}^G(t_n)\right] ^2+\; \mathbf{E}_{\varepsilon _0} \left[ R_{\varepsilon _0}^G(t_{n^\prime })\right] ^2 <\,\delta \end{aligned}$$

which is possible by (3.3). The next lemma provides estimates for the remaining summands.

Lemma 3.3

Fix \(0\le i\le 4, t\in \{t_n,t_{n^\prime }\}\) and \(\tau >0\) satisfying \(t_n+2\tau <T\). If

$$\begin{aligned} \ell (\{t\in [0,T]: \mathbf{E}_{\varepsilon } \left[ R_{\varepsilon ,N}^{G^\prime ,i}(t)\right] ^2 \ge \delta \}) \le \tau /2 \end{aligned}$$

then there exists \(\tilde{t}\in [t,t+2\tau ]\) such that

$$\begin{aligned} \mathbf{E}_{\varepsilon } \left[ R_{\varepsilon ,N}^{G^{\prime },i}(\tilde{t})\right] ^2 < \delta \quad \text{ and } \quad \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^{\prime },i}(\tilde{t})- R_{\varepsilon ,N}^{G^{\prime },i}(t)\right] ^2 < \delta . \end{aligned}$$

Indeed, observe that if \(\tilde{t}\ge t\) then

$$\begin{aligned} \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(\tilde{t})- R_{\varepsilon ,N}^{G^\prime ,i}(t)\right] ^2 \,=\, \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(\tilde{t}-t)\right] ^2 \end{aligned}$$

by stationarity and the Markov property. Now assume the contrary of the lemma’s assertion, hence

$$\begin{aligned}{}[t,t+2\tau ]&\subseteq \left\{ \tilde{t}\in [t,t+2\tau ]: \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(\tilde{t})\right] ^2 \,\ge \,\delta \right\} \\&\quad \cup \; \left\{ \tilde{t}\in [t,t+2\tau ]: \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(\tilde{t})- R_{\varepsilon ,N}^{G^\prime ,i}(t)\right] ^2 \,\ge \,\delta \right\} \\&=\left\{ \tilde{t}\in [t,t+2\tau ]: \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(\tilde{t})\right] ^2 \,\ge \,\delta \right\} \\&\quad \cup \; \left\{ \tilde{t}\in [t,t+2\tau ]: \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N}^{G^\prime ,i}(\tilde{t}-t)\right] ^2 \,\ge \,\delta \right\} . \end{aligned}$$

Thus, as the Lebesgue measure of each of the two sets on the right-hand side of the last equality is bounded by \(\tau /2\), one obtains \(2\tau \le \tau \), which is a contradiction, proving the lemma.

Next, for fixed \(N_k\), it follows from Lemma 3.1 that

$$\begin{aligned} \int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,2}(t)\right] ^2=O(\varepsilon ) \quad \text{ and }\quad \int _0^T\mathrm{d}t\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,4}(t)\right] ^2=O(\varepsilon ^2) \end{aligned}$$

and, additionally taking into account (3.8), one obtains that

$$\begin{aligned} \ell \left( \left\{ t\in [0,T]: \mathbf{E}_{\varepsilon _1} \left[ R_{\varepsilon _1,N_k}^{G^\prime ,0}(t)\right] ^2 \!+\! \mathbf{E}_{\varepsilon _1} \left[ R_{\varepsilon _1,N_k}^{G^\prime ,2}(t)\right] ^2 \!+\! \mathbf{E}_{\varepsilon _1} \left[ R_{\varepsilon _1,N_k}^{G^\prime ,4}(t)\right] ^2 \!\ge \!\delta \right\} \right) \,\le \,\tau /2 \end{aligned}$$

for a sufficiently small \(\varepsilon _1>0\). Thus, because (3.12) holds for all \(\varepsilon >0\) and so for \(\varepsilon _1\) in particular, one can estimate

$$\begin{aligned} \mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,i}({t})\right] ^2&\le 2\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,i}(\tilde{t})- R_{\varepsilon ,N_k}^{G^\prime ,i}(t)\right] ^2 + 2\,\mathbf{E}_\varepsilon \left[ R_{\varepsilon ,N_k}^{G^\prime ,i}(\tilde{t})\right] ^2\\&\le 2\delta +2\delta \end{aligned}$$

using Lemma 3.3 for each \(i=0,1,2,3,4\) and \(t=t_n,t_{n^\prime }\) where \(\tilde{t}\) of course depends on the chosen \(i\) and \(t\). So, when \(\varepsilon _0\) in (3.15) is replaced by the minimum of \(\varepsilon _0\) and \(\varepsilon _1\), it follows that

$$\begin{aligned} \left( \mathbf{E}_{{\gamma }}X[\mathfrak{M }_{N_k}(Y)_{t_n}^G-\mathfrak{M }_{N_k} (Y)_{t_{n^\prime }}^G]\right) ^2 \,\le \; const\cdot \delta \end{aligned}$$

which, together with (3.14), proves (3.11). Hence \((\tilde{M}_{s_j}^G)_{j=1}^m\) is an \((\mathcal{F }^Y_{s_j})_{j=1}^m\)-martingale for every finite ordered subset \(\{s_1,\dots ,s_m\}\) of \(\{t_1,t_2,\dots \}\).

Now, choose arbitrary \(s,t\in \mathcal{T }_G\) and fix \(a>0\). Without loss of generality one can assume for a moment that \(s,t\) play the role of \(t_{n^\prime },t_n\) chosen in the previous part of this proof. Combining Chebyshev's inequality and (3.13) yields

$$\begin{aligned} \mathbf{P}_{{\gamma }}\left( |\tilde{M}_t^G\!-\!\tilde{M}_s^G|>a\right) \le \frac{const}{a^2}\cdot \delta \!+\!\mathbf{P}_{{\gamma }}\left( |\mathfrak{M }_{N_k}(Y)_t^G\!-\!\mathfrak{M }_{N_k}(Y)_s^G|>a/3\right) \qquad \quad \end{aligned}$$
(3.16)

for the corresponding \(k=k_\delta \). Remark that the set \(\{|\mathfrak{M }_{N_k}(Y)_t^G-\mathfrak{M }_{N_k}(Y)_s^G|>a/3\}\) is open in \(D([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) with respect to the uniform topology and that convergence in \(J_1\) to elements of \(C([0,T];{\fancyscript{S}}^\prime (\mathbb{R }))\) is equivalent to uniform convergence. Thus, by Theorem 2.1(i), the weak convergence of the measures \(\hat{\mathbf{P}}_{\varepsilon },\,\varepsilon \downarrow 0\), implies

$$\begin{aligned} \mathbf{P}_{{\gamma }}\left( |\mathfrak{M }_{N_k}(Y)_t^G\!-\!\mathfrak{M }_{N_k}(Y)_s^G|>a/3\right) \le \underline{\mathrm{lim}}_{\,\varepsilon \downarrow 0}\, \hat{\mathbf{P}}_{\varepsilon } \left( |\mathfrak{M }_{N_k}(Y)_t^G\!-\!\mathfrak{M }_{N_k}(Y)_s^G|>a/3\right) \end{aligned}$$

where the \(\liminf \) on the right-hand side is equal to

$$\begin{aligned}&\underline{\mathrm{lim}}_{\,\varepsilon \downarrow 0} \mathbf{P}_{\varepsilon } \left( \left| Y_t^\varepsilon (G)\!-\!Y_s^\varepsilon (G)\!-\!\!\int _{s}^{t}\left\{ \! Y_r^\varepsilon (G^{\prime \prime })\!-\! {\gamma }\int _\mathbb{R }G^\prime (u) (Y_r^\varepsilon \star J_{N_k})^2(u)\mathrm{d}u\!\right\} \mathrm{d}r \right| >a/3\!\right) \\&\quad =\! \underline{\mathrm{lim}}_{\,\varepsilon \downarrow 0}\, \mathbf{P}_{\varepsilon } \left( \left| M_{t}^{G,\varepsilon }\!-\!M_{s}^{G,\varepsilon } \!+\!R_\varepsilon ^G(t)\!-\!R_\varepsilon ^G(s) \!+\!{\gamma }\sum _{i=0}^4 \left( R_{\varepsilon ,N_k}^{G^\prime ,i}(t)\!-\! R_{\varepsilon ,N_k}^{G^\prime ,i}(s)\!\right) \right| >a/3\!\right) \\&\quad \le \! \overline{\mathrm{lim}}_{\,\varepsilon \downarrow 0} \left( \! \mathbf{P}_{\varepsilon }(|M_{t}^{G,\varepsilon }\!-\!M_{s}^{G,\varepsilon }|\!>\!a/6) \!+\!\frac{36}{a^2} \mathbf{E}_\varepsilon \left[ R_\varepsilon ^G(t)-R_\varepsilon ^G(s) \!+\!{\gamma }\sum _{i=0}^4 \left( R_{\varepsilon ,N_k}^{G^\prime ,i}(t)\right. \right. \right. \\&\qquad \left. \left. \left. \!-\!R_{\varepsilon ,N_k}^{G^\prime ,i}(s)\!\right) \!\right] ^2 \right) \end{aligned}$$

where

$$\begin{aligned} \mathbf{E}_\varepsilon \left[ R_\varepsilon ^G(t)-R_\varepsilon ^G(s) \!+\!{\gamma }\sum _{i=0}^4 \left( R_{\varepsilon ,N_k}^{G^\prime ,i}(t) \!-\!R_{\varepsilon ,N_k}^{G^\prime ,i}(s)\right) \right] ^2 \!\le \!const\cdot \delta \quad \text{ for } \text{ all }\ \varepsilon <\varepsilon _0\wedge \varepsilon _1 \end{aligned}$$

as in the proof of (3.11). Using this to estimate the right-hand side of (3.16) yields

$$\begin{aligned} \mathbf{P}_{{\gamma }}(|\tilde{M}_t^G-\tilde{M}_s^G|>a) \,\le \, \overline{\mathrm{lim}}_{\,\varepsilon \downarrow 0}\, \mathbf{P}_{\varepsilon } (|M_{t}^{G,\varepsilon }-M_{s}^{G,\varepsilon }|>a/6) \end{aligned}$$
(3.17)

since \(\delta \) can be made arbitrarily small.

Now recall that \(s,t\in \mathcal{T }_G\) were arbitrarily chosen and observe that

$$\begin{aligned} \mathbf{P}_{\varepsilon } (|M_{t}^{G,\varepsilon }\!-\!M_{s}^{G,\varepsilon }|\!>\!a/6) \!\le \! \frac{6^4}{a^4} \mathbf{E}_\varepsilon \left| M_{t}^{G,\varepsilon }\!-\!M_{s}^{G,\varepsilon } \right| ^4 \!\le \! \frac{6^4C_{4}}{a^4}\, \mathbf{E}_\varepsilon \left( [M^{G,\varepsilon }]_t-[M^{G,\varepsilon }]_s \right) ^2 \end{aligned}$$

by first applying Chebyshev’s inequality and then the Burkholder–Davis–Gundy inequality with constant \(C_4\). Furthermore, it is known in this context (see [6] for example) that

$$\begin{aligned} \mathbf{E}_\varepsilon \left( [M^{G,\varepsilon }]_t-[M^{G,\varepsilon }]_s \right) ^2 \,\le \, C(T,G)\{\varepsilon ^2+(t-s)^2\}. \end{aligned}$$

Hence, by (3.17), there exists \(const\) depending only on \(T\) and \(G\) such that

$$\begin{aligned} \mathbf{P}_{{\gamma }}(|\tilde{M}_t^G-\tilde{M}_s^G|>a) \,\le \, const\cdot a^{-4}(t-s)^2 \end{aligned}$$
(3.18)

for all \(a>0\) and \(s,t\in \mathcal{T }_G\).
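For clarity, tracing the constant through the last three displays and noting that \(\varepsilon ^2\) vanishes in the limit shows that one admissible choice in (3.18) is \(const=6^4C_4\,C(T,G)\):

$$\begin{aligned} \mathbf{P}_{{\gamma }}(|\tilde{M}_t^G-\tilde{M}_s^G|>a) \,\le \, \overline{\mathrm{lim}}_{\,\varepsilon \downarrow 0}\, \frac{6^4C_{4}}{a^4}\,C(T,G)\{\varepsilon ^2+(t-s)^2\} \,=\,6^4C_{4}\,C(T,G)\,a^{-4}(t-s)^2. \end{aligned}$$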

The next step is to construct a continuous process \((M^G_t)_{t\in [0,T]}\) such that \(\tilde{M}_t^G=M^G_t\) \(\mathbf{P}_{{\gamma }}\)-a.s. for all \(t\in \mathcal{T }_G\). Such a construction can be achieved in almost the same way as the continuous version of a process is constructed in the proof of the Kolmogorov–Chentsov theorem (see [9] for example). As in that proof, it follows from (3.18) that, for a dense subset \(D\) of \([0,T]\), \(\{\tilde{M}^G_t(\omega );t\in D\}\) is uniformly continuous in \(t\) for every \(\omega \in \Omega ^\star \), where \(\Omega ^\star \) is an event in \(\mathcal{F }^Y_T\) of \(\mathbf{P}_{{\gamma }}\)-measure one. In contrast to [9], however, \(D\) should not be the set of dyadic rationals in \([0,T]\) but rather an appropriate subset of the set \(\{t_1,t_2,\dots \}\) chosen at the beginning of this proof. Then one can define \(M^G_t(\omega )=0,\,0\le t\le T\), for \(\omega \notin \Omega ^\star \) while, for \(\omega \in \Omega ^\star \), \(M^G_t(\omega )=\tilde{M}^G_t(\omega )\) if \(t\in D\) and \(M^G_t(\omega )=\lim _n\tilde{M}^G_{s_n}(\omega )\) for some \((s_n)_{n=1}^\infty \subseteq D\) with \(s_n\rightarrow t\) if \(t\in [0,T]\setminus D\). This indeed gives a continuous process.
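To indicate how (3.18) replaces the moment condition of [9], here is a minimal sketch of the Borel–Cantelli step with one admissible, but by no means canonical, choice of exponents: choosing finite grids \(D_n\subseteq D\) of roughly \(T\,2^n\) points with consecutive gaps at most \(2^{-n}\) and thresholds \(a_n=2^{-n/8}\), (3.18) bounds the probability that some pair of neighbouring points of \(D_n\) differs by more than \(a_n\) by

$$\begin{aligned} const\cdot T\,2^{n}\cdot a_n^{-4}\,(2^{-n})^2 \,=\,const\cdot T\,2^{-n/2} \end{aligned}$$

which is summable in \(n\). Hence, by the Borel–Cantelli lemma, there is an event \(\Omega ^\star \) of full \(\mathbf{P}_{{\gamma }}\)-measure on which the usual chaining argument gives uniform continuity of \(\tilde{M}^G\) on \(D\).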

To see that \(\tilde{M}_t^G=M^G_t\) a.s. for all \(t\in \mathcal{T }_G\), one splits \(\mathcal{T }_G\) into \(D\) and \(\mathcal{T }_G\setminus D\). For \(t\in D\) one has \(\tilde{M}_t^G=M^G_t\) a.s. since \(\mathbf{P}_{{\gamma }}(\Omega ^\star )=1\). For \(t\in \mathcal{T }_G\setminus D\) and \((s_n)_{n=1}^\infty \subseteq D\) with \(s_n\rightarrow t\), one has \(M^G_t=\lim _n\tilde{M}^G_{s_n}\) a.s. by construction as well as \(\tilde{M}_t^G=\lim _n\tilde{M}^G_{s_n}\) in probability by (3.18), which together give \(\tilde{M}_t^G=M^G_t\) a.s.

Note that, without loss of generality, both \(\mathcal{T }_G\) and \(D\) can be chosen to contain zero since \(\mathfrak{M }_N(Y)_0^G=0\) for all \(N\) by definition. Moreover, \(D\subseteq \{t_1,t_2,\ldots \}\), each \(\tilde{M}^G_{t_n}\) is \(\mathcal{F }_{t_n}^Y\)-measurable and \(\Omega ^\star \in \mathcal{F }^Y_T\), so \(M^G_t\) is \(\mathcal{F }_t\)-measurable for \(t\in D\). Hence \((M^G_t)_{t\in [0,T]}\) is \(\mathbb{F }\)-adapted since it is continuous, \(0\in D\) and every \(t>0\) can be approximated from the left by elements of the dense set \(D\).

Finally, the \(\mathcal{F }_t^Y\)-martingale property of \(\tilde{M}^G_{t_n},\,n=1,2,\ldots \), shown by (3.11), implies that \((M_{s_j}^G)_{j=1}^m\) is an \((\mathcal{F }_{s_j})_{j=1}^m\)-martingale for every finite ordered subset \(\{s_1,\dots ,s_m\}\) of \(D\). All these martingales are square integrable because \(\mathbf{E}_{{\gamma }}(\tilde{M}^G_{t_n})^2<\infty \) by the choice of \(t_n,\,n=1,2,\dots \), at the beginning of this proof. Now choose an arbitrary positive \(T^\prime <T\). Then \((M^G_t)_{t\in [0,T^\prime ]}\) is a square integrable \(\mathbb{F }\)-martingale: since there must be an element of \(D\) between \(T^\prime \) and \(T\), Doob’s maximal inequality for martingales allows the limits used to construct this process to be interchanged with both expectations and conditional expectations. \(\square \)

Proof of Proposition 2.5(ii)

Fix \(G\in {\fancyscript{S}}(\mathbb{R })\). Since \((M^G_t)_{t\in [0,T]}\) is a continuous \(\mathbb{F }\)-adapted process, it suffices to show that, for every positive \(T^\prime <T\), the restriction of \(M^G\) to \([0,T^\prime ]\) is an \(\mathbb{F }\)-Brownian motion with variance \(2\Vert G^\prime \Vert ^2_2\). So, in what follows, \(T\) is identified with some positive \(T^\prime <T\) to simplify notation.

By Lévy’s characterisation of Brownian motion, it remains to show that \((M^G_t)^2-2\Vert G^\prime \Vert ^2_2\cdot t,\,t\in [0,T]\), is an \(\mathbb{F }\)-martingale. Recalling the construction of \(M^G\) in the proof of Proposition 2.5(i) above, the \(\mathbb{F }\)-martingale property already follows from

$$\begin{aligned} \mathbf{E}_{{\gamma }}X[(M^G_t)^2-2\Vert G^\prime \Vert ^2_2\cdot t -(M^G_{t^\prime })^2+2\Vert G^\prime \Vert ^2_2\cdot t^\prime \,]\,=\,0 \end{aligned}$$

for all \(t,t^\prime \in D\) such that \(t^\prime <t\) and \(X=f(Y_{s_1}(H_1),\dots ,Y_{s_p}(H_p))\) where \(f:\mathbb{R }^p\rightarrow \mathbb{R }\) is a bounded continuous function, \(H_i\in {\fancyscript{S}}(\mathbb{R })\) and \(0\le s_i\le t^\prime ,\,1\le i\le p\). Again this is verified by showing that

$$\begin{aligned} \left( \mathbf{E}_{{\gamma }}X[(M^G_t)^2-2\Vert G^\prime \Vert ^2_2\cdot t -(M^G_{t^\prime })^2+2\Vert G^\prime \Vert ^2_2\cdot t^\prime ]\right) ^2 \,\le \,const\cdot \delta \quad \text{ for } \text{ all } \delta >0\nonumber \\ \end{aligned}$$
(3.19)

for some \(const>0\). So fix \(t,t^\prime \in D\) such that \(t^\prime <t\) and observe that

$$\begin{aligned}&\left( \mathbf{E}_{{\gamma }}X[(M^G_t)^2-2\Vert G^\prime \Vert ^2_2\cdot t -(M^G_{t^\prime })^2+2\Vert G^\prime \Vert ^2_2\cdot t^\prime \,]\right) ^2\\&\quad \le const\left\{ \delta + \left( \mathbf{E}_{{\gamma }}X[(\mathfrak{M }_{N_k}(Y)_{t}^G)^2-(\mathfrak{M }_{N_k} (Y)_{t^\prime }^G)^2 -2\Vert G^\prime \Vert ^2_2\cdot (t-t^\prime )]\right) ^2 \right\} \end{aligned}$$

for some sufficiently large \(k=k_\delta \) since the inequality

$$\begin{aligned} \left( \mathbf{E}_{{\gamma }}[(M^G_t)^2-(\mathfrak{M }_{N_k}(Y)_{t}^G)^2]\right) ^2 \,&\le \, 2\,\mathbf{E}_{{\gamma }}[M^G_t-\mathfrak{M }_{N_k}(Y)_{t}^G]^2 \left( \mathbf{E}_{{\gamma }}(M^G_t)^2\right. \\&\left. +\mathbf{E}_{{\gamma }}(\mathfrak{M }_{N_k}(Y)_{t}^G)^2\right) \end{aligned}$$

holds for \(t\) and \(t^\prime \) by the Cauchy–Schwarz inequality. Furthermore, using Lemma 4.1 in the Appendix as in the proof of Lemma 2.4 gives

$$\begin{aligned}&\mathbf{E}_{{\gamma }}X(\mathfrak{M }_{N_k}(Y)_{t}^G)^2\\&\quad \!=\! \lim _{\varepsilon \downarrow 0}\, \hat{\mathbf{E}}_\varepsilon X\left( Y_{t}(G)\!-\!Y_0(G) \!-\!\int _{0}^{t}\left\{ Y_s(G^{\prime \prime })\!-\! {\gamma }\int _\mathbb{R }G^\prime (u)\, (Y_s\star J_{N_k})^2(u)\,\mathrm{d}u\right\} \mathrm{d}s \right) ^2 \end{aligned}$$

which simplifies to

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \mathbf{E}_\varepsilon X^\varepsilon \left( \!M_{t}^{G,\varepsilon } \!+\!R_\varepsilon ^G(t) \!+\!{\gamma }\!\sum _{i=0}^4 R_{\varepsilon ,N_k}^{G^\prime ,i}(t) \!\right) ^2 \;\text{ with }\;\,\,\! X^\varepsilon \!=\! f(Y_{s_1}^\varepsilon (H_1),\dots ,Y_{s_p}^\varepsilon (H_p)). \end{aligned}$$

As the same equality holds for \(t^\prime \), one obtains that

$$\begin{aligned}&\left( \mathbf{E}_{{\gamma }}X[(M^G_t)^2-2\Vert G^\prime \Vert ^2_2\cdot t -(M^G_{t^\prime })^2+2\Vert G^\prime \Vert ^2_2\cdot t^\prime \,]\right) ^2\\&\quad \le const\left\{ \delta + \left( \mathbf{E}_\varepsilon X^\varepsilon [(M^{G,\varepsilon }_t)^2 -(M^{G,\varepsilon }_{t^\prime })^2 -2\Vert G^\prime \Vert ^2_2\cdot (t-t^\prime )]\right) ^2 \right\} \end{aligned}$$

for a sufficiently small \(\varepsilon >0\) by estimating

$$\begin{aligned} \mathbf{E}_\varepsilon M^{G,\varepsilon }_t \left( R_\varepsilon ^G(t) +{\gamma }\sum _{i=0}^4 R_{\varepsilon ,N_k}^{G^\prime ,i}(t) \right) \quad \text{ and }\quad \mathbf{E}_\varepsilon \left( R_\varepsilon ^G(t) +{\gamma }\sum _{i=0}^4 R_{\varepsilon ,N_k}^{G^\prime ,i}(t) \right) ^2 \end{aligned}$$

for \(t\) and \(t^\prime \) using the bounds derived in the proof of Proposition 2.5(i).
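In more detail, abbreviating \(R=R_\varepsilon ^G(t)+{\gamma }\sum _{i=0}^4 R_{\varepsilon ,N_k}^{G^\prime ,i}(t)\), expanding the square and using the boundedness of \(f\) together with the Cauchy–Schwarz inequality gives

$$\begin{aligned} \left| \mathbf{E}_\varepsilon X^\varepsilon \left( M_{t}^{G,\varepsilon }+R\right) ^2 -\mathbf{E}_\varepsilon X^\varepsilon \left( M_{t}^{G,\varepsilon }\right) ^2\right| \,\le \,\Vert f\Vert _\infty \left( 2\sqrt{\mathbf{E}_\varepsilon \left( M_{t}^{G,\varepsilon }\right) ^2} \sqrt{\mathbf{E}_\varepsilon R^2} +\mathbf{E}_\varepsilon R^2\right) \end{aligned}$$

and the right-hand side is small for small \(\varepsilon \) and large \(k\) by the bounds on \(\mathbf{E}_\varepsilon R^2\) just mentioned.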

Now \((M^{G,\varepsilon }_t)^2,\,t\ge 0\), is a submartingale in the class \((\mathfrak D L)\). Hence, by the Doob–Meyer decomposition, \((M^{G,\varepsilon }_t)^2-\langle M^{G,\varepsilon }\rangle _t,\,t\ge 0\), is a martingale so that

$$\begin{aligned}&\left( \mathbf{E}_{{\gamma }}X[(M^G_t)^2-2\Vert G^\prime \Vert ^2_2\cdot t -(M^G_{t^\prime })^2+2\Vert G^\prime \Vert ^2_2\cdot t^\prime \,]\right) ^2\\&\quad \le const\left\{ \delta + \left( \mathbf{E}_\varepsilon X^\varepsilon [\langle M^{G,\varepsilon }\rangle _t -\langle M^{G,\varepsilon }\rangle _{t^\prime } -2\Vert G^\prime \Vert ^2_2\cdot (t-t^\prime )]\right) ^2 \right\} . \end{aligned}$$

Finally, \(\mathbf{E}_\varepsilon [\langle M^{G,\varepsilon }\rangle _t -\langle M^{G,\varepsilon }\rangle _{t^\prime } -2\Vert G^\prime \Vert ^2_2\cdot (t-t^\prime )]^2\) can be made arbitrarily small by choosing a suitable \(\varepsilon \), which proves (3.19) and hence part (ii) of Proposition 2.5. This last argument is standard and can be found in [6], for example. \(\square \)

Proof of Proposition 2.5(iii)

Fix \(a_1,a_2\in \mathbb{R }\) and \(G_1,G_2\in {\fancyscript{S}}(\mathbb{R })\). The desired linearity holds for \(\mathfrak{M }_N(Y)\) and, because \(\mathfrak{M }_N(Y)\) is an approximation of \((\tilde{M}^G)_{G\in {\fancyscript{S}}(\mathbb{R })}\), the linearity should also hold for the version \(({M}^G)_{G\in {\fancyscript{S}}(\mathbb{R })}\) of \((\tilde{M}^G)_{G\in {\fancyscript{S}}(\mathbb{R })}\). But some care has to be taken since the construction of \(({M}^G)_{G\in {\fancyscript{S}}(\mathbb{R })}\) depends on the choice of subsequences and, also, since the notion of version used in this paper is special in that not all \(t\in [0,T]\) are covered.

By Proposition 2.5(i), there are sets \(\mathcal{T }_{G_1},\mathcal{T }_{G_2},\mathcal{T }_{a_1G_1+a_2G_2}\) corresponding to the processes \(M^{G_1}\), \(M^{G_2}\), \(M^{a_1 G_1+a_2 G_2}\). First one wants to find a set

$$\begin{aligned} \mathcal{T }\,\subseteq \, \mathcal{T }_{G_1}\cap \mathcal{T }_{G_2} \cap \mathcal{T }_{a_1G_1+a_2G_2} \quad \text{ dense } \text{ in }\,\, [0,T] \end{aligned}$$

such that

$$\begin{aligned} \tilde{M}_t^{a_1 G_1+a_2 G_2}\,=\,a_1\tilde{M}_t^{G_1}+a_2\tilde{M}_t^{G_2} \quad \text{ a.s. }\quad \text{ for }\,\, t\in \mathcal{T }. \end{aligned}$$
(3.20)

This is achieved by successively choosing subsequences as follows. Using (3.10), there is a subsequence \((k_j)_{j=1}^\infty \) of \((N_k)_{k=1}^\infty \) such that

$$\begin{aligned} \tilde{M}_t^{a_1 G_1+a_2 G_2}\,=\, \lim _{j\rightarrow \infty }\left( a_1\mathfrak{M }_{k_j}(Y)_t^{G_1} +a_2\mathfrak{M }_{k_j}(Y)_t^{G_2} \right) \quad \text{ a.s. }\quad \text{ for }\ t\in \mathcal{T }_{a_1G_1+a_2G_2}.\nonumber \\ \end{aligned}$$
(3.21)

Now, using (2.2) with respect to \((k_j)_{j=1}^\infty \) and \(G_1\), there is a measurable subset \(\mathcal{T }^\prime _{G_1}\subseteq [0,T]\) with \(\ell (\mathcal{T }^\prime _{G_1})=T\) and a subsequence \((j_l)_{l=1}^\infty \) of \((k_j)_{j=1}^\infty \) such that

$$\begin{aligned} \tilde{M}_t^{G_1}\,=\, \lim _{l\rightarrow \infty }\mathfrak{M }_{j_l}(Y)_t^{G_1} \quad \text{ a.s. }\quad \text{ for }\ t\in \mathcal{T }^\prime _{G_1}. \end{aligned}$$

Notice that \(\mathcal{T }^\prime _{G_1}\) and \(\mathcal{T }_{G_1}\) can be different. Similarly, one obtains that

$$\begin{aligned} \tilde{M}_t^{G_2}\,=\, \lim _{m\rightarrow \infty }\mathfrak{M }_{l_m}(Y)_t^{G_2} \quad \text{ a.s. }\quad \text{ for }\ t\in \mathcal{T }^\prime _{G_2} \end{aligned}$$

where \((l_m)_{m=1}^\infty \) is a subsequence of \((j_l)_{l=1}^\infty \) and \(\ell (\mathcal{T }^\prime _{G_2})=T\). Then

$$\begin{aligned} \mathcal{T }\,\mathop {=}\limits ^{\text{ def }}\, \mathcal{T }_{G_1}\cap \mathcal{T }_{G_2} \cap \mathcal{T }_{a_1G_1+a_2G_2} \cap \mathcal{T }^\prime _{G_1}\cap \mathcal{T }^\prime _{G_2} \,\subseteq \, \mathcal{T }_{G_1}\cap \mathcal{T }_{G_2}\cap \mathcal{T }_{a_1G_1+a_2G_2} \end{aligned}$$

and \(\mathcal{T }\) is dense in \([0,T]\) because \(\ell (\mathcal{T })=T\). Furthermore, using the subsequence \((l_m)_{m=1}^\infty \) instead of \((k_j)_{j=1}^\infty \) in (3.21) yields (3.20).

But, by Proposition 2.5(i), (3.20) is equivalent to

$$\begin{aligned} M_t^{a_1 G_1+a_2 G_2}\,=\,a_1 M_t^{G_1}+a_2 M_t^{G_2} \quad \text{ a.s. }\quad \text{ for }\ t\in \mathcal{T } \end{aligned}$$

which proves part (iii) of Proposition 2.5 because the processes \(M^{a_1 G_1+a_2 G_2},\,M^{G_1}, M^{G_2}\) are continuous. \(\square \)

Proof of Proposition 2.5(iv)

Remark that part (iv) would not follow from part (ii) alone; combined with part (iii), however, it is straightforward to check both the Gaussian distribution and the covariance structure of the process \(M_t^G\) indexed by \(t\in [0,T]\) and \(G\in {\fancyscript{S}}(\mathbb{R })\). Of course, it follows from the covariance structure that the index set of the process can be extended to \(t\in [0,T]\) and absolutely continuous functions \(G\) on \(\mathbb{R }\) with density \(G^\prime \in L^2(\mathbb{R })\) without changing the underlying probability space. Hence

$$\begin{aligned} \tilde{B}(t,u)\,=\, M_t^{G_u}/\sqrt{2}\,,\quad t\in [0,T],\;u\in \mathbb{R }, \end{aligned}$$

is properly defined using test functions \(G_u(\tilde{u}),\,\tilde{u}\in \mathbb{R }\), given by

$$\begin{aligned} G_u(\tilde{u})\,=\,\begin{cases} 0\vee (u\wedge \tilde{u}), & u\ge 0,\\ 0\wedge (u\vee \tilde{u}), & u<0. \end{cases} \end{aligned}$$

Obviously, \(\tilde{B}(t,u),\,t\in [0,T],\,u\in \mathbb{R }\), is a centred Gaussian process on \((D([0,T];{\fancyscript{S}}^\prime (\mathbb{R })),\mathcal{F }^Y_T,\mathbf{P}_{{\gamma }})\) with covariance \(\mathbf{E}_{{\gamma }}\tilde{B}(t,u)\tilde{B}(t^\prime ,u^\prime ) =(t\wedge t^\prime )(|u|\wedge |u^\prime |)\) if \(u,u^\prime \) have the same sign and vanishing covariance otherwise.
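Indeed, for \(u\ge 0\) one has \(G_u^\prime =\mathbf{1}_{(0,u)}\) and, for \(u<0\), \(G_u^\prime =\mathbf{1}_{(u,0)}\), so that parts (ii) and (iii) together with polarisation yield

$$\begin{aligned} \mathbf{E}_{{\gamma }}\tilde{B}(t,u)\tilde{B}(t^\prime ,u^\prime ) \,=\,(t\wedge t^\prime )\int _\mathbb{R }G_u^\prime (\tilde{u})\, G_{u^\prime }^\prime (\tilde{u})\,\mathrm{d}\tilde{u} \,=\,(t\wedge t^\prime )(|u|\wedge |u^\prime |) \end{aligned}$$

if \(u,u^\prime \) have the same sign, the integral, and hence the covariance, vanishing otherwise.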

So, as in the proof of the Kolmogorov–Chentsov theorem, one can construct a version \(B(t,u)\) of \(\tilde{B}(t,u)\) on the same probability space which is continuous in \(t\) and \(u\) and hence is a Brownian sheet. By standard theory on random linear functionals, see [11] for a good reference, there is an \({\fancyscript{S}}^\prime (\mathbb{R })\)-valued version of the process \(M_t^G\) which is of course indistinguishable from

$$\begin{aligned} \sqrt{2}\int _\mathbb{R }B(t,u)G^{\prime \prime }(u)\,\mathrm{d}u, \quad t\in [0,T],\;G\in {\fancyscript{S}}(\mathbb{R }), \end{aligned}$$

finally proving part (iv) of Proposition 2.5. \(\square \)
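As a consistency check, the variance of this representation indeed agrees with part (ii): the covariance of the Brownian sheet \(B\) and two integrations by parts give, for \(u^\prime \ge 0\),

$$\begin{aligned} \int _0^\infty (u\wedge u^\prime )\,G^{\prime \prime }(u)\,\mathrm{d}u \,=\,G(0)-G(u^\prime ) \quad \text{ and } \text{ hence }\quad \mathbf{E}_{{\gamma }}\left( \sqrt{2}\int _\mathbb{R }B(t,u)\,G^{\prime \prime }(u)\,\mathrm{d}u\right) ^2 \,=\,2t\,\Vert G^\prime \Vert ^2_2 \end{aligned}$$

after treating the negative half-line in the same way.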