Optimal rate of convergence for stochastic Burgers-type equations


Abstract

Recently, a solution theory for one-dimensional stochastic PDEs of Burgers type driven by space-time white noise was developed. In particular, it was shown that natural numerical approximations of these equations converge and that their convergence rate in the uniform topology is arbitrarily close to \(\frac{1}{6}\). In the present article we improve this result in the case of additive noise by proving that the optimal rate of convergence is arbitrarily close to \(\frac{1}{2}\).

Keywords

Burgers equation; Approximations; Rough paths

1 Introduction

The goal of this article is to study numerical approximations of stochastic PDEs of Burgers type on the circle \(\mathbb {T} = {\mathbb {R}}/ (2\pi {\mathbb {Z}})\) given by
$$\begin{aligned} du = \left[ \nu \Delta u + F(u) + G(u) \partial _{x} u \right] dt + \sigma d W(t), \quad u(0) = u^{0}. \end{aligned}$$
(1.1)
Here, \(u : {\mathbb {R}}_+ \times \mathbb {T} \times \Omega \rightarrow {\mathbb {R}}^n\), where \(\left( \Omega , \mathcal {F}, \mathbb {P} \right) \) is a probability space, \(\Delta =\partial ^2_x\) is the Laplace operator on the circle \(\mathbb {T}\), the derivative \(\partial _x\) is understood in the sense of distributions, the function \(F:{\mathbb {R}}^n \rightarrow {\mathbb {R}}^n\) is of class \(\mathcal {C}^1\), the function \(G:{\mathbb {R}}^n \rightarrow {\mathbb {R}}^{n \times n}\) is of class \(\mathcal {C}^{\infty }\), and \(\nu , \sigma \in {\mathbb {R}}_+\) are positive constants. Finally, W is an \(L^2\)-cylindrical Wiener process [6], i.e. Eq. (1.1) is driven by space-time white noise. The product appearing in the term \(G(u) \partial _{x} u\) is matrix-vector multiplication.
The difficulty in dealing with (1.1) comes from the nonlinearity \(G(u) \partial _{x} u\) and is caused by the low space-time regularity of the driving noise. Indeed, it is well-known that the pairing
$$\begin{aligned} \mathcal {C}^{\alpha } \times \mathcal {C}^\beta \ni (v,u) \mapsto v\, \partial _x u \end{aligned}$$
is well defined if and only if \(\alpha + \beta > 1\) (see Appendix 1; [2]). On the other hand, one expects solutions to (1.1) to have the spatial regularity of the solution of the linearised equation
$$\begin{aligned} dX(t) = \nu \Delta X dt + \sigma d W(t). \end{aligned}$$
(1.2)
For any fixed time \(t > 0\), the solution to the stochastic heat equation (1.2) has Hölder regularity \(\alpha < \frac{1}{2}\), but is not \(\frac{1}{2}\)-Hölder continuous (see [6, 17, 29]). This implies in particular that the product \(G(X) \partial _x X\) is not well-defined in this case, and it is not a priori clear how to define a solution to Eq. (1.1).

In the case \(G \equiv 0\) this problem does not occur, of course. Equations of this type and their numerical approximations have been studied extensively, and the results can be found in [15, 16]. Moreover, it was shown in [5] that the optimal rate of uniform convergence in this case is \(\frac{1}{2} - \kappa \), for every \(\kappa > 0\), as the spatial discretisation tends to zero.

For non-zero G, the difficulty can easily be overcome in the gradient case, i.e. when \(G = \nabla \mathcal {G}\) for some smooth function \(\mathcal {G} : {\mathbb {R}}^n \rightarrow {\mathbb {R}}^n\). In this case, postulating the chain rule, the nonlinear term can be rewritten as
$$\begin{aligned} G\left( u(t,x)\right) \partial _{x} u(t,x) = \partial _{x} \mathcal {G}\left( u(t,x)\right) , \end{aligned}$$
(1.3)
which is a well-defined distribution as soon as u is continuous. The existence and uniqueness results in the gradient case can be found in [7, 14]. In the article [1], the finite difference scheme was studied for the case \(G(u)=u\), and \(L^2\)-convergence was shown with rate \(\gamma \), for every \(\gamma < \frac{1}{2}\). The same rate of convergence was obtained in [3] in the \(L^\infty \) topology for Galerkin approximations.
For a general sufficiently smooth function G, a notion of solution was given in [18]. The key idea of the approach was to test the nonlinearity with a smooth test function \(\varphi \) and to formally rewrite it as
$$\begin{aligned} \int _{-\pi }^{\pi } \varphi (x) G\left( u(t,x)\right) \partial _{x} u(t,x)\, dx = \int _{-\pi }^{\pi } \varphi (x) G\left( u(t,x)\right) d_x u(t,x). \end{aligned}$$
(1.4)
As stated above, we expect u to behave locally like the solution to the linearised equation (1.2). It was shown in [18] that the latter can be viewed in a canonical way as a process with values in a space of rough paths. This correctly suggests that the theory of controlled rough paths [11, 12] can be used to make pathwise sense of the integral (1.4). The quantity (1.4) is uniquely defined only up to a choice of the iterated integral representing the integral of u with respect to itself. Different choices of the iterated integral therefore yield different solutions, much like the choice between Itô and Stratonovich stochastic integrals in the theory of SDEs. In the present situation, however, there is a unique choice of iterated integral which respects the symmetry of the linearised equation under the substitution \(x \mapsto -x\), and this corresponds to the “Stratonovich solution”. This natural choice is also the one for which the chain rule (1.3) holds in the particular case when G is a gradient.

Using the rough path approach, numerical approximations to (1.1) in the gradient case without using the chain rule were studied in [19]. It was shown that the corresponding approximate solutions converge in suitable Sobolev spaces to a limit which solves (1.1) with an additional correction term, which can be computed explicitly. This term is an analogue to the Itô-Stratonovich correction term in the classical theory of SDEs.

In [20], the solution theory was extended to Burgers-type equations with multiplicative noise (i.e. when the multiplier of the noise term is a nonlinear local function \(\theta (u)\) of the solution). Analysis of numerical schemes approximating the equation in the multiplicative case was performed in [21], where the appearance of a correction term was observed and the rate of convergence in the uniform topology was shown to be of order \(\frac{1}{6}-\kappa \), for every \(\kappa > 0\).

In this article, we prove that in the case of additive noise the rate of convergence in the supremum norm is \(\frac{1}{2}-\kappa \), for every \(\kappa > 0\). It actually turns out to be technically advantageous to consider convergence in Hölder spaces with Hölder exponent very close to zero. The main difference from [21] is that, to approximate the rough integral (1.4), we cannot use the classical theory of controlled rough paths, which applies only in Hölder spaces of regularity in \(\left( \frac{1}{3}, \frac{1}{2}\right] \). To show convergence in Hölder spaces of lower regularity, we use the results of [12], which generalise the theory of controlled rough paths to functions of any positive regularity.

1.1 Assumptions and statement of the main result

As before we assume that \(F \in \mathcal {C}^1\) and \(G \in \mathcal {C}^{\infty }\) in (1.1). For \(\varepsilon > 0\) we consider the approximate stochastic PDEs on the circle \(\mathbb {T}\) given by
$$\begin{aligned} d u_{\varepsilon } = \left[ \nu \Delta _{\varepsilon } u_{\varepsilon } + F(u_{\varepsilon }) + G(u_{\varepsilon }) D_{\varepsilon } u_{\varepsilon } \right] dt + \sigma H_{\varepsilon }dW, \quad u_{\varepsilon }(0) = u^{0}_{\varepsilon }. \end{aligned}$$
(1.5)
Here, the operators \(\Delta _{\varepsilon }\), \(D_{\varepsilon }\) and \(H_{\varepsilon }\) are defined as Fourier multipliers providing approximations of \(\Delta \), \(\partial _{x}\) and the identity operator respectively, and are given by
$$\begin{aligned} \widehat{\Delta _{\varepsilon } u}(k) = -k^{2} f(\varepsilon k) \widehat{u}(k), \quad \widehat{D_{\varepsilon } u}(k) = i k g(\varepsilon k) \widehat{u}(k), \quad \widehat{H_{\varepsilon } W}(k) = h(\varepsilon k) \widehat{W}(k). \end{aligned}$$
Below we provide the assumptions on the functions f, g and h. We start with the assumptions on f.

Assumption 1

The function \(f: {\mathbb {R}}\rightarrow (0,\infty ]\) is even, satisfies \(f(0)=1\), is continuously differentiable on the interval \([-\delta , \delta ]\) for some \(\delta > 0\), and there exists \(c_{f} \in (0,1)\) such that \(f \ge c_{f}\).

Furthermore, the functions \(b_t\) given by \(b_{t}(x) := \exp \left( -x^{2}f(x)t \right) \) are uniformly bounded in \(t > 0\) in the bounded variation norm, i.e. \(\sup _{t > 0} |b_{t}|_{\mathrm {BV}} < \infty \).

Our next assumption concerns g, which defines the approximation to the spatial derivative.

Assumption 2

There exists a signed Borel measure \(\mu \) on \({\mathbb {R}}\) such that \(\int _{{\mathbb {R}}} e^{ikx} \mu (dx) = ik g(k)\), and such that
$$\begin{aligned} \mu ({\mathbb {R}})=0,\quad |\mu |({\mathbb {R}}) < \infty ,\quad \int _{{\mathbb {R}}} x \mu (dx) = 1. \end{aligned}$$
Moreover, the measure \(\mu \) has finite moments of all orders, i.e. \(\int _{{\mathbb {R}}} |x|^{k} |\mu |(dx) < \infty \) for every integer \(k \ge 1\).
In particular, the approximate derivative can be expressed as
$$\begin{aligned} \left( D_{\varepsilon }u\right) (x)= \frac{1}{\varepsilon } \int _{{\mathbb {R}}} u(x+\varepsilon y) \mu (dy), \end{aligned}$$
where we identify \(u: \mathbb {T} \rightarrow {\mathbb {R}}\) with its periodic extension to all of \({\mathbb {R}}\). Our last assumption concerns the function h, which defines the approximation of the noise.
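For example (an illustration on our part, not taken from the article), the central difference \(\left( D_{\varepsilon }u\right) (x)= \big (u(x+\varepsilon ) - u(x-\varepsilon )\big )/(2\varepsilon )\) corresponds to \(\mu = \frac{1}{2}(\delta _{1} - \delta _{-1})\) and \(g(k) = \sin (k)/k\), and Assumption 2 is readily checked. The sketch below (naive \(O(N^2)\) DFT, pure Python, all names ours) verifies on \(u = \sin \) that the Fourier-multiplier definition of \(D_{\varepsilon }\) and the measure representation agree; both give \(\cos (x)\,\sin (\varepsilon )/\varepsilon \) on the grid.

```python
import cmath
import math

def dft_coeffs(u):
    """Fourier coefficients of u sampled at x_j = 2*pi*j/N (naive O(N^2) DFT)."""
    N = len(u)
    return [sum(u[j] * cmath.exp(-2j * math.pi * k * j / N) for j in range(N)) / N
            for k in range(N)]

def d_eps_multiplier(u, eps):
    """D_eps via its Fourier symbol i*k*g(eps*k), with g(q) = sin(q)/q,
    the symbol of the central finite difference."""
    N = len(u)
    u_hat = dft_coeffs(u)
    ks = list(range(N // 2)) + list(range(-N // 2, 0))  # aliased wave numbers
    g = lambda q: math.sin(q) / q if q != 0.0 else 1.0
    out = []
    for j in range(N):
        x = 2 * math.pi * j / N
        s = sum(1j * k * g(eps * k) * u_hat[i] * cmath.exp(1j * k * x)
                for i, k in enumerate(ks))
        out.append(s.real)
    return out

def d_eps_measure(u, m):
    """D_eps via mu = (delta_1 - delta_{-1})/2: a central difference with
    shift eps = m*h, where h is the grid spacing."""
    N = len(u)
    h = 2 * math.pi / N
    return [(u[(j + m) % N] - u[(j - m) % N]) / (2 * m * h) for j in range(N)]

N, m = 64, 2
eps = m * 2 * math.pi / N
u = [math.sin(2 * math.pi * j / N) for j in range(N)]
a = d_eps_multiplier(u, eps)
b = d_eps_measure(u, m)
# Both lists should equal cos(x_j) * sin(eps) / eps at the grid points x_j.
```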

Assumption 3

The function h is even, bounded, and such that \(h^{2}/f\) and \(h/(f + 1)\) are of bounded variation. Furthermore, h is twice differentiable at the origin with \(h(0)=1\) and \(h'(0)=0\).

The difference from the assumptions in [21] is that in Assumption 2 we require all moments of the measure \(\mu \) to be finite, and in Assumption 3 we require the function \(h/(f + 1)\) to be of bounded variation. We use the latter assumption in Lemma 4.1 in order to apply the bounds on lifted rough paths obtained in [10]. All the examples of approximations provided in [19] (including finite difference schemes) still satisfy our assumptions.

Let \(\bar{u}\) be the solution to the modified equation (1.1),
$$\begin{aligned} d \bar{u} = \left[ \nu \Delta \bar{u} + \bar{F}(\bar{u}) + G(\bar{u}) \partial _{x} \bar{u} \right] dt + \sigma dW,\quad \bar{u}(0) = u^{0}, \end{aligned}$$
(1.6)
where, for \(i = 1, \ldots , n\), the modified reaction term is given by
$$\begin{aligned} \bar{F}_{i} := F_{i} - \Lambda ~\mathrm {div} G_{i}. \end{aligned}$$
Here, we denote by \(G_i\) the ith row of the matrix-valued function G, and the correction constant is defined by
$$\begin{aligned} \Lambda := \frac{\sigma ^2}{2\pi \nu } \int _{{\mathbb {R}}_{+}} \int _{{\mathbb {R}}} \frac{(1-\cos (yt)) h^{2}(t)}{t^{2} f(t)} \mu (dy) dt. \end{aligned}$$
It follows from the assumptions that \(\Lambda \) is well-defined. Indeed, by Assumption 3 the function \(h^{2}/f\) is of bounded variation and hence bounded, and by Assumption 2 the measure \(\mu \) has a finite second moment; together these yield the existence of \(\Lambda \).
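As an illustration (ours, with hypothetical choices not taken from the article), take the one-sided difference measure \(\mu = \delta _{1} - \delta _{0}\) (a forward difference) together with the exact multipliers \(f = h = 1\). Then \(\int _{{\mathbb {R}}} (1-\cos (yt))\, \mu (dy) = 1 - \cos t\), so \(\Lambda = \frac{\sigma ^2}{2\pi \nu } \int _0^\infty \frac{1-\cos t}{t^2}\, dt = \frac{\sigma ^2}{4\nu }\), since the integral equals \(\frac{\pi }{2}\). By contrast, for a symmetric scheme such as the central difference, \(\mu \) is odd while \(y \mapsto 1-\cos (yt)\) is even, so the inner integral vanishes and \(\Lambda = 0\), in line with the symmetric (“Stratonovich”) choice discussed in the introduction. A quick numerical check of the integral:

```python
import math

def integrand(t):
    """(1 - cos t) / t^2, extended continuously by its limit 1/2 at t = 0."""
    return 0.5 if t == 0.0 else (1.0 - math.cos(t)) / (t * t)

def simpson(f, a, b, n):
    """Composite Simpson rule with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

T = 200.0
inner = simpson(integrand, 0.0, T, 40000) + 1.0 / T  # 1/T estimates the tail
# inner should be close to pi/2, giving Lambda = sigma^2 / (4 * nu)
# for this hypothetical forward-difference scheme with f = h = 1.
```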
As we do not assume boundedness of the functions F and G, and their derivatives, the solution can blow up in finite time. To overcome this difficulty we consider solutions only up to some stopping times. More precisely, for any \(K > 0\) we define the stopping times
$$\begin{aligned} \tau ^*_K := \inf \{ t > 0: \Vert \bar{u}(t)\Vert _{\mathcal {C}^0} \ge K \}, \end{aligned}$$
where \(\Vert \cdot \Vert _{\mathcal {C}^{0}}\) is the supremum norm. The blow-up time of \(\bar{u}\) is then defined as \(\tau ^* := \lim _{K \uparrow \infty } \tau ^*_K\) in probability.

Our main theorem gives the convergence rate of the solutions of the approximate equations (1.5) to the solution of the modified equation (1.6).

Theorem 1.1

Assume that for every \(0 < \eta < \frac{1}{2}\) the initial values satisfy
$$\begin{aligned} \mathbb {E} \Vert u^{0}\Vert _{\mathcal {C}^{\eta }} < \infty , \quad \sup _{0 < \varepsilon \le 1} \mathbb {E} \Vert u_{\varepsilon }^{0}\Vert _{\mathcal {C}^{\eta }} < \infty . \end{aligned}$$
Moreover, we assume that for every sufficiently small \(\alpha > 0\) the following estimate holds:
$$\begin{aligned} \mathbb {E} \Vert u^{0} - u_{\varepsilon }^{0} \Vert _{\mathcal {C}^{\alpha }} \lesssim \varepsilon ^{\frac{1}{2} - \alpha }, \end{aligned}$$
where the proportionality constant can depend on \(\alpha \). Then, for every such \(\alpha > 0\), there exists a sequence of stopping times \(\tau _{\varepsilon }\) satisfying \(\lim _{\varepsilon \downarrow 0} \tau _{\varepsilon } = \tau ^*\) in probability, such that the following convergence holds
$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \mathbb {P}\Big [\sup _{0 \le t \le \tau _{\varepsilon }} \Vert \bar{u}(t) - u_{\varepsilon }(t) \Vert _{\mathcal {C}^{0}} \ge \varepsilon ^{\frac{1}{2} - \alpha }\Big ] = 0. \end{aligned}$$

Remark 1.2

The rate of convergence obtained in [21] was “almost” \(\frac{1}{6}\), in the sense that it is \(\frac{1}{6} - \kappa \) for every \(\kappa > 0\). To improve this result we consider convergence of the solutions in Hölder spaces of regularity close to zero. This approach creates difficulties when working with the rough integrals (1.4). In fact, the bounds on the rough integrals, in particular in [21, Lemma 5.3], hold only in the Hölder spaces \(\mathcal {C}^\alpha \) with \(\alpha \in \left( \frac{1}{3}, \frac{1}{2}\right) \), and the norms explode as \(\alpha \) approaches \(\frac{1}{3}\). To obtain reasonable bounds in Hölder spaces of lower regularity, we have to include in the definition of the rough integrals higher-order iterated integrals of the controlling process X. In [21] it was enough to consider only the iterated integrals of order two. In general, the smaller \(\alpha \) is in Theorem 1.1, the more iterated integrals we have to include to define the rough integral (1.4) (see Sect. 2 for more details).

If the function G is only of class \(\mathcal {C}^p\) for some \(p \ge 3\), we can consider the iterated integrals of X only up to the order \(p-1\) (see Sect. 4.1). As a consequence, the argument in the proof of Theorem 1.1 gives the rate of convergence only “almost” \(\frac{1}{2} - \frac{1}{p}\). This is precisely the rate of convergence obtained in [21], where p was taken to be 3.

Remark 1.3

By changing the time variable and the functions in (1.1) by a constant multiplier, we can obtain an equivalent equation with \(\nu = 1\). Moreover, we can assume \(\sigma = 1\). In what follows we only consider these values of the constants.

1.2 Structure of the article

In Sect. 2 we review the theories of rough paths and controlled rough paths. Section 3 is devoted to the results obtained in [18]. In particular, here we provide a notion of solution and the existence and uniqueness results for the Burgers type equations with additive noise. In Sect. 4 we define the rough integrals and formulate the mild solution to the approximate equation (1.5) in a way appropriate for working in the Hölder spaces of low regularity. The proof of Theorem 1.1 is provided in Sect. 5. The following sections give bounds on the corresponding terms in the equations (1.6) and (1.5): in Sects. 6 and 7 we consider the reaction terms and Sect. 8 is devoted to the terms involving the rough integrals. In Appendix 1 we prove a Kolmogorov-like criterion for distribution-valued processes. Appendix 2 provides regularity properties of the heat semigroup and its approximate counterpart on the Hölder spaces.

1.3 Spaces, norms and notation

Throughout this article, we denote by \(\mathcal {C}^0\) the space of continuous functions on the circle \(\mathbb {T}\) endowed with the supremum norm.

For functions \(X: {\mathbb {R}}\rightarrow {\mathbb {R}}^n\) (or \({\mathbb {R}}^{n \times n}\)) and \(R: {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^n\) (or \({\mathbb {R}}^{n \times n}\)), such that R vanishes on the diagonal, we define the respective Hölder seminorms with parameter \(\alpha \in (0,1)\):
$$\begin{aligned} \Vert X \Vert _{\alpha } := \sup _{x \ne y} \frac{|X(x) - X(y)|}{|x - y|^{\alpha }}, \quad \Vert R \Vert _{\alpha } := \sup _{x \ne y} \frac{|R(x,y)|}{|x - y|^{\alpha }}. \end{aligned}$$
By \(\mathcal {C}^\alpha \) and \(\mathcal {B}^\alpha \) respectively we denote the spaces of functions for which these seminorms are finite. Then \(\mathcal {C}^\alpha \) endowed with the norm \(\Vert \cdot \Vert _{\mathcal {C}^\alpha } = \Vert \cdot \Vert _{\mathcal {C}^0} + \Vert \cdot \Vert _{\alpha }\) is a Banach space. \(\mathcal {B}^\alpha \) is a Banach space endowed with \(\Vert \cdot \Vert _{\mathcal {B}^\alpha } = \Vert \cdot \Vert _{\alpha }\).
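For intuition, the seminorm \(\Vert X \Vert _{\alpha }\) can be estimated on a finite grid by maximising over pairs of sample points; for instance, \(X(x) = x\) on [0, 1] has \(\Vert X\Vert _{1/2} = 1\), attained at the endpoints. A brute-force sketch (illustrative only, names are ours):

```python
def holder_seminorm(xs, values, alpha):
    """Brute-force estimate of sup_{x != y} |X(x) - X(y)| / |x - y|^alpha
    over a finite set of sample points xs with X(xs[i]) = values[i]."""
    best = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            best = max(best, abs(values[i] - values[j]) / abs(xs[i] - xs[j]) ** alpha)
    return best

xs = [i / 100.0 for i in range(101)]
lin = holder_seminorm(xs, xs, 0.5)                 # X(x) = x on [0, 1]
const = holder_seminorm(xs, [1.0] * len(xs), 0.5)  # a constant function
```

A grid estimate is always a lower bound for the true supremum; here the maximising pair (0, 1) lies on the grid, so the value 1 is exact for the linear function, while the constant has vanishing seminorm.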

The Hölder space \(\mathcal {C}^\alpha \) of regularity \(\alpha \ge 1\) consists of \(\lfloor \alpha \rfloor \) times continuously differentiable functions whose \(\lfloor \alpha \rfloor \)-th derivative is \((\alpha - \lfloor \alpha \rfloor )\)-Hölder continuous. For \(\alpha < 0\) we denote by \(\mathcal {C}^\alpha \) the Besov space \(\mathcal {B}^\alpha _{\infty , \infty }\) (see Appendix 1 for the definition).

We also define space-time Hölder norms, i.e. for some \(T > 0\) and functions \(X:[0,T] \times \mathbb {T} \rightarrow {\mathbb {R}}^n\) (or \({\mathbb {R}}^{n \times n}\)) and \(R:[0,T] \times \mathbb {T}^2 \rightarrow {\mathbb {R}}^n\) (or \({\mathbb {R}}^{n \times n}\)), any \(\alpha \in {\mathbb {R}}\) and any \(\beta > 0\) we define
$$\begin{aligned} \Vert X \Vert _{\mathcal {C}^{\alpha }_{T}} := \sup _{s \in [0,T]} \Vert X(s) \Vert _{\mathcal {C}^{\alpha }},\quad \Vert R \Vert _{\mathcal {B}^{\beta }_{T}} := \sup _{s \in [0,T]} \Vert R(s) \Vert _{\mathcal {B}^{\beta }}. \end{aligned}$$
(1.7)
We denote by \(\mathcal {C}^{\alpha }_{T}\) and \(\mathcal {B}^{\alpha }_{T}\) respectively the spaces of functions/distributions for which the norms (1.7) are finite. Furthermore, in order to deal with functions X exhibiting a blow-up with rate \(\eta >0\) near \(t=0\), we define the norm
$$\begin{aligned} \Vert X \Vert _{\mathcal {C}^{\alpha }_{\eta , T}} := \sup _{s \in (0,T]} s^{\eta } \Vert X(s) \Vert _{\mathcal {C}^{\alpha }}. \end{aligned}$$
Similarly to above, we denote by \(\mathcal {C}^{\alpha }_{\eta , T}\) the space of functions/distributions for which this norm is finite.

By \(\Vert \cdot \Vert _{\mathcal {C}^\alpha \rightarrow \mathcal {C}^\beta }\) we denote the operator norm of a linear map acting from the space \(\mathcal {C}^\alpha \) to \(\mathcal {C}^\beta \). When we write \(x \lesssim y\), we mean that there is a constant C, independent of the relevant quantities, such that \(x \le Cy\).

2 Elements of rough path theory

In this section we provide an overview of the theories of rough paths and controlled rough paths. For more information on rough path theory we refer to the original article [23] and to the monographs [8, 9, 24, 25].

One of the aims of rough paths theory is to provide a consistent and robust way of defining the integral
$$\begin{aligned} \int _s^t Y(r) \otimes dX(r), \end{aligned}$$
(2.1)
for processes \(Y, X \in \mathcal {C}^\alpha \) with any Hölder exponent \(\alpha \in \left( 0,\frac{1}{2}\right] \). If \(\alpha > \frac{1}{2}\), then the integral can be defined in Young’s sense [30] as the limit of Riemann sums. If \(\alpha \le \frac{1}{2}\), however, the Riemann sums may diverge (or fail to converge to a limit independent of the partition) and the integral cannot be defined in this way. Given \(X \in \mathcal {C}^\alpha \) with \(\alpha \in \left( 0, \frac{1}{2}\right] \), the theory of (controlled) rough paths allows one to define (2.1) in a consistent way for a certain class of integrands Y. To this end, however, one has to consider not only the processes X and Y, but also suitable additional “higher order” information.
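Young's regime can be made concrete: for a smooth path, the left-point Riemann sums converge, and the chain rule gives \(\int _0^1 X\, dX = \frac{1}{2} X(1)^2\) when \(X(0) = 0\). A minimal sketch (illustrative only, names are ours):

```python
import math

def riemann_integral(Y, X, a, b, n):
    """Left-point Riemann sum for int_a^b Y(r) dX(r) over a uniform partition."""
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    Xs = [X(t) for t in ts]
    return sum(Y(ts[i]) * (Xs[i + 1] - Xs[i]) for i in range(n))

X = math.sin                          # a smooth path with X(0) = 0
approx = riemann_integral(X, X, 0.0, 1.0, 100000)
exact = 0.5 * math.sin(1.0) ** 2      # chain rule: int X dX = X^2 / 2
```

For rougher paths, e.g. Brownian sample paths, exactly these sums fail to converge to a partition-independent limit, which is the phenomenon the rough-path machinery below addresses.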
We fix \(0 < \alpha \le \frac{1}{2}\) and let \(p = \lfloor 1 / \alpha \rfloor \) be the largest integer such that \(p \alpha \le 1\). We then define the p-step truncated tensor algebra
$$\begin{aligned} T^{(p)}\big ({\mathbb {R}}^n\big ) := \bigoplus _{k=0}^{p} \big ({\mathbb {R}}^n\big )^{\otimes k}, \end{aligned}$$
whose basis elements can be labelled by words of length not exceeding p (including the empty word), based on the alphabet \(\mathcal {A} = \{1, \ldots , n\}\). We denote this set of words by \(\mathcal {A}_p\). Then the correspondence \(\mathcal {A}_p \rightarrow T^{(p)}({\mathbb {R}}^n)\) is given by \(w \mapsto e_w\) with \(e_w = e_{w_1} \otimes \ldots \otimes e_{w_k}\), for \(w = w_1 \ldots w_k\) and \(e_{\emptyset } = 1 \in \big ({\mathbb {R}}^n\big )^{\otimes 0}\approx {\mathbb {R}}\), where \(\{e_{i}\}_{i \in \mathcal {A}}\) is the canonical basis of \({\mathbb {R}}^n\).
There is an operation \(\shuffle \), called the shuffle product [27], defined on the free algebra generated by \(\mathcal {A}\). For any two words, the shuffle product is the sum of all the ways of interleaving them which preserve the original order of their letters. For example, if a, b and c are letters from \(\mathcal {A}\), then one has the identity
$$\begin{aligned} ab \shuffle c = abc + acb + cab. \end{aligned}$$
We also define both the shuffle and the concatenation product of two elements from \(T^{(p)}\big ({\mathbb {R}}^n\big )\), i.e. for any two words \(w, \bar{w} \in \mathcal {A}_p\) we define
$$\begin{aligned} e_{w} \shuffle e_{\bar{w}} := e_{w \shuffle \bar{w}}, \qquad e_{w} \otimes e_{\bar{w}} := e_{w \bar{w}}, \end{aligned}$$
if the sums of the lengths of the two words do not exceed p, and \(e_{w} \shuffle e_{\bar{w}} := 0\), \(e_{w} \otimes e_{\bar{w}} := 0\) otherwise. This is extended to all of \(T^{(p)}\big ({\mathbb {R}}^n\big )\) by linearity. With these notations at hand, we give the following definition:

Definition 2.1

A geometric rough path of regularity \(\alpha \in \left( 0, \frac{1}{2}\right] \) is a map \(\mathbf {X} : {\mathbb {R}}^2 \rightarrow T^{(p)}\big ({\mathbb {R}}^n\big )\), where as above \(p = \lfloor 1 / \alpha \rfloor \), such that

  1. \(\langle \mathbf {X}(s,t), e_{w} \shuffle e_{\bar{w}} \rangle = \langle \mathbf {X}(s,t), e_{w} \rangle \, \langle \mathbf {X}(s,t), e_{\bar{w}} \rangle \), for any \(w, \bar{w} \in \mathcal {A}_p\) with \(|w| + |\bar{w}| \le p\);

  2. \(\mathbf {X}(s,t) = \mathbf {X}(s,u) \otimes \mathbf {X}(u,t)\), for any \(s, u, t \in {\mathbb {R}}\);

  3. \(\Vert \langle \mathbf {X}, e_w \rangle \Vert _{\mathcal {B}^{\alpha |w|}} < \infty \), for any word \(w \in \mathcal {A}_p\) of length |w|.
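The combinatorics of the shuffle product in property 1 can be made concrete with a small recursive implementation over words, written as strings over the alphabet (an illustration on our part, not code from the article):

```python
from collections import Counter

def shuffle(w, v):
    """Shuffle product of two words: all interleavings preserving the internal
    order of each word, counted with multiplicity."""
    if not w:
        return Counter([v])
    if not v:
        return Counter([w])
    out = Counter()
    # Either the first letter of w comes first, or the first letter of v does.
    for tail, c in shuffle(w[1:], v).items():
        out[w[0] + tail] += c
    for tail, c in shuffle(w, v[1:]).items():
        out[v[0] + tail] += c
    return out
```

Multiplicities matter, e.g. the shuffle of "a" with itself is twice the word "aa", which is why a `Counter` (multiset) is used rather than a set; the total number of interleavings of words of lengths k and l is \(\binom{k+l}{k}\).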
If we define \(X^i(t) := \langle \mathbf {X}(0,t), e_i \rangle \) for any \(i \in \mathcal {A}\), then the components of \(\mathbf {X}(s,t)\) of higher order should be thought of as defining the iterated integrals
$$\begin{aligned} \langle \mathbf {X}(s,t), e_w \rangle =: \int _{s}^{t} \ldots \int _{s}^{r_2} dX^{w_1}(r_1) \ldots dX^{w_k}(r_k), \end{aligned}$$
(2.2)
for \(w=w_1 \ldots w_k \in \mathcal {A}_p\). Of course, the integrals on the right hand side of (2.2) are not defined, as mentioned at the start of this section. Hence, for a given rough path \(\mathbf {X}\), the left hand side of (2.2) serves as the definition of the right hand side.
The conditions in Definition 2.1 ensure that the quantities (2.2) behave like iterated integrals. In particular, if X is a smooth function and we define \(\mathbf {X}\) by (2.2) in Young’s sense, then \(\mathbf {X}\) satisfies the conditions of Definition 2.1, as was shown in [4]. For example, taking the words \(w = i\) and \(\bar{w} = j\), for any two letters \(i, j \in \mathcal {A}\), the first property gives
$$\begin{aligned} \langle \mathbf {X}(s,t), e_i \otimes e_j \rangle + \langle \mathbf {X}(s,t), e_j \otimes e_i \rangle = X^i(s,t) X^j(s,t), \end{aligned}$$
where we write \(X^i(s,t) := X^i(t) - X^i(s)\). This is the usual integration by parts formula. The second condition of Definition 2.1 provides the additivity property of the integral over consecutive intervals.
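For a smooth path, both sides of this identity can be computed numerically: approximating the second-order entries of (2.2) by Riemann sums with \(X^1 = \cos \) and \(X^2 = \sin \) reproduces the integration by parts formula up to an \(O(1/n)\) discretisation error. A minimal sketch (our own illustration):

```python
import math

def iterated2(Xa, Xb, s, t, n):
    """Riemann approximation of int_s^t (Xa(r) - Xa(s)) dXb(r),
    a level-2 entry of (2.2) for a smooth path."""
    ts = [s + (t - s) * i / n for i in range(n + 1)]
    A = [Xa(r) for r in ts]
    B = [Xb(r) for r in ts]
    return sum((A[i] - A[0]) * (B[i + 1] - B[i]) for i in range(n))

X1, X2 = math.cos, math.sin
s, t, n = 0.2, 1.0, 100000
lhs = iterated2(X1, X2, s, t, n) + iterated2(X2, X1, s, t, n)
rhs = (X1(t) - X1(s)) * (X2(t) - X2(s))   # integration by parts formula
```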
Given an \(\alpha \)-regular rough path \(\mathbf {X}\), we define the following quantity
$$\begin{aligned} \vert \vert \vert \mathbf {X} \vert \vert \vert _{\alpha } := \sum _{w \in \mathcal {A}_p \setminus \{\emptyset \}} \Vert \langle \mathbf {X}, e_w \rangle \Vert _{\mathcal {B}^{\alpha |w|}}. \end{aligned}$$
(2.3)

2.1 Controlled rough paths

The theory of controlled rough paths was introduced in [11] for geometric rough paths of Hölder regularity from \(\big (\frac{1}{3}, \frac{1}{2} \big ]\). In [12], the theory was generalised to rough paths of arbitrary positive regularity.

Definition 2.2

Given \(\alpha \in \left( 0, \frac{1}{2}\right] \), \(p = \lfloor 1/\alpha \rfloor \), a geometric rough path \(\mathbf {X}\) of regularity \(\alpha \), and a function \(Y: {\mathbb {R}}\rightarrow \big (T^{(p-1)}\big ({\mathbb {R}}^n\big )\big )^*\) (the dual of the truncated tensor algebra), we say that Y is controlled by \(\mathbf {X}\) if, for every word \(w \in \mathcal {A}_{p-1}\), one has the bound
$$\begin{aligned} |\langle Y(t), e_w \rangle - \langle Y(s), \mathbf {X}(s,t) \otimes e_w \rangle | \le C |t-s|^{(p-|w|)\alpha }, \end{aligned}$$
for some constant \(C > 0\).
An alternative statement of Definition 2.2 is that for every word \(w \in \mathcal {A}_{p-1}\) there exists a function \(R_Y^w \in \mathcal {B}^{(p-|w|)\alpha }\) such that
$$\begin{aligned} \langle Y(t), e_w \rangle = \sum _{\bar{w} \in \mathcal {A}_{p-|w|-1}} \langle Y(s), e_{\bar{w}} \otimes e_w \rangle \langle \mathbf {X}(s,t), e_{\bar{w}} \rangle + R_Y^w(s,t). \end{aligned}$$
(2.4)
Given an \(\alpha \)-regular geometric rough path \(\mathbf {X}\), we then endow the space of all controlled paths Y with the semi-norm
$$\begin{aligned} \Vert Y \Vert _{\mathcal {C}^\alpha _{\mathbf {X}}} := \sum _{w \in \mathcal {A}_{p-1}} \Vert \langle Y, e_w \rangle \Vert _{\mathcal {C}^\alpha } + \sum _{w \in \mathcal {A}_{p-2}} \Vert R_Y^w \Vert _{\mathcal {B}^{(p-|w|)\alpha }}. \end{aligned}$$
Given a rough path Y controlled by \(\mathbf {X}\), one can define the integral (2.1) by
$$\begin{aligned} \int _s^t \langle Y(r), e_{\emptyset } \rangle \, \mathbf {d}X^{i}(r) := \lim _{|\mathcal {P}| \rightarrow 0} \sum _{[u,v] \in \mathcal {P}} \Xi _{i}(u,v), \end{aligned}$$
(2.5)
where we denoted \(X^i(t) := \langle \mathbf {X}(0,t), e_i \rangle \) for \(i \in \mathcal {A}\), and
$$\begin{aligned} \Xi _{i}(u,v) := \sum _{w \in \mathcal {A}_{p-1}} \langle Y(u), e_w \rangle \langle \mathbf {X}(u,v), e_w \otimes e_i \rangle \;. \end{aligned}$$
(2.6)
Here, the limit is taken over a sequence of partitions \(\mathcal {P}\) of the interval [st], whose diameters \(|\mathcal {P}|\) tend to 0. It was proved in [12, Theorem 8.5] that the rough integral (2.5) is well defined, i.e. the limit in (2.5) exists and is independent of the choice of partitions \(\mathcal {P}\).
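The role of the compensator terms in (2.6) can be seen already for a smooth scalar path with \(p = 2\): taking Y = X (controlled by itself with unit Gubinelli derivative) and the canonical second-order term \(\frac{1}{2}\big (X(v)-X(u)\big )^2\), the compensated sums telescope and are exact on every partition, while plain left-point sums are not. A minimal sketch (hypothetical names, not code from the article):

```python
import math

def compensated_sum(X, partition):
    """Sum over the partition of Xi(u, v) = X(u)(X(v)-X(u)) + (X(v)-X(u))^2 / 2,
    the level-2 compensated Riemann sum for int X dX."""
    total = 0.0
    for u, v in zip(partition, partition[1:]):
        dX = X(v) - X(u)
        total += X(u) * dX + 0.5 * dX * dX
    return total

X = math.sin
P = [0.0, 0.3, 0.55, 0.9, 1.0]     # a deliberately coarse partition
rough = compensated_sum(X, P)      # telescopes to (X(1)^2 - X(0)^2) / 2 exactly
plain = sum(X(u) * (X(v) - X(u)) for u, v in zip(P, P[1:]))
```

Here each summand equals \(\frac{1}{2}\big (X(v)^2 - X(u)^2\big )\) identically, which is why the compensated sum is already exact on four subintervals, whereas the uncompensated sum carries an error of order the partition mesh.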
If every coordinate \(Y^j\) of the process Y is controlled by \(\mathbf {X}\), then we denote the rough integral of Y with respect to X by
$$\begin{aligned} \Big ( \int _s^t Y(r) \otimes \mathbf {d}X(r) \Big )_{ji} := \int _s^t \langle Y^{j}(r), e_{\emptyset } \rangle \, \mathbf {d}X^{i}(r). \end{aligned}$$
We use the boldface symbol \(\mathbf {d}\) for the rough integral in (2.5) as a reminder of the abuse of notation: the integral depends not only on \(X^i\) and \(Y^j\), but on much more information contained in \(\mathbf {X}\) and Y. In the following proposition we provide several bounds on the rough integrals.

Proposition 2.3

Let Y be controlled by a geometric rough path \(\mathbf {X}\) of regularity \(\alpha \in \left( 0, \frac{1}{2}\right] \). Then there is a constant C, independent of Y and \(\mathbf {X}\), such that
$$\begin{aligned} \Big | \int _s^t \langle Y(r), e_{\emptyset } \rangle \, \mathbf {d}X^{i}(r) - \Xi _{i}(s,t) \Big | \le C \Vert Y \Vert _{\mathcal {C}^\alpha _{\mathbf {X}}} \vert \vert \vert \mathbf {X} \vert \vert \vert _{\alpha } |t-s|^{(p+1)\alpha }. \end{aligned}$$
Moreover, if \(\bar{Y}\) is controlled by another rough path \(\bar{\mathbf{X}}\) of regularity \(\alpha \), then there is a constant C, independent of \(\mathbf {X}\), \(\bar{\mathbf{X}}\), Y and \(\bar{Y}\), such that
$$\begin{aligned} \Big | \int _s^t \langle Y(r), e_{\emptyset } \rangle \, \mathbf {d}X^{i}(r) - \int _s^t \langle \bar{Y}(r), e_{\emptyset } \rangle \, \mathbf {d}\bar{X}^{i}(r) - \Xi _{i}(s,t) + \bar{\Xi }_{i}(s,t) \Big | \le C \big ( \Vert Y, \bar{Y} \Vert _{\mathcal {C}^\alpha _{\mathbf {X}, \bar{\mathbf{X}}}} \vert \vert \vert \mathbf {X} \vert \vert \vert _{\alpha } + \Vert \bar{Y} \Vert _{\mathcal {C}^\alpha _{\bar{\mathbf{X}}}} \vert \vert \vert \mathbf {X} - \bar{\mathbf{X}} \vert \vert \vert _{\alpha } \big ) |t-s|^{(p+1)\alpha }, \end{aligned}$$
where we have used the quantity
$$\begin{aligned} \Vert Y, \bar{Y} \Vert _{\mathcal {C}^\alpha _{\mathbf {X}, \bar{\mathbf{X}}}} := \sum _{w \in \mathcal {A}_{p-1}} \Vert \langle Y, e_w \rangle - \langle \bar{Y}, e_w \rangle \Vert _{\mathcal {C}^\alpha } + \sum _{w \in \mathcal {A}_{p-2}} \Vert R_Y^w - R_{\bar{Y}}^w \Vert _{\mathcal {B}^{(p-|w|)\alpha }}. \end{aligned}$$

Proof

The bounds follow from [12, Theorem 8.5, Proposition 6.1]. \(\square \)

Remark 2.4

The notation \(\vert \vert \vert \mathbf {X} - \bar{\mathbf{X}} \vert \vert \vert _{\alpha }\) is a slight abuse of notation since \(\mathbf {X} - \bar{\mathbf{X}}\) is not a rough path in general. The definition (2.3) does however make perfect sense for the difference.

In fact, the article [12] gives more precise bounds on the rough integrals than those provided in Proposition 2.3, but we prefer to state them in this form for the sake of conciseness.

3 Definition and well-posedness of the solution

Let us now give a short discussion of what we mean by “solutions” to (1.1), as introduced in [18]. The idea is to find a process X such that \(v = u - X\) is of class \(\mathcal {C}^1\) (in space), so that the definition of the integral (1.4) boils down to defining the integral
$$\begin{aligned} \int _{-\pi }^{\pi } \varphi (x) G\left( u(t,x)\right) d_x X(t,x). \end{aligned}$$
If we have a canonical way of lifting X to a rough path \(\mathbf {X}\), this integral can be interpreted in the sense of rough paths.
A natural choice for X is the solution to the linear stochastic heat equation. In order to get nice properties for this process, we build it in a slightly different way from [18]. First, we define the stationary solution to the modified SPDE on the circle \(\mathbb {T}\),
$$\begin{aligned} d Y = \Delta Y dt + \Pi dW, \end{aligned}$$
(3.1)
where \(\Pi \) denotes the orthogonal projection in \(L^{2}\) onto the space of functions with zero mean. Second, we define the process
$$\begin{aligned} X(t,x) := Y(t,x) + \frac{1}{\sqrt{2\pi }} w^0(t), \end{aligned}$$
(3.2)
where \(w^0\) is the zeroth Fourier mode of W.

Remark 3.1

We need to use \(\Pi \) in (3.1) in order to obtain a stationary solution. In [18], the author used instead the stationary solution to \(d X = \Delta X dt - X dt + dW\) as a reference path. Our choice of X was used in [21] and does not change the results of [18].

The following lemma shows that there is a natural way to extend X to a rough path.

Lemma 3.2

For every \(\frac{1}{3} < \alpha < \frac{1}{2}\), the stochastic process X can be canonically lifted to a process \(\mathbf {X}: {\mathbb {R}}\times \mathbb {T}^2 \rightarrow T^{(2)}\big ({\mathbb {R}}^n\big )\), such that for every fixed \(t \in {\mathbb {R}}\), the process \(\mathbf {X}(t)\) is a geometric \(\alpha \)-rough path.

The term “canonically” means that for a large class of natural approximations of the process X by smooth Gaussian processes \(X_\varepsilon \), the iterated integrals of \(X_\varepsilon \), defined by (2.2), converge in \(L^2\) to the corresponding elements of \(\mathbf {X}\) (see [9] for a precise definition and the proof). Denote by \(S_t = e^{t\Delta }\) the heat semigroup, which is given by convolution on the circle with the heat kernel
$$\begin{aligned} p_{t}(x) = \frac{1}{\sqrt{2 \pi }} \sum _{k \in {\mathbb {Z}}} e^{-t k^{2}} e^{ikx}. \end{aligned}$$
(3.3)
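As a sanity check on the normalisation in (3.3), only the \(k = 0\) mode survives integration over the circle, so \(\int _{-\pi }^{\pi } p_{t}(x)\, dx = \sqrt{2\pi }\). A short numerical verification (illustrative Python on our part, truncating the series at \(|k| \le 50\)):

```python
import math

def heat_kernel(t, x, K=50):
    """Truncated Fourier series (3.3):
    p_t(x) = (1 / sqrt(2*pi)) * sum_k exp(-t k^2) exp(ikx), summed over |k| <= K."""
    s = 1.0 + 2.0 * sum(math.exp(-t * k * k) * math.cos(k * x) for k in range(1, K + 1))
    return s / math.sqrt(2.0 * math.pi)

t, N = 0.1, 400
h = 2.0 * math.pi / N
# Trapezoid rule on a periodic function reduces to a plain Riemann sum.
mass = h * sum(heat_kernel(t, -math.pi + j * h) for j in range(N))
```

The kernel is also even in x, reflecting the \(x \mapsto -x\) symmetry used to single out the “Stratonovich” choice of iterated integral.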
Assuming that the rough path-valued process \(\mathbf {X}\) is given, we then define solutions to (1.1) as follows:

Definition 3.3

Setting \(U(t) := S_t \left( u(0) - X(0) \right) \), a stochastic process u is a mild solution to the Eq. (1.1) if the process \(v(t) := u(t) - X(t) - U(t)\) belongs to \(\mathcal {C}^{1}_T\) and the identity
$$\begin{aligned} v(t,x)= & {} \int _0^t S_{t-s} \left( F(u(s)) + G(u(s)) \partial _x (v(s) + U(s))\right) (x) \,ds \nonumber \\&+\, \int _0^t S_{t-s} \partial _x Z(s) (x) \,ds. \end{aligned}$$
(3.4)
holds almost surely. Here, we write for brevity \(u(t) = v(t) + X(t) + U(t)\), and the process Z(s, x) is the rough integral
$$\begin{aligned} Z(s,x) := \int _{-\pi }^{x} G\big (u(s,y)\big )\, \mathbf {d}_{y} X(s,y), \end{aligned}$$
(3.5)
whose derivative we consider in the sense of distributions.

Remark 3.4

In [18], the last integral in (3.4) was defined by
$$\begin{aligned} \int _0^t \int _{-\pi }^{\pi } p_{t-s}(x-y)\, G\big (u(s,y)\big )\, \mathbf {d}_{y} X(s,y)\, ds, \end{aligned}$$
i.e. the rough integration in space was performed directly against the heat kernel. As noticed in [21], the notion of solution in Definition 3.3 is more convenient, as it simplifies the treatment of the rough integral. This change does not affect the existence and uniqueness results of [18], and the resulting solutions are the same.
For our convenience we rewrite the mild formulation of (1.6) as
$$\begin{aligned} \bar{v}(t) = \Phi ^{\bar{v}}(t) + \Psi ^{\bar{v}}(t) + \Xi ^{\bar{v}}(t) - \Upsilon ^{\bar{v}}(t), \end{aligned}$$
(3.6)
where we have set
$$\begin{aligned} \Phi ^{\bar{v}}(t)&:= \int _{0}^{t} S_{t-s} F(\bar{u}(s))\,ds,\nonumber \\ \Upsilon ^{\bar{v}}(t)_i&:= \Lambda \int _{0}^{t} S_{t-s}~ \mathrm {div} G_{i}(\bar{u}(s))\,ds, \nonumber \\ \Psi ^{\bar{v}}(t)&:= \int _{0}^{t} S_{t-s} G(\bar{u}(s)) \partial _x ({\bar{v}}(s) + U(s)) \,ds, \nonumber \\ \Xi ^{\bar{v}}(t)&:= \int _{0}^{t} S_{t-s} \partial _{x}Z(s)\,ds = \int _{0}^{t} \partial _{x}\big (S_{t-s} Z(s)\big )\,ds, \end{aligned}$$
(3.7)
and as before \(\bar{u} = \bar{v} + X + U\) and \(U(t) = S_t(u^0 - X(0))\). Although the two terms \(\Phi ^{\bar{v}}\) and \(\Upsilon ^{\bar{v}}\) are of the same type, we give them different names, since they will arise in completely different ways from the approximation.

3.1 Existence and uniqueness results

The next theorem provides the well-posedness result for mild solutions of Eq. (1.1).

Theorem 3.5

Let us assume that \(u^0 \in \mathcal {C}^\beta \) for some \(\frac{1}{3} < \beta < \frac{1}{2}\). Furthermore, let \(F \in \mathcal {C}^1\) and \(G \in \mathcal {C}^{3}\). Then for almost every realisation of the driving noise, there is \(T > 0\) such that there exists a unique mild solution to (1.1) on the interval [0, T] taking values in \(\mathcal {C}\big ([0,T], \mathcal {C}^\beta (\mathbb {T})\big )\). If, moreover, F, G and all their derivatives are bounded, then the solution is global (i.e. \(T = \infty \)).

Proof

The proof can be done by performing a classical Picard iteration for v given by (3.4) on the space \(\mathcal {C}^{1}_{T}\) for some \(T \le 1\), see [18]. \(\square \)

Remark 3.6

The argument of [18, Theorem 3.7] also works in the space \(\mathcal {C}^{1+\alpha }_{\alpha /2, T}\), for any \(\alpha \in \left[ 0, \frac{1}{2}\right) \). Hence, the actual regularity of v(t) is \(1 + \alpha \) rather than 1. This fact will be used in Sect. 6 to estimate how close the approximate derivative of v is to \(\partial _x v\).

4 Solutions of the approximate equations

In this section we rewrite the mild solution to the approximate equation (1.5) in a way convenient for working in Hölder spaces of low regularity. In particular, we define the higher-order iterated integrals of the controlling process.

Similarly to (3.1) and (3.2) we define the stationary process \(Y_{\varepsilon }\) and \(X_\varepsilon \) by
$$\begin{aligned} d Y_{\varepsilon } = \Delta _{\varepsilon } Y_{\varepsilon } dt + \Pi H_{\varepsilon } d W, \qquad X_\varepsilon (t,x) := Y_\varepsilon (t,x) + \frac{1}{\sqrt{2\pi }} w_0(t), \end{aligned}$$
(4.1)
where \(w_0\) is the zeroth Fourier mode of W. Moreover, we define the approximate semigroup \(S^{(\varepsilon )}_t = e^{t\Delta _{\varepsilon }}\) generated by the approximate Laplacian and given by convolution on the circle \(\mathbb {T}\) with the approximate heat kernel
$$\begin{aligned} p^{(\varepsilon )}_{t}(x) = \frac{1}{\sqrt{2 \pi }} \sum _{k \in {\mathbb {Z}}} e^{-t k^{2} f(\varepsilon k)} e^{ikx}. \end{aligned}$$
(4.2)
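The effect of f in (4.2) can be seen concretely by comparing the approximate kernel with (3.3). In the sketch below, \(f(y) = 1 + y^2\) is a purely illustrative choice (it satisfies \(f(0)=1\) but is not taken from Assumption 1), as are the truncation level and the grid.

```python
import numpy as np

def kernel(t, x, eps, K=400):
    # Truncated series (4.2); f(y) = 1 + y^2 is an illustrative stand-in
    # for the scheme-dependent function f of Assumption 1 (with f(0) = 1).
    k = np.arange(-K, K + 1)
    f = 1.0 + (eps * k) ** 2
    return float(np.real(np.sum(np.exp(-t * k**2 * f + 1j * k * x))) / np.sqrt(2 * np.pi))

xs = np.linspace(-np.pi, np.pi, 201)

def sup_err(eps, t=0.1):
    # Sup-norm distance between the approximate and the exact kernel;
    # eps = 0 recovers (3.3) exactly.
    return max(abs(kernel(t, x, eps) - kernel(t, x, 0.0)) for x in xs)

# The approximate kernel converges to the exact one as eps -> 0.
e_coarse, e_fine = sup_err(0.2), sup_err(0.05)
```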
Furthermore, we define \(U_\varepsilon (t) := S^{(\varepsilon )}_t \left( u_\varepsilon (0) - X_\varepsilon (0) \right) \) and \(v_\varepsilon := u_\varepsilon - X_\varepsilon - U_\varepsilon \). Then the mild version of the approximate equation (1.5) can be rewritten as
$$\begin{aligned} v_{\varepsilon }(t) = \Phi _{\varepsilon }^{v_{\varepsilon }}(t) + \Psi _{\varepsilon }^{v_{\varepsilon }}(t) + \int _{0}^{t} S^{(\varepsilon )}_{t-s}G(u_{\varepsilon }(s))D_{\varepsilon } X_{\varepsilon }(s) \,ds, \end{aligned}$$
(4.3)
where we write for brevity \(u_\varepsilon = v_\varepsilon + X_\varepsilon + U_\varepsilon \), and set
$$\begin{aligned} \Phi _{\varepsilon }^{v_{\varepsilon }}(t)&:= \int _{0}^{t} S^{(\varepsilon )}_{t-s}F(u_{\varepsilon }(s)) \,ds, \nonumber \\ \Psi _{\varepsilon }^{v_\varepsilon }(t)&:= \int _{0}^{t} S^{(\varepsilon )}_{t-s} G(u_\varepsilon (s)) D_\varepsilon \left( v_{\varepsilon }(s) + U_{\varepsilon }(s) \right) ds. \end{aligned}$$
(4.4)
As already mentioned in Sect. 2, the rough integrals are approximated by Riemann-like sums, but these include additional higher-order correction terms. Hence, we cannot expect in general that \(Z(s,x)\), defined in (3.5), is approximated by
$$\begin{aligned} \int _{-\pi }^{x} G(u_\varepsilon (s,y)) D_\varepsilon X_{\varepsilon }(s,y)\, dy, \end{aligned}$$
(4.5)
as \(\varepsilon \downarrow 0\). In order to approximate \(Z(s,x)\), we have to add some extra terms to (4.5). These extra terms give rise to the correction term in the limiting equation mentioned in the introduction. In the rest of this section we construct these missing terms.

4.1 Iterated integrals

In order to use the theory of rough paths with regularities close to zero, we need to build the iterated integrals of arbitrarily high orders of X and \(X_\varepsilon \) with respect to themselves.

The expansion of \(X_{\varepsilon }\) defined in (4.1) in the Fourier basis is given by
$$\begin{aligned} X_{\varepsilon }(t,x)= & {} \frac{1}{\sqrt{2 \pi }} w_0(t) + \frac{1}{\sqrt{2 \pi }} \sum _{k \in {\mathbb {Z}}\backslash \{0\}} \int _{-\infty }^{t} e^{ikx} e^{-k^{2} f(\varepsilon k)(t-s)}h(\varepsilon k)\, dw_{k}(s) \nonumber \\= & {} \frac{1}{\sqrt{2 \pi }} w_0(t) + \frac{1}{\sqrt{\pi }} \sum _{k =1}^\infty \frac{q^{(\varepsilon )}_{k}}{k} \left( \eta ^{(\varepsilon )}_{k} (t) \sin (kx) + \eta ^{(\varepsilon )}_{-k} (t) \cos (kx)\right) . \end{aligned}$$
(4.6)
Here, \(w_{k}\) are \({\mathbb {C}}^{n}\)-valued standard Brownian motions (i.e. real and imaginary parts of every component are independent real-valued Brownian motions so that \(\mathbb {E}|w^{i}_{k}(t)|^{2}=t\)), which are independent up to the constraint \(w_{k}=\bar{w}_{-k}\) ensuring that \(X_{\varepsilon }\) is real-valued. Furthermore, for every fixed \(t \ge 0\), \(\eta ^{(\varepsilon )}_{k}(t)\) are independent \({\mathbb {R}}^{n}\)-valued standard Gaussian random vectors such that
$$\begin{aligned} \mathbb {E}\left[ \eta ^{(\varepsilon )}_{k}(0) \otimes \eta ^{(\varepsilon )}_{k}(t) \right] = e^{-k^2 f(\varepsilon k) t} \mathrm {Id}, \end{aligned}$$
and the coefficients \(q^{(\varepsilon )}_{k}\) are defined by
$$\begin{aligned} q^{(\varepsilon )}_{k} = \frac{h(\varepsilon k)}{\sqrt{f(\varepsilon k)}} \quad \text{ for } \quad k \ge 1. \end{aligned}$$
(4.7)
Similarly, the Fourier expansion of the process X is
$$\begin{aligned} X(t,x) = \frac{1}{\sqrt{2 \pi }} w_0(t) + \frac{1}{\sqrt{\pi }} \sum _{k =1}^\infty \frac{1}{k} \left( \eta _{k} (t) \sin (kx) + \eta _{-k} (t) \cos (kx)\right) , \end{aligned}$$
(4.8)
where \(\eta _{k}(t)\) are independent \({\mathbb {R}}^{n}\)-valued standard Gaussian random vectors such that
$$\begin{aligned} \mathbb {E}\left[ \eta _{k}(0) \otimes \eta _{k}(t) \right] = e^{-k^2 t} \mathrm {Id}. \end{aligned}$$
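The expansion (4.8) makes X easy to simulate at a fixed time. As a sanity check (illustrative only: scalar noise \(n = 1\), zero mode \(w_0\) dropped, truncation level and seed our own), the pointwise variance of the series is \(\frac{1}{\pi }\sum _{k\ge 1} k^{-2} = \pi /6\), independently of x:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_X(x, K=200, n_samples=10000):
    # Truncation of (4.8) at a fixed time t, for scalar noise (n = 1),
    # without the zero mode: at fixed t the eta_k(t) are i.i.d. N(0, 1).
    k = np.arange(1, K + 1)
    eta_s = rng.standard_normal((n_samples, K))   # eta_k(t)
    eta_c = rng.standard_normal((n_samples, K))   # eta_{-k}(t)
    series = eta_s * np.sin(k * x) / k + eta_c * np.cos(k * x) / k
    return series.sum(axis=1) / np.sqrt(np.pi)

# Each mode contributes variance 1/(pi k^2), so the pointwise variance
# of the field is pi/6 up to truncation and Monte Carlo error.
samples = sample_X(0.7)
```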
Furthermore, the random vectors \(\{(\eta ^{(\varepsilon )}_{k}(t), \eta _{k}(t)) : k \in {\mathbb {Z}}\setminus \{0\}\}\) are independent and satisfy
$$\begin{aligned} \mathbb {E}\left[ \eta ^{(\varepsilon )}_{k}(t) \otimes \eta _{k}(t) \right] = \frac{\sqrt{f(\varepsilon k)}}{f(\varepsilon k) + 1} \mathrm {Id} =: \tilde{q}^{(\varepsilon )}_k. \end{aligned}$$
The following lemma provides bounds on the canonical lifts of X(t) and \(X_\varepsilon (t)\) to Gaussian rough paths.

Lemma 4.1

For \(\alpha \in \left( 0,\frac{1}{2}\right) \), \(t \ge 0\) and \(p = \lfloor 1/\alpha \rfloor \), consider the canonical lifts \(\mathbf {X}(t), \mathbf {X}_\varepsilon (t): \mathbb {T}^2 \rightarrow T^{(p)}\big ({\mathbb {R}}^n\big )\) of the processes X(t) and \(X_\varepsilon (t)\) to Gaussian rough paths of regularity \(\alpha \) given by Lemma 3.2.

Furthermore, for any \(\lambda < \frac{1}{2}-\alpha \) and any \(T > 0\) the following bounds hold
$$\begin{aligned} \mathbb {E}\Vert X \Vert _{\mathcal {C}^\alpha _T} \lesssim 1,\quad \mathbb {E}\Vert X - X_\varepsilon \Vert _{\mathcal {C}^\alpha _T} \lesssim \varepsilon ^{\lambda }. \end{aligned}$$
(4.9)
Moreover, for any word \(w \in \mathcal {A}_p\) with \(|w| \ge 2\) we have
$$\begin{aligned} \mathbb {E}\Vert \mathbf {X}^w \Vert _{\mathcal {B}^{|w|\alpha }_T} \lesssim 1,\quad \mathbb {E}\Vert \mathbf {X}^w - \mathbf {X}^w_\varepsilon \Vert _{\mathcal {B}^{|w|\alpha }_T} \lesssim \varepsilon ^{\lambda }, \end{aligned}$$
(4.10)
where we use the notation \(\mathbf {X}^w = \langle \mathbf {X}, e_w \rangle \).

Proof

The proof of (4.9) is provided in [21, Lemma 3.3]. It remains to show that the claimed lifts exist and satisfy the estimates (4.10). To this end, we define, for some \(\kappa >0\), the following sequences
$$\begin{aligned} \beta _k^{(\varepsilon , \kappa )} = \frac{h(\varepsilon k)^2}{k^{\kappa } f(\varepsilon k)},\quad \rho _k^{(\varepsilon , \kappa )} = \frac{h(\varepsilon k)}{k^{\kappa }(f(\varepsilon k) + 1)}, \end{aligned}$$
where \(k \ge 1\). First, for the increments of \(\beta _k^{(\varepsilon , \kappa )}\) we have
$$\begin{aligned}&\left| \beta _{k+1}^{(\varepsilon , \kappa )} - \beta _k^{(\varepsilon , \kappa )}\right| \le \left| \left( q^{(\varepsilon )}_{k+1}\right) ^2\right| \left| (k+1)^{-\kappa } - k^{-\kappa }\right| \\&\quad + \, k^{-\kappa }\left| \left( q^{(\varepsilon )}_{k+1}\right) ^2 - \left( q^{(\varepsilon )}_{k}\right) ^2 \right| \le C k^{-1 - \kappa }, \end{aligned}$$
for some constant \(C > 0\), where \(q^{(\varepsilon )}_{k}\) is defined in (4.7). To get the last inequality we have used the bounds on the functions f and h, provided in Assumptions 1 and 3, and the estimate
$$\begin{aligned} \left| \left( q^{(\varepsilon )}_{k+1}\right) ^2 - \left( q^{(\varepsilon )}_{k}\right) ^2 \right| \le C k^{-1}, \end{aligned}$$
which follows from the bound on the total variation of the function \(h^2/f\), provided by Assumption 3. Second, the convergence \(\beta _k^{(\varepsilon , \kappa )} \log k \rightarrow 0\) holds as \(k \rightarrow \infty \).

Using these properties of \(\beta _k^{(\varepsilon , \kappa )}\), we obtain from [28, Theorem 4] that the series \(\sum _{k=1}^N \beta _k^{(\varepsilon , \kappa )} \cos kx\) converges in \(L^1\) as \(N \rightarrow \infty \), and the \(L^1\)-norm of the limit is independent of \(\varepsilon \), which proves that for any \(\kappa > 0\) the parametrized sequence \(\beta _k^{(\varepsilon , \kappa )}\) is uniformly negligible in \(\varepsilon \in (0,1)\) in the sense of [10, Definition 3.6].

Similarly, using the bound on the total variation of \(h/(f+1)\), which is stated in Assumption 3, we can obtain that for any \(\kappa > 0\) the sequence \(\rho _k^{(\varepsilon , \kappa )}\) is uniformly negligible in \(\varepsilon \in (0,1)\) as well.

Noticing that the coefficients of the Fourier expansions (4.6) and (4.8) satisfy
$$\begin{aligned} \left( \frac{q^{(\varepsilon )}_{k}}{k}\right) ^2 = \frac{\beta _k^{(\varepsilon , \kappa )}}{k^{2 - \kappa }}, \qquad \frac{q^{(\varepsilon )}_{k} \tilde{q}^{(\varepsilon )}_{k}}{k^2} = \frac{\rho _k^{(\varepsilon , \kappa )}}{k^{2 - \kappa }}, \end{aligned}$$
we can apply [10, Theorem 3.14] and obtain that for every t and \(\alpha < \frac{1}{2}\) the processes X(t) and \(X_\varepsilon (t)\) can indeed be lifted to \(\alpha \)-regular rough paths \(\mathbf {X}(t)\) and \(\mathbf {X}_\varepsilon (t)\) respectively, such that for any word \(w \in \mathcal {A}_p\) with \(|w| \ge 2\) the bounds
$$\begin{aligned} \mathbb {E}\Vert \mathbf {X}^w(t) \Vert _{\mathcal {B}^{|w|\alpha }} \lesssim 1, \quad \mathbb {E}\Vert \mathbf {X}^w_\varepsilon (t) \Vert _{\mathcal {B}^{|w|\alpha }} \lesssim 1 \end{aligned}$$
(4.11)
hold uniformly in \(t \in [0,T]\). Furthermore, by [10, Theorem 3.15] we obtain that for all \(\gamma < \frac{1}{2} - \alpha \) and \(\kappa > 0\) small enough,
$$\begin{aligned} \mathbb {E}\Vert \mathbf {X}^w(t) - \mathbf {X}^w_\varepsilon (t) \Vert _{\mathcal {B}^{|w|\alpha }} \lesssim \left( \sup _{x \in \mathbb {T}} \mathbb {E} |X(t,x) - X_\varepsilon (t,x)|^2 \right) ^{\gamma +\kappa } \lesssim \varepsilon ^{\gamma }, \end{aligned}$$
(4.12)
uniformly in \(t \in [0,T]\). The last bound can be shown almost identically to [21, (3.16d)], but taking \(\theta \equiv 1\) and starting the time integration from \(-\infty \).
Now we investigate the temporal regularity of \(\varvec{X}_\varepsilon \). Our aim is to apply [10, Theorem 3.15] to the processes \(\varvec{X}_\varepsilon (s)\) and \(\varvec{X}_\varepsilon (t)\), with \(s, t \in [0,T]\). To this end, let us define \(\tau = |t-s|\) and the parametrized sequence \(\mu ^{(\tau , \varepsilon )}_k = e^{-k^2 f(\varepsilon k) \tau }\). Then, in the same way as at the beginning of the proof and using Assumptions 1 and 3, we obtain that for any \(\kappa > 0\) the sequence \(\beta _k^{(\varepsilon , \kappa )} \mu ^{(\tau , \varepsilon )}_k\) is uniformly negligible in \(\tau > 0\) and \(\varepsilon \in (0,1)\). Hence, [10, Theorem 3.15] yields, for any word \(w \in \mathcal {A}_p\) with \(|w| \ge 2\),
$$\begin{aligned} \mathbb {E}\Vert \mathbf {X}^w_\varepsilon (t) - \mathbf {X}^w_\varepsilon (s) \Vert _{\mathcal {B}^{|w|\alpha }} \lesssim \left( \sup _{x \in \mathbb {T}} \mathbb {E} |X_\varepsilon (s,x) - X_\varepsilon (t,x)|^2 \right) ^\gamma \lesssim |t-s|^{\frac{\gamma }{2}}, \end{aligned}$$
(4.13)
for all \(\gamma < \frac{1}{2} - \alpha \). Here, the last bound can be derived similarly to [21, (3.16a)], but with \(\theta \equiv 1\) and the time integration starting from \(-\infty \). In the same way, we get
$$\begin{aligned} \mathbb {E}\Vert \mathbf {X}^w(t) - \mathbf {X}^w(s) \Vert _{\mathcal {B}^{|w|\alpha }} \lesssim |t-s|^{\frac{\gamma }{2}}. \end{aligned}$$
(4.14)
Applying the Kolmogorov criterion [22] together with the bounds (4.11) and (4.14), we get the first estimate in (4.10).
Now, let us take any word \(w \in \mathcal {A}_p\) with \(|w| \ge 2\). Then, on the one hand, the estimate (4.12) gives
$$\begin{aligned}&\mathbb {E} \Vert \varvec{X}^w(t) - \varvec{X}_\varepsilon ^w(t) - \varvec{X}^w(s) + \varvec{X}_\varepsilon ^w(s) \Vert _{\mathcal {B}^{\alpha |w|}} \\&\quad \le \,\mathbb {E}\Vert \varvec{X}^w(t) - \varvec{X}^w_\varepsilon (t) \Vert _{\mathcal {B}^{\alpha |w|}} + \mathbb {E}\Vert \varvec{X}^w(s) - \varvec{X}^w_\varepsilon (s) \Vert _{\mathcal {B}^{\alpha |w|}} \lesssim \varepsilon ^{\gamma }. \end{aligned}$$
On the other hand, from (4.14) and (4.13) the following estimate follows
$$\begin{aligned}&\mathbb {E}\Vert \varvec{X}^w(t) - \varvec{X}_\varepsilon ^w(t) - \varvec{X}^w(s) + \varvec{X}_\varepsilon ^w(s) \Vert _{\mathcal {B}^{\alpha |w|}} \\&\quad \le \,\mathbb {E}\Vert \varvec{X}^w_\varepsilon (t) - \varvec{X}^w_\varepsilon (s) \Vert _{\mathcal {B}^{\alpha |w|}} + \mathbb {E}\Vert \varvec{X}^w(t) - \varvec{X}^w(s) \Vert _{\mathcal {B}^{\alpha |w|}} \lesssim |t-s|^{\frac{\gamma }{2}}. \end{aligned}$$
Combining these two bounds we obtain
$$\begin{aligned} \mathbb {E}\Vert \varvec{X}^w(t) - \varvec{X}_\varepsilon ^w(t) - \varvec{X}^w(s) + \varvec{X}_\varepsilon ^w(s) \Vert _{\mathcal {B}^{\alpha |w|}} \lesssim \varepsilon ^{\gamma } \wedge |t-s|^{\frac{\gamma }{2}} \lesssim \varepsilon ^{\frac{1}{2} - \alpha - \delta } |t-s|^{\frac{\delta }{2}}, \end{aligned}$$
for any \(\delta > 0\) small enough and uniformly in \(s, t \in [0,T]\). From this bound, estimate (4.12) and the Kolmogorov criterion [22] we obtain the second bound in (4.10). \(\square \)
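The last step of the proof trades the minimum \(\varepsilon ^{\gamma } \wedge |t-s|^{\gamma /2}\) for a product. This rests on the elementary interpolation inequality \(a \wedge b \le a^{1-\theta } b^{\theta }\) for \(a, b > 0\) and \(\theta \in [0,1]\), which can be sanity-checked numerically (an illustration only; the random ranges and exponent are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(1e-6, 10.0, size=10000)
b = rng.uniform(1e-6, 10.0, size=10000)
theta = 0.3  # any exponent in [0, 1]; delta/gamma plays this role above

# min(a, b) <= a^(1-theta) * b^theta: the minimum is dominated by
# every weighted geometric mean of a and b.
lhs = np.minimum(a, b)
rhs = a ** (1 - theta) * b ** theta
```

Applied with \(a = \varepsilon ^{\gamma }\), \(b = |t-s|^{\gamma /2}\) and \(\theta = \delta /\gamma \), this yields the displayed bound.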

4.2 Approximation of the rough integral

Now, having defined the iterated integrals of \(X_\varepsilon \), we can build an approximation of the process Z defined in (3.5).

The idea comes from the fact that if u(t) is controlled by \(\mathbf {X}(t)\), then the process G(u(t)) is controlled by \(\mathbf {X}(t)\) as well. The Taylor expansion gives an approximation for \(G_{ij}(u(t))\),
$$\begin{aligned} G_{ij}(u(t, y)) \approx G_{ij}(u(t, x)) + \sum _{w \in \mathcal {A}_{p-1} \setminus \{\emptyset \}} \tilde{C}_w D^w G_{ij}(u(t, x)) \left( u(t, y) - u(t, x) \right) _w. \end{aligned}$$
Here, \(\tilde{C}_w\) are combinatorial factors which can be calculated explicitly. Furthermore, we use the following notation: for \(w = w_1 \cdots w_k \in \mathcal {A}_{p-1}\) and \(k \ge 1\) we denote \(D^w = D^{w_1} \cdots D^{w_k}\) and \(u(t,x)_w = u_{w_1}(t,x) \cdots u_{w_k}(t,x)\).
Recalling that we will look for solutions such that \(u(t) - X(t) \in \mathcal {C}^1\), we obtain an approximation of \(G_{ij}(u(t))\) via \(\mathbf {X}(t)\),
$$\begin{aligned} G_{ij}(u(t, y)) \approx G_{ij}(u(t, x)) + \sum _{\begin{array}{c} w \in \mathcal {A}_{p-1} \setminus \{\emptyset \} \\ w=w_1 \ldots w_k \end{array}} \tilde{C}_w D^w G_{ij}(u(t, x)) \prod _{l=1}^k \langle \mathbf {X}(t; x,y), e_{w_l} \rangle . \end{aligned}$$
Symmetrising this expression and using Definition 2.1, this can be rewritten as
$$\begin{aligned} G_{ij}(u(t, y)) \approx \sum _{w \in \mathcal {A}_{p-1}} C_w D^w G_{ij}(u(t, x)) \langle \mathbf {X}(t; x,y), e_w \rangle , \end{aligned}$$
(4.15)
for some slightly different constants \(C_w\). This expansion motivates our choice of the terms in the approximation of the rough integral.
In view of Assumption 2, it is natural to define the process \(D_\varepsilon \mathbf {X}_{\varepsilon } : {\mathbb {R}}_+ \times \mathbb {T} \rightarrow T^{(p)}\big ({\mathbb {R}}^n\big )\) in the following way: for any word \(w \in \mathcal {A}_p\) we set
$$\begin{aligned} \langle D_\varepsilon \mathbf {X}_{\varepsilon }(t;y), e_w \rangle := \frac{1}{\varepsilon } \int _{{\mathbb {R}}} \langle \mathbf {X}_\varepsilon (t;y,y+\varepsilon z), e_w \rangle \mu (dz). \end{aligned}$$
(4.16)
Combining the expansion (4.15) with the definition (2.6), it appears plausible that a good approximation of Z is given by
$$\begin{aligned} Z_{\varepsilon }(t,x)_{i} := \sum _{w \in \mathcal {A}_{p-1}} C_w \int _{-\pi }^{x} D^{w} G_{ij}(u_{\varepsilon }(t,y)) \langle D_\varepsilon \mathbf {X}_{\varepsilon }(t;y), e_w \otimes e_j \rangle \,dy. \end{aligned}$$
(4.17)
Here, to simplify the notation we have omitted the sum over j.
Now we can rewrite the mild solution (4.3) as
$$\begin{aligned} v_{\varepsilon }(t) = \Phi _{\varepsilon }^{v_{\varepsilon }}(t) + \Psi _{\varepsilon }^{v_\varepsilon }(t) + \Xi _{\varepsilon }^{v_{\varepsilon }}(t) - \Upsilon _{\varepsilon }^{v_{\varepsilon }}(t) - \bar{\Upsilon }_{\varepsilon }^{v_{\varepsilon }}(t), \end{aligned}$$
(4.18)
where the functions \(\Phi _{\varepsilon }^{v_{\varepsilon }}\) and \(\Psi _{\varepsilon }^{v_\varepsilon }\) are defined in (4.4). The term involving the rough integral is denoted by
$$\begin{aligned} \Xi _{\varepsilon }^{v_{\varepsilon }}(t) := \int _{0}^{t} S^{(\varepsilon )}_{t-s}\partial _{x}Z_{\varepsilon }(s)\,ds = \int _{0}^{t} \partial _{x} \big (S^{(\varepsilon )}_{t-s}Z_{\varepsilon }(s)\big ) \,ds. \end{aligned}$$
(4.19)
We denote the additional terms in (4.18), which we used to approximate the rough integral, by
$$\begin{aligned} \Upsilon _{\varepsilon }^{v_{\varepsilon }}(t,x)_{i}&:= \sum _{k \in \mathcal {A}} \int _{0}^{t} S^{(\varepsilon )}_{t-s}\Big (D^{k} G_{ij}(u_{\varepsilon }(s,\cdot )) \langle D_\varepsilon \mathbf {X}_{\varepsilon }(s;\cdot ), e_{kj} \rangle \Big )(x)\, ds, \nonumber \\ \quad \bar{\Upsilon }_{\varepsilon }^{v_{\varepsilon }}(t,x)_{i}&:= \sum _{\begin{array}{c} w \in \mathcal {A}_{p-1}\\ |w| \ge 2 \end{array}} C_w \int _{0}^{t} S^{(\varepsilon )}_{t-s} \Big (D^{w} G_{ij}(u_{\varepsilon }(s,\cdot )) \langle D_\varepsilon \mathbf {X}_{\varepsilon }(s;\cdot ), e_{wj} \rangle \Big )(x)\, ds.\nonumber \\ \end{aligned}$$
(4.20)
In the next sections we will show that the term \(\bar{\Upsilon }_{\varepsilon }^{v_{\varepsilon }}\) tends to 0 and the other terms in (4.18) converge to the corresponding terms in (3.6) in the space \(\mathcal {C}^1_T\).

5 Convergence of the solutions of the approximate equations

In this section we provide a proof of Theorem 1.1. In what follows we use the constant \(\alpha _{\star } = \frac{1}{2} - \alpha \), for some fixed small \(\alpha > 0\). This constant represents the actual spatial regularity of the process X defined in (3.2). To obtain better bounds we will work in spaces of regularity \(\alpha \), which is close to 0. The constants \(\alpha \) and \(\alpha _\star \) remain fixed throughout the article.

To shorten notation we define the norm
$$\begin{aligned} \vert \vert \vert \mathbf {X} \vert \vert \vert _{\alpha _\star , T} := \sup _{t \in [0,T]} \vert \vert \vert \mathbf {X}(t) \vert \vert \vert _{\alpha _\star }. \end{aligned}$$
(5.1)
See (2.3) for the definition of the norm of a rough path. For any \(K > 0\) we define the stopping time
$$\begin{aligned}&\sigma _{K} := \inf \left\{ t \ge 0\,:\, \Vert X \Vert _{\mathcal {C}_{t}^{\alpha _\star }} \ge K, \text { or } \vert \vert \vert \mathbf {X} \vert \vert \vert _{\alpha _\star , t} \ge K, \text { or } \Vert \bar{v} \Vert _{\mathcal {C}^{1+\alpha _\star }_{\alpha _\star /2,t}} \ge K, \right. \\&\quad \left. \text { or }\quad \Vert \bar{v} \Vert _{\mathcal {C}^{1}_{t}} \ge K, \text { or } \Vert v_\varepsilon \Vert _{\mathcal {C}^{1}_{t}} \ge K \right\} \wedge T. \end{aligned}$$
Note that in view of Remark 3.6, the condition on the norm \(\Vert \bar{v} \Vert _{\mathcal {C}^{1+\alpha _\star }_{\alpha _\star /2, t}}\) is reasonable. For any two letters \(i, j \in \mathcal {A}\) we define the process
$$\begin{aligned} \mathcal {H}^{i, j}_\varepsilon (t,x) := \Lambda \delta _{i, j} - \langle D_\varepsilon \mathbf {X}_{\varepsilon }(t;x), e_i \otimes e_j \rangle , \end{aligned}$$
where \(\delta \) is the Kronecker delta. To have a priori bounds on the corresponding \(\varepsilon \)-quantities we introduce the stopping time
$$\begin{aligned}&\sigma _{K, \varepsilon } := \inf \left\{ t \ge 0\,:\, \Vert X - X_\varepsilon \Vert _{\mathcal {C}_{t}^{\alpha _\star }} \ge 1, ~\text { or }~ \vert \vert \vert \mathbf {X} - \mathbf {X}_\varepsilon \vert \vert \vert _{\alpha _\star , t} \ge 1, \right. \\&\quad \left. ~\text { or }~ \Vert \mathcal {H}_\varepsilon \Vert _{\mathcal {C}_{t}^{-\frac{1}{2} + \alpha }} \ge 1, ~\text { or }~ \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{\alpha }_{t}} \ge 1, ~\text { or }~ \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2,t}} \ge 1 \right\} \wedge T. \end{aligned}$$
The blow-up of the norm \(\Vert \bar{v}(t) - v_\varepsilon (t) \Vert _{\mathcal {C}^{1}}\) comes from the regularization property of the heat semigroup and the fact that we work in the \(\alpha \)-regular spaces, i.e. we use the bound
$$\begin{aligned} \Vert U(t) \Vert _{\mathcal {C}^1} \lesssim t^{\frac{\alpha -1}{2}} \left( \Vert u^0 \Vert _{\mathcal {C}^\alpha } + \Vert X(0) \Vert _{\mathcal {C}^\alpha } \right) . \end{aligned}$$
See Appendix 2 for the properties of the heat semigroup. Finally, we define the stopping time \(\varrho _{K,\varepsilon } := \sigma _{K} \wedge \sigma _{K, \varepsilon }\) and write in what follows
$$\begin{aligned} t_{\varepsilon } := t \wedge \varrho _{K, \varepsilon }. \end{aligned}$$
(5.2)
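The \(t^{(\alpha -1)/2}\) blow-up above comes from the smoothing of the heat semigroup; in the extreme case \(\mathcal {C}^0 \rightarrow \mathcal {C}^1\) it reflects the scaling \(\Vert \partial _x p_t \Vert _{L^1} \sim t^{-1/2}\) of the kernel (3.3). A numerical illustration (our own truncation and grid; not part of the proof):

```python
import numpy as np

def kernel_deriv_l1(t, K=200, N=4000):
    # L1 norm over the circle of the derivative of the heat kernel (3.3):
    # p_t'(x) = (1/sqrt(2 pi)) * sum_k (ik) e^{-t k^2} e^{ikx}.
    k = np.arange(-K, K + 1)
    xs = np.linspace(-np.pi, np.pi, N, endpoint=False)
    coeffs = np.exp(-t * k**2) * (1j * k)
    vals = np.real(coeffs @ np.exp(1j * np.outer(k, xs))) / np.sqrt(2 * np.pi)
    return np.abs(vals).sum() * (2 * np.pi / N)

# ||p_t'||_{L^1} behaves like sqrt(2/t) for small t, so quartering t
# should roughly double the norm.
n1, n2 = kernel_deriv_l1(0.1), kernel_deriv_l1(0.025)
```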

Remark 5.1

In the article we always consider time intervals up to the stopping time \(\varrho _{K, \varepsilon }\). Therefore, all the quantities involved in the definition of \(\varrho _{K, \varepsilon }\) are bounded by \(K + 1\) and all the proportionality constants can depend on K.

Proof of Theorem 1.1

For \(\alpha > 0\) as in the beginning of this section we define \(p = \lfloor 1/\alpha \rfloor \). From the derivation of the bounds below we will see how small the value of \(\alpha \) must be. To make the notation shorter, we introduce the following norm
$$\begin{aligned} \Vert \cdot \Vert _{\alpha , t} := \Vert \cdot \Vert _{\mathcal {C}^{\alpha }_{t}} + \Vert \cdot \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, t}}. \end{aligned}$$
For \(t \le \varrho _{K, \varepsilon }\), we obtain from (3.6) and (4.18) the bound
$$\begin{aligned} \Vert \bar{v} - v_{\varepsilon } \Vert _{\alpha , t}\le & {} \Vert \Psi ^{\bar{v}} - \Psi _{\varepsilon }^{v_\varepsilon } \Vert _{\alpha , t} + \Vert \Phi ^{\bar{v}} - \Phi ^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t} + \Vert \Upsilon ^{\bar{v}} - \Upsilon ^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t} \nonumber \\&+\,\Vert \bar{\Upsilon }^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t} + \Vert \Xi ^{\bar{v}} - \Xi ^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t}. \end{aligned}$$
(5.3)
We consider only times \(t < 1\); for larger times the claim follows easily by iteration. To find a bound on the first term in (5.3) we use the results of Sect. 6. Applying Proposition 6.1 with a small constant \(\kappa = \alpha \) we get
$$\begin{aligned}&\Vert \Psi ^{\bar{v}} - \Psi _{\varepsilon }^{v_\varepsilon } \Vert _{\alpha , t} \lesssim t^{\frac{1}{2}} \Vert \bar{v} - v_{\varepsilon } \Vert _{\alpha , t} + \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_t} \nonumber \\&\quad +\,\Vert u^0 - u^0_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\alpha _\star - \alpha }. \end{aligned}$$
(5.4)
In order to bound the second term in (5.3), we use Proposition 6.2 with \(\kappa = \alpha \),
$$\begin{aligned} \Vert \Phi ^{\bar{v}} - \Phi ^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t}\lesssim & {} t^{\frac{1 - \alpha }{2}} \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{0}_{t}} + \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{0}_t} \nonumber \\&+\, \Vert u^0 - u_{\varepsilon }^0 \Vert _{\mathcal {C}^{0}} + \varepsilon ^{\alpha _\star - \alpha }. \end{aligned}$$
(5.5)
Applying Proposition 7.2 with the parameter \(\kappa = \alpha \), we bound the expectation of the third term in (5.3) by
$$\begin{aligned}&\mathbb {E} \Vert \Upsilon ^{\bar{v}} - \Upsilon ^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t} \lesssim t^{\frac{1 - \alpha }{2}} \mathbb {E} \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{0}_{t}} + \mathbb {E} \Vert X - X_\varepsilon \Vert _{\mathcal {C}^{0}_t} \nonumber \\&\quad + \mathbb {E} \Vert u^0 - u_{\varepsilon }^0 \Vert _{\mathcal {C}^{0}} + \varepsilon ^{ \alpha _\star - \alpha }. \end{aligned}$$
(5.6)
A bound on the fourth term in (5.3) is a straightforward application of Proposition 6.4,
$$\begin{aligned} \Vert \bar{\Upsilon }^{v_{\varepsilon }}_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t}} + \Vert \bar{\Upsilon }^{v_{\varepsilon }}_{\varepsilon } \Vert _{\mathcal {C}^{1}_{t}} \lesssim \varepsilon ^{3\alpha _{\star }-1}. \end{aligned}$$
(5.7)
Using Proposition 8.2 with the small parameter \(\kappa = \alpha / 2\) we can bound the last term in (5.3) by
$$\begin{aligned} \Vert \Xi ^{\bar{v}} - \Xi ^{v_{\varepsilon }}_{\varepsilon } \Vert _{\alpha , t} \lesssim t^{\frac{\alpha }{4}} \mathcal {D}_{\varepsilon }(t) + \varepsilon ^{\alpha _\star - 3 \alpha /2}, \end{aligned}$$
(5.8)
where \(\mathcal {D}_\varepsilon \) is defined in (8.1).
Combining the bounds (5.3)–(5.8) together we obtain
$$\begin{aligned} \mathbb {E} \Vert \bar{v} - v_{\varepsilon } \Vert _{\alpha , t}&\lesssim \,t^{\frac{\alpha }{4}} \mathbb {E} \Vert \bar{v} - v_{\varepsilon } \Vert _{\alpha , t} + \mathbb {E} \Vert u^{0} - u_{\varepsilon }^{0} \Vert _{\mathcal {C}^{\alpha }} + \mathbb {E} \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_t} \nonumber \\&\quad + \,\mathbb {E} \vert \vert \vert \mathbf {X} - \mathbf {X}_\varepsilon \vert \vert \vert _{\alpha , t} + \varepsilon ^{\frac{1}{2} - 3\alpha }, \end{aligned}$$
(5.9)
where we have used \(\alpha _\star = \frac{1}{2} - \alpha \). By Lemma 4.1 we can bound the norms of the controlling processes,
$$\begin{aligned} \mathbb {E} \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_t} + \mathbb {E} \vert \vert \vert \mathbf {X} - \mathbf {X}_\varepsilon \vert \vert \vert _{\alpha , t} \lesssim \varepsilon ^{\frac{1}{2} - 2\alpha }. \end{aligned}$$
Furthermore, by choosing \(t = t_*\) small enough we can absorb the first term on the right-hand side of (5.9) into the left-hand side and obtain
$$\begin{aligned} \mathbb {E} \Vert \bar{v} - v_{\varepsilon } \Vert _{\alpha , t_*} \le C \left( \mathbb {E} \Vert u^{0} - u_{\varepsilon }^{0} \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\frac{1}{2} - 3\alpha }\right) . \end{aligned}$$
(5.10)
From the definition of \(\bar{u}\) via \(\bar{v}\) and (5.10) we conclude
$$\begin{aligned} \mathbb {E} \Vert \bar{u} - u_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_*}}\le & {} \mathbb {E} \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_*}} + \mathbb {E} \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_*}} + \mathbb {E} \Vert U - U_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_*}} \\\le & {} C \mathbb {E} \Vert u^{0} - u_{\varepsilon }^{0} \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\frac{1}{2} - 3\alpha }. \end{aligned}$$
Here, we have also used Lemma 4.1 and the bound
$$\begin{aligned} \Vert U(t) - U_{\varepsilon }(t) \Vert _{\mathcal {C}^{\alpha }}\lesssim & {} \Vert u^0 - u_\varepsilon ^0 \Vert _{\mathcal {C}^{\alpha }} + \Vert X(0) - X_\varepsilon (0) \Vert _{\mathcal {C}^{\alpha }} \\&+ \varepsilon ^{\alpha _\star - 2\alpha } \left( \Vert u_\varepsilon ^0 \Vert _{\mathcal {C}^{\alpha _\star }} + \Vert X_\varepsilon (0) \Vert _{\mathcal {C}^{\alpha _\star }} \right) , \end{aligned}$$
which can be derived similarly to (6.7). The rest of the proof is almost identical to the proof of [21, Theorem 1.5]. \(\square \)

6 Estimates on the reaction term

In this section we prove convergence of the reaction terms of the approximate equation (4.18) to the corresponding terms of (3.6). Let us recall the notation (5.2) and Remark 5.1, which says that all the quantities involved in the definition of the stopping time \(\varrho _{K,\varepsilon }\) are bounded on the interval \((0, t_\varepsilon ]\) by the constant \(K + 1\), and that all proportionality constants below may depend on K.

The next proposition gives a bound on the terms \(\Psi ^{\bar{v}}\) and \(\Psi ^{v_\varepsilon }_{\varepsilon }\) defined in (3.7) and (4.4) respectively.

Proposition 6.1

For any \(\gamma \in (0, 1]\), \(t > 0\) and \(\kappa > 0\) small enough the following bound holds
$$\begin{aligned} \Vert \Psi ^{\bar{v}}(t_\varepsilon ) - \Psi ^{v_\varepsilon }_{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {C}^{\gamma }}\lesssim & {} t_\varepsilon ^{\frac{1 + \alpha - \gamma }{2}} \left( \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_\varepsilon }} + \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, t_\varepsilon }} \right) \nonumber \\&\quad +\,\Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_\varepsilon }} + \Vert u^0 - u^0_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\alpha _\star - \kappa }. \end{aligned}$$
(6.1)

Proof

For any \(t > 0\), using the notation (5.2), we can rewrite
$$\begin{aligned}&\Psi ^{\bar{v}}(t_\varepsilon ) - \Psi ^{v_\varepsilon }_{\varepsilon }(t_\varepsilon )\\&\quad = \int _{0}^{t_\varepsilon } S_{t_\varepsilon -s} G(\bar{u}(s)) \big (\partial _x \bar{v}(s) - D_\varepsilon \bar{v}(s) \big ) \,ds \\&\qquad +\,\int _{0}^{t_\varepsilon } S_{t_\varepsilon -s} G(\bar{u}(s)) \big (\partial _x U(s) - D_\varepsilon U(s) \big ) \,ds \\&\qquad +\, \int _{0}^{t_\varepsilon } S_{t_\varepsilon -s} G(\bar{u}(s)) \big (D_\varepsilon \bar{v}(s) - D_\varepsilon v_{\varepsilon }(s) \big ) \,ds \\&\qquad + \,\int _{0}^{t_\varepsilon } S_{t_\varepsilon -s} G(\bar{u}(s)) \big (D_\varepsilon U(s) - D_\varepsilon U_{\varepsilon }(s) \big ) \,ds \\&\qquad + \,\int _{0}^{t_\varepsilon } S_{t_\varepsilon -s} \big ( G(\bar{u}(s)) - G(u_\varepsilon (s))\big ) D_\varepsilon \left( v_{\varepsilon }(s) + U_{\varepsilon }(s) \right) \,ds \\&\qquad + \,\int _{0}^{t_\varepsilon } \big ( S_{t_\varepsilon -s} - S^{(\varepsilon )}_{t_\varepsilon -s} \big ) G(u_\varepsilon (s)) D_\varepsilon \left( v_{\varepsilon }(s) + U_{\varepsilon }(s) \right) \,ds =: \sum _{1\le j \le 6} J_j. \end{aligned}$$
To bound the term \(J_1\), we first investigate how well the operator \(D_\varepsilon \) approximates \(\partial _x\). Let us take a function \(\varphi \in \mathcal {C}^{1+\alpha _\star }(\mathbb {T})\). Then, by Assumption 2, we can rewrite
$$\begin{aligned} \left( D_\varepsilon - \partial _x \right) \varphi (x) = \frac{1}{\varepsilon } \int _{{\mathbb {R}}} \left( \varphi (x + \varepsilon y) - \varphi (x) - \partial _x \varphi (x) \varepsilon y \right) \mu (dy). \end{aligned}$$
Using the fact that the Hölder regularity of \(\varphi \) is \(1 + \alpha _\star \), we obtain
$$\begin{aligned} \left| \varphi (x + \varepsilon y) - \varphi (x) - \partial _x \varphi (x) \varepsilon y \right| \lesssim |\varepsilon y|^{1 + \alpha _\star } \Vert \varphi \Vert _{\mathcal {C}^{1+\alpha _\star }}. \end{aligned}$$
This yields the estimate
$$\begin{aligned} \Vert \left( D_\varepsilon - \partial _x \right) \varphi \Vert _{\mathcal {C}^0} \lesssim \varepsilon ^{\alpha _\star } \Vert \varphi \Vert _{\mathcal {C}^{1+\alpha _\star }}, \end{aligned}$$
(6.2)
where we have used the boundedness of the \((1+\alpha _\star )\)th moment of \(\mu \).
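For completeness, here is a short sketch of how the pointwise Taylor-type bound above follows from the \(\alpha _\star \)-Hölder continuity of \(\partial _x \varphi \); nothing beyond the fundamental theorem of calculus is used:

```latex
% Express the Taylor remainder as an integral of the derivative increment:
\varphi(x + \varepsilon y) - \varphi(x) - \partial_x \varphi(x)\, \varepsilon y
  = \int_0^{\varepsilon y} \big( \partial_x \varphi(x + r) - \partial_x \varphi(x) \big)\, dr .
% Since \partial_x \varphi is \alpha_\star-Hölder with constant at most
% \| \varphi \|_{\mathcal{C}^{1+\alpha_\star}}, the integrand is bounded by
% \| \varphi \|_{\mathcal{C}^{1+\alpha_\star}} |r|^{\alpha_\star}, whence
\big| \varphi(x + \varepsilon y) - \varphi(x) - \partial_x \varphi(x)\, \varepsilon y \big|
  \le \| \varphi \|_{\mathcal{C}^{1+\alpha_\star}} \int_0^{|\varepsilon y|} r^{\alpha_\star}\, dr
  = \frac{|\varepsilon y|^{1+\alpha_\star}}{1 + \alpha_\star}\,
    \| \varphi \|_{\mathcal{C}^{1+\alpha_\star}} .
```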
Using this estimate we derive
$$\begin{aligned} \Vert J_1 \Vert _{\mathcal {C}^{\gamma }}\le & {} \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert G(\bar{u}(s)) \Vert _{\mathcal {C}^{0}} \Vert \partial _x \bar{v}(s) - D_\varepsilon \bar{v}(s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} \varepsilon ^{\alpha _\star } \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\gamma }{2}} \Vert \bar{v}(s) \Vert _{\mathcal {C}^{1 + \alpha _\star }} \,ds \lesssim \varepsilon ^{\alpha _\star } t_\varepsilon ^{1 - \frac{\gamma + \alpha _\star }{2}}, \end{aligned}$$
(6.3)
where we have used boundedness of \(\Vert \bar{u} \Vert _{\mathcal {C}^{0}_{t_\varepsilon }}\) and \(\Vert \bar{v} \Vert _{\mathcal {C}^{1+\alpha _\star }_{\alpha _\star /2, t_\varepsilon }}\).
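The time exponent in (6.3) comes from a standard Beta-function computation: using the weighted-norm bound \(\Vert \bar{v}(s) \Vert _{\mathcal {C}^{1+\alpha _\star }} \lesssim s^{-\alpha _\star /2}\) (which is how the norm \(\mathcal {C}^{1+\alpha _\star }_{\alpha _\star /2, t_\varepsilon }\) enters) and the substitution \(s = t_\varepsilon u\), one finds

```latex
\int_0^{t_\varepsilon} (t_\varepsilon - s)^{-\frac{\gamma}{2}}\, s^{-\frac{\alpha_\star}{2}}\, ds
  = t_\varepsilon^{\,1 - \frac{\gamma + \alpha_\star}{2}}
    \int_0^1 (1-u)^{-\frac{\gamma}{2}}\, u^{-\frac{\alpha_\star}{2}}\, du
  = B\Big( 1 - \tfrac{\alpha_\star}{2},\ 1 - \tfrac{\gamma}{2} \Big)\,
    t_\varepsilon^{\,1 - \frac{\gamma + \alpha_\star}{2}} ,
```

where the Beta function is finite because \(\gamma \le 1\) and \(\alpha _\star < 2\) make both exponents strictly greater than \(-1\). The same substitution produces the time exponents in (6.4), (6.6) and (6.8).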
To derive a bound on \(J_2\), we notice that
$$\begin{aligned} \Vert U(s) \Vert _{\mathcal {C}^{1 + \alpha _\star }} \lesssim s^{-\frac{1}{2}} \left( \Vert u^0 \Vert _{\mathcal {C}^{\alpha _\star }} + \Vert X(0) \Vert _{\mathcal {C}^{\alpha _\star }} \right) , \end{aligned}$$
which follows from Lemma 8.7. Hence, using the estimate (6.2) for U, we obtain
$$\begin{aligned} \Vert J_2 \Vert _{\mathcal {C}^{\gamma }}\le & {} \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert G(\bar{u}(s)) \Vert _{\mathcal {C}^{0}} \Vert \partial _x U(s) - D_\varepsilon U(s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} \varepsilon ^{\alpha _\star } \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\gamma }{2}} \Vert U(s) \Vert _{\mathcal {C}^{1 + \alpha _\star }} \,ds \lesssim \varepsilon ^{\alpha _\star } t_\varepsilon ^{\frac{1 - \gamma }{2}}. \end{aligned}$$
(6.4)
Note that for any function \(\varphi \in \mathcal {C}^1(\mathbb {T})\) we have, by Assumption 2,
$$\begin{aligned} |D_\varepsilon \varphi (x)| \le \frac{1}{\varepsilon } \int _{{\mathbb {R}}} \int _0^{\varepsilon |z|} |\partial _x \varphi (x + y)| dy |\mu |(dz) \lesssim \Vert \varphi \Vert _{\mathcal {C}^{1}}. \end{aligned}$$
(6.5)
Using this bound we obtain
$$\begin{aligned}&\Vert J_3 \Vert _{\mathcal {C}^{\gamma }} \le \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert G(\bar{u}(s)) \Vert _{\mathcal {C}^{0}} \Vert D_\varepsilon \bar{v}(s) - D_\varepsilon v_{\varepsilon }(s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\&\quad \lesssim \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, t_\varepsilon }} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\gamma }{2}} s^{\frac{\alpha - 1}{2}} \,ds \lesssim {t_\varepsilon }^{\frac{1 + \alpha - \gamma }{2}} \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, t_\varepsilon }},\qquad \quad \end{aligned}$$
(6.6)
where we have used boundedness of \(\Vert \bar{u} \Vert _{\mathcal {C}^{0}_{t_\varepsilon }}\).
To bound \(J_4\) we note that
$$\begin{aligned} \Vert U(s) - U_{\varepsilon }(s) \Vert _{\mathcal {C}^{1}}&\le \Vert S_s \big ( u^0 - u_\varepsilon ^0 \big ) \Vert _{\mathcal {C}^{1}} + \Vert S_s \left( X(0) - X_\varepsilon (0) \right) \Vert _{\mathcal {C}^{1}} \nonumber \\&\quad +\, \Vert \big (S_s - S^{(\varepsilon )}_s\big ) \left( u_\varepsilon ^0 - X_\varepsilon (0) \right) \Vert _{\mathcal {C}^{1}} \nonumber \\&\lesssim \, s^{\frac{\alpha - 1}{2}} \left( \Vert u^0 - u_\varepsilon ^0 \Vert _{\mathcal {C}^{\alpha }} + \Vert X(0) - X_\varepsilon (0) \Vert _{\mathcal {C}^{\alpha }} \right) \nonumber \\&\quad +\,s^{-\frac{1}{2}} \varepsilon ^{\alpha _\star - \kappa } \left( \Vert u_\varepsilon ^0 \Vert _{\mathcal {C}^{\alpha _\star }} + \Vert X_\varepsilon (0) \Vert _{\mathcal {C}^{\alpha _\star }} \right) , \end{aligned}$$
(6.7)
for any \(\kappa > 0\) sufficiently small. Here, in the last estimate we used Lemma 8.8 with \(\lambda = \alpha _\star - \kappa \). Using this estimate and (6.5) we obtain
$$\begin{aligned} \Vert J_4 \Vert _{\mathcal {C}^{\gamma }}\le & {} \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert G(\bar{u}(s)) \Vert _{\mathcal {C}^{0}} \Vert D_\varepsilon U(s) - D_\varepsilon U_{\varepsilon }(s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\gamma }{2}} \Vert U(s) - U_{\varepsilon }(s) \Vert _{\mathcal {C}^{1}} \,ds \nonumber \\\lesssim & {} t_\varepsilon ^{\frac{1 + \alpha - \gamma }{2}} \left( \Vert u^0 - u_\varepsilon ^0 \Vert _{\mathcal {C}^{\alpha }} + \Vert X(0) - X_\varepsilon (0) \Vert _{\mathcal {C}^{\alpha }} \right) + \varepsilon ^{\alpha _\star - \kappa }. \end{aligned}$$
(6.8)
Exploiting the continuous differentiability of the function G, we get
$$\begin{aligned} \Vert J_5 \Vert _{\mathcal {C}^{\gamma }}\le & {} \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert G(\bar{u}(s)) - G(u_\varepsilon (s))\Vert _{\mathcal {C}^{0}} \Vert D_\varepsilon v_{\varepsilon }(s) + D_\varepsilon U_{\varepsilon }(s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{- \frac{\gamma }{2}} s^{\frac{\alpha _\star - 1 - \kappa }{2}} \Vert \bar{u}(s) - u_\varepsilon (s)\Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} t_\varepsilon ^{\frac{1 + \alpha _\star - \gamma - \kappa }{2}} \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_\varepsilon }} + \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{\alpha }_{t_\varepsilon }} + \Vert u^0 - u_\varepsilon ^0 \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\alpha _\star - \kappa }, \end{aligned}$$
(6.9)
where in the second line we have used a bound similar to (6.7),
$$\begin{aligned} \Vert D_\varepsilon U_{\varepsilon }(s)\Vert _{\mathcal {C}^{0}} \lesssim \Vert U_{\varepsilon }(s) \Vert _{\mathcal {C}^{1}} \lesssim s^{\frac{\alpha _\star - 1 - \kappa }{2}}. \end{aligned}$$
(6.10)
Moreover, in the estimate (6.9) we have used the bound
$$\begin{aligned} \Vert U(s) - U_{\varepsilon }(s) \Vert _{\mathcal {C}^{0}} \lesssim \Vert u^0 - u_\varepsilon ^0 \Vert _{\mathcal {C}^{0}} + \Vert X(0) - X_\varepsilon (0) \Vert _{\mathcal {C}^{0}} + \varepsilon ^{\alpha _\star - \kappa }, \end{aligned}$$
(6.11)
which is obtained in a way similar to (6.7).
Using Lemma 8.8, the integral \(J_6\) can be bounded by
$$\begin{aligned} \Vert J_6 \Vert _{\mathcal {C}^{\gamma }}\le & {} \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} - S^{(\varepsilon )}_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert G(u_\varepsilon (s)) \Vert _{\mathcal {C}^{0}} \Vert D_\varepsilon v_{\varepsilon }(s) + D_\varepsilon U_{\varepsilon }(s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} \varepsilon ^{\alpha _\star - \kappa } \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{ - \frac{\alpha _\star + \gamma - \kappa /2}{2}} s^{\frac{\alpha _\star - 1 - \kappa /2}{2}} \,ds \lesssim t_\varepsilon ^{\frac{1-\gamma }{2}} \varepsilon ^{\alpha _\star - \kappa }, \end{aligned}$$
(6.12)
where we have used the bound (6.10).

Combining the bounds (6.3)–(6.12) we obtain the claimed estimate (6.1). \(\square \)

In the following proposition we provide a bound on the terms \(\Phi ^{\bar{v}}\) and \(\Phi ^{v_\varepsilon }_{\varepsilon }\) defined in (3.7) and (4.4) respectively.

Proposition 6.2

For any \(\gamma \in (0,1]\) and \(\kappa > 0\) small enough the following bound holds
$$\begin{aligned} \Vert \Phi ^{\bar{v}}(t_\varepsilon ) - \Phi ^{v_\varepsilon }_{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {C}^{\gamma }} \lesssim t_\varepsilon ^{1 - \frac{\gamma }{2}} \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \Vert u^0 - u_{\varepsilon }^0 \Vert _{\mathcal {C}^{0}} + \varepsilon ^{\alpha _\star - \kappa }. \end{aligned}$$

Proof

Using the continuous differentiability of the function F, Lemma 8.8, and recalling that \(\bar{u} = \bar{v} + X + U\), we get
$$\begin{aligned}&\Vert \Phi ^{\bar{v}}(t_\varepsilon ) - \Phi ^{v_{\varepsilon }}_{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {C}^{\gamma }}\\&\quad \le \, \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert F(\bar{u}(s)) - F(u_\varepsilon (s))\Vert _{\mathcal {C}^{0}} \,ds \\&\qquad +\,\int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} - S^{(\varepsilon )}_{t_\varepsilon -s} \Vert _{\mathcal {C}^{0} \rightarrow \mathcal {C}^{\gamma }} \Vert F(u_\varepsilon (s)) \Vert _{\mathcal {C}^{0}} \,ds \\&\quad \lesssim \int _0^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\gamma }{2}} \Vert \bar{u}(s) - u_{\varepsilon }(s) \Vert _{\mathcal {C}^{0}} \,ds + \varepsilon ^{\frac{1}{2} - \kappa } \int _0^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{1}{4} - \frac{\gamma }{2}} \,ds \\&\quad \lesssim t_\varepsilon ^{1 - \frac{\gamma }{2}} \Vert \bar{v} - v_{\varepsilon } \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \Vert X - X_{\varepsilon } \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \Vert u^0 - u_{\varepsilon }^0 \Vert _{\mathcal {C}^{0}} + \varepsilon ^{\alpha _\star - \kappa }. \end{aligned}$$
Here, we have used boundedness of \(\Vert u_\varepsilon \Vert _{\mathcal {C}^0_{t_\varepsilon }}\) and the estimate (6.11). \(\square \)
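A remark on the last two steps of the proof above: the time integral of the semigroup-error term is explicit, and the final power of \(\varepsilon \) uses only \(\alpha _\star \le \tfrac{1}{2}\) and \(\varepsilon \le 1\), both consistent with the standing assumptions:

```latex
% For \gamma \in (0,1] the exponent satisfies 1/4 + \gamma/2 \le 3/4 < 1, so
\int_0^{t_\varepsilon} (t_\varepsilon - s)^{-\frac{1}{4} - \frac{\gamma}{2}}\, ds
  = \frac{t_\varepsilon^{\,\frac{3}{4} - \frac{\gamma}{2}}}{\frac{3}{4} - \frac{\gamma}{2}}
  \lesssim 1
% (since t_\varepsilon stays in a bounded time interval), and
\qquad
\varepsilon^{\frac{1}{2} - \kappa} \le \varepsilon^{\alpha_\star - \kappa}
\quad \text{for } \varepsilon \le 1 \text{ and } \alpha_\star \le \tfrac{1}{2} .
```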

The following lemma shows how the processes (4.16) behave in the supremum norm. In particular, it shows that they converge to 0 as soon as \(|w| > 2\).

Lemma 6.3

For any word \(w \in \mathcal {A}_p\), the bound
$$\begin{aligned} \sup _{s \in [0,t_\varepsilon ]} \Vert \langle D_\varepsilon \mathbf {X}_{\varepsilon }(s;\cdot ), e_w \rangle \Vert _{\mathcal {C}^{0}} \lesssim \varepsilon ^{|w|\alpha _{\star }-1}, \end{aligned}$$
holds uniformly in \(\varepsilon \) and t.

Proof

Since \(\mathbf {X}_{\varepsilon }(s)\) is a rough path of regularity \(\alpha _\star \), we can use the third property in Definition 2.1 to get
$$\begin{aligned} \left| \langle D_{\varepsilon } \mathbf {X}_{\varepsilon }(s;x), e_w \rangle \right|\le & {} \frac{1}{\varepsilon } \int _{{\mathbb {R}}} \left| \langle \mathbf {X}_{\varepsilon }(s;x,x+\varepsilon z), e_w \rangle \right| |\mu |(dz) \\\lesssim & {} \varepsilon ^{|w|\alpha _{\star }-1} \int _{{\mathbb {R}}} |z|^{|w|\alpha _{\star }} |\mu |(dz) \lesssim \varepsilon ^{|w|\alpha _{\star }-1}. \end{aligned}$$
Here, we have used the assumption on the moments of \(|\mu |\). \(\square \)

In the following proposition we obtain a bound on the term \(\bar{\Upsilon }^{v_\varepsilon }_{\varepsilon }\) defined in (4.20).

Proposition 6.4

For any \(\gamma \in (0,1]\) we have the estimate \(\Vert \bar{\Upsilon }^{v_\varepsilon }_{\varepsilon } \Vert _{\mathcal {C}_{t_\varepsilon }^{\gamma }} \lesssim \varepsilon ^{3\alpha _{\star }-1}\).

Proof

We use Lemma 8.9 to estimate the approximate heat semigroup, and Lemma 6.3:
$$\begin{aligned} \Vert \bar{\Upsilon }^{v_\varepsilon }_{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {C}^{\gamma }}\lesssim & {} \sum _{\begin{array}{c} w \in \mathcal {A}_{p-1} \\ |w| \ge 2 \end{array}} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\gamma }{2} -\kappa } \Vert D^{w} G(u_{\varepsilon }(s)) \Vert _{\mathcal {C}^{0}} \Vert \langle D_\varepsilon \mathbf {X}_{\varepsilon }(s;\cdot ), e_w \otimes e_1 \rangle \Vert _{\mathcal {C}^{0}} \,ds \\\lesssim & {} \sum _{\begin{array}{c} w \in \mathcal {A}_{p-1} \\ |w| \ge 2 \end{array}} t_\varepsilon ^{1-\frac{\gamma }{2} -\kappa } \varepsilon ^{(|w|+1)\alpha _{\star }-1} \lesssim \varepsilon ^{3\alpha _{\star }-1}, \end{aligned}$$
for \(\kappa > 0\) small enough. This is the claimed bound. \(\square \)
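A remark on the final inequality in the proof above: since \(\alpha _\star > 0\), the exponent \((|w|+1)\alpha _{\star }-1\) is increasing in \(|w|\), so for \(\varepsilon \le 1\) the finite sum is dominated by the shortest words, namely those with \(|w| = 2\):

```latex
\sum_{\substack{w \in \mathcal{A}_{p-1} \\ |w| \ge 2}}
  \varepsilon^{(|w|+1)\alpha_\star - 1}
  \lesssim \varepsilon^{(2+1)\alpha_\star - 1}
  = \varepsilon^{3\alpha_\star - 1} ,
```

where the implicit constant depends only on the (finite) number of words in \(\mathcal {A}_{p-1}\) and on the powers of \(t_\varepsilon \), which stay bounded.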

7 Convergence of the correction term

In this section we show that the term \(\Upsilon _{\varepsilon }^{v_{\varepsilon }}\), defined in (4.20), converges to the correction term \(\Upsilon ^{\bar{v}}\) from (3.7). In view of Remark 5.1, we only consider time intervals up to the stopping time \(\varrho _{K,\varepsilon }\), by using the notation (5.2).

To shorten the notation we define \(\mathbb {X}_\varepsilon (t)\) to be the projection of the rough path \(\mathbf {X}_\varepsilon (t)\) to the second level of the tensor algebra. The following lemma is similar to [21, Proposition 4.1], but the bound is in a Hölder norm rather than a Sobolev norm.

Lemma 7.1

For any \(\gamma \in \left( 0, \frac{1}{2}\right) \), any \(t > 0\) and any \(\kappa > 0\) small enough we have
$$\begin{aligned} \mathbb {E} \left[ \sup _{s \in [0,t_\varepsilon ]} \left\| D_\varepsilon \mathbb {X}_{\varepsilon }(s, \cdot ) - \Lambda \mathrm {Id} \right\| _{\mathcal {C}^{-\gamma }} \right] \lesssim \varepsilon ^{\gamma - \kappa }. \end{aligned}$$

Proof

The proof is almost identical to that of [21, Proposition 4.1], except that we use Lemma 8.5 to reduce the problem to moment bounds on the Littlewood–Paley blocks of \(D_\varepsilon \mathbb {X}_{\varepsilon }\), instead of using pointwise bounds. \(\square \)

A bound on \(\Upsilon ^{\bar{v}}\) and \(\Upsilon ^{v_\varepsilon }_\varepsilon \), defined in (3.7) and (4.20) respectively, is given in the next proposition.

Proposition 7.2

For any \(\gamma \in (0, 1]\) and any \(\kappa > 0\) sufficiently small we have
$$\begin{aligned} \mathbb {E} \Vert \Upsilon ^{\bar{v}} (t_\varepsilon )- \Upsilon ^{v_\varepsilon }_\varepsilon (t_\varepsilon ) \Vert _{\mathcal {C}^{\gamma }}&\lesssim t_\varepsilon ^{1 - \frac{\gamma }{2}} \mathbb {E} \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \mathbb {E} \Vert X - X_\varepsilon \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} \\&\quad +\, \mathbb {E} \Vert u^0 - u_{\varepsilon }^0 \Vert _{\mathcal {C}^{0}} + \varepsilon ^{\alpha _\star - \kappa }. \end{aligned}$$

Proof

Let us define the functions \(\mathcal {F}(u)_i = \Lambda ~\mathrm {div} G_i(u)\) and
$$\begin{aligned} \mathcal {F}_\varepsilon (u)_i(s,x) = \sum _{w \in \mathcal {A}} D^{w} G_{ij}(u(s,x)) \langle D_\varepsilon \mathbf {X}_{\varepsilon }(s,x), e_w \otimes e_j \rangle , \end{aligned}$$
where, as usual, the sum over j is omitted. Then we can write
$$\begin{aligned} \Upsilon ^{\bar{v}}(t_\varepsilon ) - \Upsilon ^{v_\varepsilon }_\varepsilon (t_\varepsilon )&= \int _0^{t_\varepsilon } S_{t_\varepsilon -s} \Big (\mathcal {F}(u_\varepsilon ) - \mathcal {F}_\varepsilon (u_\varepsilon )\Big )(s) \,ds \\&\quad +\,\int _0^{t_\varepsilon } S_{t_\varepsilon -s} \Big (\mathcal {F}(\bar{u}) - \mathcal {F}(u_\varepsilon )\Big )(s) \,ds\\&\quad +\,\int _0^{t_\varepsilon } \Big (S_{t_\varepsilon -s} - S^{(\varepsilon )}_{t_\varepsilon -s}\Big ) \mathcal {F}_\varepsilon (u_\varepsilon )(s) \,ds \\&=: J_1 + J_2 + J_3. \end{aligned}$$
To bound \(J_1\) we note that we can rewrite
$$\begin{aligned} \left( \mathcal {F}(u_\varepsilon ) - \mathcal {F}_\varepsilon (u_\varepsilon )\right) _i(s,x) = \sum _{w \in \mathcal {A}} D^{w} G_{ij}(u_\varepsilon (s,x)) \left( \Lambda \delta _{w, j} - \langle D_\varepsilon \mathbf {X}_{\varepsilon }(s,x), e_w \otimes e_j \rangle \right) . \end{aligned}$$
Therefore, applying Lemma 8.7 with \(\eta \in (0, \alpha _\star )\) and Lemma 8.6, we obtain
$$\begin{aligned} \Vert J_1 \Vert _{\mathcal {C}^{\gamma }}\lesssim & {} \int _0^{t_\varepsilon } \Vert S_{t_\varepsilon -s} \Vert _{\mathcal {C}^{-\eta } \rightarrow \mathcal {C}^{\gamma }} \Vert \left( \mathcal {F}(u_\varepsilon ) - \mathcal {F}_\varepsilon (u_\varepsilon )\right) (s) \Vert _{\mathcal {C}^{-\eta }} \,ds \\\lesssim & {} \sup _{s \in [0,t_\varepsilon ]} \left\| D_\varepsilon \mathbb {X}_{\varepsilon }(s, \cdot ) - \Lambda \mathrm {Id} \right\| _{\mathcal {C}^{-\eta }} \Vert DG(u_\varepsilon ) \Vert _{\mathcal {C}^{\alpha _\star }_{t_\varepsilon }} \int _0^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{\eta + \gamma }{2}} \,ds. \end{aligned}$$
That gives us, using the boundedness of \(\Vert u_\varepsilon \Vert _{\mathcal {C}_{t_\varepsilon }^{\alpha _\star }}\) and Lemma 7.1,
$$\begin{aligned} \mathbb {E}\Vert J_1 \Vert _{\mathcal {C}^{\gamma }_{t_\varepsilon }} \lesssim t_\varepsilon ^{1-\frac{\eta + \gamma }{2}} \varepsilon ^{\eta - \kappa }. \end{aligned}$$
(7.1)
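To see that (7.1) is consistent with the rate \(\varepsilon ^{\alpha _\star - \kappa }\) claimed in the proposition, note that the parameter \(\eta \in (0, \alpha _\star )\) is free, so one may take it close to \(\alpha _\star \) and relabel the small constant:

```latex
% Choose \eta = \alpha_\star - \kappa, which lies in (0, \alpha_\star) for \kappa small:
\mathbb{E} \| J_1 \|_{\mathcal{C}^{\gamma}_{t_\varepsilon}}
  \lesssim t_\varepsilon^{\,1 - \frac{\eta + \gamma}{2}}\,
    \varepsilon^{\eta - \kappa} \Big|_{\eta = \alpha_\star - \kappa}
  \lesssim \varepsilon^{\alpha_\star - 2\kappa} ,
```

and since \(\kappa > 0\) is arbitrary, \(2\kappa \) can be renamed \(\kappa \).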
A bound on \(J_2\) follows from Lemma 8.7 and the regularity of G,
$$\begin{aligned} \Vert J_2 \Vert _{\mathcal {C}^{\gamma }}\lesssim & {} \int _0^{t_\varepsilon } (t_\varepsilon -s)^{- \frac{\gamma }{2}} \Vert \mathcal {F}(\bar{u}(s)) - \mathcal {F}(u_\varepsilon (s)) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} \int _0^{t_\varepsilon } (t_\varepsilon -s)^{- \frac{\gamma }{2}} \Vert \bar{u}(s) - u_\varepsilon (s) \Vert _{\mathcal {C}^{0}} \,ds \nonumber \\\lesssim & {} t_\varepsilon ^{1 - \frac{\gamma }{2}} \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \Vert X - X_\varepsilon \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \Vert u^0 - u_{\varepsilon }^0 \Vert _{\mathcal {C}^{0}} + \varepsilon ^{\alpha _\star - \kappa }. \end{aligned}$$
(7.2)
Here, we have used the representation of \(\bar{u}\) via \(\bar{v}\) and the bound (6.11).
For the third term we use Lemma 8.8 with \(\lambda = \frac{1}{2} - \kappa \),
$$\begin{aligned} \Vert J_3 \Vert _{\mathcal {C}^{\gamma }_{t_\varepsilon }} \lesssim \varepsilon ^{\frac{1}{2} - \kappa } t_\varepsilon ^{1-\frac{1}{2}\left( \gamma +\frac{1}{2}\right) } \Vert \mathcal {F}_\varepsilon (u_\varepsilon ) \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} \lesssim \varepsilon ^{\frac{1}{2} - \kappa } t_\varepsilon ^{\frac{3}{4} - \frac{\gamma }{2}}, \end{aligned}$$
(7.3)
where we have used boundedness of the second-order iterated integral \(\mathbb {X}_\varepsilon \) and \(\Vert u_\varepsilon \Vert _{\mathcal {C}_{t_\varepsilon }^\alpha }\). Combining the estimates (7.1), (7.2) and (7.3) we obtain the claimed bound. \(\square \)

8 Estimates on rough terms

In this section we obtain bounds on the terms involving rough integrals. As usual, we will use the notation (5.2), which in view of Remark 5.1 means that all the quantities involved in the definition of \(\varrho _{K,\varepsilon }\) are bounded. Furthermore, let us define the quantity
$$\begin{aligned} \mathcal {D}_\varepsilon (t_\varepsilon )&:= \Vert X- X_\varepsilon \Vert _{\mathcal {C}^{0}_{t_\varepsilon }} + \vert \vert \vert \mathbf {X} - \mathbf {X}_\varepsilon \vert \vert \vert _{\alpha , t_\varepsilon } + \Vert \bar{v}-v_\varepsilon \Vert _{\mathcal {C}^{\alpha }_{t_\varepsilon }} \nonumber \\&\quad +\,\Vert \bar{v}-v_\varepsilon \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, t_\varepsilon }} + \Vert u^0 - u^0_\varepsilon \Vert _{\mathcal {C}^{\alpha }}, \end{aligned}$$
(8.1)
where the norm \(\vert \vert \vert \cdot \vert \vert \vert _{\alpha , t_\varepsilon }\) was introduced in (5.1).

The next lemma provides bounds on the rough integrals Z and \(Z_\varepsilon \) defined in (3.5) and (4.17) respectively.

Lemma 8.1

For \(t>0\), the following results hold:
$$\begin{aligned} \Vert Z(t_\varepsilon )\Vert _{\mathcal {C}^{\alpha _{\star }}}\lesssim & {} t_\varepsilon ^{-\frac{\alpha _{\star }}{2}}, \end{aligned}$$
(8.2)
$$\begin{aligned} Z(t_\varepsilon ) - Z_{\varepsilon }(t_\varepsilon )= & {} T_1(t_\varepsilon ) + T_2(t_\varepsilon ), \end{aligned}$$
(8.3)
where, for \(\kappa > 0\) small enough, the bounds
$$\begin{aligned} \Vert T_1(t_\varepsilon )\Vert _{\mathcal {C}^{\alpha }} \lesssim t_\varepsilon ^{\frac{\alpha -1}{2}} \left( \mathcal {D}_{\varepsilon }(t_\varepsilon ) + \varepsilon ^{\alpha _\star - \alpha - \kappa }\right) , \quad \Vert T_2(t_\varepsilon )\Vert _{\mathcal {C}^{\alpha _\star }} \lesssim \varepsilon ^{3\alpha _{\star }-1} t_\varepsilon ^{-\frac{\alpha _\star }{2}}, \end{aligned}$$
hold with \(\mathcal {D}_{\varepsilon }\) defined in (8.1).

Proof

Since \(\bar{u}(s) - X(s) \in \mathcal {C}^1\), for \(s \le t_\varepsilon \), the process \(Y_{ij}(s) = G_{ij}(\bar{u}(s))\) is controlled by the \(\alpha _\star \)-regular rough path \(\mathbf {X}(s)\) with the rough path derivative \(Y'_{ij}(s) = DG_{ij}(\bar{u}(s))\) and the remainder
$$\begin{aligned}&R_{Y_{ij}}(s; x,y) = DG_{ij}(\bar{u}(s,x)) \left( \bar{v}(s; x, y) + U(s; x, y) \right) \\&\quad + \int _0^1 \left( DG_{ij}(\lambda \bar{u}(s,y) + (1-\lambda ) \bar{u}(s,x)) - DG_{ij}(\bar{u}(s,x)) \right) \bar{u}(s; x, y)\, d\lambda , \end{aligned}$$
where we use the notation \(\bar{v}(s; x, y) = \bar{v}(s, y) - \bar{v}(s, x)\), and similarly for U and \(\bar{u}\). Here, by the rough path derivative we mean the projection of the controlled rough path onto \(({\mathbb {R}}^n)^*\) in Definition 2.2, and the remainder is the collection of all the processes \(R_Y^w\) from (2.4).
From the regularity assumptions for the function G and the processes \(\bar{u}\) and \(\bar{v}\), we obtain the bounds
$$\begin{aligned} \Vert Y_{ij}(s) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim 1,\quad \Vert Y'_{ij}(s) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim 1,\quad \Vert R_{Y_{ij}}(s) \Vert _{\mathcal {B}^{2 \alpha _\star }} \lesssim s^{-\frac{\alpha _{\star }}{2}}. \end{aligned}$$
(8.4)
The power of s in the last estimate comes from the bound \(\Vert U(s) \Vert _{\mathcal {C}^{2\alpha _\star }} \lesssim s^{-\frac{\alpha _{\star }}{2}}\), which is a consequence of Lemma 8.7. The estimate (8.2) follows from (2.8) and (8.4).
Similarly, for \(s \le t_\varepsilon \), the process \(Y_{\varepsilon , ij}(s) = G_{ij}(u_\varepsilon (s))\) is controlled by the \(\alpha _\star \)-regular rough path \(\mathbf {X}_\varepsilon (s)\) with the rough path derivative \(Y'_{\varepsilon , ij}(s) = DG_{ij}(u_\varepsilon (s))\) and the remainder \(R_{Y_{\varepsilon ,ij}}(s)\), such that the following bounds hold
$$\begin{aligned} \Vert Y_{\varepsilon , ij}(s) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim 1,\quad \Vert Y'_{\varepsilon , ij}(s) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim 1,\quad \Vert R_{Y_{\varepsilon , ij}}(s) \Vert _{\mathcal {B}^{2 \alpha _\star }} \lesssim s^{-\frac{\alpha _{\star }}{2}}. \end{aligned}$$
(8.5)
To prove the bound (8.3), we consider the processes \(\bar{u}(s)\) and \(u_\varepsilon (s)\) to be of Hölder regularity \(\alpha \). Then they are controlled by the \(\alpha \)-regular rough paths \(\mathbf {X}(s)\) and \(\mathbf {X}_\varepsilon (s)\) respectively. Hence, we can extend \(G_{ij}(\bar{u}(s))\) to the process \(\mathcal {G}_{ij}(s): \mathbb {T} \rightarrow \big (T^{(p-1)}\big ({\mathbb {R}}^n\big )\big )^*\) which is controlled by \(\mathbf {X}(s)\) as well and such that
$$\begin{aligned} \langle \mathcal {G}_{ij}(s,x), e_w \rangle = D^{w} G_{ij}(\bar{u}(s,x)), \end{aligned}$$
for \(w \in \mathcal {A}_{p-1}\). Then, as noted in Sect. 4.2, for every \(w \in \mathcal {A}_{p-1}\) the following expansion holds
$$\begin{aligned}&\langle \mathcal {G}_{ij}(s,y), e_w \rangle - \langle \mathcal {G}_{ij}(s,x), e_w \rangle \\&\quad = \sum _{\bar{w} \in \mathcal {A}_{p-|w|-1} \setminus \emptyset } C_{\bar{w}} \langle \mathcal {G}_{ij}(s,x), e_{\bar{w}} \otimes e_w \rangle \langle \mathbf {X}(s;x,y), e_{\bar{w}} \rangle + R_{\mathcal {G}_{ij}}^w(s; x,y). \end{aligned}$$
For any word \(w \in \mathcal {A}_{p-1}\), the assumptions on G and \(\bar{u}\) imply \(\Vert \langle \mathcal {G}_{ij}(s), e_w \rangle \Vert _{\mathcal {C}^{\alpha }} \lesssim 1\). Furthermore, from the argument of Sect. 4.2, it is not difficult to obtain the estimate on the remainder: \(\Vert R^w_{\mathcal {G}_{ij}}(s) \Vert _{\mathcal {B}^{(p - |w|) \alpha }} \lesssim s^{\frac{\alpha _{\star } - 1}{2}}\). The latter bound follows from \(|\bar{u}(s; x,y)_{\bar{w}}| \lesssim |y-x|^{(p-|w|) \alpha }\), for any word \(\bar{w}\) such that \(|\bar{w}| = p-|w|\), and
$$\begin{aligned}&| \bar{u}(s;x,y)_{\bar{w}} - X(s;x,y)_{\bar{w}} | \lesssim \left| \bar{u}(s;x,y) - X(s;x,y) \right| \\&\quad \lesssim |y-x| \left( \Vert \bar{v}(s)\Vert _{\mathcal {C}^{1}} + \Vert U(s)\Vert _{\mathcal {C}^{1}}\right) \lesssim |y-x| \left( 1 + s^{\frac{\alpha _{\star } - 1}{2}}\right) , \end{aligned}$$
for any word \(\bar{w} \in \mathcal {A}_{p-|w|-1} \setminus \{\emptyset \}\). Here, in the last line we have used the bound
$$\begin{aligned} \Vert U(s) \Vert _{\mathcal {C}^{1}} \lesssim s^{\frac{\alpha _\star - 1}{2}} \left( \Vert u^0 \Vert _{\mathcal {C}^{\alpha _\star }} + \Vert X(0) \Vert _{\mathcal {C}^{\alpha _\star }} \right) , \end{aligned}$$
which follows from Lemma 8.7.
In the same way the process \(G_{ij}(u_\varepsilon (s))\) can be extended to \(\mathcal {G}^\varepsilon _{ij}(s): \mathbb {T} \rightarrow \big (T^{(p-1)}\big ({\mathbb {R}}^n\big )\big )^*\) which is controlled by \(\mathbf {X}_\varepsilon (s)\). We denote the remainders by \(R_{\mathcal {G}^\varepsilon _{ij}}^w\). Furthermore, the corresponding bounds hold
$$\begin{aligned} \Vert \langle \mathcal {G}^\varepsilon _{ij}(s), e_w \rangle \Vert _{\mathcal {C}^{\alpha }} \lesssim 1,\quad \Vert R^w_{\mathcal {G}_{ij}^\varepsilon }(s) \Vert _{\mathcal {B}^{(p - |w|) \alpha }} \lesssim s^{\frac{\alpha _{\star } - 1}{2}}, \end{aligned}$$
for any word \(w \in \mathcal {A}_{p-1}\).
The following estimate follows from the regularity of the function G,
$$\begin{aligned}&\Vert \langle \mathcal {G}_{ij}(s) - \mathcal {G}^\varepsilon _{ij}(s), e_w \rangle \Vert _{\mathcal {C}^{\alpha }} \lesssim \Vert \bar{u}(s) - u_\varepsilon (s) \Vert _{\mathcal {C}^{\alpha }} \nonumber \\&\quad \lesssim \Vert X(s) - X_\varepsilon (s) \Vert _{\mathcal {C}^{\alpha }} + \Vert \bar{v}(s) - v_\varepsilon (s) \Vert _{\mathcal {C}^{\alpha }} + \Vert u^0 - u^0_\varepsilon \Vert _{\mathcal {C}^{\alpha }}, \end{aligned}$$
(8.6)
where \(w \in \mathcal {A}_{p-1}\). Furthermore, the following bound holds
$$\begin{aligned} |\bar{u}(s; x,y)_{\bar{w}} - u_\varepsilon (s; x,y)_{\bar{w}}| \lesssim |y-x|^{(p-|w|) \alpha } \Vert \bar{u}(s) - u_\varepsilon (s) \Vert _{\mathcal {C}^{\alpha }}, \end{aligned}$$
for a word \(\bar{w}\) such that \(|\bar{w}| = p-|w|\), and for any word \(\bar{w} \in \mathcal {A}_{p-|w|-1} \setminus \{\emptyset \}\) one has
$$\begin{aligned}&| \bar{u}(s;x,y)_{\bar{w}} - X(s;x,y)_{\bar{w}} - u_\varepsilon (s;x,y)_{\bar{w}} + X_\varepsilon (s;x,y)_{\bar{w}} | \\&\quad \lesssim \left| \bar{u}(s;x,y) - X(s;x,y) - u_\varepsilon (s;x,y) + X_\varepsilon (s;x,y) \right| \\&\quad \lesssim |y-x| \left( \Vert \bar{v}(s) - v_\varepsilon (s)\Vert _{\mathcal {C}^{1}} + \Vert U(s) - U_\varepsilon (s)\Vert _{\mathcal {C}^{1}}\right) \\&\quad \lesssim |y-x| s^{\frac{\alpha - 1}{2}} \left( \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, s}} + \Vert X(s) - X_\varepsilon (s) \Vert _{\mathcal {C}^{\alpha }} \right. \\&\qquad +\left. \Vert u^0 - u^0_\varepsilon \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\alpha _\star - \alpha - \kappa }\right) . \end{aligned}$$
Here, in the last line we have used the bound
$$\begin{aligned} \Vert U(s) - U_\varepsilon (s)\Vert _{\mathcal {C}^{1}}&\lesssim \Vert S_s (X(0) - X_\varepsilon (0) - u^0 + u^0_\varepsilon ) \Vert _{\mathcal {C}^{1}} \\&\quad +\, \Vert (S_s - S^{(\varepsilon )}_s) (X_\varepsilon (0) - u^0_\varepsilon )\Vert _{\mathcal {C}^{1}} \\&\lesssim s^{\frac{\alpha -1}{2}} \left( \Vert X(0) - X_\varepsilon (0) \Vert _{\mathcal {C}^{\alpha }} + \Vert u^0 - u^0_\varepsilon \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\alpha _\star - \alpha - \kappa } \right) , \end{aligned}$$
for any \(\kappa > 0\) sufficiently small, which follows from Lemmas 8.7 and 8.8. From these bounds and Sect. 4.2 we obtain
$$\begin{aligned} \Vert R^w_{\mathcal {G}_{ij}}(s) - R^w_{\mathcal {G}^\varepsilon _{ij}}(s) \Vert _{\mathcal {B}^{(p - |w|) \alpha }}\lesssim & {} s^{\frac{\alpha -1}{2}} \left( \Vert \bar{v} - v_\varepsilon \Vert _{\mathcal {C}^{1}_{(1-\alpha )/2, s}} + \Vert X - X_\varepsilon \Vert _{\mathcal {C}^{\alpha }_s} \right. \nonumber \\&+\,\left. \Vert u^0 - u^0_\varepsilon \Vert _{\mathcal {C}^{\alpha }} + \varepsilon ^{\alpha _\star - \alpha - \kappa }\right) . \end{aligned}$$
(8.7)
In order to prove (8.3), we define the processes \(Q_i^{\varepsilon }\) and \(T_i^{\varepsilon }\), where we have omitted, as usual, the sum over j. From (2.7), (8.5) and Definition 2.1 we obtain
$$\begin{aligned} \Vert Q_i^{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {B}^{3\alpha _\star }} \lesssim t_\varepsilon ^{-\frac{\alpha _\star }{2}}, \quad \Vert T_i^{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {B}^{3\alpha _\star }} \lesssim 1. \end{aligned}$$
(8.8)
Next, we can rewrite \(Z^i - Z^i_{\varepsilon }\) as the sum of the terms \(I_1, \ldots , I_5\) bounded below. Here, we have used the Fubini-type result proved in [20, Lemma 2.10].
To bound \(I_1\) we apply (2.9) and use the bounds (8.6), (8.7),
$$\begin{aligned} \Vert I_{1}(t_\varepsilon ) \Vert _{\mathcal {C}^\alpha } \lesssim t_\varepsilon ^{\frac{\alpha -1}{2}} \left( \mathcal {D}_{\varepsilon }(t_\varepsilon ) + \varepsilon ^{\alpha _\star - \alpha - \kappa } \right) , \end{aligned}$$
where \(\mathcal {D}_{\varepsilon }\) is defined in (8.1). It follows from (8.8) that
$$\begin{aligned} \Vert I_{4}(t_\varepsilon ) \Vert _{\mathcal {C}^1} \lesssim \int _{{\mathbb {R}}} |z|^{3\alpha _\star } \mu (dz) \,\varepsilon ^{3\alpha _\star - 1} \Vert Q_i^{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {B}^{3\alpha _\star }} \lesssim \varepsilon ^{3\alpha _\star - 1} t_\varepsilon ^{-\frac{\alpha _\star }{2}}. \end{aligned}$$
In the same way from the second bound in (8.8) we derive
$$\begin{aligned} \Vert I_{5}(t_\varepsilon ) \Vert _{\mathcal {C}^1} \lesssim \int _{{\mathbb {R}}} |z|^{3\alpha _\star } \mu (dz)\, \varepsilon ^{3\alpha _\star - 1} \Vert T_i^{\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {B}^{3\alpha _\star }} \lesssim \varepsilon ^{3\alpha _\star - 1}. \end{aligned}$$
To bound the integral \(I_3\) let us define the process \(u_{x,z,\varepsilon }(t_\varepsilon ,y) := u_\varepsilon (t_\varepsilon , \varepsilon y - \varepsilon z - x)\) and the rough path \(\mathbf {X}_{x,z,\varepsilon }(t_\varepsilon ;y, \bar{y}) := \mathbf {X}_{\varepsilon }(t_\varepsilon ; \varepsilon y - \varepsilon z - x, \varepsilon \bar{y} - \varepsilon z - x)\). Then we can perform the change of variables \(\bar{y} = (y - \varepsilon z - x)/\varepsilon \) in the integral \(I_3\) and obtain an expression in which \(X_{x,z,\varepsilon }(t_\varepsilon , \bar{y}) - X_{x,z,\varepsilon }(t_\varepsilon , y)\) is the projection of \(\mathbf {X}_{x,z,\varepsilon }(t_\varepsilon ;y, \bar{y})\) onto \({\mathbb {R}}^n\) and
$$\begin{aligned} Y_{x,z,\varepsilon }(t_\varepsilon ,\bar{y}) := \bar{y} G_{i}(u_{x,z,\varepsilon }(t_\varepsilon ,\bar{y})). \end{aligned}$$
Taking into account the a priori bounds on \(u_\varepsilon \), we obtain from [18, Lemma 2.2] that \(Y_{x,z,\varepsilon }(t_\varepsilon )\) is controlled by \(\mathbf {X}_{x,z,\varepsilon }(t_\varepsilon )\) with the rough path derivative
$$\begin{aligned} Y'_{x,z,\varepsilon }(t_\varepsilon ,\bar{y}) := \bar{y} D G_{i}(u_{x,z,\varepsilon }(t_\varepsilon ,\bar{y}))\, \end{aligned}$$
and the remainder \(R_{Y_{x,z,\varepsilon }}(t_\varepsilon )\) such that
$$\begin{aligned} \Vert Y_{x,z,\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim 1, \quad \Vert Y'_{x,z,\varepsilon }(t_\varepsilon ) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim 1, \quad \Vert R_{Y_{x,z,\varepsilon }}(t_\varepsilon ) \Vert _{\mathcal {B}^{2\alpha _\star }} \lesssim t_\varepsilon ^{-\frac{\alpha _\star }{2}}. \end{aligned}$$
Hence, Proposition 2.3 together with the simple estimate \(\vert \vert \vert \mathbf {X}_{x,z,\varepsilon }(t_\varepsilon ) \vert \vert \vert _{\alpha _\star } \le \varepsilon ^{\alpha _\star } \vert \vert \vert \mathbf {X}_{\varepsilon }(t_\varepsilon ) \vert \vert \vert _{\alpha _\star }\) yields the bound \(\Vert I_3(t_\varepsilon ) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim \varepsilon ^{\alpha _\star } t_\varepsilon ^{-\frac{\alpha _\star }{2}}\). Here we have also used the bound on the \(\alpha _\star \)th moment of the measure \(\mu \). Similarly, we can obtain the bound \(\Vert I_2(t_\varepsilon ) \Vert _{\mathcal {C}^{\alpha _\star }} \lesssim \varepsilon ^{\alpha _\star } t_\varepsilon ^{-\frac{\alpha _\star }{2}}\).

Now we set \(T_1=I_1\) and \(T_2=I_2 + I_3 + I_4 + I_5\) and obtain the claim. \(\square \)

In the following proposition we prove a bound on the difference between \(\Xi ^{\bar{v}}\) and \(\Xi ^{v_\varepsilon }_{\varepsilon }\), defined in (3.7) and (4.19) respectively.

Proposition 8.2

For \(\gamma \in (0,1]\) and \(\kappa > 0\) small enough we have the estimate
$$\begin{aligned} \Vert \Xi ^{\bar{v}}(t_\varepsilon ) - \Xi ^{v_\varepsilon }_{\varepsilon }(t_\varepsilon )\Vert _{\mathcal {C}^{\gamma }} \lesssim t_\varepsilon ^{\alpha - \frac{1}{2}(\gamma + \kappa )} \left( \mathcal {D}_{\varepsilon }(t_\varepsilon ) + \varepsilon ^{\alpha _\star - \alpha - \kappa }\right) , \end{aligned}$$
where \(\mathcal {D}_{\varepsilon }\) is defined in (8.1).

Proof

We can rewrite \(\Xi ^{\bar{v}} - \Xi ^{v_\varepsilon }_{\varepsilon }\) in the following way
$$\begin{aligned} \Xi ^{\bar{v}}(t_\varepsilon ) - \Xi ^{v_\varepsilon }_{\varepsilon }(t_\varepsilon )= & {} \int _{0}^{t_\varepsilon } \partial _{x}( S_{t_\varepsilon -s} - S^{(\varepsilon )}_{t_\varepsilon -s}) Z(s) \,ds + \int _{0}^{t_\varepsilon } \partial _{x} S^{(\varepsilon )}_{t_\varepsilon -s} ( Z(s) - Z_{\varepsilon }(s)) \,ds \\=: & {} J_{1} + J_{2}. \end{aligned}$$
By (8.2) and Lemma 8.8 with \(\lambda = \alpha _\star - \alpha - \kappa \) we obtain for any \(\kappa > 0\) small enough
$$\begin{aligned} \Vert J_{1}\Vert _{\mathcal {C}^{\gamma }}\lesssim & {} \int _{0}^{t_\varepsilon } \Vert S_{t_\varepsilon -s} - S^{(\varepsilon )}_{t_\varepsilon -s}\Vert _{\mathcal {C}^{\alpha _{\star }} \rightarrow \mathcal {C}^{1 + \gamma }} \Vert Z(s)\Vert _{\mathcal {C}^{\alpha _{\star }}} \,ds \nonumber \\\lesssim & {} \varepsilon ^{\alpha _\star - \alpha - \kappa } \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{1}{2}(1+\gamma - \alpha )} s^{-\frac{\alpha _{\star }}{2}} \,ds \lesssim t_\varepsilon ^{\frac{1}{2}(1 - \gamma + \alpha - \alpha _\star )} \varepsilon ^{\alpha _\star - \alpha - \kappa }.\qquad \quad \end{aligned}$$
(8.9)
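The last step in (8.9) uses the standard Beta-function identity for convolution-type integrals: for exponents \(a, b < 1\) (so that both singularities are integrable), substituting \(s = t_\varepsilon r\) gives
$$\begin{aligned} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-a} s^{-b} \,ds = t_\varepsilon ^{1-a-b} \int _{0}^{1} (1-r)^{-a} r^{-b} \,dr = B(1-a, 1-b)\, t_\varepsilon ^{1-a-b}. \end{aligned}$$
Applied with \(a = \frac{1}{2}(1+\gamma - \alpha )\) and \(b = \frac{\alpha _\star }{2}\), this yields the exponent \(1 - \frac{1}{2}(1+\gamma - \alpha ) - \frac{\alpha _\star }{2} = \frac{1}{2}(1 - \gamma + \alpha - \alpha _\star )\), as claimed.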
The second term can be estimated using Lemma 8.9 and (8.3) by
$$\begin{aligned} \Vert J_{2}\Vert _{\mathcal {C}^{\gamma }}\lesssim & {} \int _{0}^{t_\varepsilon } \Vert S^{(\varepsilon )}_{t_\varepsilon -s}\Vert _{\mathcal {C}^{\alpha } \rightarrow \mathcal {C}^{1 + \gamma }}\Vert T_1(s) \Vert _{\mathcal {C}^{\alpha }} \,ds + \int _{0}^{t_\varepsilon } \Vert S^{(\varepsilon )}_{t_\varepsilon -s}\Vert _{\mathcal {C}^{\alpha _\star } \rightarrow \mathcal {C}^{1 + \gamma }} \Vert T_2 (s) \Vert _{\mathcal {C}^{\alpha _\star }} \,ds \nonumber \\\lesssim & {} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{1}{2}(1+\gamma - \alpha + \kappa )} s^{\frac{\alpha -1}{2}} \left( \mathcal {D}_{\varepsilon }(s) + \varepsilon ^{\alpha _\star - \alpha - \kappa }\right) ds \nonumber \\&+\,\varepsilon ^{3\alpha _{\star }-1} \int _{0}^{t_\varepsilon } (t_\varepsilon -s)^{-\frac{1}{2}(1+\gamma - \alpha _\star + \kappa )} s^{-\frac{\alpha _\star }{2}} \,ds \nonumber \\\lesssim & {} t_\varepsilon ^{\alpha - \frac{1}{2}(\gamma + \kappa )} \left( \mathcal {D}_{\varepsilon }(t_\varepsilon ) + \varepsilon ^{\alpha _\star - \alpha - \kappa }\right) + \varepsilon ^{3\alpha _{\star }-1} t_\varepsilon ^{\frac{1}{2}(1 - \gamma - \kappa )}. \end{aligned}$$
(8.10)
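As a quick sanity check of the Beta-type integral bounds entering (8.9) and (8.10), one can verify numerically that \(\int _0^t (t-s)^{-a} s^{-b}\,ds = B(1-a,1-b)\, t^{1-a-b}\) for \(0 \le a, b < 1\). The concrete exponents below (corresponding to illustrative values \(\gamma = 0.5\), \(\alpha = 0.3\), \(\alpha _\star = 0.4\)) are hypothetical choices for the demonstration, not values fixed by the article.

```python
import math

def conv_integral(t, a, b, n=400_000):
    """Midpoint-rule approximation of the singular convolution integral
    int_0^t (t - s)^(-a) * s^(-b) ds, which is finite for a, b < 1
    (the midpoint rule avoids evaluating at the singular endpoints)."""
    h = t / n
    return sum((t - (k + 0.5) * h) ** (-a) * ((k + 0.5) * h) ** (-b)
               for k in range(n)) * h

def beta(x, y):
    # Euler Beta function expressed via the Gamma function
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

# Hypothetical exponents: a = (1 + gamma - alpha)/2 = 0.6, b = alpha_star/2 = 0.2
a, b, t = 0.6, 0.2, 0.7
numeric = conv_integral(t, a, b)
closed_form = beta(1 - a, 1 - b) * t ** (1 - a - b)
print(abs(numeric / closed_form - 1) < 0.02)
```

The midpoint rule converges slowly near the integrable singularities, hence the generous tolerance; the point is only to confirm the \(t^{1-a-b}\) scaling.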
Combining (8.9) and (8.10) we obtain the claimed bound. \(\square \)

Acknowledgments

We would like to thank H. Weber for numerous discussions of this and related problems. MH's research was funded by the Philip Leverhulme Trust through a leadership award, by the Royal Society through a research merit award, and by the ERC through a consolidator award.

References

  1. Alabert, A., Gyöngy, I.: On numerical approximation of stochastic Burgers' equation. In: Kabanov, Yu., Liptser, R., Stoyanov, J. (eds.) From Stochastic Calculus to Mathematical Finance, pp. 1–15. Springer, Berlin (2006)
  2. Bahouri, H., Chemin, J.-Y., Danchin, R.: Fourier Analysis and Nonlinear Partial Differential Equations. Grundlehren der Mathematischen Wissenschaften, vol. 343. Springer, Heidelberg (2011). doi:10.1007/978-3-642-16830-7
  3. Blömker, D., Jentzen, A.: Galerkin approximations for the stochastic Burgers equation. SIAM J. Numer. Anal. 51(1), 694–715 (2013)
  4. Chen, K.-T.: Iterated integrals and exponential homomorphisms. Proc. Lond. Math. Soc. (3) 4, 502–512 (1954)
  5. Davie, A.M., Gaines, J.G.: Convergence of numerical schemes for the solution of parabolic stochastic partial differential equations. Math. Comput. 70(233), 121–134 (2001)
  6. Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces. London Mathematical Society Lecture Note Series, vol. 293. Cambridge University Press, Cambridge (2002). doi:10.1017/CBO9780511543210
  7. Da Prato, G., Debussche, A., Temam, R.: Stochastic Burgers' equation. NoDEA Nonlinear Differ. Equ. Appl. 1(4), 389–402 (1994)
  8. Friz, P.K., Hairer, M.: A Course on Rough Paths. With an Introduction to Regularity Structures. Universitext. Springer, Cham (2014). doi:10.1007/978-3-319-08332-2
  9. Friz, P., Victoir, N.: Multidimensional Stochastic Processes as Rough Paths. Theory and Applications. Cambridge Studies in Advanced Mathematics, vol. 120. Cambridge University Press, Cambridge (2010)
  10. Friz, P.K., Gess, B., Gulisashvili, A., Riedel, S.: Jain–Monrad criterion for rough paths and applications (2013). arXiv:1307.3460v2
  11. Gubinelli, M.: Controlling rough paths. J. Funct. Anal. 216(1), 86–140 (2004)
  12. Gubinelli, M.: Ramification of rough paths. J. Differ. Equ. 248(4), 693–721 (2010)
  13. Gubinelli, M., Imkeller, P., Perkowski, N.: Paracontrolled distributions and singular PDEs (2012). arXiv:1210.2684
  14. Gyöngy, I.: Existence and uniqueness results for semilinear stochastic partial differential equations. Stoch. Process. Appl. 73(2), 271–299 (1998)
  15. Gyöngy, I.: Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise. I. Potential Anal. 9(1), 1–25 (1998)
  16. Gyöngy, I.: Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise. II. Potential Anal. 11(1), 1–37 (1999)
  17. Hairer, M.: An introduction to stochastic PDEs (2009). arXiv:0907.4178
  18. Hairer, M.: Rough stochastic PDEs. Commun. Pure Appl. Math. 64(11), 1547–1585 (2011)
  19. Hairer, M., Maas, J.: A spatial version of the Itô–Stratonovich correction. Ann. Probab. 40(4), 1675–1714 (2012)
  20. Hairer, M., Weber, H.: Rough Burgers-like equations with multiplicative noise. Probab. Theory Relat. Fields 155(1–2), 71–126 (2013)
  21. Hairer, M., Maas, J., Weber, H.: Approximating rough stochastic PDEs. Commun. Pure Appl. Math. 67(5), 776–870 (2014)
  22. Kallenberg, O.: Foundations of Modern Probability, 2nd edn. Probability and Its Applications. Springer, New York (2002)
  23. Lyons, T.J.: Differential equations driven by rough signals. Rev. Mat. Iberoam. 14(2), 215–310 (1998)
  24. Lyons, T., Qian, Z.: System Control and Rough Paths. Oxford Mathematical Monographs. Oxford University Press, Oxford (2002). doi:10.1093/acprof:oso/9780198506485.001.0001
  25. Lyons, T., Caruana, M., Lévy, T.: Differential Equations Driven by Rough Paths. Lecture Notes in Mathematics, vol. 1908. Springer, Berlin (2007)
  26. Nelson, E.: The free Markoff field. J. Funct. Anal. 12, 211–227 (1973)
  27. Reutenauer, C.: Free Lie Algebras. London Mathematical Society Monographs. New Series, vol. 7. The Clarendon Press, Oxford University Press, New York (1993)
  28. Teljakovskiĭ, S.A.: A certain sufficient condition of Sidon for the integrability of trigonometric series. Mat. Zametki 14, 317–328 (1973)
  29. Walsh, J.B.: An introduction to stochastic partial differential equations. In: École d'été de probabilités de Saint-Flour, XIV—1984. Lecture Notes in Mathematics, vol. 1180, pp. 265–439. Springer, Berlin (1986)
  30. Young, L.C.: An inequality of the Hölder type, connected with Stieltjes integration. Acta Math. 67(1), 251–282 (1936)

Copyright information

© The Author(s) 2015

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Mathematics Department, University of Warwick, Coventry, UK
