## 1 Introduction

Regularization by noise is the study of the potentially regularizing effect of irregular paths or stochastic processes on a priori ill-posed Ordinary Differential Equations (ODEs) or Partial Differential Equations (PDEs). For illustration, consider the differential equation

\begin{aligned} \mathop {}\!\textrm{d}x_t = b(x_t)\mathop {}\!\textrm{d}t +\mathop {}\!\textrm{d}z_t, \quad t \ge 0, \end{aligned}
(1.1)

where b is a non-linear function and z is a continuous path. Zvonkin [37] initially observed that when z is a Brownian motion, and the above equation is interpreted as a Stochastic Differential Equation (SDE), then strong existence and uniqueness hold even when b is merely bounded and measurable. This is in contrast to the classical theory of ODEs (when $$z\equiv 0$$ in (1.1)), where one typically requires that b is Lipschitz (or of similar regularity) in order to guarantee uniqueness, see e.g. [12]. Similar regularization by noise phenomena were proved by Davie in [13] where, in contrast to [37], uniqueness of solutions to (1.1) was established in a path-by-path manner when b is a bounded and measurable function and z is a continuous path sampled from the law of the Brownian motion.

We can therefore think of z as a process which might provide a regularizing effect on the drift coefficient b, such that a unique solution can be shown to exist even when existence and/or uniqueness fails in the classical setting ($$z\equiv 0$$). Since its initiation in [37], this field of study has seen rapid development, and investigations into the regularizing effect of various stochastic processes now constitute a large field of research.

There is typically a clear distinction in the approach to proving such effects. The traditional approach is based on classical tools from stochastic analysis and probability theory, and the solutions investigated are understood in the (probabilistic) strong or weak sense, see e.g. [15]. More recently, by using tools inspired by the theory of rough paths, much progress has been made towards understanding the path-by-path or pathwise regularization by noise effect.

The idea is first to identify a class of stochastic processes whose paths provide the desired regularization effect, and then solve the SDE (1.1) in a pathwise manner, see e.g. [10, 20, 25] for some of the recent works in this direction. With the approach presented in these papers, the authors are able to give meaning to and prove pathwise wellposedness of equations of the form (1.1) even in the case when b is truly a distribution (in the sense of generalized functions). A particularly interesting feature of this approach is the apparent connection between the regularization by noise effect and the regularity of the local time associated to the noise source in the equation.

Regularization by noise has also been investigated extensively in the context of SPDEs. While there are certainly several papers studying the regularization by noise effect for parabolic SPDEs both from a probabilistic and pathwise perspective, see e.g. [29,30,31], and more recently [2, 6, 11], we will discuss here some of the development for regularization by noise for hyperbolic SPDEs, more closely related to the equations considered in this article.

Motivated by the pathwise techniques used for regularization by noise in [10, 20, 25], we will in this article extend the aforementioned techniques in order to prove pathwise regularization by noise results for stochastic differential equations on the plane of the form

\begin{aligned} x_t =\xi _t +\int _0^t b(x_s)\mathop {}\!\textrm{d}s + w_t, \quad t=(t_1,t_2)\in [0,T]^2, \end{aligned}
(1.2)

where $$w:[0,T]^2\rightarrow \mathbb {R}^d$$ is an additive continuous field, and we use the notation $$\int _0^t b(x_s)\mathop {}\!\textrm{d}s:=\int _0^{t_1}\int _0^{t_2} b(x_{s_1,s_2})\mathop {}\!\textrm{d}s_1\mathop {}\!\textrm{d}s_2$$. The term $$\xi :[0,T]^2\rightarrow \mathbb {R}^d$$ is a continuous field representing the boundary conditions of the equation. In contrast to the probabilistic methodology used for regularization by noise for the stochastic heat equation in [2], based on the stochastic sewing lemma, our analysis of regularization will be done through an analysis of the regularity of the local time associated to the additive continuous field w, similarly to what is done for ODEs in [24, 25]. In combination with an extension of the theory of non-linear Young equations to the plane, this allows for a purely pathwise analysis of the regularization by noise phenomenon for a class of hyperbolic equations. When considering stochastic equations, this methodology can be interpreted in the spirit of rough paths theory: probabilistic considerations are only needed in order to prove pathwise regularity of the local time, and the analysis of the equation itself is done purely analytically. While space-time regularity of various stochastic processes has recently been investigated extensively in several articles [19, 24, 25], a similar analysis for stochastic fields has not yet received equal attention. In particular, with respect to our framework, only partial results in this direction are known (see e.g. [21] and the discussion in Remark 24 explaining why such results are not sufficient in our context). In this article we focus on the analytical step of the pathwise regularization by noise program described above, while also providing new probabilistic space-time regularity estimates for the fractional Brownian sheet which allow us to employ the analytic machinery developed (see Theorem 31).
Let us already mention at this point, however, that we do not expect this probabilistic result to be optimal, and further research in this direction seems to be in order (see also the final Sect. 6 for a brief discussion of potential approaches in this context).

Before motivating our approach in more detail, observe that the integral equation (1.2) can be seen as the integrated version of the so-called Goursat partial differential equation

\begin{aligned} \frac{\partial ^2}{\partial t_1\partial t_2 } x_{t} =b(x_t)+\frac{\partial ^2}{\partial t_1\partial t_2 }w_t, \quad t=(t_1,t_2)\in [0,T]^2, \end{aligned}

with the boundary conditions $$x_{(0,t_2)}=\xi _{(0,t_2)}$$, $$x_{(t_1,0)}=\xi _{(t_1,0)}$$ and $$\frac{\partial ^2}{\partial t_1\partial t_2 } \xi =0$$, and w is zero on the boundary of $$[0,T]^2$$ (i.e. $$w_{0,t_2}=w_{t_1,0}=0$$ for all $$t\in [0,T]^2$$). This hyperbolic equation is fundamentally linked with the stochastic wave equation, which will be illustrated in detail below, see Theorem 4. Furthermore, we see the regularization by noise problem for the integral equation in (1.2) as a first step in order to prove regularization by noise for more complicated SPDEs driven by a stochastic field, see in particular Sect. 6 for a discussion of further development and open problems.

The integral equation in (1.2) has been extensively studied from a probabilistic point of view, mostly in the setting where w is a Brownian sheet, although other processes have also been considered. Specifically, in [35, 36] strong existence and pathwise uniqueness were proved for solutions to equations of the form

\begin{aligned} x_t=\xi _t + \int _0^t b(x_s)\mathop {}\!\textrm{d}s + \int _0^t \sigma (x_s)\mathop {}\!\textrm{d}w_s, \quad t\in [0,T]^2, \end{aligned}

under the assumption that both b and $$\sigma$$ are Lipschitz continuous and of linear growth, and w is a Brownian sheet. The same author proved existence of weak solutions to the above equation when b is merely continuous, under a certain growth condition and a condition on the sixth moment of the boundary process $$\xi$$. Strong existence and uniqueness have later been obtained in [14] when the drift b is bounded and nondecreasing, and w is a rough fractional Brownian sheet (i.e. with Hurst parameters $$H_1,H_2\le \frac{1}{2}$$, see Sect. 4 for further information about the fractional Brownian sheet).

Based on a multi-parameter version of the sewing lemma constructed in [23], we will in this article extend the framework of non-linear Young equations used in [10, 20, 25] (see also [17] for a complete overview in the one-parameter case) to two-parameter processes. Given a function $$A:[0,T]^2\times \mathbb {R}^d \rightarrow \mathbb {R}^n$$ which is sufficiently regular in both time and spatial arguments, and a sufficiently regular path $$y:[0,T]^2\rightarrow \mathbb {R}^d$$, one can then construct a nonlinear Young integral on the plane of the form

\begin{aligned} \int _s^t A(\mathop {}\!\textrm{d}r,y_r)=\lim _{|\mathcal {P}|\rightarrow 0} \sum _{ [u,v]\in \mathcal {P}} \square _{u,v} A(\cdot , y_u), \end{aligned}

where $$\mathcal {P}$$ is a partition of the rectangle $$[s_1,t_1]\times [s_2,t_2]$$ consisting of rectangles of the form $$[u,v]=[u_1,v_1]\times [u_2,v_2]$$ for $$[u_i,v_i]\subset [s_i,t_i]$$ with $$i=1,2$$, and the limit is taken as the mesh of the partition goes to zero. The operator $$\square$$ denotes the rectangular increment and is defined for $$s,t\in [0,T]^2$$ by

\begin{aligned} \square _{s,t} f: =f(t_1,t_2)-f(t_1,s_2)-f(s_1,t_2)+f(s_1,s_2), \end{aligned}

for any function f on the plane $$[0,T]^2$$. With the goal of proving pathwise existence and uniqueness of (1.2) when b is a distribution, a crucial step will be to give meaning to the integral term. By setting $$\theta =x-w$$ we observe that, formally, $$\theta$$ solves the equation

\begin{aligned} \theta _t=\xi _t+\int _0^t b(\theta _s+w_s)\mathop {}\!\textrm{d}s. \end{aligned}
(1.3)

The integral appearing on the right hand side may be easier to handle due to its connection with the local time associated to the path w. Indeed, recall that the local time formula tells us that for each $$x\in \mathbb {R}^d$$ we have

\begin{aligned} \int _s^t b(x+w_r)\mathop {}\!\textrm{d}r = (b*\square _{s,t} L^{-w})(x), \end{aligned}

where $$*$$ denotes the usual convolution. We then observe that by Young’s convolution inequality in Besov spaces, the regularity of the mapping $$x\mapsto \int _s^t b(x+w_r)\mathop {}\!\textrm{d}r$$ is given as a sum of the spatial regularity exponents of b and $$L^{-w}$$. Thus, if $$L^{-w}$$ is a sufficiently smooth function, the convolution $$b*L^{-w}$$ may be a differentiable function, even when b is a distribution.
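
This regularity count can be made schematic. By Young's convolution inequality in Besov spaces (see e.g. [4]), with conjugate exponents $$\frac{1}{p}+\frac{1}{q}=1$$ one has, roughly,

\begin{aligned} \Vert b*\square _{s,t} L^{-w}\Vert _{B^{\zeta +\alpha }_{\infty ,\infty }}\lesssim \Vert b\Vert _{B^{\zeta }_{p,p}}\Vert \square _{s,t} L^{-w}\Vert _{B^{\alpha }_{q,q}}, \end{aligned}

so that even for $$\zeta <0$$ (a distributional drift) the averaged field $$x\mapsto \int _s^t b(x+w_r)\mathop {}\!\textrm{d}r$$ is of class $$C^{2+\eta }$$ as soon as $$\alpha +\zeta \ge 2+\eta$$; this is precisely the balance appearing in the first condition of Theorem 1 below.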

The nonlinear Young integral can therefore be used to give meaning to the integral appearing in (1.3) by setting $$A(t,x)=b*L^{-w}_t(x)$$. If $$\theta :[0,T]^2\rightarrow \mathbb {R}^d$$ is a sufficiently regular path, we then define

\begin{aligned} \int _0^t b(\theta _s+w_s)\mathop {}\!\textrm{d}s=\int _0^t (b*L^{-w}) (\mathop {}\!\textrm{d}s,\theta _s)=\lim _{|\mathcal {P}|\rightarrow 0} \sum _{[u,v]\in \mathcal {P}} b*\square _{u,v} L^{-w}(\theta _u).\qquad \end{aligned}
(1.4)

In subsequent sections we also show that this integral coincides with the classical Riemann integral whenever b is a continuous function. Existence and uniqueness for (1.3) are then guaranteed under sufficient regularity conditions on b and the local time $$L^{-w}$$. We illustrate this by highlighting one of the main results of this article, after some notational definitions.

Let E be a Banach space. For $$\gamma \in (0,1)^2$$ we denote by $$C^\gamma ([0,T]^2; E)$$ the set of E-valued two-parameter jointly $$\gamma$$-Hölder continuous functions, introduced in Definition 7. We denote the usual Besov spaces by $$B_{p,q}^\alpha (\mathbb {R}^d;\mathbb {R}^n)$$, see Sect. 2 for their definition. The first main result of the current paper can be stated as follows, see Theorem 28 for the precise formulation.

### Theorem 1

Let $$T>0$$. Consider parameters $$\alpha ,\zeta \in \mathbb {R}$$, $$\gamma =(\gamma _1,\gamma _2)\in (\frac{1}{2},1)^2$$ and $$\eta =(\eta _1,\eta _2)\in (0,1]^2$$ such that for $$i=1,2$$ the following two conditions hold

\begin{aligned} \alpha +\zeta \ge 2+\eta _i,\quad \textrm{and} \quad (1+\eta _i)\gamma _i>1. \end{aligned}

Let $$p,q\in [1,\infty ]$$ with $$\frac{1}{p}+\frac{1}{q}=1$$, and suppose that $$w\in C([0,T]^2;\mathbb {R}^d)$$ has an associated local time $$L^{-w}\in C^\gamma ([0,T]^2; B^\alpha _{q,q}(\mathbb {R}^d;\mathbb {R}^d))$$. Then for every $$\xi \in C^\gamma ([0,T]^2;\mathbb {R}^d)$$, and any $$b\in B^\zeta _{p,p}(\mathbb {R}^d;\mathbb {R}^d)$$ there exists a unique solution $$x\in C([0,T]^2;\mathbb {R}^d)$$ to Eq. (1.2). More precisely, there exists a $$\theta \in C^\gamma ([0,T]^2;\mathbb {R}^d)$$ with the property that $$x=w+\theta$$, and $$\theta$$ satisfies

\begin{aligned} \theta _t=\xi _t+\int _0^t (b*L^{-w})(\mathop {}\!\textrm{d}s,\theta _s), \quad t\in [0,T]^2, \end{aligned}

where the integral is interpreted in the nonlinear Young sense, as in (1.4).
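
To give one concrete (and purely illustrative) choice of parameters: suppose the local time satisfies $$L^{-w}\in C^\gamma ([0,T]^2;B^{3}_{q,q}(\mathbb {R}^d;\mathbb {R}^d))$$ with $$\gamma _1,\gamma _2\in (\frac{3}{4},1)$$, and take $$\eta _1=\eta _2=\frac{1}{2}$$. Then the two conditions above read

\begin{aligned} 3+\zeta \ge 2+\tfrac{1}{2} \iff \zeta \ge -\tfrac{1}{2}, \quad \textrm{and}\quad \big (1+\tfrac{1}{2}\big )\gamma _i>1 \iff \gamma _i>\tfrac{2}{3}, \end{aligned}

so that any drift $$b\in B^{-1/2}_{p,p}(\mathbb {R}^d;\mathbb {R}^d)$$, a genuine distribution, is admissible. Which pairs $$(\alpha ,\gamma )$$ are actually attainable for a given noise is the subject of Theorem 3 below.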

### Remark 2

Note that uniqueness then holds in the class of functions of the form $$x=w+\theta$$ with $$\theta \in C^\gamma$$ satisfying the nonlinear Young equation stated in Theorem 1.

As is clear from the above theorem, a second crucial ingredient in our study of regularization by noise in the plane consists of regularity estimates for the local time associated with stochastic fields. Our second main result can be stated as follows (see Theorem 31).

### Theorem 3

Let $$w:[0,T]^2\rightarrow \mathbb {R}^d$$ be a fractional Brownian sheet of Hurst parameter $$H=(H_1, H_2)$$ on $$(\Omega , \mathcal {F}, \mathbb {P})$$. Suppose that $$\lambda$$ satisfies

\begin{aligned} \lambda <\frac{1}{2(H_1\vee H_2)}-\frac{d}{2}. \end{aligned}

Then for almost every $$\omega \in \Omega$$, w admits a local time L such that for $$\gamma _1\in (1/2, 1-(\lambda +\frac{d}{2})H_1)$$ and $$\gamma _2\in (1/2, 1-(\lambda +\frac{d}{2})H_2)$$

\begin{aligned} \left\Vert \square _{s,t} L\right\Vert _{H^\lambda _x}\lesssim (t_1-s_1)^{\gamma _1}(t_2-s_2)^{\gamma _2}. \end{aligned}

Combining Theorems 1 and 3, we immediately obtain a regularization by noise result for stochastic differential equations in the plane perturbed by an additive fractional Brownian sheet.

As a further interesting application of Theorem 1, we also study the Goursat boundary regularization of the wave equation with singular non-linearities. More precisely, for a distributional non-linearity h, we study the problem

\begin{aligned} \begin{aligned} \left( \frac{\partial ^2}{\partial x^2}- \frac{\partial ^2}{\partial y^2}\right) u=h(u(x,y)), \end{aligned} \end{aligned}
(1.5)

for $$(x, y)\in R_{\frac{\pi }{4}}\circ [0,T]^2$$, subject to the Goursat boundary conditions

\begin{aligned} \begin{aligned} u(x,y)&=\beta ^1(y) \qquad \text {if}\ y=x\\ u(x,y)&=\beta ^2(y) \qquad \text {if}\ y=-x. \end{aligned} \end{aligned}
(1.6)

Here $$R_{\frac{\pi }{4}}$$ denotes the rotation operator of the plane by $$\pi /4$$.

The wave equation in (1.5) with random boundary conditions (1.6) arises in the literature on the splitting method, used to construct a non-continuous approximation of the solution of a stochastic Goursat problem, see e.g. [1]. Thus, in our understanding, the analysis of this problem will help in applying the splitting method to study the Goursat problem with a distributional non-linearity perturbed by a suitable fractional Brownian sheet. However, when h is a genuine distribution, it is unclear even how to define a solution to this problem, let alone prove its well-posedness, using classical methods.

To circumvent this issue, the main observation in our analysis is that the one-dimensional wave equation (1.5) with the Goursat boundary condition (1.6) prescribed above can be transformed into a Goursat PDE accessible by Theorem 1, as shown in Sect. 5. In particular, assuming $$\beta ^1, \beta ^2$$ to be sufficiently regularizing, in the sense that they admit sufficiently regular local times, Theorem 1 allows us to establish existence and uniqueness for the problem

\begin{aligned} \psi _t=-2\int _0^t (h*L)(ds, \psi _s), \end{aligned}
(1.7)

where $$L_t(x)=L^{-\beta ^1(\cdot /\sqrt{2})}*L^{-\beta ^2(\cdot /\sqrt{2})}(x)$$, and the integral is interpreted in the non-linear Young sense. The next main result of this paper then reads as follows, see Theorem 39 for details and the precise formulation.

### Theorem 4

Assume that the parameters $$p,q,\zeta ,\gamma$$ and $$\alpha$$ lie in the ranges specified in Theorem 1. Suppose that $$h\in B^\zeta _{p,p}(\mathbb {R}^d)$$ and that $$L_t(x)=(L^{-\beta ^1(\cdot /\sqrt{2})}_{t_1}*L^{-\beta ^2(\cdot /\sqrt{2})}_{t_2})(x)$$ satisfies $$L\in C^\gamma _tB^\alpha _{q,q}(\mathbb {R}^d)$$. Then there exists a unique solution u to (1.5), in the sense of Definition 38, given by

\begin{aligned} u(x,y):=\psi (\frac{y+x}{\sqrt{2}}, \frac{y-x}{\sqrt{2}})+\beta ^1(\frac{y+x}{\sqrt{2}})+\beta ^2(\frac{y-x}{\sqrt{2}}), \end{aligned}

where $$\psi$$ is the unique solution to (1.7), interpreted in the nonlinear Young sense.
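
The factor $$-2$$ in (1.7) can be traced to the rotation. Writing $$t_1=\frac{y+x}{\sqrt{2}}$$ and $$t_2=\frac{y-x}{\sqrt{2}}$$, the chain rule gives (a sketch of the computation carried out in Sect. 5)

\begin{aligned} \Big (\frac{\partial ^2}{\partial x^2}-\frac{\partial ^2}{\partial y^2}\Big )\psi (t_1,t_2) =\tfrac{1}{2}\big (\partial _1^2-2\partial _1\partial _2+\partial _2^2\big )\psi -\tfrac{1}{2}\big (\partial _1^2+2\partial _1\partial _2+\partial _2^2\big )\psi =-2\,\partial _1\partial _2\psi , \end{aligned}

so the wave operator becomes the mixed derivative of a Goursat equation, with the boundary lines $$y=\pm x$$ mapped onto the coordinate axes.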

### Remark 5

In Theorem 36 we provide specific conditions under which sample paths of the fractional Brownian motion may be used as boundary processes $$\beta ^1$$ and $$\beta ^2$$, thus providing one example of stochastic paths which fulfil the conditions of Theorem 4. However, the class of stochastic processes providing such regularizing effects is by now well studied, see e.g. [19, 24, 25], and we therefore do not study such processes in more detail here.

### Remark 6

Note that by developing a Young integration theory in two dimensions, whose proof follows lines similar to those of Proposition 16, the authors of [32] have shown the existence of a unique solution to a non-linear one-dimensional wave equation driven by an arbitrary signal whose rectangular increments satisfy a Hölder continuity condition with exponent greater than $$\frac{1}{2}$$. Similar results for a one-dimensional stochastic geometric wave equation have also been presented by the last author of this paper, in collaboration with Brzeźniak, in [8], where the noise is modelled by a fractional Brownian sheet with Hurst parameters greater than $$\frac{3}{4}$$. In contrast to Proposition 16, to achieve the existence of a unique local solution they extend the theory of pathwise stochastic integrals in Besov spaces to the two-dimensional setting. However, since the present work focuses on regularization by boundary conditions in the context of the 1D wave equation (which in integral form can be seen as an additive perturbation of the wave equation), our Theorem 4 is fundamentally different from the main results of [8, 32], where a multiplicative noise is considered. Hence, the results presented in this article are not directly comparable with theirs.

This paper is organized in six sections. Section 2 covers the notation and the required definitions used in the paper.

Section 3 is devoted to the extension of non-linear Young theory to two-dimensional integrands; in particular, we prove existence, uniqueness and stability for non-linear Young integral equations. In Sect. 4 we give a rigorous concept of solution to (1.2) and prove the regularization by noise effect under certain conditions on the local time associated to the noise. We moreover provide a quantitative regularity estimate for the local time associated with the fractional Brownian sheet, which can then be employed in the study of the aforementioned regularization by noise phenomenon.

In Sect. 5 we demonstrate how the theory of 2D nonlinear Young equations can be employed in the study of regularization of the wave equation with a noisy Goursat type boundary condition. In particular, wellposedness of this equation when the nonlinear coefficient is a distribution and the boundary processes are given as rough fractional Brownian motions is proven. We conclude the paper with Sect. 6 in which we discuss further extensions of our results and other related challenging open problems.

## 2 Notation

We will work with a partial ordering of points in the rectangle $$[0,T]^2$$, in the sense that for $$s,t\in [0,T]^2$$ the notation $$s<t$$ means that $$s_1<t_1$$ and $$s_2<t_2$$. We will work in a two-parameter setting, and will therefore frequently work with rectangles as opposed to intervals. For $$s=(s_1, s_2)$$ and $$t=(t_1, t_2)$$ with $$s<t$$, we define $$[s,t]\subset [0,T]^2$$ by $$[s,t]=[s_1,t_1]\times [s_2,t_2]$$. We therefore consider $$[s,t]$$ to be the rectangle spanned by the lower left point $$(s_1, s_2)$$ and the upper right point $$(t_1, t_2)$$. We will refer to the union of the sets $$\{0\}\times [0,T]$$ and $$[0,T]\times \{0\}$$ as the boundary of $$[0,T]^2$$. For two numbers a and b the notation $$a\lesssim b$$ means that there exists a constant $$C>0$$ such that $$a\le Cb$$, and $$a\sim b$$ means that both $$a\lesssim b$$ and $$b\lesssim a$$ hold. If the constant C depends on an important parameter k we use the notation $$\lesssim _k$$ (or $$\sim _k$$).

For a function $$A:[0,T]^2\times [0,T]^2\rightarrow \mathbb {R}^d$$, we set

\begin{aligned} \square _{s,t}A:=A_{(s_1, s_2), (t_1, t_2)}. \end{aligned}

We will denote the increment of a function $$f:[0,T]^2\rightarrow \mathbb {R}^d$$ over a rectangle $$[s,t]$$ by

\begin{aligned} \square _{s,t} f=f_{t_1,t_2}-f_{t_1,s_2}-f_{s_1,t_2}+f_{s_1,s_2}, \end{aligned}

which canonically generalizes the notion of an increment to the two-dimensional setting. This type of increment satisfies certain important properties which will be used throughout the article, and we therefore comment on some of these properties here.

If the mixed partial derivative $$\frac{\partial ^2 f(t_1,t_2)}{\partial t_1\partial t_2}$$ exists for all $$t\in [0,T]^2$$, then it is readily seen that

\begin{aligned} \square _{s,t} f = \int _s^t \frac{\partial ^2 f(r_1,r_2)}{\partial r_1\partial r_2}\mathop {}\!\textrm{d}r, \end{aligned}

where we use the double-integral notation $$\int _s^t:=\int _{s_1}^{t_1}\int _{s_2}^{t_2}$$ and $$\mathop {}\!\textrm{d}r=\mathop {}\!\textrm{d}r_2 \mathop {}\!\textrm{d}r_1$$. Furthermore, we observe that if $$g(t_1,t_2)=\square _{0,t}f$$, then g is zero on the boundary, since

\begin{aligned} g(t_1,0)=\square _{0,(t_1,0)}f=f(t_1,0)-f(0,0)-f(t_1,0)+f(0,0)=0, \end{aligned}

and similarly we can check that $$g(0,t_2)=0$$. Furthermore, we have

\begin{aligned} \square _{s,t} g=\square _{s,t} f. \end{aligned}
(2.1)

Note that the two functions can still be different on the boundary, as this is not captured by the rectangular increment.
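
These two properties are elementary to verify numerically. The following minimal sketch (with an arbitrary smooth sample field, chosen purely for illustration) checks that $$g=\square _{0,\cdot }f$$ vanishes on the boundary and satisfies (2.1).

```python
import numpy as np

def rect_increment(f, s, t):
    """Rectangular increment: f(t1,t2) - f(t1,s2) - f(s1,t2) + f(s1,s2)."""
    (s1, s2), (t1, t2) = s, t
    return f(t1, t2) - f(t1, s2) - f(s1, t2) + f(s1, s2)

# An arbitrary smooth field on [0,1]^2, chosen only for illustration.
f = lambda t1, t2: np.sin(t1 * t2) + t1 + t2**2

# g(t) := rectangular increment of f over [0, t] vanishes on the boundary ...
g = lambda t1, t2: rect_increment(f, (0.0, 0.0), (t1, t2))
assert abs(g(0.7, 0.0)) < 1e-14 and abs(g(0.0, 0.3)) < 1e-14

# ... and has the same rectangular increments as f, cf. (2.1):
s, t = (0.2, 0.1), (0.9, 0.8)
assert abs(rect_increment(g, s, t) - rect_increment(f, s, t)) < 1e-12
```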

We will work with a 2D Hölder space, capturing the necessary regularity of fields of interest in each of their variables. To this end, we also introduce two concepts which will be used to measure the regularity: Namely, for $$s<t\in [0,T]^2$$ and $$\alpha =(\alpha _1,\alpha _2)\in (0,1)^2$$ we define

\begin{aligned} m(t-s)^\alpha := |t_1-s_1|^{\alpha _1}|t_2-s_2|^{\alpha _2}. \end{aligned}
(2.2)

With a slight abuse of notation we also define

\begin{aligned} |t-s|^\alpha =|t_1-s_1|^{\alpha _1}+|t_2-s_2|^{\alpha _2}. \end{aligned}
(2.3)

### Definition 7

Let E be a Banach space and $$f:[0,T]^2\rightarrow E$$ be such that, for some $$\alpha = (\alpha _1,\alpha _2)\in (0,1)^2$$,

\begin{aligned}{}[f]_\alpha :=[f]_{(1,0),\alpha }+[f]_{(0,1),\alpha }+[f]_{(1,1),\alpha } <\infty , \end{aligned}

where we define the semi-norms

\begin{aligned} \begin{aligned} \,[f]_{(1,0),\alpha }&:=\sup _{s\ne t\in [0,T]^2} \frac{|f(t_1,s_2)-f(s_1,s_2)|_E}{|t_1-s_1|^{\alpha _1}}, \\ [f]_{(0,1),\alpha }&:=\sup _{s\ne t\in [0,T]^2} \frac{|f(s_1,t_2)-f(s_1,s_2)|_E}{|t_2-s_2|^{\alpha _2}}, \\ [f]_{(1,1),\alpha }&:=\sup _{s\ne t\in [0,T]^2} \frac{|\square _{s,t} f|_E}{m(t-s)^\alpha }. \end{aligned} \end{aligned}
(2.4)

We then say that f is $$\alpha$$-Hölder continuous on the rectangle $$[0,T]^2$$, and we write $$f\in C^\alpha _t E$$. Equipped with the norm $$f\mapsto |f(0,0)|+[f]_\alpha =: \Vert f\Vert _{C^\alpha _t E}$$, the space $$C^\alpha _t E$$ is a Banach space. Whenever $$E=\mathbb {R}^d$$ we write $$C^\alpha _t$$ or sometimes $$C^\alpha ([0,T]^2;\mathbb {R}^d)$$ instead of $$C^\alpha _t \mathbb {R}^d$$. Moreover, if we need to keep track of the interval over which we compute the above quantities, we highlight it explicitly in a subscript, e.g. $$[f]_{\alpha ,[0,T]}$$.

### Remark 8

Note that any function $$f:[0,T]^2\rightarrow E$$ can be decomposed into two functions $$f=z+y$$, where y is zero on the boundary $$\partial [0,T]^2:=\{0\}\times [0,T]\cup [0,T]\times \{0\}$$ and z satisfies $$\square _{s,t}z=0$$ for all $$s,t\in [0,T]^2$$. Indeed, by simple addition and subtraction, we see that

\begin{aligned} f(t_1,t_2)= \square _{0,t}f +f(t_1,0)+f(0,t_2)-f(0,0), \end{aligned}

thus by defining

\begin{aligned} z(t_1,t_2):=f(t_1,0)+f(0,t_2)-f(0,0)\quad \textrm{and} \quad y(t_1,t_2)=\square _{0,t} f, \end{aligned}

we see that z and y satisfy the claimed properties. Furthermore, considering the 2D-Hölder semi-norm of f over $$[0,T]^2$$, we see that

\begin{aligned}{}[f]_\alpha \sim _T[z]_{(1,0),\alpha }+[z]_{(0,1),\alpha }+[y]_{(1,1),\alpha }. \end{aligned}
(2.5)

This decomposition and relation will play a central role in subsequent sections.
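
A quick numerical sanity check of the decomposition (again with an arbitrary sample field, not taken from the paper):

```python
import numpy as np

def square(f, s, t):
    # rectangular increment of f over the rectangle [s, t]
    return f(t[0], t[1]) - f(t[0], s[1]) - f(s[0], t[1]) + f(s[0], s[1])

f = lambda t1, t2: np.cos(t1) * t2 + np.exp(t1 * t2)      # arbitrary sample field

z = lambda t1, t2: f(t1, 0.0) + f(0.0, t2) - f(0.0, 0.0)  # boundary part
y = lambda t1, t2: square(f, (0.0, 0.0), (t1, t2))        # zero on the boundary

rng = np.random.default_rng(0)
for t1, t2 in rng.uniform(0.0, 1.0, size=(50, 2)):
    assert abs(f(t1, t2) - (z(t1, t2) + y(t1, t2))) < 1e-12   # f = z + y
    assert abs(y(t1, 0.0)) < 1e-12 and abs(y(0.0, t2)) < 1e-12

# z is additively separable, so all of its rectangular increments vanish:
assert abs(square(z, (0.3, 0.2), (0.8, 0.9))) < 1e-12
```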

Let us also recall the definition of Besov spaces which will be of use towards the formulation of our regularization by noise results. For a more extensive introduction we refer to [4]. We will denote by $${\mathscr {S}}$$ (respectively $${\mathscr {S}}^{\prime }$$) the space of Schwartz functions on $${\mathbb {R}}^d$$ (respectively its dual, the space of tempered distributions). For $$f\in {\mathscr {S}}^{\prime }$$ we denote the Fourier transform by $${\hat{f}}={\mathscr {F}}\left( f\right) = \int _{{\mathbb {R}}^d}e^{-i x \cdot } f(x) dx$$, where the integral notation is formal, with inverse $${\mathscr {F}}^{-1} f = (2\pi )^{-d}\int _{{\mathbb {R}}^d} e^{i z \cdot } {{\hat{f}}}(z)dz$$.

### Definition 9

Let $$\chi ,\rho \in C^{\infty }(\mathbb {R}^{d})$$ be two radial functions such that $$\chi$$ is supported on a ball $$\mathcal {B}=\left\{ |x|\le c\right\}$$ and $$\rho$$ is supported on an annulus $$\mathcal {A}=\left\{ a\le |x|\le b\right\}$$ for $$a,b,c>0$$, such that

\begin{aligned} \chi +\sum _{j\ge 0}\rho \left( 2^{-j}\cdot \right)&\equiv 1,\\ \textrm{supp}\left( \chi \right) \cap \textrm{supp}\left( \rho \left( 2^{-j}\cdot \right) \right)&=\emptyset ,\,\,\,\forall j\ge 1,\\ \textrm{supp}\left( \rho \left( 2^{-j}\cdot \right) \right) \cap \textrm{supp}\left( \rho \left( 2^{-i}\cdot \right) \right)&=\emptyset ,\,\,\,\forall |i-j|\ge 2. \end{aligned}

Then we call the pair $$\left( \chi ,\rho \right)$$ a dyadic partition of unity. Furthermore, we write $$\rho _{j}:=\rho (2^{-j} \cdot )$$ for $$j\ge 0$$ and $$\rho _{-1}=\chi$$, as well as $$K_{j}={\mathscr {F}}^{-1}\rho _{j}$$.

The existence of a partition of unity is shown for example in [4, Proposition 2.10]. We fix a partition of unity $$(\chi ,\rho )$$ for the rest of the paper.

### Definition 10

For $$f\in \mathcal {{\mathscr {S}}}^{\prime }$$ we define its Littlewood-Paley blocks by

\begin{aligned} \Delta _{j}f:={\mathscr {F}}^{-1}(\rho _{j}{\hat{f}}) = K_j *f. \end{aligned}

It follows that $$f=\sum _{j\ge -1}\Delta _{j}f$$ with convergence in $${\mathscr {S}}'$$.
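
On a periodic grid, the reconstruction $$f=\sum _{j\ge -1}\Delta _j f$$ can be illustrated with discrete Fourier transforms. The sketch below replaces the smooth pair $$(\chi ,\rho )$$ by sharp frequency cutoffs, which suffices to exhibit the telescoping structure (though not the norm equivalences, which require smoothness); all numerical choices are for illustration only.

```python
import numpy as np

N = 1024
x = np.linspace(-10.0, 10.0, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])     # frequency grid

f = np.exp(-x**2)                                       # sample Schwartz function
fhat = np.fft.fft(f)

chi = lambda r: (np.abs(r) <= 1.0).astype(float)        # sharp stand-in for chi
rho = lambda r: chi(r / 2.0) - chi(r)                   # "annulus" 1 < |r| <= 2

# Littlewood-Paley blocks: Delta_{-1} f, then Delta_j f = F^{-1}(rho(2^{-j}.) fhat)
blocks = [np.fft.ifft(chi(xi) * fhat).real]
blocks += [np.fft.ifft(rho(2.0**-j * xi) * fhat).real for j in range(12)]

# Telescoping: chi(r) + sum_{j<J} rho(2^{-j} r) = chi(2^{-J} r), which equals 1
# on the whole frequency grid once 2^J exceeds the Nyquist frequency.
assert np.max(np.abs(sum(blocks) - f)) < 1e-10
```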

### Definition 11

For any $$\alpha \in \mathbb {R}$$ and $$p,q\in \left[ 1,\infty \right]$$, the Besov space $$B_{p,q}^{\alpha }(\mathbb {R}^d)$$ is

\begin{aligned} B_{p,q}^{\alpha }(\mathbb {R}^d):=\left\{ f\in {\mathscr {S}}^{\prime }\left| \Vert f\Vert _{B_{p,q}^{\alpha }(\mathbb {R}^d)}:=\left( \sum _{j\ge -1}\left( 2^{j\alpha }\Vert \Delta _{j}f \Vert _{L^{p}}\right) ^{q}\right) ^{\frac{1}{q}}<\infty \right. \right\} , \end{aligned}

with the usual interpretation as $$\ell ^\infty$$ norm if $$q = \infty$$.

At various places we will write $$B_{p,q}^{\alpha }$$ instead of $$B_{p,q}^{\alpha }(\mathbb {R}^d)$$ to simplify notation. Furthermore, we will work with the classical space of globally Hölder continuous functions on the whole space $$\mathbb {R}^d$$. We denote the space of globally bounded Hölder continuous functions on $$\mathbb {R}^d$$ by $$C^\alpha _x:=C^\alpha _b(\mathbb {R}^d)$$, and we extend these spaces to differentiable functions with Hölder continuous derivatives in the canonical way. Note that $$B^\alpha _{\infty ,\infty }(\mathbb {R}^d) \simeq C^\alpha _b(\mathbb {R}^d)$$ whenever $$\alpha$$ is a positive non-integer.

## 3 2D Non-linear Young integrals and equations

In this section we will provide a framework for a 2D non-linear Young theory, starting with the formulation of the 2D Sewing Lemma from [23] and followed by non-linear Young integrals and equations.

### 3.1 The 2D Sewing Lemma

In order to formulate the 2D Sewing Lemma, we will introduce an extension of the familiar $$\delta$$ operator known from the theory of rough paths [16]. We define this as follows: for a function $$f:[0,T]^4\rightarrow \mathbb {R}^d$$, and $$s<u<t\in [0,T]^2$$ define

\begin{aligned} \begin{aligned} \delta ^1_{u_1} f_{s,t}= f_{s,t}-f_{s,(u_1,t_2)}-f_{(u_1,s_2),t}, \\ \delta ^2_{u_2} f_{s,t}=f_{s,t}-f_{s,(t_1,u_2)}-f_{(s_1,u_2),t}. \end{aligned} \end{aligned}

Here $$a=(a_1,a_2) \in [0,T]^2$$ for $$a=s,u,t$$. Thus, for $$i=1,2$$, $$\delta ^i:[0,T]\rightarrow L(C([0,T]^4); C([0,T]^4))$$ (where $$L(X,Y)$$ is the space of linear operators from X to Y). Furthermore, we will invoke the composition $$\delta ^1\circ \delta ^2$$ defined in the canonical way; i.e. for $$u=(u_1,u_2)\in [0,T]^2$$

\begin{aligned} (\delta ^1\circ \delta ^2)_u f=\delta ^1_{u_1}(\delta ^2_{u_2} f)=\delta ^2_{u_2}(\delta ^1_{u_1} f), \end{aligned}

and we note that $$\delta ^1\circ \delta ^2:[0,T]^2\rightarrow L(C([0,T]^4); C([0,T]^4))$$. Note that we can canonically identify $$\delta ^i$$ and $$\delta ^1\circ \delta ^2$$ as mappings in $$L(C([0,T]^4); C([0,T]^5))$$ and $$L(C([0,T]^4); C([0,T]^6))$$, respectively.
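
For germs given by exact rectangular increments, both operators vanish, reflecting the additivity $$\square _{s,t}f=\square _{s,(u_1,t_2)}f+\square _{(u_1,s_2),t}f$$ in each time direction. A minimal numerical check (with an arbitrary smooth field, chosen only for illustration):

```python
import numpy as np

def square(f, s, t):
    # rectangular increment of f over the rectangle [s, t]
    return f(t[0], t[1]) - f(t[0], s[1]) - f(s[0], t[1]) + f(s[0], s[1])

f = lambda t1, t2: np.sin(3.0 * t1) * np.cos(2.0 * t2) + t1 * t2**2
Xi = lambda s, t: square(f, s, t)       # exact germ: Xi_{s,t} = square_{s,t} f

def delta1(u1, s, t):
    return Xi(s, t) - Xi(s, (u1, t[1])) - Xi((u1, s[1]), t)

def delta2(u2, s, t):
    return Xi(s, t) - Xi(s, (t[0], u2)) - Xi((s[0], u2), t)

s, u, t = (0.1, 0.2), (0.4, 0.5), (0.8, 0.9)
# Exact rectangular increments are additive in each variable, so both vanish:
assert abs(delta1(u[0], s, t)) < 1e-12
assert abs(delta2(u[1], s, t)) < 1e-12
```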

### Definition 12

Consider $$\alpha \in (0,1)^2$$ and $$\beta \in (1,\infty )^2$$. We denote by $$\mathcal {C}^{\alpha ,\beta }_2$$ the space of all functions $$\Xi :[0,T]^4\rightarrow \mathbb {R}^d$$ such that $$\Xi _{s,t}=0$$ if $$s_1=t_1$$ or $$s_2 = t_2$$ and

\begin{aligned}{}[\Xi ]_{\alpha ,\beta }:=[\Xi ]_{\alpha }+[\Xi ]_{(0,1),\alpha ,\beta }+[\Xi ]_{(1,0),\alpha ,\beta }+[\Xi ]_{(1,1),\alpha ,\beta }<\infty , \end{aligned}

where the semi-norm $$[\Xi ]_{\alpha }$$, and the remaining terms above are given by

\begin{aligned} \begin{aligned} \,[\Xi ]_{\alpha }&:=\sup _{s\ne t\in [0,T]^2} \frac{|\Xi _{s,t}|}{m(t-s)^\alpha }, \\ \,[\Xi ]_{(1,0),\alpha ,\beta }&:=\sup _{s<u< t\in [0,T]^2} \frac{|\delta ^1_{u_1} \Xi _{s,t}|}{|t_1-s_1|^{\beta _1}|t_2-s_2|^{\alpha _2}}, \\ [\Xi ]_{(0,1),\alpha ,\beta }&:=\sup _{s<u< t\in [0,T]^2} \frac{|\delta ^2_{u_2} \Xi _{s,t}|}{|t_1-s_1|^{\alpha _1}|t_2-s_2|^{\beta _2}}, \\ [\Xi ]_{(1,1),\alpha ,\beta }&:=\sup _{s<u< t\in [0,T]^2} \frac{|(\delta ^1\circ \delta ^2)_u \Xi _{s,t}|}{m(t-s)^\beta }, \end{aligned} \end{aligned}
(3.1)

where we recall that $$m(t-s)^\beta$$ is defined as in (2.2). For later notational convenience we also define $$[\delta \Xi ]_{\alpha ,\beta }=[\Xi ]_{(0,1),\alpha ,\beta }+[\Xi ]_{(1,0),\alpha ,\beta }+[\Xi ]_{(1,1),\alpha ,\beta }$$.

When working with the two dimensional sewing lemma it is convenient to simplify notation for two dimensional partitions. We therefore provide the following definition.

### Definition 13

We will say that $$\mathcal {P}$$ is a partition of the rectangle $$[s,t]\subset [0,T]^2$$ if

\begin{aligned} \mathcal {P}=\mathcal {P}^1\times \mathcal {P}^2, \end{aligned}

where $$\mathcal {P}^1$$ is a standard partition of $$[s_1,t_1]$$ and $$\mathcal {P}^2$$ is a standard partition of $$[s_2,t_2]$$.

We are now ready to state a two dimensional version of the sewing lemma. A version of this lemma was first introduced in [33] using variation norms, and extended to the hyper-cubes in arbitrary dimension in the setting of Hölder type norms in [23]. Here we follow the last reference.

### Lemma 14

([23], Lemma 14) For $$\alpha \in (0,1)^2$$ and $$\beta \in (1,\infty )^2$$, let $$\Xi \in \mathcal {C}^{\alpha ,\beta }_2$$. Let $$\mathcal {P}:=\mathcal {P}[s,t]$$ denote a partition of $$[s,t]\subset [0,T]^2$$ in the sense of Definition 13. There exists a unique continuous linear functional $$\mathcal {I}:\mathcal {C}^{\alpha ,\beta }_2 \rightarrow C^\alpha _t$$ given by

\begin{aligned} \mathcal {I}(\Xi )_{[s,t]}:=\lim _{|\mathcal {P}|\rightarrow 0} \sum _{[u,v]\in \mathcal {P}} \Xi _{u,v}, \end{aligned}

where the limit is taken along any sequence of partitions with $$|\mathcal {P}|\rightarrow 0$$. We note that the statement $$\mathcal {I}(\Xi )\in C^\alpha _t$$ refers to the one-parameter map $$t\mapsto \mathcal {I}(\Xi )_t:=\mathcal {I}(\Xi )_{[0,t]}$$, and that $$\square _{s,t}\mathcal {I}(\Xi )_\cdot =\mathcal {I}(\Xi )_{[s,t]}$$. Furthermore, there exists a constant $$C=C(\alpha ,\beta ,T)>0$$ such that the function $$\mathcal {I}(\Xi )$$ satisfies the following inequality

\begin{aligned} |\mathcal {I}(\Xi )_{[s,t]}-\Xi _{s,t}|\le C m(t-s)^\alpha |t-s|^{\beta -\alpha } [\delta \Xi ]_{\alpha ,\beta }, \end{aligned}
(3.2)

where $$m(t-s)^\alpha$$ and $$|t-s|^{\beta -\alpha }$$ are defined in (2.2) and (2.3).

For the sake of brevity, we refer the reader to [23, Lem. 14] for a full proof of this lemma.

### 3.2 2D non-linear Young integral

With the aim of constructing a 2D analogue of the non-linear Young integral, we will need to control the rectangular increments of differentiable non-linear functions. We therefore provide the following elementary lemma, which will be frequently used in the sequel.

### Lemma 15

Let $$f\in C^{1+\eta }(\mathbb {R}^d)$$ for some $$\eta \in (0,1)$$. Then for all $$x,y,z,w\in \mathbb {R}^d$$ the following bound holds:

\begin{aligned} |f(x)-f(y)-f(z)+f(w)|\le \Vert f\Vert _{C^{1+\eta }}\left( |x-y-z+w|+|x-y|\,(|x-z|+|y-w|)^\eta \right) . \end{aligned}
(3.3)

### Proof

From a first order Taylor expansion, it follows that

\begin{aligned} f(x)-f(y)-f(z)+f(w)&=\int _0^1\left( \nabla f(\theta x+(1-\theta )y)-\nabla f(\theta z+(1-\theta )w)\right) \mathop {}\!\textrm{d}\theta \,(x-y) \\&\quad +\int _0^1\nabla f(\theta x+(1-\theta )y)\mathop {}\!\textrm{d}\theta \,(x-y-z+w). \end{aligned}
(3.4)

Using that $$\nabla f\in C^\eta$$, it follows that

\begin{aligned} |f(x)-f(y)-f(z)+f(w)|\lesssim \Vert f\Vert _{C^{1+\eta }}\left( |x-y-z+w|+\int _0^1|\theta (x-z)+(1-\theta )(y-w)|^\eta \mathop {}\!\textrm{d}\theta \,|x-y|\right) . \end{aligned}

The claim then follows since $$|\theta (x-z)+(1-\theta )(y-w)|^\eta \le (|x-z|+|y-w|)^\eta$$ for all $$\theta \in [0,1]$$.

$$\square$$
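As a sanity check of (3.3), the inequality can be probed numerically for a concrete smooth f; the choices f = sin in dimension one, η = 1/2, and the generous constant 2 (which dominates the relevant C^{3/2} norm of sin, since |cos a − cos b| ≤ √2 |a−b|^{1/2}) are illustrative assumptions.

```python
import math
import random

# Probe |f(x)-f(y)-f(z)+f(w)|
#   <= ||f||_{C^{1+eta}} * (|x-y-z+w| + |x-y| * (|x-z| + |y-w|)**eta)
# for f = sin, eta = 1/2, with the generous norm bound 2.
random.seed(0)
eta, norm_f = 0.5, 2.0
f = math.sin

def lhs(x, y, z, w):
    return abs(f(x) - f(y) - f(z) + f(w))

def rhs(x, y, z, w):
    return norm_f * (abs(x - y - z + w)
                     + abs(x - y) * (abs(x - z) + abs(y - w)) ** eta)

samples = [[random.uniform(-3.0, 3.0) for _ in range(4)] for _ in range(10000)]
violations = sum(lhs(*p) > rhs(*p) + 1e-12 for p in samples)
print(violations)  # 0
```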

With the above lemma at hand, we are now ready to prove the existence of the 2D Non-Linear Young integral (NLY), and its properties.

### Proposition 16

(2D-Non-linear Young integral) Let $$A:[0,T]^2\times \mathbb {R}^d \rightarrow \mathbb {R}^d$$ be such that for some $$\gamma \in (\frac{1}{2},1]^2$$ and some $$\eta \in (0,1]$$, $$A\in C^\gamma _tC^{1+\eta }_{x}$$. Consider a path $$y\in C^\alpha _t([0,T]^2;\mathbb {R}^d)$$ with $$\alpha \in (0,1)^2$$ such that for $$i=1,2$$ we have $$\alpha _i \eta +\gamma _i>1$$.

Then the 2D non-linear Young integral of y with respect to A is defined by

\begin{aligned} \int _0^t A(\mathop {}\!\textrm{d}s,y_s):=\lim _{|\mathcal {P}|\rightarrow 0} \sum _{[u,v]\in \mathcal {P}} \square _{u,v}A(y_u), \end{aligned}
(3.5)

where $$\mathcal {P}$$ is a partition of $$[0,t]\subset [0,T]^2$$, as given in Definition 13. Furthermore, there exists a constant $$C>0$$ such that

\begin{aligned} \Bigg |\int _s^t A(\mathop {}\!\textrm{d}r,y_r)-\square _{s,t}A(y_s)\Bigg |\le C\Vert A\Vert _{C^{\gamma }_tC^{1+\eta }_x} \left( [y]_{(1,1),\alpha }\vee ([y]_{(1,0),\alpha }+[y]_{(0,1),\alpha })^{1+\eta }\right) m(t-s)^\gamma |t-s|^{\eta \alpha }, \end{aligned}
(3.6)

and it follows that

\begin{aligned}{}[0,T]^2\ni t\mapsto \int _0^t A(\mathop {}\!\textrm{d}s,y_s)\in C^\gamma _t. \end{aligned}

### Proof

Towards the construction of the integral in (3.5), we will apply the 2D sewing lemma to the integrand $$\square _{u,v}A(y_u)$$. Thus, we need to check that $$[0,T]^4\ni (s,t)\mapsto \square _{s,t}A(y_s)$$ belongs to $$\mathcal {C}^{\alpha ,\beta }_2$$ for the given $$\alpha$$ and some well chosen $$\beta \in (1,\infty )^2$$. Let $$u=(u_1,u_2)$$ such that $$s<u<t$$. It is readily checked that

\begin{aligned} \delta ^1_{u_1}\square _{s,t}A(y_s)=\square _{(u_1, s_2), t}A(y_s)-\square _{(u_1, s_2), t}A(y_{(u_1, s_2)}), \end{aligned}
(3.7)

and similarly for $$\delta ^2_{u_2} \square _{s,t}A(y_s)$$, and we have

\begin{aligned} \delta ^1_{u_1}\circ \delta ^2_{u_2}\square _{s,t}A(y_s) = -\left( \square _{u,t}A(y_u)-\square _{u,t}A(y_{u_1,s_2})-\square _{u,t}A(y_{s_1,u_2})+\square _{u,t}A(y_s)\right) . \end{aligned}
(3.8)

We first prove the necessary regularity of the increment in (3.7), and a similar estimate for $$\delta ^2$$ follows directly. Using that $$A\in C_t^\gamma C^{1+\eta }_{x}$$ we have that

\begin{aligned} |\delta ^1_{u_1}\square _{s,t}A(y_s)|\le \Vert A_{(u_1,s_2),t}\Vert _{C^1_x}[y]_{(1,0),\alpha }|u_1-s_1|^{\alpha _1}. \end{aligned}

Invoking the assumption that $$t\mapsto A(t,\cdot )\in C^\gamma _t$$ we get that

\begin{aligned} |\delta ^1_{u_1}\square _{s,t}A(y_s)|\le \Vert A\Vert _{C^\gamma _tC^1_x}[y]_{(1,0),\alpha }|t_1-s_1|^{\alpha _1+\gamma _1}|t_2-s_2|^{\gamma _2}, \end{aligned}
(3.9)

where we have used that $$|t_1-u_1|^{\gamma _1}|u_1-s_1|^{\alpha _1}\le |t_1-s_1|^{\alpha _1+\gamma _1}$$ since $$s<u<t$$.

Now, we will consider the increment in (3.8). By Lemma 15 and (3.8) it follows that

\begin{aligned} |\delta ^1_{u_1}\circ \delta ^2_{u_2}\square _{s,t}A(y_s)|\le \Vert \square _{u,t} A \Vert _{C^{1+\eta }_x} \left( |\square _{s,u}y|+|y_{s_1,s_2}-y_{u_1,s_2}|\,(|y_{s_1,s_2}-y_{s_1,u_2}|+|y_{u_1,s_2}-y_{u_1,u_2}|)^\eta \right) . \end{aligned}
(3.10)

Invoking the assumption of time regularity of A and y, we see that

\begin{aligned} |\delta ^1_{u_1}\circ \delta ^2_{u_2}\square _{s,t}A(y_s)|&\lesssim _T \Vert A\Vert _{C^\gamma _t C^{1+\eta }_x}\left( [y]_{(1,1),\alpha }+[y]_{(1,0),\alpha }([y]_{(1,0),\alpha }+[y]_{(0,1),\alpha })^\eta \right) \\&\quad \times m(t-s)^{\gamma }\left( m(t-s)^\alpha +|t_1-s_1|^{\alpha _1}|t_2-s_2|^{\eta \alpha _2}\right) . \end{aligned}
(3.11)

Here we recall that $$m(t-s)^\alpha =|t_1-s_1|^{\alpha _1}|t_2-s_2|^{\alpha _2}$$. Note that

\begin{aligned} m(t-s)^{\gamma }(m(t-s)^\alpha +|t_1-s_1|^{\alpha _1}|t_2-s_2|^{\eta \alpha _2}) \lesssim _T m(t-s)^{\gamma +\alpha \eta }. \end{aligned}

Thus, from the estimates in (3.9) and (3.11), and using the assumption that $$\alpha \eta +\gamma >1$$, we may define $$\beta =\alpha \eta +\gamma$$, and it follows that $$\Xi _{s,t}:=\square _{s,t}A(y_s)$$ is contained in $$\mathcal {C}^{\alpha ,\beta }_2$$. Using that $$[y]_{(1,0),\alpha }\le [y]_{(1,0),\alpha }+[y]_{(0,1),\alpha }$$ and that for positive numbers $$a,b$$ we have $$a+b\le 2(a\vee b)$$, we obtain

\begin{aligned} \,[y]_{(1,1),\alpha }+[y]_{(1,0),\alpha }([y]_{(1,0),\alpha }+[y]_{(0,1),\alpha })^\eta \le 2 \left( [y]_{(1,1),\alpha }\vee ([y]_{(1,0),\alpha }+[y]_{(0,1),\alpha })^{1+\eta }\right) , \end{aligned}

and hence we conclude by an application of Lemma 14, where the inequality in (3.6) follows directly from (3.2).

$$\square$$

### Remark 17

Note that our construction of the 2D non-linear Young integral above requires one additional degree of spatial regularity compared with the 1D non-linear Young setting (see e.g. [17, Theorem 2.7]). This more restrictive condition appears because, already at this step, we have to control four-point increments of the form (3.8) in order to bound $$\delta ^1\circ \delta ^2 \square A$$.

The following lemma establishes the consistency of the 2D non-linear Young integral constructed in Proposition 16 with classical Riemann integration in the setting of continuously differentiable A.

### Lemma 18

Suppose the conditions of Proposition 16 hold. Suppose moreover $$\partial _{t_1}\partial _{t_2}A$$ exists and is continuous. Then the 2D non-linear Young integral constructed in Proposition 16 coincides with the corresponding Riemann integral, i.e.

\begin{aligned} \int _0^t A(\mathop {}\!\textrm{d}s, y_s)=\int _0^t \partial _{t_1}\partial _{t_2}A(s, y_s)\mathop {}\!\textrm{d}s. \end{aligned}

### Proof

Since

\begin{aligned} \square _{s,t}A(y_s)=\int _s^t \partial _{t_1}\partial _{t_2}A(r, y_s)\mathop {}\!\textrm{d}r, \end{aligned}

it suffices to show that for any sequence $$\mathcal {P}^n$$ of rectangular partitions of $$[s_1, t_1]\times [s_2, t_2]$$ with $$|\mathcal {P}^n|\rightarrow 0$$, we have

\begin{aligned} D_n= \sum _{[u_1,v_1]\times [u_2,v_2]\in \mathcal {P}^n} \int _u^v \left( \partial _{t_1}\partial _{t_2}A(r, y_u)- \partial _{t_1}\partial _{t_2}A(r, y_r)\right) \mathop {}\!\textrm{d}r \rightarrow 0 \quad \text {as}\quad n\rightarrow \infty . \end{aligned}

To this end, we estimate

\begin{aligned} |D_n|\le \sup _{[u_1,v_1]\times [u_2,v_2]\in \mathcal {P}^n} \sup _{r\in [u, v]}|\partial _{t_1}\partial _{t_2}A(r, y_u)- \partial _{t_1}\partial _{t_2}A(r, y_r)|\,m(t-s). \end{aligned}

Since the mesh size of $$\mathcal {P}^n$$ goes to 0 as $$n\rightarrow \infty$$ by assumption, and since $$\partial _{t_1}\partial _{t_2}A$$ and y are continuous, it follows that

\begin{aligned} \sup _{[u_1,v_1]\times [u_2,v_2]\in \mathcal {P}^n} \sup _{r\in [u, v]}|\partial _{t_1}\partial _{t_2}A(r, y_u)- \partial _{t_1}\partial _{t_2}A(r, y_r)|\rightarrow 0 \end{aligned}

as $$n\rightarrow \infty$$. We conclude that $$D_n\rightarrow 0$$ as $$n\rightarrow \infty$$, which completes the proof. $$\square$$
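The consistency statement can be checked numerically for a simple smooth choice of A and y; the functions, grid sizes and tolerance below are illustrative assumptions, not objects from the text.

```python
import math

# Illustrative smooth data: A(t1, t2, x) = sin(t1) sin(t2) * x, so that
# d/dt1 d/dt2 A = cos(t1) cos(t2) * x, and y_r = r1 + r2 on [0, 1]^2.
def box_A(u1, u2, v1, v2, x):
    """Rectangular increment of t -> A(t, x) over [u, v]."""
    return (math.sin(v1) - math.sin(u1)) * (math.sin(v2) - math.sin(u2)) * x

def nly_sum(n):
    """Left-point Riemann sum defining the integral in (3.5) on an n x n grid."""
    h = 1.0 / n
    return sum(box_A(i * h, j * h, (i + 1) * h, (j + 1) * h, i * h + j * h)
               for i in range(n) for j in range(n))

# Exact value of the double integral of cos(r1) cos(r2) (r1 + r2) over [0,1]^2.
exact = 2.0 * math.sin(1.0) * (math.cos(1.0) + math.sin(1.0) - 1.0)
errs = [abs(nly_sum(n) - exact) for n in (8, 128)]
print(errs)  # the error shrinks as the mesh is refined
```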

Since this notion of integration is constructed without any use of probability, the following stability estimates are useful not only for the subsequent proof of existence and uniqueness of non-linear Young equations, but also in combination with stochastic processes, as will be evident in our application to regularization by noise.

### Proposition 19

(Stability of integral) For some $$\gamma \in (\frac{1}{2},1]^2$$ and $$\eta \in (0,1]$$, consider two functions $$A,{\tilde{A}}\in C^\gamma _t C^{2+\eta }_x$$. Furthermore, suppose $$y,{\tilde{y}}\in C^\alpha _t$$ are such that $$\alpha \eta +\gamma >1$$. Then the 2D non-linear Young integral satisfies the following inequality

\begin{aligned} \Bigg |\int _s^t A(\mathop {}\!\textrm{d}r,y_r)-\int _s^t {\tilde{A}}(\mathop {}\!\textrm{d}r, {\tilde{y}}_r)\Bigg | \le \left( C_1\Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x}+C_2(|y_s-{\tilde{y}}_s|+[y-{\tilde{y}}]_{\alpha ;[s,t]})\right) m(t-s)^\gamma , \end{aligned}
(3.12)

where the constants $$C_1$$ and $$C_2$$ are given by

\begin{aligned} C_1&=K ([y]_\alpha \vee [{\tilde{y}}]_\alpha ), \\ C_2&=K(\Vert A\Vert _{C^\gamma _tC^{2+\eta }_x}\vee \Vert {\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x}) \left[ ([y]_{(1,1),\alpha }+[{\tilde{y}}]_{(1,1),\alpha }) \vee ([y]_{(0,1),\alpha }+[y]_{(1,0),\alpha }+[{\tilde{y}}]_{(0,1),\alpha }+[{\tilde{y}}]_{(1,0),\alpha })^{1+\eta }\right] , \end{aligned}
(3.13)

for some constant K depending on $$T,\alpha ,\gamma ,\eta$$.

### Proof

To prove this, we will apply Lemma 14 to the increment $$\Xi _{s,t}:=\square _{s,t}A(y_s)-\square _{s,t}{\tilde{A}}({\tilde{y}}_s)$$, in order to invoke the inequality (3.2). First note that $$\Xi \in C^\gamma _t$$, since

\begin{aligned} |\Xi _{s,t}|\lesssim _T (\Vert A\Vert _{C^\gamma _tC^{0}_x}+\Vert {\tilde{A}}\Vert _{C^\gamma _tC^{0}_x})m(t-s)^\gamma . \end{aligned}
(3.14)

We proceed to first prove bounds for $$\delta ^i\Xi$$ for $$i=1,2$$, and then we will provide bounds for $$\delta ^1\circ \delta ^2\Xi$$.

For $$u=(u_1,u_2)$$ we observe that

\begin{aligned} \delta _{u_1}^1 \Xi _{s,t} =\square _{(u_1,s_2),t}A(y_s)-\square _{(u_1,s_2),t}A(y_{(u_1,s_2)}) - \left[ \square _{(u_1,s_2),t}{\tilde{A}}({\tilde{y}}_s)-\square _{(u_1,s_2),t}{\tilde{A}}({\tilde{y}}_{(u_1,s_2)})\right] . \end{aligned}

Define the function $$G:[0,T]^2\times \mathbb {R}^d\times \mathbb {R}^d\rightarrow \mathbb {R}^{d\times d}$$ by

\begin{aligned}G_{t}(x, {\tilde{x}}):=\int _0^1 DA_{t}(\theta x +(1-\theta ){\tilde{x}})\mathop {}\!\textrm{d}\theta , \end{aligned}

where DA is the matrix-valued spatial derivative of A. Similarly we define $${\tilde{G}}$$ with $${\tilde{A}}$$ in place of A. Note in particular that due to the assumption that $$x\mapsto A(t,x)\in C^{2+\eta }_x$$ for all $$t\in [0,T]^2$$, we have that

\begin{aligned} |G_t(x, {\tilde{x}})|\le \Vert A_t(\cdot )\Vert _{C^{1}_x}, \qquad |G_t(x, {\tilde{x}})-G_t(z, {\tilde{z}})|\le \Vert A_t(\cdot )\Vert _{C^{2}_x}(|x-{\tilde{x}}-z+{\tilde{z}}|+|{\tilde{x}}-{\tilde{z}}|), \end{aligned}
(3.15)

and similarly for $${\tilde{G}}$$. Then

\begin{aligned} \delta _{u_1}^1 \Xi _{s,t}=\square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})(y_s-y_{(u_1,s_2)}) -\square _{(u_1,s_2),t}{\tilde{G}}({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})({\tilde{y}}_s-{\tilde{y}}_{(u_1,s_2)}). \end{aligned}

By addition and subtraction of the term $$\square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})({\tilde{y}}_s-{\tilde{y}}_{(u_1,s_2)})$$ in the above relation we will seek to control the following two terms

\begin{aligned} I_1(s,u,t)&= \square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})\left( y_s-y_{(u_1,s_2)}-{\tilde{y}}_s+{\tilde{y}}_{(u_1,s_2)}\right) , \\ I_2(s,u,t)&= \left( \square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})-\square _{(u_1,s_2),t}{\tilde{G}} ({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})\right) ({\tilde{y}}_s-{\tilde{y}}_{(u_1,s_2)}). \end{aligned}

For the term $$I_1$$, using (3.15) it is readily checked that

\begin{aligned} |I_1(s,u,t)|\le \Vert A\Vert _{C^\gamma _tC^1_x}[y-{\tilde{y}}]_{(1,0),\alpha ;[s,t]}|t_1-s_1|^{\alpha _1+\gamma _1}|t_2-s_2|^{\gamma _2}. \end{aligned}

Next, we consider $$I_2$$. To this end, first observe that by addition and subtraction of the term $$\square _{(u_1,s_2),t}G({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})$$ we have

\begin{aligned} \square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})-\square _{(u_1,s_2),t}{\tilde{G}}({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})&= \left[ \square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})-\square _{(u_1,s_2),t}G({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})\right] \\&\quad +\left[ \square _{(u_1,s_2),t}G({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})-\square _{(u_1,s_2),t}{\tilde{G}}({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})\right] . \end{aligned}

For the first bracket on the right hand side above, we use the second inequality in (3.15) to see that

\begin{aligned} |\square _{(u_1,s_2),t}G(y_s,y_{(u_1,s_2)})-\square _{(u_1,s_2),t}G({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})| \le 2 \Vert A\Vert _{C^\gamma _tC^2_x}\Vert y-{\tilde{y}}\Vert _{\infty ;[s,t]} m(t-s)^\gamma . \end{aligned}

For the term in the second bracket, we use the first inequality in (3.15) to observe that

\begin{aligned} |\square _{(u_1,s_2),t}G({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})-\square _{(u_1,s_2),t}{\tilde{G}}({\tilde{y}}_s,{\tilde{y}}_{(u_1,s_2)})|&\le \Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^1_x}m(t-s)^\gamma . \end{aligned}

Combining the above estimates, we see that

\begin{aligned} |I_2(s,u,t)|\lesssim [{\tilde{y}}]_{(1,0),\alpha ;[s,t]}\left( \Vert A\Vert _{C^\gamma _tC^2_x}\Vert y-{\tilde{y}}\Vert _{\infty ;[s,t]} +\Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^2_x}\right) |t_1-s_1|^{\alpha _1+\gamma _1}|t_2-s_2|^{\gamma _2}. \end{aligned}

From a combination of our estimates for $$I_1$$ and $$I_2$$, we see that

\begin{aligned} |\delta ^1_{u_1}\Xi _{s,t}|\lesssim (1+\Vert A\Vert _{C^\gamma _tC^2_x}+[{\tilde{y}}]_{(1,0),\alpha ;[s,t]})\left( |y_s-{\tilde{y}}_s| +[y-{\tilde{y}}]_{\alpha ;[s,t]}+\Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^2_x}\right) |t_1-s_1|^{\alpha _1+\gamma _1}|t_2-s_2|^{\gamma _2}. \end{aligned}

By similar considerations, we can show that

\begin{aligned} |\delta ^2_{u_2}\Xi _{s,t}|\lesssim (1+\Vert A\Vert _{C^\gamma _tC^2_x}+[{\tilde{y}}]_{(0,1),\alpha ;[s,t]})\left( |y_s-{\tilde{y}}_s|+[y-{\tilde{y}}]_{\alpha ;[s,t]} +\Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^2_x}\right) |t_1-s_1|^{\gamma _1}|t_2-s_2|^{\alpha _2+\gamma _2}. \end{aligned}

Next we move on to consider the term $$\delta ^1\circ \delta ^2 \Xi$$. By linearity of the $$\delta$$-operators, we have similarly to (3.8) that

\begin{aligned} \begin{aligned} \delta ^1_{u_1}\circ \delta ^2_{u_2} \Xi _{s,t}&= -\left( \square _{u,t}A(y_u)-\square _{u,t}A(y_{u_1,s_2}) \right. \\&\quad \left. -\square _{u,t}A(y_{s_1,u_2})+\square _{u,t}A(y_s)\right) \\&\quad +\left( \square _{u,t}{\tilde{A}}({\tilde{y}}_u)-\square _{u,t}{\tilde{A}}({\tilde{y}}_{u_1,s_2})-\square _{u,t}{\tilde{A}}({\tilde{y}}_{s_1,u_2})+\square _{u,t}{\tilde{A}}({\tilde{y}}_s)\right) . \end{aligned} \end{aligned}

By addition and subtraction of $${\tilde{A}}(y)$$, we see that this is the same as

\begin{aligned} \begin{aligned} \delta ^1_{u_1}\circ \delta ^2_{u_2} \Xi _{s,t}&= -\left( \square _{u,t}(A-{\tilde{A}})(y_u)-\square _{u,t}(A-{\tilde{A}})(y_{u_1,s_2})\right. \\&\quad \left. -\square _{u,t}(A-{\tilde{A}})(y_{s_1,u_2})+\square _{u,t}(A-{\tilde{A}})(y_s)\right) \\&\quad +\Big (\square _{u,t}{\tilde{A}}({\tilde{y}}_u)-\square _{u,t}{\tilde{A}}(y_u)-\square _{u,t}{\tilde{A}}({\tilde{y}}_{u_1,s_2})+\square _{u,t}{\tilde{A}}(y_{u_1,s_2})\\&\quad -\square _{u,t}{\tilde{A}}({\tilde{y}}_{s_1,u_2})+\square _{u,t}{\tilde{A}}(y_{s_1,u_2})+\square _{u,t}{\tilde{A}}({\tilde{y}}_s)-\square _{u,t}{\tilde{A}}(y_s)\Big ) \end{aligned} \end{aligned}
(3.16)

For the first term on the right hand side, setting $${\bar{A}}=A-{\tilde{A}}$$, we can use the same arguments as in the proof of Proposition 16 (see in particular (3.11)) to obtain

\begin{aligned} \begin{aligned}&|\square _{u,t}(A-{\tilde{A}})(y_u)-\square _{u,t}(A-{\tilde{A}})(y_{u_1,s_2})-\square _{u,t}(A-{\tilde{A}})(y_{s_1,u_2})\\&\quad +\square _{u,t}(A-{\tilde{A}})(y_s)|\\&\quad \lesssim \Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^{2}_x}([y]_{\alpha ;[s,t]}\vee [y]_{\alpha ;[s,t]}^{2}) m(t-s)^{\alpha +\gamma }. \end{aligned} \end{aligned}
(3.17)

Note that in contrast to (3.11) we have $$A,{\tilde{A}}\in C^{2+\eta }$$, and thus the $$\eta$$-dependence on the right hand side above disappears. The remaining term in (3.16) must be treated differently, by a similar procedure to the one used for $$\delta ^1 \Xi$$. By a first order Taylor expansion, we see that for $$x, {\tilde{x}}\in \mathbb {R}^d$$

\begin{aligned} \square _{u,t}{\tilde{A}}(x)-\square _{u,t} {\tilde{A}}({\tilde{x}})=\square _{u,t}{\tilde{G}}(x,{\tilde{x}})(x-{\tilde{x}}). \end{aligned}

In order to control the right hand side of (3.16) it now remains to bound the term

\begin{aligned}&\Big (\square _{u,t}{\tilde{A}}({\tilde{y}}_u)-\square _{u,t}{\tilde{A}}(y_u)-\square _{u,t}{\tilde{A}}({\tilde{y}}_{u_1,s_2})+\square _{u,t}{\tilde{A}}(y_{u_1,s_2})\\&\quad -\square _{u,t}{\tilde{A}}({\tilde{y}}_{s_1,u_2})+\square _{u,t}{\tilde{A}}(y_{s_1,u_2})+\square _{u,t}{\tilde{A}}({\tilde{y}}_s)-\square _{u,t}{\tilde{A}}(y_s)\Big )\\&\quad =\square _{u,t}{\tilde{G}}({\tilde{y}}_u, y_u)({\tilde{y}}_u-y_u)- \square _{u,t}{\tilde{G}}({\tilde{y}}_{u_1, s_2}, y_{u_1, s_2})({\tilde{y}}_{u_1, s_2}-y_{u_1, s_2})\\&\quad -\square _{u,t}{\tilde{G}}({\tilde{y}}_{s_1, u_2}, y_{s_1, u_2})({\tilde{y}}_{s_1, u_2}-y_{s_1, u_2})+\square _{u,t}{\tilde{G}}({\tilde{y}}_s, y_s)({\tilde{y}}_s-y_s)\\&\quad =:\square _{s,u}\left( \square _{u,t} {\tilde{G}}(y,{\tilde{y}})(y-{\tilde{y}})\right) . \end{aligned}

Due to the multiplicative nature of $$\square _{u,t}{\tilde{G}}(y,{\tilde{y}})(y-{\tilde{y}})$$, we must be careful. From [23, Lemma 5] it follows that we have the decomposition

\begin{aligned} \square _{s,u}\left( \square _{u,t} {\tilde{G}}(y,{\tilde{y}})(y-{\tilde{y}})\right) = \sum _{i=1}^3 J^i_{s,u,t}, \end{aligned}

where

\begin{aligned} \begin{aligned} J^1_{s,u,t}&:= \left[ \square _{s,u}\left( \square _{u,t} {\tilde{G}}(y,{\tilde{y}})\right) \right] (y_u-{\tilde{y}}_u), \\ J^2_{s,u,t}&:= \left[ \square _{u,t}{\tilde{G}}(y_s,{\tilde{y}}_s)\right] \square _{s,u}(y-{\tilde{y}}), \\ J^3_{s,u,t}&:= [\square _{u,t}{\tilde{G}}(y_{u_1,s_2},{\tilde{y}}_{u_1,s_2})-\square _{u,t}{\tilde{G}}(y_{s_1,s_2},{\tilde{y}}_{s_1,s_2})][(y_{s_1,u_2}-{\tilde{y}}_{s_1,u_2})-(y_s-{\tilde{y}}_s)]. \end{aligned} \end{aligned}

Each of these terms must be treated separately. The simplest term is $$J^2$$, where we observe that

\begin{aligned} |J^2_{s,u,t}|\lesssim \Vert {\tilde{G}}\Vert _{C^\gamma _t L^\infty _x}[y-{\tilde{y}}]_{(1,1),\alpha ;[s,t]}m(t-s)^{\alpha +\gamma }. \end{aligned}

Invoking again the first bound in (3.15), we see that

\begin{aligned} |J^2_{s,u,t}|\lesssim \Vert {\tilde{A}}\Vert _{C^\gamma _t C^1_x}[y-{\tilde{y}}]_{(1,1),\alpha ;[s,t]}m(t-s)^{\alpha +\gamma }. \end{aligned}
(3.18)

Next we consider $$J^1$$. By Lemma 15, setting $$z=(y,{\tilde{y}})$$, we see that

\begin{aligned} | \square _{s,u}\left( \square _{u,t} {\tilde{G}}(y,{\tilde{y}})\right) |&\lesssim \Vert \square _{u,t}{\tilde{G}}\Vert _{C^{1+\eta }_x}\left[ [z]_{(1,1),\alpha ;[s,t]}\vee ([z]_{(0,1),\alpha ;[s,t]}+[z]_{(1,0),\alpha ;[s,t]})^{1+\eta }\right] m(t-s)^{\gamma +\eta \alpha } \\&\lesssim \Vert {\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x}\left[ [z]_{(1,1),\alpha ;[s,t]}\vee ([z]_{(0,1),\alpha ;[s,t]}+[z]_{(1,0),\alpha ;[s,t]})^{1+\eta }\right] m(t-s)^{\gamma +\eta \alpha }, \end{aligned}

where we have used a slight extension of the second inequality in (3.15) based on Lemma 15. Let us stress that it is precisely at this point that the required regularity $$A, {\tilde{A}}\in C^\gamma _tC^{2+\eta }_x$$ enters into the picture, while the previous estimates required less spatial regularity. Note that $$[z]_\alpha \le [y]_\alpha +[{\tilde{y}}]_\alpha$$. Thus for $$J^1$$ we get

\begin{aligned} |J^1_{s,u,t}|&\lesssim \Vert {\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x}\left[ ([y]_{(1,1),\alpha ;[s,t]}+[{\tilde{y}}]_{(1,1),\alpha ;[s,t]}) \vee ([y]_{(0,1),\alpha ;[s,t]}+[y]_{(1,0),\alpha ;[s,t]}+[{\tilde{y}}]_{(0,1),\alpha ;[s,t]}+[{\tilde{y}}]_{(1,0),\alpha ;[s,t]})^{1+\eta }\right] \\&\quad \times (|y_s-{\tilde{y}}_s|+[y-{\tilde{y}}]_{\alpha ;[s,t]})\,m(t-s)^{\gamma +\eta \alpha }, \end{aligned}
(3.19)

where we have used that $$\Vert y-{\tilde{y}}\Vert _{\infty ;[s,t]}\le |y_s-{\tilde{y}}_s|+[y-{\tilde{y}}]_{\alpha ;[s,t]}$$. At last we consider $$J^3$$; using that $$z\mapsto {\tilde{G}}(z)$$ is differentiable, similar estimates to the above yield

\begin{aligned} |J^3_{s,u,t}|\lesssim \Vert {\tilde{A}}\Vert _{C^\gamma _tC^{2}_x}([y]_{\alpha ;[s,t]}+[{\tilde{y}}]_{\alpha ;[s,t]})[y-{\tilde{y}}]_{\alpha ;[s,t]} m(t-s)^{\alpha +\gamma }. \end{aligned}
(3.20)

Combining our bounds for $$J^1,J^2$$ and $$J^3$$ from (3.18), (3.19) and (3.20), we have that

\begin{aligned} | \square _{s,u}\left( \square _{u,t} {\tilde{G}}(y,{\tilde{y}})(y-{\tilde{y}})\right) |&\lesssim _T \Vert {\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x}\left[ ([y]_{(1,1),\alpha ;[s,t]}+[{\tilde{y}}]_{(1,1),\alpha ;[s,t]}) \vee ([y]_{(0,1),\alpha ;[s,t]}+[y]_{(1,0),\alpha ;[s,t]}+[{\tilde{y}}]_{(0,1),\alpha ;[s,t]}+[{\tilde{y}}]_{(1,0),\alpha ;[s,t]})^{1+\eta }\right] \\&\quad \times (|y_s-{\tilde{y}}_s|+[y-{\tilde{y}}]_{\alpha ;[s,t]})\,m(t-s)^{\gamma +\eta \alpha }. \end{aligned}
(3.21)

A bound for (3.16) now follows by combining (3.17) and (3.21), and we obtain

\begin{aligned} | \delta _{u_1}^1\circ \delta ^2_{u_2} \Xi _{s,t}| \le C_2 \left( \Vert A-{\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x} +|y_s-{\tilde{y}}_s|+[y-{\tilde{y}}]_{\alpha ;[s,t]}\right) m(t-s)^{\gamma +\eta \alpha }, \end{aligned}

where $$C_2$$ is given as in (3.13). It now follows that we can apply the 2D sewing lemma (Lemma 14), and invoking the inequality (3.2) in this lemma, the bound in (3.12) follows. $$\square$$
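A rough numerical illustration of the stability estimate (3.12): approximating both integrals by left-point Riemann sums and perturbing A and y by a small parameter, the difference between the integrals shrinks proportionally to the perturbation. The specific A, y, perturbations and the generous constant below are illustrative assumptions, not objects from the text.

```python
import math

# Both integrals are approximated by left-point Riemann sums over an n x n
# product partition of [0, 1]^2.
def box(Afun, u1, u2, v1, v2, x):
    return (Afun(v1, v2, x) - Afun(u1, v2, x)
            - Afun(v1, u2, x) + Afun(u1, u2, x))

def nly_integral(Afun, yfun, n=100):
    h = 1.0 / n
    return sum(box(Afun, i * h, j * h, (i + 1) * h, (j + 1) * h,
                   yfun(i * h, j * h))
               for i in range(n) for j in range(n))

A = lambda t1, t2, x: math.sin(t1 * t2) * math.cos(x)
y = lambda r1, r2: r1 + r2

results = []
for eps in (1e-1, 1e-2):
    # Perturbed data A_eps, y_eps at distance O(eps) from (A, y).
    A_eps = lambda t1, t2, x, e=eps: A(t1, t2, x) + e * t1 * t2 * x
    y_eps = lambda r1, r2, e=eps: y(r1, r2) + e * math.sin(r1 * r2)
    diff = abs(nly_integral(A, y) - nly_integral(A_eps, y_eps))
    results.append((eps, diff))
print(results)  # the difference shrinks with the size of the perturbation
```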

### Remark 20

Again, our stability estimates for 2D non-linear Young integrals require one more degree of spatial regularity than the 1D non-linear Young setting (compare e.g. [17, Theorem 2.7.4] for the limit case $$\delta =1$$). Essentially, since stability estimates of the above form require one additional degree of spatial regularity compared with what is needed for existence of the integral, the difference in spatial regularity constraints observed already in Remark 17 carries over.

With the stability estimate for NLY integrals at hand, we are now ready to consider integral equations in which the 2D NLY integral appears. Just as (1D) non-linear Young equations generalize classical Young differential equations, the 2D non-linear Young equations generalize the already established notion of 2D Young equations.

In the following theorem we prove existence and uniqueness of these equations for sufficiently smooth non-linear functions $$A:[0,T]^2\times \mathbb {R}^d\rightarrow \mathbb {R}^d$$. As the techniques are strongly based on the proof of [23, Theorem 25], we will be mostly concerned with various estimates that differ from this reference due to the non-linear Young structure of the integral.

### Theorem 21

(Existence and Uniqueness) Suppose $$A\in C^\gamma _t C^{2+\eta }_x$$ for some $$\gamma \in (\frac{1}{2},1]^2$$ and $$\eta \in (0,1)$$. Furthermore, let $$\xi \in C^\gamma _t$$ be such that for any $$s<t\in [0,T]^2$$, $$\square _{s,t} \xi =0$$. If $$(1+\eta )\gamma >1$$, then there exists a unique solution $$\theta \in C^\gamma _t$$ to the equation

\begin{aligned} \theta _t=\xi _t+\int _0^t A(\mathop {}\!\textrm{d}s,\theta _s),\quad t\in [0,T]^2, \end{aligned}
(3.22)

where the integral is interpreted in the sense of the NLY integral in Proposition 16.

### Proof

Let $$\epsilon <\gamma$$ be small and $$\tau \in [0, T]^2$$ be two parameters to be chosen appropriately later, and let $$\mathcal {C}_\tau ^{\gamma -\epsilon }(\xi )$$ be a collection of paths z in $$C^{\gamma -\epsilon }([0,\tau ]^2;\mathbb {R}^d)$$ with the property that $$z=\xi$$ on the boundary (i.e. on $$\partial [0,\tau ]^2:=\{0\}\times [0,\tau ] \cup [0,\tau ]\times \{0\}$$). In this space, the norms are restricted to the interval $$[0,\tau ]$$, in the sense that $$[\cdot ]_{\gamma -\epsilon }:=[\cdot ]_{{\gamma -\epsilon };[0,\tau ]}$$. Note that this implies that all $$z\in \mathcal {C}^{\gamma -\epsilon }_\tau (\xi )$$ can be decomposed as $$z=\xi +y$$, where y is zero on the boundary, and by Remark 8 we have that

\begin{aligned} \Vert z\Vert _{{\gamma -\epsilon }}\sim _\tau |z_0|+[z]_{\gamma -\epsilon } =|\xi _0|+[\xi ]_{(1,0),{\gamma -\epsilon }}+[\xi ]_{(0,1),{\gamma -\epsilon }}+[y]_{(1,1),{\gamma -\epsilon }}. \end{aligned}

Thus with the metric defined by $$\mathcal {C}^{\gamma -\epsilon }_\tau (\xi ) \ni (x,y)\mapsto [x-y]_{(1,1),{\gamma -\epsilon }}$$ it follows that $$\mathcal {C}^{\gamma -\epsilon }_\tau (\xi )$$ is a complete affine metric space. Since $$(1+\eta )\gamma >1$$ and $$\gamma >\frac{1}{2}$$ choose $$\epsilon >0$$ small such that $$(1+\eta )({\gamma -\epsilon })>1$$. On the space $$\mathcal {C}^{\gamma -\epsilon }_\tau (\xi )$$ define the solution mapping M by

\begin{aligned} \mathcal {C}^{\gamma -\epsilon }_\tau (\xi ) \ni z \mapsto M_\tau (z):=\{\xi _t + \int _0^t A(\mathop {}\!\textrm{d}s,z_s)|\,t\in [0,\tau _1]\times [0,\tau _2] \}. \end{aligned}

Define now $$\mathcal {B}_\tau ^{\gamma -\epsilon }(\xi )\subset \mathcal {C}^{\gamma -\epsilon }_\tau (\xi )$$ to be the closed unit ball in $$\mathcal {C}^{\gamma -\epsilon }_\tau (\xi )$$ centered at $$\xi$$, i.e.

\begin{aligned} \mathcal {B}_\tau ^{\gamma -\epsilon }(\xi ):=\{z\in \mathcal {C}^{\gamma -\epsilon }_\tau (\xi )| \, [z-\xi ]_{(1,1), {\gamma -\epsilon }}\le 1\}. \end{aligned}

Note also that for any $$z\in \mathcal {B}^{\gamma -\epsilon }_\tau (\xi )$$ we have

\begin{aligned} \,[z]_{\gamma -\epsilon }=[y]_{(1,1),{\gamma -\epsilon }}+[\xi ]_{(1,0),{\gamma -\epsilon }}+[\xi ]_{(0,1),{\gamma -\epsilon }} \le 1+[\xi ]_{(1,0),{\gamma -\epsilon }}+[\xi ]_{(0,1),{\gamma -\epsilon }}=:N_\xi . \end{aligned}
(3.23)

From here, a classical Picard fixed point argument can be developed; first we prove that the solution map $$M_\tau$$ leaves $$\mathcal {B}^{\gamma -\epsilon }_\tau (\xi )$$ invariant, i.e. $$M_\tau (\mathcal {B}^{\gamma -\epsilon }_\tau (\xi ))\subset \mathcal {B}^{\gamma -\epsilon }_\tau (\xi )$$, and then we prove that the solution map is a contraction, i.e. for $$z,y\in \mathcal {B}^{\gamma -\epsilon }_\tau (\xi )$$ then there exists a $$q\in (0,1)$$ such that

\begin{aligned}{}[M_\tau (y)-M_\tau (z)]_{\gamma -\epsilon } \le q[y-z]_{\gamma -\epsilon }. \end{aligned}

We begin by proving invariance. For any $$z\in \mathcal {B}^{\gamma -\epsilon }_\tau (\xi )$$ it follows from (3.6) that there exists a constant $$C>0$$ such that

\begin{aligned} \,[M_\tau (z)-\xi ]_{(1,1),{\gamma -\epsilon }}=\Big [\int _0^\cdot A(\mathop {}\!\textrm{d}s,z_s)\Big ]_{(1,1),{\gamma -\epsilon }} \le C m(\tau )^{\epsilon }\Vert A\Vert _{C^\gamma _tC^{1+\eta }_x}(1+[z]_{\gamma -\epsilon }^{1+\eta }), \end{aligned}
(3.24)

where we recall that $$m(\tau )^{\epsilon }=\tau _1^{\epsilon _1}\tau _2^{\epsilon _2}$$. Bounding the term $$[z]_{\gamma -\epsilon }$$ by $$N_\xi$$ as defined in (3.23) we get

\begin{aligned} \,[M_\tau (z)-\xi ]_{(1,1),{\gamma -\epsilon }}\le C m(\tau )^{\epsilon }\Vert A\Vert _{C^\gamma _tC^{1+\eta }_x}(1+N_\xi ^{1+\eta } ). \end{aligned}

Choosing $$\tau ^1$$ small enough, we see that $$M_{\tau ^1}$$ leaves the unit ball $$\mathcal {B}^{\gamma -\epsilon }_{\tau ^1}(\xi )$$ invariant. Note however that the choice of $$\tau ^1$$ depends on the boundary $$\xi$$, but only through the (1, 0) and (0, 1) Hölder norms, and does not depend on $$z-\xi$$, which will be important later.

For contraction we invoke the stability inequality from Proposition 19 to see that for $$z,{\tilde{z}}\in \mathcal {B}^{{\gamma -\epsilon }}_\tau (\xi )$$ we have

\begin{aligned}{}[M_\tau (z)-M_\tau ({\tilde{z}})]_{(1,1),{\gamma -\epsilon }}\le m(\tau )^{\epsilon }2K\Vert A\Vert _{C^\gamma _t C^{2+\eta }_x}(1+N_\xi )^{1+\eta }[z-{\tilde{z}}]_{(1,1),{\gamma -\epsilon }}, \end{aligned}

where we have used that $$[z-{\tilde{z}}]_{{\gamma -\epsilon }}=[z-{\tilde{z}}]_{(1,1),{\gamma -\epsilon }}$$ since z and $${\tilde{z}}$$ share the same boundary process $$\xi$$. Again, choosing $$\tau ^2$$ small enough that

\begin{aligned} m(\tau ^2)^{\epsilon }2K\Vert A\Vert _{C^\gamma _tC^{2+\eta }_x}(1+N_\xi )^{1+\eta }<1, \end{aligned}

we see that $$M_{\tau ^2}$$ is a contraction mapping from $$\mathcal {B}_{\tau ^2}^{\gamma -\epsilon }(\xi )$$ into $$\mathcal {B}_{\tau ^2}^{\gamma -\epsilon }(\xi )$$. Let $${\bar{\tau }}=\tau ^1\wedge \tau ^2$$; it then follows that there exists a unique fixed point of $$z\mapsto M_{{\bar{\tau }}}(z)$$ on the interval $$[0,{\bar{\tau }}]$$. For integers $$k,j\ge 1$$, the solution can now be iterated to rectangles of the form $$[k{\bar{\tau }}_1,(k+1){\bar{\tau }}_1]\times [j{\bar{\tau }}_2,(j+1){\bar{\tau }}_2] \subset [0,T]^2$$ by following the exact procedure provided in the proof of [23, Theorem 25]. The crucial point for global existence is that A and its derivatives are globally bounded, and that the constant $$N_\xi$$, and thus $$\tau ^1$$ and $$\tau ^2$$, only depend on the boundary information, and not on $$z-\xi$$ for $$z\in \mathcal {B}_{\tau }^{\gamma -\epsilon }(\xi )$$. The detailed proof of this point is rather lengthy and written in full detail in [23], and we therefore omit further details here. Once it is proven that the solution exists on the full rectangle $$[0,T]^2$$, it follows that it is contained in $$C^{\gamma -\epsilon }([0,T]^2;\mathbb {R}^d)$$. In fact, it is then readily seen that the solution $$\theta$$ to (3.22) is contained in $$C^\gamma ([0,T]^2;\mathbb {R}^d)$$. Indeed, observe that

\begin{aligned} |\square _{s,t}\theta |=\Big |\int _s^t A(\mathop {}\!\textrm{d}r,\theta _r)\Big |\lesssim _T \Vert A\Vert _{C^\gamma _tC^{1+\eta }_x}(1+[\theta ]_{\gamma -\epsilon })m(t-s)^\gamma , \end{aligned}

and similar estimates show that $$[\theta ]_{(1,0),\gamma }$$ and $$[\theta ]_{(0,1),\gamma }$$ are finite; thus we conclude that $$\theta \in C^\gamma ([0,T]^2;\mathbb {R}^d)$$. $$\square$$

### Remark 22

In the above proof we constructed a local solution on small rectangles $$[0,\tau ]=[0,\tau _1]\times [0,\tau _2]$$, in order to iterate the solution to rectangles of the form $$[k \tau _1, (k+1)\tau _1]\times [j \tau _2, (j+1) \tau _2]$$, consistent with, and fully described in, [23]. However, as pointed out by the anonymous referee, based on the estimate in (3.24) it is clear that one only really needs to take $$\tau =(\tau _1,T)$$ for $$\tau _1$$ sufficiently small, and from there one may iterate the solution on small "strips" $$[k \tau _1, (k+1)\tau _1]\times [0,T]$$ of the rectangle $$[0,T]^2$$. Such a solution method would certainly be similar to the one shown in [23] and could shorten some arguments slightly, but for brevity of presentation we have used the method outlined in [23] to avoid a detailed presentation of this step.

We conclude this section with the following proposition providing stability of the solutions to the 2D non-linear Young equations in terms of the non-linear function A.

### Proposition 23

Consider two functions $$A,{\tilde{A}}\in C^\gamma _tC^{2+\eta }_x$$ for some $$\gamma \in (\frac{1}{2},1]^2$$ and $$\eta \in (0,1)$$ such that $$(1+\eta )\gamma >1$$. Furthermore, let $$\xi ,{\tilde{\xi }}\in C^\gamma _t$$ be such that for any $$s<t\in [0,T]^2$$, $$\square _{s,t} \xi =\square _{s,t} {\tilde{\xi }}=0$$. Let $$\theta ,{\tilde{\theta }}\in C^\gamma _t$$ be two solutions to (3.22) driven by $$(A,\xi )$$ and $$({\tilde{A}},{\tilde{\xi }})$$, respectively, and assume there exists a constant $$M>0$$ such that

\begin{aligned} \Vert \theta \Vert _\gamma \vee \Vert {\tilde{\theta }}\Vert _\gamma \vee \Vert A\Vert _{C^\gamma _tC^{2+\eta }_x}\vee \Vert {\tilde{A}}\Vert _{C^\gamma _tC^{2+\eta }_x}\le M. \end{aligned}
(3.25)

Then the solution map $$(A,\xi )\mapsto \theta$$ is continuous, and there exists a constant C depending on $$\gamma ,M,T$$ such that

\begin{aligned}{}[\theta -{\tilde{\theta }}]_{\gamma } \le C( \Vert \xi -{\tilde{\xi }}\Vert _\gamma + \Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}). \end{aligned}
(3.26)

### Proof

First observe that the difference $$\theta -{\tilde{\theta }}$$ is given by

\begin{aligned} \theta _t-{\tilde{\theta }}_t=\xi _t-{\tilde{\xi }}_t+\int _0^t A(\mathop {}\!\textrm{d}s,\theta _s)-\int _0^t{\tilde{A}}(\mathop {}\!\textrm{d}s,{\tilde{\theta }}_s). \end{aligned}

Using the exact same techniques as for proving the stability result for the non-linear Young integral in (3.12), we see that

\begin{aligned}{} & {} \left| \int _s^t A(\mathop {}\!\textrm{d}r,\theta _r)-\int _s^t{\tilde{A}}(\mathop {}\!\textrm{d}r,{\tilde{\theta }}_r) - (\square _{s,t}A(\theta _s)-\square _{s,t}{\tilde{A}}({\tilde{\theta }}_s))\right| \\{} & {} \quad \le (C_1\Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}+C_2(|\theta _s-{\tilde{\theta }}_s|+[\theta -{\tilde{\theta }}]_\beta ))|t-s|^{\eta \gamma } m(t-s)^\gamma . \end{aligned}

Here $$C_1$$ and $$C_2$$ are given as in (3.13), and by definition of M we see that

\begin{aligned} C_1\vee C_2\le K(M), \end{aligned}

for some monotone increasing function K. Furthermore, using that $$\square _{s,t}\xi =\square _{s,t}{\tilde{\xi }}=0$$, we see from the same inequality that the following bound also holds

\begin{aligned}{} & {} \left| \square _{s,t}(\theta -{\tilde{\theta }})- (\square _{s,t}A(\theta _s)-\square _{s,t}{\tilde{A}}({\tilde{\theta }}_s))\right| \\{} & {} \quad \le K(M) (\Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}+|\theta _s-{\tilde{\theta }}_s|+[\theta -{\tilde{\theta }}]_{\beta ;[s,t]})|t-s|^{\eta \gamma } m(t-s)^\gamma . \end{aligned}

For some $$\tau \in (0,T)^2$$ consider any interval $$[\rho ,\rho +\tau ]$$ such that $$[\rho ,\rho +\tau ]\subset [0,T]^2.$$ It follows that for $$\beta <\gamma$$ we have

\begin{aligned}{} & {} [\theta -{\tilde{\theta }}-(\square _{\rho ,\cdot }A(\theta _\rho )-\square _{\rho ,\cdot }{\tilde{A}}({\tilde{\theta }}_\rho ))]_{(1,1),\gamma ;[\rho ,\rho +\tau ]}\le K(M) |\tau |^{\eta \gamma } (\Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}\\{} & {} \quad +|\theta _\rho -{\tilde{\theta }}_\rho |+[\theta -{\tilde{\theta }}]_{\gamma ;[\rho ,\rho +\tau ]}). \end{aligned}

By similar computations as above one can also show that

\begin{aligned}&[\theta -{\tilde{\theta }}-(\square _{\rho ,\cdot }A(\theta _\rho )-\square _{\rho ,\cdot }{\tilde{A}}({\tilde{\theta }}_\rho ))]_{(1,0),\gamma ;[\rho ,\rho +\tau ]}\\&\quad \lesssim _T \tau _1^{\gamma \eta }K(M) (\Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}+|\theta _{\rho }-{\tilde{\theta }}_{\rho }|+[\theta -{\tilde{\theta }}]_{\gamma ;[\rho ,\rho +\tau ]}), \\&[\theta -{\tilde{\theta }}-(\square _{\rho ,\cdot }A(\theta _\rho )-\square _{\rho ,\cdot }{\tilde{A}}({\tilde{\theta }}_\rho ))]_{(0,1),\gamma ;[\rho ,\rho +\tau ]}\\&\quad \lesssim _T\tau _2^{\gamma \eta }K(M) (\Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}+|\theta _{\rho }-{\tilde{\theta }}_{\rho }|+[\theta -{\tilde{\theta }}]_{\gamma ;[\rho ,\rho +\tau ]}). \end{aligned}

Combining these estimates, it follows that there exists a constant $$C=C(T,K(M),\gamma \eta )>0$$ such that

\begin{aligned}{} & {} [\theta -{\tilde{\theta }}-(\square _{\rho ,\cdot }A(\theta _\rho )-\square _{\rho ,\cdot }{\tilde{A}}({\tilde{\theta }}_\rho ))]_{\gamma ;[\rho ,\rho +\tau ]} \le C\left( \Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x} \right. \\{} & {} \quad \left. + |\theta _\rho -{\tilde{\theta }}_{\rho }|+|\tau |^{\gamma \eta } [\theta -{\tilde{\theta }}]_{\gamma ;[\rho ,\rho +\tau ]}\right) . \end{aligned}

In particular, choosing $$\tau$$ small enough such that

\begin{aligned} |\tau |^{\eta \gamma } \le \frac{1}{2C}, \end{aligned}

it follows that

\begin{aligned}{}[\theta -{\tilde{\theta }}-(\square _{\rho ,\cdot }A(\theta _\rho )-\square _{\rho ,\cdot }{\tilde{A}}({\tilde{\theta }}_\rho ))]_{\gamma ;[\rho ,\rho +\tau ]} \le 2 C( |\theta _{\rho }-{\tilde{\theta }}_{\rho }| + \Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}). \end{aligned}

Rearranging via the triangle inequality, we then see that

\begin{aligned}{}[\theta -{\tilde{\theta }}]_{\gamma ;[\rho ,\rho +\tau ]} \le 2 C( |\theta _{\rho }-{\tilde{\theta }}_{\rho }| + \Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}). \end{aligned}
(3.27)

Note that this inequality holds for any $$\rho \in [0,T)^2$$ such that $$[\rho ,\rho +\tau ]\subset [0,T]^2$$, and in particular for the rectangle $$[0,\tau ]$$. Iterating the inequality from this rectangle to any rectangle $$[k\tau , (k+1)\tau ]\subset [0,T]^2$$, and using the relation $$|x_t|\le |x_0|+[x]_{\gamma }$$, one can show that on any interval $$[\rho ,\rho +\tau ]$$

\begin{aligned}{}[\theta -{\tilde{\theta }}]_{\gamma ;[\rho ,\rho +\tau ]} \lesssim \Vert \xi -{\tilde{\xi }}\Vert _\gamma + \Vert A-{\tilde{A}}\Vert _{C^\gamma _t C^{2+\eta }_x}. \end{aligned}

We therefore conclude by the 2D-Hölder norm scaling property proven in [23, Proposition 24], that (3.26) holds. $$\square$$

## 4 Regularization of SDEs on the plane

Consider the stochastic differential equation formally given by

\begin{aligned} x_t =\xi _t +\int _0^t b(x_s)\mathop {}\!\textrm{d}s + w_t, \quad t\in [0,T]^2, \end{aligned}
(4.1)

where $$\xi _t=\xi ^1_{t_1}1_{t_2=0}+\xi _{t_2}^21_{t_1=0}$$ is a function supported on the boundary $$\{0\}\times [0,T]\cup [0,T]\times \{0\}$$. The integral equation is therefore equipped with the two boundary conditions $$x_{t_1,0}=\xi ^1_{t_1}+w_{t_1, 0}$$ and $$x_{0,t_2}=\xi ^2_{t_2}+w_{0, t_2}$$. The goal of this section is to prove well-posedness of this equation, even when b is distributional (in the sense of generalized functions), provided that w has a sufficiently regularizing effect.

### Remark 24

In this article the results on regularization by noise are formulated in terms of the potential regularity of the local time, similar to the approach in [24, 25]. Similar pathwise regularization by noise results can also be formulated as requirements on the so-called averaged field associated with w and the drift b, as in [10]. On two dimensional domains, the local time and occupation measure of stochastic fields is a well studied topic, and some types of regularity estimates are well known; see e.g. [21, Theorem 28.1], where it is established that the $$(H_1, \dots , H_N)$$-fractional Brownian sheet on $$[0,T]^N$$ with $$H_1=\dots =H_N=H$$ has a local time $$L^{H}_t$$ contained in $$H^{\frac{N}{2H}-\frac{d}{2}}$$ $$\mathbb {P}$$-almost surely (see also [3, 34] for joint space-time continuity of the local time in this setting). However, such regularity estimates are not sufficient to apply non-linear Young theory, as one still lacks proofs of higher quantified joint space-time regularity of these occupation measures. In particular, estimates that simultaneously provide quantified Young regularity in time and higher spatial regularity appear to be lacking: while [3, 34] provide some Hölder regularity in time for fixed spatial points, respectively Hölder regularity in space for fixed time points, estimates establishing Hölder continuity in time jointly with spatial regularity on a Bessel potential scale of high order, as in Theorem 31, appear to be new.

Suppose now that $$w:[0,T]^2\rightarrow \mathbb {R}^d$$ is a stochastic field. Let $$\mu ^w$$ denote the occupation measure of w, defined as follows for a Borel set $$A\in \mathcal {B}(\mathbb {R}^d)$$,

\begin{aligned} \mu ^w_t(A) = \lambda \{s\in [0,t]\,|\, w_s\in A\},\quad t\in [0,T]^2, \end{aligned}

where $$\lambda$$ is the Lebesgue measure. If the occupation measure is absolutely continuous with respect to the Lebesgue measure on $$\mathbb {R}^d$$ it admits a density, $$L^w$$, i.e.

\begin{aligned} \mu ^w_t(A)=\int _A L^w_t(z)\mathop {}\!\textrm{d}z,\quad t\in [0,T]^2. \end{aligned}

The function $$L^w:[0,T]^2 \times \mathbb {R}^d \rightarrow \mathbb {R}_+$$ is called the local time associated to w. Given a bounded measurable function $$b:\mathbb {R}^d\rightarrow \mathbb {R}^d$$ and $$x\in \mathbb {R}^d$$, the following local time formula holds

\begin{aligned} \int _s^t b(x+w_r)\mathop {}\!\textrm{d}r= \int _{\mathbb {R}^d}b(x-z)\square _{s,t} L^{-w}(z)\mathop {}\!\textrm{d}z = (b*\square _{s,t}L^{-w})(x). \end{aligned}
(4.2)
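In a discretized form, the identity (4.2) is just a regrouping of a Riemann sum by the values the field visits: integrating $$b(x+w_r)$$ in time is the same as weighting b by the occupation density. A minimal numerical sketch (the synthetic field w, the test function b and all variable names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                       # grid points per time axis on [0, 1]^2
dt = 1.0 / n                  # each time cell has area dt * dt
# synthetic continuous field w (d = 1), built by double cumulative sums
w = np.cumsum(np.cumsum(rng.normal(0.0, dt, (n, n)), axis=0), axis=1)

b = lambda x: np.sin(x)       # a bounded measurable test function
x = 0.3

# left-hand side of (4.2): Riemann sum of b(x + w_r) over the square
lhs = b(x + w).sum() * dt * dt

# right-hand side: occupation density of w (a histogram), then the
# corresponding weighted sum of b over the visited values
counts, edges = np.histogram(w, bins=2000)
centers = 0.5 * (edges[:-1] + edges[1:])
rhs = (b(x + centers) * counts).sum() * dt * dt

assert abs(lhs - rhs) < 0.01  # equal up to the value-binning error
```

The two numbers agree up to the bin width of the occupation histogram, which is the discrete shadow of the absolute continuity assumption on $$\mu ^w$$.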

### Remark 25

For the reader familiar with the concept of averaged fields in the pathwise regularization by noise approach developed by Catellier, Galeati and Gubinelli in [10, 19, 20], the left hand side of the above equation could be seen as a two dimensional extension of the averaged field associated to b and w. That is,

\begin{aligned} T^{w}b(t,x)=\int _0^t b(x+w_r)\mathop {}\!\textrm{d}r,\quad t\in [0,T]^2, \end{aligned}

where we stress that the integral above is a double integral, and by convention $$\mathop {}\!\textrm{d}r=\mathop {}\!\textrm{d}r_2 \mathop {}\!\textrm{d}r_1$$.

With the aim of proving well-posedness of (4.1) in the case when b is truly distributional, we need to make sense of the integral appearing in (4.1). Similarly to the (1D) pathwise regularization approach, we will construct this integral in terms of a non-linear Young integral. That is, let us first reformulate (4.1) by setting $$\theta =x-w$$. Then formally $$\theta$$ solves the equation

\begin{aligned} \theta _t=\xi _t+\int _0^t b(\theta _s+w_s)\mathop {}\!\textrm{d}s. \end{aligned}
(4.3)

In the case when b is continuous, the integral appearing above can be constructed in the Riemann sense; for a sequence of partitions $$\{\mathcal {P}^n\}$$ of $$[0,t_1]\times [0,t_2]$$ (constructed as in Definition 13) with mesh going to zero as n tends to infinity, we have that

\begin{aligned} \int _0^t b(\theta _s+w_s)\mathop {}\!\textrm{d}s=\lim _{n\rightarrow \infty }\sum _{[u_1,v_1]\times [u_2,v_2]\in \mathcal {P}^n} b(\theta _u+w_u)\square _{u,v} id, \end{aligned}
(4.4)

where $$\square _{u,v} id:=\int _{u_1}^{v_1}\int _{u_2}^{v_2}\mathop {}\!\textrm{d}r_2\mathop {}\!\textrm{d}r_1$$. An alternative approach to constructing this integral is in terms of the non-linear Young integral. Suppose the local time associated with w is differentiable in its spatial variable and (2D) $$\gamma$$-Hölder continuous in the time variable with $$\gamma >\frac{1}{2}$$. Then we can use Proposition 16 to show that the following integral exists:

\begin{aligned} \int _0^t (b*L^{-w})(\mathop {}\!\textrm{d}s, \theta _s):=\lim _{n\rightarrow \infty }\sum _{[u_1,v_1]\times [u_2,v_2]\in \mathcal {P}^n} (b*\square _{u,v} L^{-w})(\theta _u). \end{aligned}
(4.5)
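For continuous b, the limit in (4.4) can be observed numerically: left-point sums over finer and finer grids stabilize. A small sketch with hypothetical smooth choices of b, $$\theta$$ and w (d = 1; all names are illustrative):

```python
import numpy as np

b = lambda x: np.cos(x)
theta = lambda t1, t2: t1 * t2                 # hypothetical smooth path
w = lambda t1, t2: np.sin(3 * t1) * np.cos(2 * t2)

def riemann_2d(n, t=(1.0, 1.0)):
    """Left-point sum of b(theta_u + w_u) * square_{u,v} id over an
    n x n grid partition of [0, t1] x [0, t2], as in (4.4)."""
    u1 = np.linspace(0.0, t[0], n, endpoint=False)
    u2 = np.linspace(0.0, t[1], n, endpoint=False)
    U1, U2 = np.meshgrid(u1, u2, indexing="ij")
    cell = (t[0] / n) * (t[1] / n)             # square_{u,v} id
    return np.sum(b(theta(U1, U2) + w(U1, U2))) * cell

coarse, fine = riemann_2d(100), riemann_2d(800)
assert abs(coarse - fine) < 0.1                # the sums stabilize
```

For merely distributional b this direct discretization is meaningless, which is exactly why the non-linear Young construction (4.5) is needed.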

In the next proposition we prove that the above integral indeed exists in the non-linear Young sense, and that it agrees with the classical Riemann integral in the case of continuous functions b.

### Proposition 26

Consider $$p,q\in [1,\infty ]$$ such that $$\frac{1}{p}+\frac{1}{q}=1$$. Let $$b\in B^\zeta _{p,p}(\mathbb {R}^d)$$ for some $$\zeta \in \mathbb {R}$$, and assume that $$w:[0, T]^2\rightarrow \mathbb {R}^d$$ is continuous and such that $$L^{-w}\in C^\gamma _tB^\kappa _{q,q}(\mathbb {R}^d)$$, with $$\gamma \in (\frac{1}{2},1]^2$$ and $$\zeta +\kappa >1+\eta$$ for some $$\eta \in (0,1)$$. Suppose $$\theta \in C^\alpha _t$$ for $$\alpha \in (0,1)^2$$ such that $$\alpha \eta +\gamma >1$$. Then the non-linear Young integral defined in (4.5) exists. Furthermore, if b is continuous, then this integral agrees with the classical Riemann integral.

### Proof

Set $$\square _{s,t}A(x):=(b*\square _{s,t}L^{-w})(x)=\int _s^tb(x+w_r)\mathop {}\!\textrm{d}r$$. By Young’s convolution inequality (see e.g. [4]), it follows that

\begin{aligned} \Vert \square _{s,t}A\Vert _{C^{\kappa +\zeta }_x}\lesssim \Vert b\Vert _{B^\zeta _{p,p}(\mathbb {R}^d)}\Vert L^{-w}\Vert _{C^\gamma _tB^\kappa _{q,q}(\mathbb {R}^d)}m(t-s)^\gamma . \end{aligned}
(4.6)

This implies that $$A\in C^\gamma _t C^{\kappa +\zeta }_x$$. Furthermore, $$t\mapsto \square _{0,t}L^{-w}$$ is clearly 0 on the boundary, as illustrated in Remark 8. Thus, since $$\kappa +\zeta >1+\eta$$ and $$\theta \in C^\alpha _t$$ with $$\eta \alpha +\gamma >1$$, it then follows from Proposition 16 that the integral

\begin{aligned} \int _0^t (b*L^{-w})(\mathop {}\!\textrm{d}s, \theta _s)=\int _0^t A(\mathop {}\!\textrm{d}s,\theta _s), \end{aligned}

exists in the sense of (3.5).

Suppose now that b is continuous, so that the Riemann integral in (4.4) exists. Note that then $$A_t(x)=\int _0^t b(x+w_r)\mathop {}\!\textrm{d}r$$, i.e. $$\partial _{t_1}\partial _{t_2}A_s(x)=b(x+w_s)$$ is continuous. Hence, by Lemma 18, the 2D non-linear Young integral constructed above coincides with the corresponding Riemann integral. $$\square$$

Now that the non-linear Young integral of the convolution between a function b and the local time is well defined, we move on to proving existence and uniqueness of solutions to (4.1). However, as we are interested in allowing for distributional b, we need a rigorous concept of solution which behaves well under approximation. This is provided in the following definition.

### Definition 27

Let $$w:[0,T]^2\rightarrow \mathbb {R}^d$$ be a continuous field, and $$b\in \mathcal {S}(\mathbb {R}^d)'$$ (the space of Schwartz distributions). Assume that $$b*L^{-w}\in C^{\gamma }_tC^{2+\eta }_x$$ for some $$\gamma \in (\frac{1}{2},1]^2$$ and for some $$\eta \in (0,1)$$ such that $$(1+\eta )\gamma >1$$. We say that $$x\in C([0,T]^2; \mathbb {R}^d)$$ is a solution to (4.1) if there exists a $$\theta \in C^\gamma ([0,T]^2;\mathbb {R}^d)$$, such that $$x=w+\theta$$, and $$\theta$$ satisfies

\begin{aligned} \theta _t=\xi _t+\int _0^t (b*L^{-w})(\mathop {}\!\textrm{d}s,\theta _s), \quad t\in [0,T]^2. \end{aligned}
(4.7)

Here, $$\xi _t=\xi ^1_{t_1}1_{t_2=0}+\xi _{t_2}^21_{t_1=0}$$ and $$\xi \in C^\beta _t$$ for some $$\beta \ge \gamma$$, the integral is understood as a non-linear Young integral as defined in Proposition 26 and the equation is interpreted in the non-linear Young sense (see Theorem 21). We call $$\xi$$ boundary data of $$\theta$$. The boundary data of x is accordingly $$\xi +w$$.

The next result provides simple conditions for the existence and uniqueness of solutions in terms of the regularity of the (possibly distributional) coefficient b and the regularity of the local time associated to the continuous field w.

### Theorem 28

Let $$p,q\in [1,\infty ]$$ be such that $$\frac{1}{p}+\frac{1}{q}=1$$. Assume $$b\in B^\zeta _{p,p}(\mathbb {R}^d)$$ for $$\zeta \in \mathbb {R}$$ and that $$w\in C([0,T]^2;\mathbb {R}^d)$$ has an associated local time $$L^{-w}\in C^\gamma _t B^\alpha _{q,q}(\mathbb {R}^d)$$ for some $$\gamma \in (\frac{1}{2},1]^2$$ and $$\alpha \in (0,1)^2$$. If there exists an $$\eta \in (0,1)$$ such that

\begin{aligned} \gamma (1+\eta )>1\quad \text {and} \quad \zeta +\alpha >2+\eta , \end{aligned}
(4.8)

then there exists a unique solution $$x\in C([0,T]^2;\mathbb {R}^d)$$ to (4.1), where the solution is given in the sense of Definition 27.

### Proof

We know that under the conditions in (4.8) the non-linear Young integral in (4.7) is well defined according to Proposition 26, and it follows from Theorem 21 that a unique solution $$\theta$$ to (4.7) exists in $$C^\gamma _t$$. Setting $$x=w+\theta$$, it follows that $$x\in C([0,T]^2;\mathbb {R}^d)$$ and that x is a solution in the sense of Definition 27. $$\square$$

### Remark 29

Note that when b is a continuous function and the conditions of Theorem 28 are satisfied, the solution coincides with the classical one. Indeed, in this case the simple transformation of (4.1) given in (4.3) holds, and since the non-linear Young integral agrees with the Riemann integral, the two concepts of solution also agree.

With the stability result from Proposition 23 it also follows that smooth approximations of a solution converge to the solution of the non-linear Young equation.

### Corollary 30

Suppose b, w and $$L^{-w}$$ satisfy the conditions of Theorem 28, so that a unique solution x to (4.1) with boundary data $$\xi +w$$ exists in the sense of Definition 27. Let $$\{b^n\}_{n\in \mathbb {N}}$$ be a sequence of smooth functions approximating b, such that $$\lim _{n\rightarrow \infty } \Vert b^n-b\Vert _{B^\zeta _{p,p}(\mathbb {R}^d)}=0$$. Denote by $$\{x^n\}_{n \in \mathbb {N}}\subset C([0,T]^2;\mathbb {R}^d)$$ the sequence of solutions to (4.1) with boundary data $$\xi +w$$ and drift $$b^n$$. Then $$x^n\rightarrow x$$ in $$C([0,T]^2;\mathbb {R}^d)$$, and for any $$\beta \in (0,\gamma )$$ there exists a constant $$C>0$$ such that for any $$n\in \mathbb {N}$$

\begin{aligned} \Vert x^n-x\Vert _{\infty } \le C\Vert b^n-b\Vert _{B^\zeta _{p,p}(\mathbb {R}^d)}. \end{aligned}

### Proof

This follows directly from Proposition 23, the fact that $$\Vert x^n-x\Vert _{\infty }\le |x_0^n-x_0|+ [x^n-x]_\beta$$, together with the interpretation of the solutions in the non-linear Young sense, as illustrated in Proposition 26 and Theorem 28. $$\square$$

### 4.1 Regularity of the local time of the fractional Brownian sheet

While the above theorem provides explicit conditions for existence and uniqueness of solutions in terms of the regularity of the local time associated with the continuous field w, it does not provide any further conditions on the field w itself guaranteeing that the local time indeed has the assumed regularity. The theorem is therefore abstract in itself, and one needs to study space-time regularity properties of local times associated with various continuous (stochastic) fields in order to obtain concrete conditions on w and b guaranteeing existence and uniqueness. In the following, we derive joint space-time regularity estimates for the local time associated with two types of noise: the fractional Brownian sheet, and sums of independent fractional Brownian motions in distinct variables. The following theorem can be considered an extension of [25, Theorem 3.1] to the two dimensional setting.

### Theorem 31

Let $$w:[0,T]^2\rightarrow \mathbb {R}^d$$ be a fractional Brownian sheet of Hurst parameter $$H=(H_1, H_2)$$ on $$(\Omega , \mathcal {F}, \mathbb {P})$$. Suppose that

\begin{aligned} \lambda <\frac{1}{2(H_1\vee H_2)}-\frac{d}{2}. \end{aligned}

Then for almost every $$\omega \in \Omega$$, w admits a local time L such that for $$\gamma _1\in (1/2, 1-(\lambda +\frac{d}{2})H_1)$$ and $$\gamma _2\in (1/2, 1-(\lambda +\frac{d}{2})H_2)$$

\begin{aligned} \left\Vert \square _{s,t} L\right\Vert _{H^\lambda _x}\lesssim (t_1-s_1)^{\gamma _1}(t_2-s_2)^{\gamma _2}. \end{aligned}

### Proof

We proceed similarly to [25, Theorem 3.1]. Recall that if $$\mu$$ denotes the occupation measure associated with w, we have by the occupation times formula

\begin{aligned} \square _{s,t}{\hat{\mu }}(z)=\int _s^t e^{iz\cdot w_r}dr. \end{aligned}

Recall also that it was shown in [25, Theorem 3.1] that for a d-dimensional fractional Brownian motion $$B^{H_1}$$ of Hurst parameter $$H_1$$, one obtains the bound

\begin{aligned} \left\Vert \int _{s_1}^{t_1} e^{i\alpha z\cdot B^{H_1}_r}dr\right\Vert _{L^m(\Omega )}\lesssim (1+|z|^2)^{-\lambda '/2}|\alpha |^{-\lambda '}(t_1-s_1)^{1-\lambda 'H_1}, \end{aligned}
(4.9)

provided $$1-\lambda 'H_1>1/2$$, thanks to the stochastic sewing lemma [28]. Moreover, remark that for the fractional Brownian sheet $$w_r=w_{r_1, r_2}$$ we have, for fixed $$r_2\in [0,T]$$, that $$w_{r_1, r_2}\simeq r_2^{H_2}B_{r_1}^{H_1}$$ in law, where $$B^{H_1}$$ is a standard fractional Brownian motion of Hurst parameter $$H_1$$. This implies that

\begin{aligned} \left\Vert \int _s^te^{iz\cdot w_r}dr\right\Vert _{L^m}&\le \int _{s_2}^{t_2}\left\Vert \int _{s_1}^{t_1}e^{iz\cdot w_r}dr_1\right\Vert _{L^m}dr_2\\&=\int _{s_2}^{t_2}\left\Vert \int _{s_1}^{t_1}e^{iz\cdot r_2^{H_2}B^{H_1}_{r_1}}dr_1\right\Vert _{L^m}dr_2\\&\lesssim \int _{s_2}^{t_2} (1+|z|^2)^{-\lambda '/2}r_2^{-H_2\lambda '}(t_1-s_1)^{1-\lambda 'H_1}dr_2\\&\lesssim (1+|z|^2)^{-\lambda '/2}(t_1-s_1)^{1-\lambda 'H_1}(t_2-s_2)^{1-\lambda 'H_2}, \end{aligned}

where we need to require $$1-\lambda 'H_2>1/2$$ so as to ensure $$\gamma _2>1/2$$. By the definition of Bessel potential spaces, we therefore obtain by Minkowski’s integral inequality

\begin{aligned} \mathbb {E}[\left\Vert \square _{s,t}\mu \right\Vert _{H^s}^p]^{1/p}&=\left( \mathbb {E}\left( \int _{\mathbb {R}^d}|\square _{s,t}{\hat{\mu }}(z)|^2(1+|z|^2)^{s}dz \right) ^{p/2}\right) ^{1/p}\\&\le \left( \int _{\mathbb {R}^d}\left\Vert |\square _{s,t}{\hat{\mu }}(z)|^2\right\Vert _{L^{p/2}(\Omega )}(1+|z|^2)^{s}dz \right) ^{1/2}\\&=\left( \int _{\mathbb {R}^d}\left\Vert \square _{s,t}{\hat{\mu }}(z)\right\Vert _{L^{p}(\Omega )}^{2}(1+|z|^2)^{s}dz \right) ^{1/2}\\&\lesssim (t_1-s_1)^{1-\lambda 'H_1}(t_2-s_2)^{1-\lambda 'H_2}\left( \int _{\mathbb {R}^d}(1+|z|^2)^{s-\lambda '}dz\right) ^{1/2}\\&\simeq (t_1-s_1)^{1-\lambda 'H_1}(t_2-s_2)^{1-\lambda 'H_2}\left( \int _{0}^\infty (1+r^2)^{s-\lambda '}r^{d-1}dr\right) ^{1/2}\\&\lesssim (t_1-s_1)^{1-\lambda 'H_1}(t_2-s_2)^{1-\lambda 'H_2}, \end{aligned}

provided $$s<\lambda '-d/2$$. We can then conclude by the joint Kolmogorov continuity theorem as expressed in [26, Theorem 3.1] to obtain for $$\gamma _1<1-\lambda 'H_1$$ and $$\gamma _2<1-\lambda 'H_2$$

\begin{aligned} \left\Vert \square _{s,t}\mu \right\Vert _{H^s}\lesssim (t_1-s_1)^{\gamma _1}(t_2-s_2)^{\gamma _2}. \end{aligned}

It thus follows that for such $$\gamma =(\gamma _1, \gamma _2)$$, we may conclude

\begin{aligned} L\in C^\gamma _tH^s, \end{aligned}

where

\begin{aligned} s<\lambda '-\frac{d}{2}<\frac{1}{2(H_1\vee H_2)}-d/2. \end{aligned}

$$\square$$

### Remark 32

Let us remark that we expect the above Theorem 31 to be far from optimal: in particular, no genuinely two dimensional stochastic cancellations have been employed, meaning that regularization is not obtained from "both directions", but is rather limited by the direction with the largest Hurst parameter, i.e. $$H_1\vee H_2$$. Indeed, note that the above proof already exploits self-similarity properties to transfer the one-parameter setting to the present two-parameter setting. As a crucial role in the regularity estimates for local times was already played by the stochastic sewing lemma in [25], we expect that in our setting a "2D stochastic sewing lemma", not yet available in the literature (see however [27] for a stochastic reconstruction theorem very close in spirit), might prove instrumental in establishing regularization from both directions.

Combining Theorem 28 with the above Theorem 31, we immediately obtain:

### Theorem 33

Let $$w:[0,T]^2\rightarrow \mathbb {R}^d$$ be a fractional Brownian sheet of Hurst parameter $$H=(H_1, H_2)$$ on $$(\Omega , \mathcal {F}, \mathbb {P})$$. Let $$b\in H^\zeta$$ for $$\zeta \in \mathbb {R}$$. Suppose that

\begin{aligned} \zeta >3-\frac{1}{2(H_1\vee H_2)}+\frac{d}{2}, \end{aligned}
(4.10)

then, for almost all $$\omega \in \Omega$$ independent of b, there exists a unique solution $$x\in C([0,T]^2;\mathbb {R}^d)$$ to (4.1), where the solution is given in the sense of Definition 27.

### Lemma 34

Let $$\beta ^1$$ and $$\beta ^2$$ be two fractional Brownian motions on a probability space $$(\Omega ,\mathcal {F},\mathbb {P})$$ with Hurst parameters $$H_1,H_2 \in (0,1)$$. Then, for almost all $$\omega \in \Omega$$, the local time $$L^{-w}$$ associated with $$w=\beta ^1+\beta ^2$$ is given by the convolution of the local times $$L^{-\beta ^1}$$ and $$L^{-\beta ^2}$$, i.e.

\begin{aligned} L^{-w}_t(x)=L_{t_1}^{-\beta ^1}*L^{-\beta ^2}_{t_2} (x), \quad t=(t_1,t_2) \in [0,T]^2. \end{aligned}

### Proof

Let $$b: \mathbb {R}^d \rightarrow \mathbb {R}$$ be any measurable function and fix any $$\omega \in \Omega$$. To keep the notation simple we avoid writing $$\omega$$ explicitly. Applying the local time formula twice, we have

\begin{aligned} b*L^{-w}_t(x)&=\int _0^t b(x+w_r)\mathop {}\!\textrm{d}r =\int _{0}^{t_1}\int _0^{t_2} b(x+\beta _{r_1}^1+\beta _{r_2}^2)\mathop {}\!\textrm{d}r_2\mathop {}\!\textrm{d}r_1 \\&= \int _0^{t_1}\int _{\mathbb {R}^d} b(x+\beta ^1_{r_1}-z)L_{t_2}^{-\beta ^2}(z)\mathop {}\!\textrm{d}z\mathop {}\!\textrm{d}r_1 \\&=\int _{\mathbb {R}^d}\int _{\mathbb {R}^d} b(x-z'-z)L_{t_2}^{-\beta ^2}(z)L_{t_1}^{-\beta ^1}(z')\mathop {}\!\textrm{d}z\mathop {}\!\textrm{d}z' \\&= b*L_{t_1}^{-\beta ^1}*L^{-\beta ^2}_{t_2} (x). \end{aligned}

Hence the result. $$\square$$
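Lemma 34 has an exact discrete analogue that is easy to verify: for integer-valued walks, the occupation counts of the sum field $$w_{i,j}=\beta ^1_i+\beta ^2_j$$ over the full rectangle of indices are exactly the discrete convolution of the two individual occupation counts. A quick sketch (the walks and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 300, 200
# two integer-valued random walks standing in for beta^1 and beta^2
beta1 = np.cumsum(rng.choice([-1, 1], n1))
beta2 = np.cumsum(rng.choice([-1, 1], n2))

def counts(values):
    """Occupation counts (unnormalized local time) on the integer lattice,
    returned together with the smallest visited value."""
    lo = values.min()
    return np.bincount(values - lo), lo

c1, lo1 = counts(beta1)
c2, lo2 = counts(beta2)

# occupation counts of the sum field w_{i,j} = beta1_i + beta2_j
w = beta1[:, None] + beta2[None, :]
cw, low = counts(w.ravel())

conv = np.convolve(c1, c2)          # discrete convolution of local times
assert low == lo1 + lo2
assert np.array_equal(cw, conv)     # L^w = L^{beta^1} * L^{beta^2}, exactly
```

As in Remark 35 below, nothing in this identity uses independence of the two walks.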

### Remark 35

Observe that the result proved in Lemma 34 is pathwise and thus holds for any random sampling of $$\beta ^1$$ and $$\beta ^2$$, regardless of whether they are independent or not.

The next result is an interesting application of Theorem 28, concerning regularization by a special type of two-dimensional stochastic field: the sum of two fBms.

### Theorem 36

Let $$w_t:=\beta ^1_{t_1}+\beta ^2_{t_2}$$, where $$\beta ^1$$ and $$\beta ^2$$ are two fractional Brownian motions on a probability space $$(\Omega ,\mathcal {F},\mathbb {P})$$ with Hurst parameters $$H_1,H_2 \in (0,1)$$, respectively. Then, if $$b\in B^\zeta _{1,1}(\mathbb {R}^d)$$, where $$\zeta$$ satisfies

\begin{aligned} \zeta >d+3 - \frac{1}{2H_1}- \frac{1}{2H_2}, \end{aligned}
(4.11)

then, for almost all $$\omega \in \Omega$$, there exists a unique solution to the equation

\begin{aligned} x_t(\omega ) = \int _0^t b(x_s(\omega ))\mathop {}\!\textrm{d}s +w_t(\omega ), \quad t\in [0,T]^2, \end{aligned}

where the solution is given in the sense of Definition 27 with $$\xi _t=0$$.

### Proof

We know that $$\mathbb {P}$$-a.s. the local times $$L^{-\beta ^1}$$ and $$L^{-\beta ^2}$$ associated to $$\beta ^1$$ and $$\beta ^2$$ are contained in $$C^{\gamma _1}_tH^{\frac{1}{2H_1}-\frac{d}{2}-\epsilon }_x$$ and $$C^{\gamma _2}_tH^{\frac{1}{2H_2}-\frac{d}{2}-\epsilon }_x$$ for any $$\epsilon >0$$ and some $$\gamma _1,\gamma _2>\frac{1}{2}$$, see e.g. [25]. Let us set $$\gamma =(\gamma _1,\gamma _2)$$ and choose $$\eta \in (0,1)$$ such that $$(1+\eta )\gamma >1$$.

Since, from Lemma 34, the local time $$L^{-w}$$ is given by a convolution of two (one-dimensional) local times $$L^{-\beta ^1}*L^{-\beta ^2}$$, its regularity is found from Young’s convolution inequality in Besov spaces (see e.g. [4]). Thus its regularity in the spatial variable is given as the sum of the spatial regularities of the one dimensional local times, i.e. for all $$t\in [0,T]^2$$, $$L^{-w}_t\in C_x^{\frac{1}{2H_1}+\frac{1}{2H_2}-d}\simeq B^{\frac{1}{2H_1}+\frac{1}{2H_2}-d}_{\infty ,\infty }$$.

Furthermore, it is readily checked that

\begin{aligned} \square _{s,t}L^{-w} = (L_{t_1}^{-\beta ^1}-L_{s_1}^{-\beta ^1})*(L_{t_2}^{-\beta ^2}-L_{s_2}^{-\beta ^2}), \end{aligned}

and thus it follows by elementary computations that $$t\mapsto L^{-w}_t\in C^\gamma _t$$, and we conclude that $$L^{-w}\in C^\gamma _tC^{\frac{1}{2H_1}+\frac{1}{2H_2}-d}_x$$. Again invoking Young’s convolution inequality, and using that $$b\in B^{\zeta }_{1,1}$$, the same estimate as in (4.6) yields $$b*L^{-w}\in C^\gamma _tC^{\zeta +\frac{1}{2H_1}+\frac{1}{2H_2}-d}_x$$. It now follows from assumption (4.11) that we can apply Theorem 28, which concludes the proof. $$\square$$

Our next result shows that it suffices for only one of $$\beta ^1$$ and $$\beta ^2$$ in Theorem 36 to be random.

### Lemma 37

Let $$[0,T]^2$$, $$\xi _t$$ and b be as in Theorem 36. Let $$w_t:=\beta _{t_1} + f(t_2)$$, $$t=(t_1,t_2) \in [0,T]^2$$, where $$\beta$$ is a fractional Brownian motion on a probability space $$(\Omega ,\mathcal {F},\mathbb {P})$$ with Hurst parameter $$H \in (0,1)$$, and $$f:[0,T] \rightarrow \mathbb {R}^d$$ is a measurable function. If $$\zeta$$ satisfies the relation

\begin{aligned} \zeta >d+3 - \frac{1}{2H}, \end{aligned}
(4.12)

then the conclusion of Theorem 36 holds true.

### Proof

First observe that, by definition of the occupation measure,

\begin{aligned} \Vert \mu ^{f}_{s_2,t_2} \Vert _{TV} = |t_2-s_2|, \quad \forall s_2<t_2 \in [0,T], \end{aligned}

where $$\mu ^{f}$$ is the occupation measure of f and $$\Vert \cdot \Vert _{TV}$$ is the total variation norm.
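The total-mass identity in the display above is immediate to check in a discretized form (the function f and the grid below are illustrative):

```python
import numpy as np

s2, t2, n = 0.25, 0.75, 1000
r = np.linspace(s2, t2, n, endpoint=False)
f = np.sin(5 * r) + r ** 2            # any measurable f works
dt = (t2 - s2) / n

# occupation measure of f on [s2, t2]: time mass landing in each value bin
mass, _ = np.histogram(f, bins=50)
total_variation = mass.sum() * dt     # positive measure: TV = total mass

assert abs(total_variation - (t2 - s2)) < 1e-12
```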

Next, since $$\Vert \mu ^{f}_{\cdot ,\cdot } \Vert _{B_{1,\infty }^0} \lesssim \Vert \mu ^{f}_{\cdot ,\cdot } \Vert _{TV}$$ and, as in Theorem 36, $$L^{-\beta } \in C^{\gamma }_t H^{\frac{1}{2H}-\frac{d}{2}-\epsilon }_x$$ $$\mathbb {P}$$-a.s., the local time relation (4.2) and convolution inequalities give, for $$s=(s_1,s_2) < t=(t_1,t_2) \in [0,T]^2$$,

\begin{aligned}&\Vert b *\square _{s,t}L^{-w}\Vert _{C_x^{\zeta +\frac{1}{2H} -d -\epsilon }} = \Vert b *L_{s_1,t_1}^{-\beta }*\mu ^f_{s_2,t_2} \Vert _{C_x^{\zeta +\frac{1}{2H} -d -\epsilon }} \\&\qquad \lesssim \Vert b\Vert _{B_{1,1}^{\zeta }} \Vert L_{s_1,t_1}^{-\beta } \Vert _{C_x^{\frac{1}{2H} -d -\epsilon }} \Vert \mu ^f_{s_2,t_2} \Vert _{TV} \lesssim |t_1-s_1|^\gamma |t_2-s_2|. \end{aligned}

This implies that $$b *\square _{s,t}L^{-w} \in C_t^{(\gamma ,1)} C_x^{\zeta +\frac{1}{2H} -d -\epsilon }$$. Then, as in the proof of the previous theorem, the conclusion follows by applying Theorem 28. $$\square$$

## 5 Wave equation with noisy boundary

### 5.1 Statement of the problem

In the following section, we show how the theory of 2D non-linear Young equations can be employed in the study of Goursat boundary regularization for wave equations with singular non-linearities. More precisely, we intend to study the problem

\begin{aligned} \begin{aligned} \left( \frac{\partial ^2}{\partial x^2}- \frac{\partial ^2}{\partial y^2}\right) u=h(u(x,y)), \end{aligned} \end{aligned}
(5.1)

on $$(x, y)\in R_{\frac{\pi }{4}}\circ [0,T]^2$$ (here $$R_{\frac{\pi }{4}}$$ denotes the rotation operator of the plane by $$\pi /4$$) subject to the boundary conditions along characteristics

\begin{aligned} \begin{aligned} u(x,y)&=\beta ^1(y) \qquad \text {if}\ y=x\\ u(x,y)&=\beta ^2(y) \qquad \text {if}\ y=-x, \end{aligned} \end{aligned}
(5.2)

and the consistency condition $$\beta ^1(0)=\beta ^2(0)$$ for a potentially distributional non-linearity h. Note that for distributional h, it is a priori unclear what is even meant by a solution to (5.1). We therefore start by considering smooth mollifications $$h^\epsilon =h*\rho ^\epsilon$$ as non-linearities, for which a change of coordinates yields a reformulation of (5.1) as a Goursat problem (5.5), which in turn can be analysed as a 2D non-linear Young equation (5.6). As seen in the previous section, 2D non-linear Young equations can be well-posed even in the case of distributional h, provided the boundary conditions $$\beta ^1, \beta ^2$$ are sufficiently regularizing. Moreover, they enjoy the stability property of Proposition 23. In particular, this implies that the sequence of solutions $$u^\epsilon$$ to the problem (5.1) with mollified non-linearity $$h^\epsilon$$, constructed via the associated 2D non-linear Young equation, converges uniformly as $$\epsilon \rightarrow 0$$, independently of the chosen sequence of mollifications. It is in this sense that we solve the problem (5.1) for distributional non-linearities h.

After this brief motivation, let us introduce the aforementioned transformations. Assume that, for some smooth $$h^\epsilon$$, we have a solution $$u^\epsilon$$ to

\begin{aligned} \begin{aligned} \left( \frac{\partial ^2}{\partial x^2}- \frac{\partial ^2}{\partial y^2}\right) u^\epsilon =h^\epsilon (u^\epsilon (x,y)), \end{aligned} \end{aligned}
(5.3)

that satisfies the Goursat boundary conditions (5.2). Consider the transformation

\begin{aligned} \begin{aligned} t_1&=\frac{y+x}{\sqrt{2}},\\ t_2&=\frac{y-x}{\sqrt{2}}, \end{aligned} \end{aligned}
(5.4)

which corresponds to the rotation operator $$R_{-\frac{\pi }{4}}$$, and define the function

\begin{aligned} \phi ^\epsilon (t_1,t_2):=u^\epsilon \left( \frac{t_1-t_2}{\sqrt{2}},\frac{t_1+t_2}{\sqrt{2}}\right) . \end{aligned}

Then it is easily verified that $$\phi ^\epsilon$$ solves

\begin{aligned} -2\frac{\partial ^2}{\partial t_1 \partial t_2} \phi ^\epsilon (t_1, t_2)=h^\epsilon \left( \phi ^\epsilon (t_1, t_2)\right) . \end{aligned}
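Indeed, this follows from a direct chain rule computation: with all derivatives of $$u^\epsilon$$ evaluated at $$\left( \frac{t_1-t_2}{\sqrt{2}},\frac{t_1+t_2}{\sqrt{2}}\right)$$, one has

\begin{aligned} \partial _{t_1}\phi ^\epsilon =\frac{1}{\sqrt{2}}\left( \partial _x u^\epsilon +\partial _y u^\epsilon \right) , \qquad \partial _{t_2}\partial _{t_1}\phi ^\epsilon =\frac{1}{2}\left( \partial ^2_{y}u^\epsilon -\partial ^2_{x}u^\epsilon \right) =-\frac{1}{2}h^\epsilon (u^\epsilon ), \end{aligned}

where the mixed second order terms $$\partial _x\partial _y u^\epsilon$$ cancel.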

Moreover, note that if $$t_2=0$$, then $$y-x=0$$, which implies that

\begin{aligned} \phi ^\epsilon (t_1, 0)=u^\epsilon \left( \frac{t_1}{\sqrt{2}}, \frac{t_1}{\sqrt{2}}\right) =\beta ^1\left( \frac{t_1}{\sqrt{2}}\right) . \end{aligned}

Similarly, if $$t_1=0$$, then $$y=-x$$, i.e.

\begin{aligned} \phi ^\epsilon (0,t_2)= u^\epsilon \left( \frac{-t_2}{\sqrt{2}}, \frac{t_2}{\sqrt{2}}\right) =\beta ^2\left( \frac{t_2}{\sqrt{2}}\right) . \end{aligned}

Hence, starting from the wave equation with boundary conditions along the characteristics, we have derived the new boundary value problem

\begin{aligned} -2\frac{\partial ^2}{\partial t_1 \partial t_2} \phi ^\epsilon =h^\epsilon \left( \phi ^\epsilon (t_1,t_2)\right) , \end{aligned}
(5.5)

with boundary conditions

\begin{aligned} \begin{aligned} \phi ^\epsilon (t_1, 0)&=\beta ^1\left( \frac{t_1}{\sqrt{2}}\right) ,\\ \phi ^\epsilon (0, t_2)&=\beta ^2\left( \frac{t_2}{\sqrt{2}}\right) . \end{aligned} \end{aligned}

Note that conversely, by employing the inverse transform of (5.4), solutions to (5.5) give rise to solutions to (5.3): if $$\phi ^\epsilon$$ solves (5.5), then $$u^\epsilon (x,y):=\phi ^\epsilon ((y+x)/\sqrt{2}, (y-x)/\sqrt{2})$$ solves (5.3).

It follows from the above derivation that, integrating both sides of (5.5) with $$t=(t_1,t_2)\in [0,T]^2$$ (for some well chosen T), we have

\begin{aligned} \phi _t^\epsilon =\beta ^1\left( \frac{t_1}{\sqrt{2}}\right) +\beta ^2\left( \frac{t_2}{\sqrt{2}}\right) -2\int _0^t h^\epsilon (\phi _s^\epsilon )\mathop {}\!\textrm{d}s, \end{aligned}

where $$\phi _t^\epsilon =u^\epsilon \left( \frac{t_1-t_2}{\sqrt{2}},\frac{t_1+t_2}{\sqrt{2}}\right)$$. Consider then $$\psi ^\epsilon _t=\phi ^\epsilon _t-\beta ^1\left( \frac{t_1}{\sqrt{2}}\right) -\beta ^2\left( \frac{t_2}{\sqrt{2}}\right)$$, which solves the equation

\begin{aligned} \psi _t^\epsilon =-2\int _0^t h^\epsilon \left( \psi ^\epsilon _s+\beta ^1\left( \frac{s_1}{\sqrt{2}}\right) +\beta ^2\left( \frac{s_2}{\sqrt{2}}\right) \right) \mathop {}\!\textrm{d}s. \end{aligned}
(5.6)
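The Goursat structure of (5.6), with zero data on both axes, lends itself to a simple explicit scheme that builds the solution cell by cell from rectangular increments. The following is a minimal numerical sketch, not part of the analysis of this paper, assuming smooth $$h^\epsilon$$ and continuous $$\beta ^i$$; the helper name `solve_goursat` and all parameters are illustrative.

```python
import math

def solve_goursat(h, beta1, beta2, T=1.0, n=100):
    """Explicit scheme for the Goursat-type integral equation
    psi(t1,t2) = -2 * int_0^t1 int_0^t2 h(psi(s) + beta1(s1/sqrt(2))
                                           + beta2(s2/sqrt(2))) ds,
    assembled cell by cell from the rectangular-increment identity;
    psi vanishes on both axes by construction."""
    dt = T / n
    psi = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            s1, s2 = (i - 1) * dt, (j - 1) * dt  # left-endpoint quadrature node
            integrand = h(psi[i - 1][j - 1]
                          + beta1(s1 / math.sqrt(2))
                          + beta2(s2 / math.sqrt(2)))
            # increment over the cell [s1, s1+dt] x [s2, s2+dt]
            psi[i][j] = (psi[i - 1][j] + psi[i][j - 1]
                         - psi[i - 1][j - 1] - 2.0 * dt * dt * integrand)
    return psi

# for constant h = 1 and zero boundary data the exact solution
# of (5.6) is psi(t1, t2) = -2 * t1 * t2
psi = solve_goursat(lambda x: 1.0, lambda t: 0.0, lambda t: 0.0, T=1.0, n=100)
```

For constant h the scheme reproduces the exact solution; for smooth non-constant data one can check convergence under mesh refinement numerically, but no claim beyond illustration is intended.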

The problem (5.6) can now alternatively be solved with the 2D non-linear Young theory developed above, provided $$\beta ^1, \beta ^2$$ are sufficiently regularizing. To go back to the wave equation, we employ the inverse transform of (5.4) to obtain

\begin{aligned} u^\epsilon (x,y):=\psi ^\epsilon \left( \frac{y+x}{\sqrt{2}}, \frac{y-x}{\sqrt{2}}\right) +\beta ^1\left( \frac{y+x}{\sqrt{2}}\right) +\beta ^2\left( \frac{y-x}{\sqrt{2}}\right) , \end{aligned}
(5.7)

as the unique solution to (5.3). Since indeed $$\psi ^\epsilon (t_1, 0)=\psi ^\epsilon (0, t_2)=0$$ for any $$t_1, t_2\in [0,T]$$, the boundary conditions (5.2) are satisfied. Finally, if $$h^\epsilon$$ is issued from the mollification of some distribution h, i.e. $$h^\epsilon =h*\rho ^\epsilon$$, we know by Proposition 23 that $$(\psi ^\epsilon )_\epsilon$$, and thus $$(u^\epsilon )_\epsilon$$, converges uniformly. This observation allows us to define the following notion of solution to (5.1) even for distributional non-linearities h.

### Definition 38

Let $$h\in \mathcal {S}'(\mathbb {R})$$. We say that u is a solution to (5.1) if, for any sequence $$(\rho ^\epsilon )_\epsilon$$ of mollifiers, the sequence $$(u^\epsilon )_\epsilon$$ of solutions to (5.3) with mollified non-linearity $$h^\epsilon =h*\rho ^\epsilon$$ converges to u uniformly on $$R_{\frac{\pi }{4}}\circ [0,T]^2$$, i.e. in $$C(R_{\frac{\pi }{4}}\circ [0,T]^2; \mathbb {R})$$.
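To fix ideas, here is a minimal numerical sketch of the mollification $$h^\epsilon =h*\rho ^\epsilon$$ entering Definition 38, using the standard compactly supported bump function. The names `bump` and `mollify` are illustrative, and a continuous (non-smooth) h is used purely for demonstration; the interesting case in the text is of course distributional h.

```python
import math

def bump(x):
    # standard compactly supported mollifier profile (unnormalized)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def mollify(h, eps, x, n=2000):
    """Numerical evaluation of h^eps(x) = (h * rho^eps)(x), where
    rho^eps(y) = rho(y/eps)/eps, via midpoint quadrature on [-eps, eps].
    Normalization of rho is handled by dividing by the quadrature mass."""
    dy = 2.0 * eps / n
    num = den = 0.0
    for k in range(n):
        y = -eps + (k + 0.5) * dy
        w = bump(y / eps)
        num += w * h(x - y)
        den += w
    return num / den

h = abs  # continuous but non-smooth at 0, purely for illustration
smoothed_at_kink = [mollify(h, eps, 0.0) for eps in (0.2, 0.1, 0.05)]
```

As $$\epsilon \rightarrow 0$$ the values $$h^\epsilon (0)$$ decrease to $$h(0)=0$$, while away from the kink $$h^\epsilon$$ agrees with h up to quadrature error.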

Note that with the above notion of solution, existence automatically implies uniqueness. Let us now pass to the main result of this section.

### Theorem 39

Let $${\bar{\beta }}^i(\cdot )=\beta ^i(\cdot /\sqrt{2})$$ and $$p,q\in [1,\infty ]$$ such that $$\frac{1}{p}+\frac{1}{q}=1$$. Suppose $$h\in B^\zeta _{p,p}(\mathbb {R})$$ and suppose that $$L_t(x)=(L^{-{\bar{\beta }}^1}_{t_1}*L^{-{\bar{\beta }}^2}_{t_2})(x)$$ satisfies $$L\in C^\gamma _t B^\alpha _{q,q}(\mathbb {R})$$. Assume there exists $$\eta \in (0,1)$$ such that

\begin{aligned} \gamma (1+\eta )>1 \qquad \text {and} \qquad \zeta +\alpha >2+\eta . \end{aligned}

Then there exists a unique solution u to (5.1) in the sense of Definition 38 given by

\begin{aligned} u(x,y):=\psi \left( \frac{y+x}{\sqrt{2}}, \frac{y-x}{\sqrt{2}}\right) +\beta ^1\left( \frac{y+x}{\sqrt{2}}\right) +\beta ^2\left( \frac{y-x}{\sqrt{2}}\right) , \end{aligned}

where $$\psi$$ is the unique solution to

\begin{aligned} \psi _t=-2\int _0^t (h*L)(ds, \psi _s), \end{aligned}
(5.8)

understood in the sense of Definition 27.

### Proof

The theorem is an immediate consequence of Theorem 28 and Proposition 23, in conjunction with the above considerations. Indeed, note that for any mollification $$h^\epsilon =h*\rho ^\epsilon$$, the solution $$u^\epsilon$$ to (5.3) is given by (5.7). By Corollary 30, $$\psi ^\epsilon$$ given by (5.6) converges uniformly to $$\psi$$, the solution to (5.8), whose existence and uniqueness are ensured by Theorem 28. Finally, by (5.7), this implies that $$(u^\epsilon )_\epsilon$$ converges uniformly to

\begin{aligned} u(x,y):=\psi \left( \frac{y+x}{\sqrt{2}}, \frac{y-x}{\sqrt{2}}\right) +\beta ^1\left( \frac{y+x}{\sqrt{2}}\right) +\beta ^2\left( \frac{y-x}{\sqrt{2}}\right) , \end{aligned}

completing the proof. $$\square$$
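To illustrate how the two conditions in Theorem 39 interact, consider for instance $$\gamma =\tfrac{3}{4}$$ (a sample computation only):

\begin{aligned} \gamma (1+\eta )>1 \iff \eta >\tfrac{1}{3}, \qquad \text {so it suffices that}\quad \zeta +\alpha >2+\tfrac{1}{3}=\tfrac{7}{3}, \end{aligned}

choosing then any $$\eta \in \left( \tfrac{1}{3}, \min (1, \zeta +\alpha -2)\right)$$, which is non-empty whenever $$\zeta +\alpha >\tfrac{7}{3}$$.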

Finally, in the following corollary we illustrate the above theorem in the setting where the boundary processes $$\beta ^i$$ are given by fractional Brownian motions.

### Corollary 40

Let $$h\in B^\zeta _{1,1}(\mathbb {R})$$ for any $$\zeta \in \mathbb {R}$$. Then there exist two independent fractional Brownian motions $${\bar{\beta }}^1, {\bar{\beta }}^2$$ with Hurst parameters $$H_1, H_2\in (0,1)$$, respectively, such that problem (5.1) has a unique solution in the sense of Definition 38.

### Proof

This is a direct consequence of Theorem 36 in combination with Theorem 39. $$\square$$

### Remark 41

It is intriguing to note that one can partially connect Lemma 37 to the solution theory of non-linear wave equations with random initial data as developed by Burq and Tzvetkov in [9]; see also the work of Bourgain [7], which was the first step in this direction. One of the key steps in these works is to define a probability measure on suitable Sobolev spaces $$\mathcal {H}^s$$ corresponding to initial data belonging to $$\mathcal {H}^s$$. To understand the relation loosely, let us regard one variable as time, say $$t_1$$, and the other as space, say $$t_2$$. Then, taking $$t_1=0$$, the pair $$(\beta ^2,0)$$, where $$\beta ^2$$ is an fBm with Hurst parameter $$H \in (0,1)$$, can be regarded as a pair of initial data.

Since the law of $$\beta ^2$$ defines a probability measure on $$C([0,T]; \mathbb {R})$$, the results of the current section (together with Lemma 37) give the existence of a unique solution, in a suitable sense, to a class of wave equations, which yields (5.1) under $$R_{\frac{\pi }{4}}$$, with $$(\beta ^2,0)$$ as random initial data. In particular, since $$H \in (0,1)$$ is arbitrary, we construct an uncountable set of probability measures $$\mathcal {M}$$ on $$C([0,T]; \mathbb {R})$$ such that, for each measure $$\mu \in \mathcal {M}$$, a class of wave equations is locally well-posed with $$\mu$$ as random initial data taking values in $$C([0,T]; \mathbb {R})$$. This striking connection was pointed out to us by the referee. Since making this formal argument rigorous appears both fascinating and challenging, we leave this line of research for future work.

## 6 Further challenges, open problems and concluding remarks

We have extended the pathwise regularization by noise framework introduced in [10] to equations on the plane, driven by a continuous regularizing field. The concept of regularization in this article has been presented in view of the local time associated with the field w. While we have treated the cases where the field w is given by a fractional Brownian sheet or by a sum of two independent fractional Brownian motions, further systematic investigations of the space-time regularity of the local time associated to various stochastic fields appear to be in order. In particular, refined estimates for the local time of the fractional Brownian sheet using a “multiparameter Stochastic Sewing Lemma”, or the application of a stochastic reconstruction theorem as recently provided in [27], appear as interesting directions for further research.

Consider now a general stochastic partial differential equation of the form

\begin{aligned} Lu(t,x) =b(u(t,x))+ {\dot{w}}(t,x),\quad (t,x)\in [0,T]\times R, \end{aligned}

where b is a non-linear function, L is a differential operator, and $${\dot{w}}$$ is the formal mixed partial derivative of a stochastic field w. We assume R is a hyper-cube in $$\mathbb {R}^k$$. Given that L generates a semi-group $$\{S_t\}$$, a mild solution can typically be written as a multi-parameter Volterra equation, in the sense that

\begin{aligned} u(t,x)&= \xi (t,x)+\int _0^t\int _R S_{t-s}(x-y)b(u(s,y))\mathop {}\!\textrm{d}y\mathop {}\!\textrm{d}s\\ &\quad +\int _0^t\int _R S_{t-s}(x-y){\dot{w}}(s,y)\mathop {}\!\textrm{d}y \mathop {}\!\textrm{d}s. \end{aligned}

The equation above could be reformulated by similar principles as in the current article, although certain extensions must be made with respect to the construction of a non-linear Young integral to account for the possibly singular nature of the Volterra operator S. The stochastic process obtained in $$\int _0^t\int _R S_{t-s}(x-y){\dot{w}}(s,y)\mathop {}\!\textrm{d}y \mathop {}\!\textrm{d}s$$ might indeed provide a regularizing effect in this equation, but one then also needs to investigate the space-time regularity of the local time associated with this field. The authors of [2] have recently made progress in this direction in the case where L is the heat operator. There they prove regularization by noise when $${\dot{w}}$$ is a white noise, and allow for distributional coefficients, including the Dirac delta. For more general differential operators, the approach outlined above might yield interesting results related to regularization by noise effects for a great variety of SPDEs.

An alternative approach to that presented in the current article would be to study the regularity of the averaged field instead of only the local time. That is, one can study the regularity of the mapping

\begin{aligned}{}[0,T]^2\times \mathbb {R}^d \ni (t,x)\mapsto \int _0^t b(x+w_r)\mathop {}\!\textrm{d}r, \end{aligned}

for a given distribution b, and then use this instead of the convolution between b and the local time $$L^w$$ as done in Theorem 28. As observed in [10] and further developed in [20], studying the averaged field directly in the context of regularization by noise allows for weaker regularity requirements on the distribution b. In fact, in the one-dimensional case, when considering the SDE

\begin{aligned} x_t = x_0+\int _0^t b(x_s)\mathop {}\!\textrm{d}s + \beta _t,\quad t\in [0,T], \end{aligned}

where $$\beta _t$$ is a fractional Brownian motion with $$H<\frac{1}{2}$$, it is shown in [10, 20] that pathwise existence and uniqueness as well as differentiability of the flow hold if $$b\in C^\alpha _x$$ (with compact support) and

\begin{aligned} \alpha >2-\frac{1}{2H}. \end{aligned}

More recently, in [18], Galeati and Gerencsér pushed these results further to prove pathwise existence, uniqueness, and differentiability of the flow under the condition

\begin{aligned} \alpha >1-\frac{1}{2H}, \end{aligned}

without the assumption of compact support. In contrast, using the local time approach presented in the current paper, [25] shows that a similar statement of existence and uniqueness holds in the $$d$$-dimensional setting if

\begin{aligned} \alpha >2+\frac{d}{2}-\frac{1}{2H}. \end{aligned}

The additional dimension dependency comes from the fact that the set $${\bar{\Omega }}\subset \Omega$$ of full measure which admits unique solutions is independent of the drift coefficient b, something which is an advantage in certain regularization by noise problems, see e.g. [5] and [22].
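For concreteness, let us compare the three thresholds above in the sample case $$H=\tfrac{1}{4}$$ (so that $$\frac{1}{2H}=2$$) and $$d=1$$:

\begin{aligned} 2-\frac{1}{2H}=0, \qquad 1-\frac{1}{2H}=-1, \qquad 2+\frac{d}{2}-\frac{1}{2H}=\frac{1}{2}, \end{aligned}

for [10, 20], [18] and [25] respectively. The averaged field approach thus admits rougher drifts, while the local time approach trades regularity for a set of full measure that does not depend on b.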