Abstract
In this paper we provide sufficient conditions which ensure that the nonlinear equation \(\mathrm{d}y(t)=Ay(t)\mathrm{d}t+\sigma (y(t))\mathrm{d}x(t)\), \(t\in (0,T]\), with \(y(0)=\psi \) and A being an unbounded operator, admits a unique mild solution such that \(y(t)\in D(A)\) for any \(t\in (0,T]\), and we compute the blow-up rate of the norm of y(t) as \(t\rightarrow 0^+\). We stress that the regularity of y is independent of the smoothness of the initial datum \(\psi \), which in general does not belong to D(A). As a consequence we get an integral representation of the mild solution y which allows us to prove a chain rule formula for smooth functions of y.
1 Introduction
The Young integral was introduced in [15], where the author defines an extension of the Riemann–Stieltjes integral \(\int fdg\) when neither f nor g has finite total variation. In particular, in [15] it is shown that, if f and g are continuous functions such that f has finite p-variation and g has finite q-variation, with \(p,q>0\) and \(p^{-1}+q^{-1}>1\), then the Stieltjes integral \(\int fdg\) is well defined as a limit of Riemann sums. This was the starting point of the crucial extension to rough path integration. Indeed, in [13] the author proves that it is possible to define the integral \(\int f\mathrm{d}x\) also in the case when f has finite p-variation and x has finite q-variation with \(p,q>0\) and \(p^{-1}+q^{-1}<1\). In this case, additional information on the function x is needed, which plays the role of the iterated integrals of regular paths.
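Although the paper is purely analytic, Young's condition can be illustrated numerically. In the sketch below (a toy example of ours, not taken from [15] or [13]), f is 0.7-Hölder and x is 0.6-Hölder near their respective kinks; since \(0.7+0.6>1\), the left-endpoint Riemann sums defining \(\int_0^1 f\mathrm{d}x\) form a Cauchy sequence as the dyadic partitions are refined.

```python
# Illustrative only: Young sums for a 0.7-Hoelder f against a 0.6-Hoelder x.
# Since 0.7 + 0.6 > 1, the sums stabilise as the mesh of the partition shrinks.
def young_sum(f, x, n):
    """Left-endpoint Riemann sum of int_0^1 f dx on the uniform n-partition."""
    ts = [i / n for i in range(n + 1)]
    return sum(f(ts[i]) * (x(ts[i + 1]) - x(ts[i])) for i in range(n))

f = lambda t: abs(t - 0.3) ** 0.7   # 0.7-Hoelder on [0, 1]
x = lambda t: abs(t - 0.5) ** 0.6   # 0.6-Hoelder on [0, 1]

sums = [young_sum(f, x, 2 ** k) for k in range(4, 12)]
gaps = [abs(a - b) for a, b in zip(sums, sums[1:])]  # successive differences
```

The same experiment with exponents summing below 1 may fail to stabilise, which is precisely the regime where the extra rough-path data of [13] becomes necessary.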
An alternative formulation of the integration over rough paths is provided in [6], where the author considers Hölder-like (semi)norms instead of p-variation norms. Namely, if f is \(\alpha \)-Hölder continuous and g is \(\eta \)-Hölder continuous with \(\alpha +\eta >1\) then the Young integral is well defined as the unique solution to an algebraic problem. Recently, a more general theory of rough integration, when \(\alpha +\eta \le 1\), has been introduced in [5].
Here, we consider only Young integrals and focus on the spatial regularity of solutions to infinite dimensional evolution equations leaving aside the enormous amount of results connected to the rough paths case culminating in the breakthrough on singular SPDEs (see, e.g., [7]). Namely, we consider the nonlinear evolution equation
$$\begin{aligned} \left\{ \begin{array}{ll} \mathrm{d}y(t)=Ay(t)\mathrm{d}t+\sigma (y(t))\mathrm{d}x(t), &{} t\in (0,T],\\ y(0)=\psi , \end{array} \right. \end{aligned}$$(1.1)
where A is the infinitesimal generator of a semigroup defined on a Banach space X with suitable regularizing properties and x is an \(\eta \)-Hölder continuous function with \(\eta >1/2\). Ordinary differential equations (in finite dimensional spaces) driven by an irregular path of Hölder regularity greater than 1/2 have been understood in full detail since [16] (see also [10]). On the other hand, the infinite dimensional case was treated in [8] and then developed in [9] and [3]; see also [14] for earlier results in the context of stochastic partial differential equations driven by an infinite dimensional fractional Brownian motion of Hurst parameter \(H>1/2\).
In [3], problem (1.1) is formulated in the mild form
$$\begin{aligned} y(t)=S(t)\psi +\int _0^tS(t-s)\sigma (y(s))\mathrm{d}x(s), \qquad \;\, t\in [0,T], \end{aligned}$$
where \((S(t))_{t\ge 0}\) is the analytic semigroup generated by the sectorial operator A, and the authors exploit the regularizing properties of S to show that, if the initial datum \(\psi \) is smooth enough (i.e., if it belongs to a suitable domain of the fractional powers \((-A)^{\alpha }\)), then Eq. (1.1) admits a unique mild solution with the same spatial regularity as the initial datum. The key technical point in [3] is to prove that the convolution
$$\begin{aligned} \int _0^tS(t-s)f(s)\mathrm{d}x(s), \qquad \;\, t\in [0,T], \end{aligned}$$(1.2)
is well defined if f takes values in \(D((-A)^{\alpha })\) and belongs to a Hölder-type function space. To be more precise, the authors require that \(f:[0,T]\rightarrow D((-A)^\alpha )\) satisfies the condition
This is one of the main differences with respect to the finite dimensional case, where the condition on the function f reads in terms of classical Hölder norms. Once the convolution (1.2) is well defined, the smoothness of the initial datum \(\psi \) and suitable estimates on (1.2) allow the authors to solve the mild reformulation of Eq. (1.1) by a fixed point argument in the same Hölder-type function space introduced above.
Our point in the present paper is that, if one looks a bit more closely at the trade-off between Hölder regularity in time and regularity in space of the convolution (1.2), one discovers that extra regularity in space can be extracted from the estimates, see Lemma 2.2. This allows us to show that the mild solution to Eq. (1.1), which in our situation is driven by a finite dimensional noise, is more regular than the initial datum (which nevertheless has to enjoy the same regularity assumptions as in [3]). Namely, y(t) belongs to D(A) for any \(t\in (0,T]\) (see Theorem 3.1).
It is also worth mentioning that, when A is an unbounded operator, the mild formulation of Eq. (1.1) is the most suitable to prove existence and uniqueness of a solution, since it allows us to apply a fixed point argument in spaces of functions with a low degree of smoothness. On the other hand, this formulation is too weak in several applications, where an integral formulation of the equation helps a lot. Here, having proved that the mild solution y takes values in D(A), we are in a position to show that y admits an integral representation as well, i.e., it satisfies the equation
$$\begin{aligned} y(t)=\psi +\int _0^tAy(s)\mathrm{d}s+\int _0^t\sigma (y(s))\mathrm{d}x(s), \qquad \;\, t\in [0,T]. \end{aligned}$$
Moreover, starting from the above relation, we can also obtain a chain rule; in other words, we show that we can differentiate with respect to time regular enough functions of the solution to Eq. (1.1). Finally, as an example of possible applications of the chain rule, we propose (in a Hilbertian setting, see Proposition 5.1) a necessary condition for the invariance of hyperplanes under the action of solutions of equations driven by irregular paths. In the case of an ordinary differential equation with a rough path, this problem is addressed in [2], where the state space is finite dimensional and no unbounded operators are involved in the equation. The problem of the invariance of a convex set with respect to a general infinite dimensional evolution equation driven by a rough trajectory is still unexplored (see [1] and the references therein for corresponding results in the case of classical evolution equations).
Summarizing, this paper can be described as a first step towards a systematic study, by the classical tools of semigroup theory, of smoothing properties of the mild solution to (1.1). We plan to go further in the analysis, first weakening the smoothness assumptions on \(\psi \) and, then, developing results analogous to those in this paper for equations driven by more irregular noises, as in the case of rough paths.
The paper is structured as follows. In Sect. 2, we introduce the function spaces that we use and we recall some results from [6, 9], slightly generalizing some of them. In Sect. 3, we prove the existence and uniqueness of a mild solution to the nonlinear Young equation (1.1) when \(\psi \) belongs to a suitable space \(X_\alpha \subset X\) (which will be defined later), x is \(\eta \)-Hölder continuous for some \(\eta \in (1/2,1)\) and \(\alpha +\eta >1\). We show that this solution takes values in D(A) and estimate the blow-up rate of its \(X_{1+\mu }\)-norm as t tends to \(0^+\), when \(\mu \in [0,\eta +\alpha -1)\) (see Theorem 3.1). The smoothness of the mild solution strongly relies on the smoothing effect of the semigroup associated with the operator A. In general, when A is the infinitesimal generator of a strongly continuous semigroup, such smoothing properties fail; nevertheless, we can still prove the existence and uniqueness of the mild solution to Eq. (1.1) by a suitable choice of the spaces \(X_{\alpha }\). Based on Theorem 3.1, in Sect. 4 we prove that the mild solution to (1.1) can be written in an integral form, which is used in Sect. 5 to prove the chain rule. By a simple example, we show how the availability of both a solution taking values in D(A) and a chain rule can be exploited to tackle the problem of the invariance of convex sets when an unbounded operator A is involved. Finally, in Sect. 6 we provide two examples, one on the space of continuous functions and one on an \(L^p\) state space, to illustrate our results.
Notation. We denote by \([a,b]^2_<\) the set \(\{(s,t)\in {\mathbb {R}}^2:a\le s<t\le b\}\). Further, we denote by \({\mathscr {L}}(X_\alpha ,X_\gamma )\) the space of linear bounded operators from \(X_\alpha \) into \(X_\gamma \), for each \(\alpha ,\gamma \ge 0\). For every \(A\subseteq {\mathbb {R}}\), C(A; X) denotes the usual space of continuous functions from A into X endowed with the sup-norm. The subscript “b” stands for bounded. Finally, for every \(\alpha \in (0,1)\), \(C^{\alpha }(A;X)\) denotes the subset of \(C_b(A;X)\) consisting of \(\alpha \)-Hölder continuous functions. It is endowed with the norm \(\Vert f\Vert _{C^{\alpha }(A;X)}=\Vert f\Vert _{C_b(A;X)}+[f]_{C^{\alpha }(A;X)}\), where \([f]_{C^{\alpha }(A;X)}=\displaystyle \sup _{{s,t\in A}\atop {s\ne t}}\frac{\Vert f(t)-f(s)\Vert _X}{|t-s|^{\alpha }}\). When \(X={\mathbb {R}}\), we simply write \(C^{\alpha }(A)\).
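For concreteness, the seminorm \([f]_{C^{\alpha }(A;X)}\) can be approximated by restricting the supremum to a finite grid. A minimal sketch (our illustration, with \(X={\mathbb {R}}\)): \(f(t)=\sqrt{t}\) is exactly 1/2-Hölder on [0, 1] with seminorm 1, attained at the pairs \(s=0\), \(t>0\).

```python
def hoelder_seminorm(f, ts, alpha):
    """Discrete analogue of [f]_{C^alpha}: sup over grid pairs s < t."""
    return max(
        abs(f(t) - f(s)) / (t - s) ** alpha
        for i, s in enumerate(ts) for t in ts[i + 1:]
    )

ts = [i / 200 for i in range(201)]                 # uniform grid on [0, 1]
h = hoelder_seminorm(lambda t: t ** 0.5, ts, 0.5)  # expected: 1, from pairs (0, t)
```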
2 The abstract Young equation
2.1 Function spaces and preliminary results
Throughout the paper, X denotes a Banach space and \(A:D(A)\subseteq X\rightarrow X\) is a linear operator which generates a semigroup \((S(t))_{t\ge 0}\). We further assume the following set of assumptions.
Hypothesis 2.1
-
(i)
For every \(\alpha \in [0,2)\), there exists a space \(X_\alpha \) (with the convention that \(X_0=X\) and \(X_1=D(A))\) such that if \(\beta \le \alpha \) then \(X_\alpha \) is continuously embedded into \(X_\beta \). We denote by \(K_{\alpha ,\beta }\) a positive constant such that \(|x|_{\beta }\le K_{\alpha ,\beta }|x|_{\alpha }\) for every \(x\in X_{\alpha }\);
-
(ii)
for every \(\zeta ,\alpha ,\gamma \in [0,2)\), \(\zeta \le \alpha \), and \(\mu ,\nu \in (0,1]\) with \(\mu >\nu \) there exist positive constants \(M_{\zeta ,\alpha ,T}\) and \(C_{\mu ,\nu ,T}\), which depend on T, such that
$$\begin{aligned} \left\{ \begin{array}{ll} (a)\ \Vert S(t)\Vert _{{\mathscr {L}}(X_\zeta , X_\alpha )}\le M_{\zeta ,\alpha ,T} t^{-\alpha +\zeta },\\ (b) \ \Vert S(t)-I\Vert _{{\mathscr {L}}(X_\mu ,X_\nu )}\le C_{\mu ,\nu ,T} t^{\mu -\nu }, \end{array} \right. \end{aligned}$$(2.1)for every \(t\in (0,T]\).
Example 2.1
If A is a sectorial operator on X, then Hypotheses 2.1 are satisfied if we set \(X_\alpha :=D_A(\alpha ,\infty )\) for every \(\alpha \in (0,2)\). Hypotheses 2.1 are satisfied also when A is a negative sectorial operator and \(X_\alpha :=D((-A)^{\alpha })\) for every \(\alpha \in (0,2)\). More generally, if A is a sectorial operator, then Hypotheses 2.1 are satisfied with \(X_\alpha :=[X,D(A)]_{\alpha }\) for every \(\alpha \in (0,1)\), \(X_1=D(A)\) and \(X_{\alpha }=\{x\in D(A): Ax\in X_{\alpha -1}\}\) if \(\alpha \in (1,2)\). We refer the reader also to Sect. 3.1 for another choice of the spaces \(X_{\alpha }\), which guarantees the validity of a part of Hypotheses 2.1.
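Estimate (2.1)(ii)(a) can be made concrete for a diagonal generator (a toy model of our choosing, not from the paper): if A acts as multiplication by \(-\lambda \) and \(X_\theta \) carries the weight \(\lambda ^{\theta }\), then \(\Vert S(t)\Vert _{{\mathscr {L}}(X,X_\theta )}=\sup _{\lambda >0}\lambda ^\theta e^{-t\lambda }=(\theta /e)^\theta t^{-\theta }\), which exhibits the stated blow-up rate \(t^{-\theta }\).

```python
import math

def op_norm(t, theta, lambdas):
    """sup over lambda of lambda^theta * exp(-t*lambda): the norm of S(t)
    from X to X_theta for a diagonal generator with spectrum {-lambda}."""
    return max(lam ** theta * math.exp(-t * lam) for lam in lambdas)

theta = 0.7
lambdas = [0.01 * k for k in range(1, 20001)]   # lambda grid on (0, 200]
times = (0.1, 0.5, 1.0)
norms = {t: op_norm(t, theta, lambdas) for t in times}
# Exact supremum over lambda > 0, attained at lambda = theta / t.
sharp = {t: (theta / math.e) ** theta * t ** (-theta) for t in times}
```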
We now introduce some operators which will be used extensively in this paper.
Definition 2.1
Let a and b be two real numbers with \(a<b\). Then, the operators \(\delta , \delta _S:C([a,b];X)\rightarrow C([a,b]^2_<;X)\) are defined as follows:
$$\begin{aligned} (\delta f)(s,t):=f(t)-f(s), \qquad \;\, (\delta _Sf)(s,t):=f(t)-S(t-s)f(s)=(\delta f)(s,t)-{\mathfrak {a}}(s,t)f(s), \end{aligned}$$
for every \((s,t)\in [a,b]^2_<\) and \(f\in C([a,b];X)\), where \({\mathfrak {a}}(s,t)=S(t-s)-I\).
Remark 2.1
We stress that the continuity of the function \({\mathfrak {a}}\) in \([a,b]^2_{<}\) is implied by the strong continuity of the semigroup \((S(t))_{t\ge 0}\) in \((0,+\infty )\). No continuity assumption at \(t=0\) is required.
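To fix ideas, in the scalar toy case \(X={\mathbb {R}}\), \(A=-1\), \(S(t)=e^{-t}\) (a model of our choosing), the increments read \((\delta f)(s,t)=f(t)-f(s)\) and \((\delta _Sf)(s,t)=f(t)-S(t-s)f(s)\), and they differ exactly by the defect \({\mathfrak {a}}(s,t)f(s)\), as the following sketch confirms.

```python
import math

S = lambda t: math.exp(-t)         # semigroup generated by A = -1 on X = R
f = lambda t: math.cos(3 * t)      # an arbitrary continuous path

def delta(f, s, t):                # (delta f)(s, t) = f(t) - f(s)
    return f(t) - f(s)

def delta_S(f, s, t):              # (delta_S f)(s, t) = f(t) - S(t - s) f(s)
    return f(t) - S(t - s) * f(s)

def a(s, t):                       # a(s, t) = S(t - s) - I
    return S(t - s) - 1.0

# delta_S f = delta f - a(s, t) f(s): the two increments differ by the
# semigroup defect applied to f(s).
pairs = [(0.1, 0.4), (0.0, 1.0), (0.5, 0.9)]
residuals = [abs(delta_S(f, s, t) - (delta(f, s, t) - a(s, t) * f(s)))
             for s, t in pairs]
```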
2.2 Function spaces
Definition 2.2
For every \(a,b\in {\mathbb {R}}\), with \(a<b\) and \(\alpha ,\beta \in [0,2)\), we denote by:
-
(i)
\(C_{\beta }([a,b]_<^2;X_\alpha )\) the subspace of \(C([a,b]^2_<;X_{\alpha })\) consisting of functions f such that
$$\begin{aligned} \Vert f\Vert _{C_{\beta }([a,b]^2_<;X_{\alpha })}:=\sup _{(s,t)\in [a,b]^2_{<}}\frac{\Vert f(s,t)\Vert _{X_{\alpha }}}{|t-s|^\beta }<+\infty ; \end{aligned}$$ -
(ii)
\(E_{\beta }([a,b];X_\alpha )\) the subset of \(C([a,b];X_\alpha )\) consisting of functions f such that \(\delta _Sf\in C_{\beta }([a,b]_<^2;X_\alpha )\) endowed with the norm
$$\begin{aligned} \Vert f\Vert _{E_{\beta }([a,b];X_{\alpha })}:=\Vert f\Vert _{C([a,b];X_{\alpha })}+\Vert \delta _S f\Vert _{C_{\beta }([a,b]^2_<;X_{\alpha })}. \end{aligned}$$
Remark 2.2
For every \(a,b\ge 0\) with \(a<b\), and \(\alpha , \beta , k\in [0,2)\) the following properties hold true.
-
(i)
If \(f\in C([a,b];X_{\alpha })\cap E_k([a,b];X_\beta )\) then \(f\in C^\rho ([a,b];X_\gamma )\) for every \(\gamma \in [0,\beta ]\), such that \(\gamma <\alpha \), and \(\rho :=\min \{k,\alpha -\gamma \}\). Indeed, for every \((s,t)\in [a,b]_<^2\) we can estimate
$$\begin{aligned} \Vert f(t)-f(s)\Vert _{X_{\gamma }} \le \Vert (\delta _Sf)(s,t)\Vert _{X_{\gamma }}+\Vert {\mathfrak {a}}(s,t)f(s)\Vert _{X_{\gamma }}. \end{aligned}$$Estimating separately the two terms we get
$$\begin{aligned}&\Vert (\delta _Sf)(s,t)\Vert _{X_{\gamma }}\le K_{\beta ,\gamma }\Vert f\Vert _{E_k([a,b];X_{\beta })}|t-s|^k,\\&\Vert {\mathfrak {a}}(s,t)f(s)\Vert _{X_{\gamma }}\le C_{\alpha ,\gamma ,b}\Vert f\Vert _{C([a,b];X_{\alpha })}|t-s|^{\alpha -\gamma } \end{aligned}$$for every \(a\le s<t\le b\), which yields the assertion. In particular, \(E_{\alpha }([a,b];X_{\alpha })\) is continuously embedded into \(C^{\alpha -\gamma }([a,b];X_{\gamma })\) if \(\alpha \in (0,1)\) and \(\gamma \in [0,\alpha ]\), it is contained in the space of Lipschitz continuous functions over [a, b] with values in X if \(\alpha =1\), and it consists of constant functions if \(\alpha >1\).
-
(ii)
For every \(f:[a,b]_<^2\rightarrow X\) and \(\alpha ,\beta ,\gamma \ge 0 \), such that \(\beta >\gamma \), it holds that
$$\begin{aligned} \Vert f\Vert _{C_{\gamma }([a,b]^2_{<};X_{\alpha })}\le |b-a|^{\beta -\gamma }\Vert f\Vert _{C_{\beta }([a,b]^2_{<};X_{\alpha })}. \end{aligned}$$(2.2)Assume that \(\Vert f\Vert _{C_{\beta }([a,b]^2_{<};X_{\alpha })}\) is finite. Then, for any \(s,t\in [a,b]\) with \(s<t\) we can estimate
$$\begin{aligned} \frac{\Vert f(s,t)\Vert _{X_{\alpha }}}{|t-s|^{\gamma }} = \frac{\Vert f(s,t)\Vert _{X_{\alpha }}}{|t-s|^{\beta }}|t-s|^{\beta -\gamma }&\le |b-a|^{\beta -\gamma } \frac{\Vert f(s,t)\Vert _{X_{\alpha }}}{|t-s|^{\beta }}. \end{aligned}$$By taking the supremum over \((s,t)\in [a,b]_<^2\), (2.2) follows.
We recall some relevant results from [6] and [9]. In particular, we recall the definition of the Young integrals
$$\begin{aligned} \int _s^tf(r)\mathrm{d}x(r) \qquad \text{ and }\qquad \int _s^tS(t-r)f(r)\mathrm{d}x(r), \end{aligned}$$
where \(f:[a,b]\rightarrow X\) and \(x:[a,b]\rightarrow {\mathbb {R}}\) satisfy suitable assumptions. In particular, we assume the following condition on x.
Hypothesis 2.2
\(x\in C^\eta ([a,b])\) for some \(\eta \in (1/2,1)\).
Theorem 2.1
(Section 3 in [6] and Section 2 in [9]) Fix \(f\in C^\alpha ([a,b];X)\), where \(\alpha \in (1-\eta ,1)\). Then, for each \((s,t)\in [a,b]_<^2\) the Riemann sum
$$\begin{aligned} \sum _{i=0}^{n-1}f(t_i)(x(t_{i+1})-x(t_i)), \end{aligned}$$(2.3)
where \(\varPi (s,t):=\{t_0=s<t_1<\ldots <t_n=t\}\) is a partition of [s, t] and \(|\varPi (s,t)|:=\max \{t_{i+1}-t_i: i=0,\ldots ,n-1\}\), converges in X as \(|\varPi (s,t)|\) tends to 0; we denote its limit by \({{\mathscr {I}}}_f(s,t)\). Further, there exists a function \({{\mathscr {R}}}_f:[a,b]^2_<\rightarrow X\) such that
$$\begin{aligned} {{\mathscr {I}}}_f(s,t)=f(s)(x(t)-x(s))+{{\mathscr {R}}}_f(s,t) \end{aligned}$$
for each \((s,t)\in [a,b]_<^2\), and
In particular,
Remark 2.3
For each \(s, \tau ,t\in [a,b]\), with \(s<\tau <t\), it holds that
$$\begin{aligned} {{\mathscr {I}}}_f(s,t)={{\mathscr {I}}}_f(s,\tau )+{{\mathscr {I}}}_f(\tau ,t). \end{aligned}$$(2.6)
To check this formula it suffices to choose a family of partitions \(\varPi (s,t)\) such that \(\tau \in \varPi (s,t)\) and letting \(|\varPi (s,t)|\) tend to 0. As a byproduct, if we set \(\varPhi (t):={\mathscr {I}}_f(a,t)\), \(t\in (a,b]\), we deduce that \((\delta \varPhi )(s,t)={\mathscr {I}}_f(s,t)\). Indeed, from (2.6) we infer
Moreover, \(\varPhi \) is the unique function such that \(\varPhi (a)=0\) and
$$\begin{aligned} \Vert (\delta \varPhi )(s,t)-f(s)(x(t)-x(s))\Vert _X\le c|t-s|^{\eta +\alpha } \end{aligned}$$
for every \((s,t)\in [a,b]_{<}^2\) and some positive constant c.
Remark 2.4
Clearly, when \(x\in C^1([a,b])\) the limit in (2.3) coincides with the Riemann–Stieltjes integral over the interval [s, t] of the function f with respect to the function x.
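Remark 2.4 can be tested numerically: with a driver \(x\in C^1\) (our choices \(f(t)=t^2\), \(x=\sin \)), the left-endpoint Young sums approach \(\int _0^1 f(t)x'(t)\mathrm{d}t\), whose exact value is available by integration by parts.

```python
import math

def young_sum(f, x, s, t, n):
    """Left-endpoint sum  sum_i f(t_i)(x(t_{i+1}) - x(t_i))  on a uniform partition."""
    ts = [s + (t - s) * i / n for i in range(n + 1)]
    return sum(f(ts[i]) * (x(ts[i + 1]) - x(ts[i])) for i in range(n))

f = lambda t: t ** 2
x = math.sin                        # x is C^1, so the limit is int f(t) x'(t) dt

# An antiderivative of t^2 cos t is  t^2 sin t + 2 t cos t - 2 sin t.
exact = math.sin(1) + 2 * math.cos(1) - 2 * math.sin(1)
approx = young_sum(f, x, 0.0, 1.0, 4000)
```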
Remark 2.4 yields the following definition (see [9]).
Definition 2.3
For every \(f\in C^{\alpha }([a,b];X)\) \((\alpha \in (1-\eta ,1))\) and every \((s,t)\in [a,b]^2_{<}\), \({{\mathscr {I}}}_f(s,t)\) is the Young integral of f in [s, t] and is denoted by
$$\begin{aligned} \int _s^tf(r)\mathrm{d}x(r):={{\mathscr {I}}}_f(s,t). \end{aligned}$$
The above result accounts for the construction of the “classical” Young integral. The following one, proved in [9, Sections 3 & 4], accounts for the construction of Young-type convolutions with the semigroup \((S(t))_{t\ge 0}\).
Theorem 2.2
For each \(f\in E_k([a,b];X_\beta )\), such that \(\beta \in [0,2)\) and \(\eta +k>1\), the limit
$$\begin{aligned} {{\mathscr {I}}}_{Sf}(s,t):=\lim _{|\varPi (s,t)|\rightarrow 0}\sum _{i=0}^{n-1}S(t-t_i)f(t_i)(x(t_{i+1})-x(t_i)) \end{aligned}$$(2.7)
exists in X for every \((s,t)\in [a,b]^2_{<}\). Further, there exists a function \({{\mathscr {R}}}_{Sf}:[a,b]_<^2\rightarrow X\) such that
for each \((s,t)\in [a,b]_<^2\), and for each \(\varepsilon \in [0,1)\) there exists a positive constant \(c=c(\eta +\alpha ,\varepsilon )\) such that
In particular,
Remark 2.5
Actually, in [9], Theorem 2.2 has been proved assuming that \(X_{\beta }=D((-A)^{\beta })\). A direct inspection of the proof of [9, Theorem 4.1(2)] shows that the assertion holds true also under our assumptions, since estimates (2.1) allow us to repeat verbatim the same arguments in the quoted paper.
Again, when \(x\in C^1([a,b])\) the limit in (2.7) coincides with the Riemann–Stieltjes integral of the function \(S(t-\cdot )f\) with respect to the function x over the interval [s, t]. As above, this remark inspires the following definition (see again [9]).
Definition 2.4
For every \(f\in E_k([a,b];X_\beta )\), with \(k\in (1-\eta ,1)\) and \(\beta \in [0,2)\), \({{\mathscr {I}}}_{Sf}(s,t)\) is the Young convolution of the function \(S(t-\cdot )f\) with respect to x in [s, t] for every \((s,t)\in [a,b]^2_{<}\) and it is denoted by
$$\begin{aligned} \int _s^tS(t-r)f(r)\mathrm{d}x(r):={{\mathscr {I}}}_{Sf}(s,t). \end{aligned}$$
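In the same spirit as Remark 2.4, the Young convolution can be visualised in a scalar toy model (our choice: \(S(t)=e^{-t}\), i.e. \(A=-1\), with a smooth driver, so that the limit of the sums is the classical integral \(\int _0^t e^{-(t-r)}f(r)x'(r)\mathrm{d}r\)).

```python
import math

def young_convolution(f, x, s, t, n):
    """Sums  sum_i S(t - t_i) f(t_i)(x(t_{i+1}) - x(t_i))  with S(r) = e^{-r},
    whose limit defines the Young convolution in this scalar toy model."""
    ts = [s + (t - s) * i / n for i in range(n + 1)]
    return sum(math.exp(-(t - ts[i])) * f(ts[i]) * (x(ts[i + 1]) - x(ts[i]))
               for i in range(n))

f = lambda r: r + 1.0
x = math.sin                        # smooth driver: dx(r) = cos(r) dr
t = 1.0

# Reference value: midpoint quadrature of  int_0^t e^{-(t-r)} f(r) cos(r) dr.
m = 20000
ref = sum(math.exp(-(t - (k + 0.5) / m)) * ((k + 0.5) / m + 1.0)
          * math.cos((k + 0.5) / m) for k in range(m)) / m
approx = young_convolution(f, x, 0.0, t, 4000)
```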
For further use, we prove a slight extension of the estimate in [9, Theorem 4.1(2)].
Lemma 2.1
Let f be a function in \(E_k([a,b];X_{\beta })\cap C([a,b];X_{\beta _1})\) and assume that \(k\in (1-\eta ,1)\) and \(\beta ,\beta _1\in [0,2)\). Then, for every \(r\in [k,1)\) the function \({{\mathscr {I}}}_{Sf}\) belongs to \(C_{\eta +k-r}([a,b]_<^2;X_{\nu _r})\), where \(\nu _r:=\min \{r+\beta ,r+\beta _1-k\}\). Further,
for every \(r\in [k,1)\).
Proof
From Theorem 2.2 it follows that \({\mathscr {I}}_{Sf}\) is well-defined as Young convolution and
Using condition (2.1)(ii)(a), we get
for each \((s,t)\in [a,b]^2_{<}\), \(\gamma \in [0,\eta )\).
Now, we fix \(r\in [ k,1)\) and take \(\gamma =r-k\). Since \(\eta +k>1\) it follows that \(\gamma<1-k<\eta \) and \(\eta -\gamma =\eta +k-r\). From (2.8) and (2.11) we conclude that \({{\mathscr {I}}}_{Sf}\in C_{\eta +k-r}([a,b]_<^2;X_{\nu _r})\), where \(\nu _r:=\min \{r+\beta , r+\beta _1-k\}\), and estimate (2.10) follows. \(\square \)
Remark 2.6
From the definition of the Young convolution it follows that if \(x,x_1,x_2\) belong to \(C^\eta ([a,b])\) and \(f,f_1,f_2\) belong to \(E_k([a,b];X_\beta )\), for some \(\eta \in (1/2,1)\), \(k\in (1-\eta ,1)\) and \(\beta \in [0,2)\), then
and
for every \((s,t)\in [a,b]^2_{<}\).
Now, we prove that the Young convolution (2.9) can be split into the sum of two terms.
Lemma 2.2
For every \(f\in E_k([a,b];X_{\beta })\), with \(\beta \in [0,2)\) and \(k\in (1-\eta ,1)\), every \((s,t)\in [a,b]^2_{<}\) and \(\tau \in [s,t]\), it holds that
$$\begin{aligned} {{\mathscr {I}}}_{Sf}(s,t)=S(t-\tau ){{\mathscr {I}}}_{Sf}(s,\tau )+{{\mathscr {I}}}_{Sf}(\tau ,t). \end{aligned}$$
Proof
The proof is straightforward: it is enough to take into account the properties of Young convolution and the semigroup property of \((S(t))_{t\ge 0}\). \(\square \)
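The splitting of Lemma 2.2 can be checked numerically in the scalar toy model \(S(t)=e^{-t}\) with a smooth driver (entirely our illustration): the convolution over [s, t] equals \(S(t-\tau )\) applied to the convolution over \([s,\tau ]\), plus the convolution over \([\tau ,t]\).

```python
import math

def conv(f, x, s, t, n=8000):
    """Riemann-sum approximation of the Young convolution I_{Sf}(s, t)
    in the scalar model S(t) = e^{-t}."""
    ts = [s + (t - s) * i / n for i in range(n + 1)]
    return sum(math.exp(-(t - ts[i])) * f(ts[i]) * (x(ts[i + 1]) - x(ts[i]))
               for i in range(n))

f = math.cos
x = lambda r: r ** 2 / 2            # smooth driver: dx(r) = r dr
s, tau, t = 0.0, 0.6, 1.0

# The splitting: I_{Sf}(s, t) = S(t - tau) I_{Sf}(s, tau) + I_{Sf}(tau, t),
# up to the discretisation error of the three independent partitions.
lhs = conv(f, x, s, t)
rhs = math.exp(-(t - tau)) * conv(f, x, s, tau) + conv(f, x, tau, t)
```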
Corollary 2.1
For every \(f\in E_k([a,b];X_{\beta })\), with \(k+\eta >1\) and \(\beta \in [0,2)\), it holds that
$$\begin{aligned} \big (\delta _S{{\mathscr {I}}}_{Sf}(a,\cdot )\big )(s,t)={{\mathscr {I}}}_{Sf}(s,t), \qquad \;\, (s,t)\in [a,b]^2_{<}. \end{aligned}$$
Proof
From the definition of \(\delta _S\) and of \({{\mathscr {I}}}_{Sf}\) it follows that
for every \((s,t)\in [a,b]^2_{<}\). Applying Lemma 2.2 with \(s=a\) and \(\tau =s\) we infer that
which combined with (2.14) yields the assertion. \(\square \)
3 Smoothness of mild solutions
We consider the following assumptions on the nonlinear term \(\sigma \).
Hypothesis 3.1
The function \(\sigma :X\rightarrow X\) is Fréchet differentiable with bounded and locally Lipschitz continuous Fréchet derivative. Moreover, the restriction of \(\sigma \) to \(X_{\alpha }\) maps this space into itself for some \(\alpha \in (0,1)\) such that \(\alpha +\eta >1\), it is locally Lipschitz continuous and there exists a positive constant \(L_\sigma ^\alpha \) such that
$$\begin{aligned} \Vert \sigma (x)\Vert _{X_{\alpha }}\le L_\sigma ^\alpha (1+\Vert x\Vert _{X_{\alpha }}), \qquad \;\, x\in X_{\alpha }. \end{aligned}$$(3.1)
Hereafter, we assume that Hypothesis 2.2 (with \(a=0\) and \(b=T>0\)) and Hypothesis 3.1 hold true.
We consider the following nonlinear Young equation
$$\begin{aligned} \left\{ \begin{array}{ll} \mathrm{d}y(t)=Ay(t)\mathrm{d}t+\sigma (y(t))\mathrm{d}x(t), &{} t\in (0,T],\\ y(0)=\psi , \end{array} \right. \end{aligned}$$(3.2)
and we are interested in its mild solutions which take values in D(A), where by mild solution we mean a function \(y:[0,T]\rightarrow X\) such that \(\sigma \circ y\in E_{\alpha }([0,T];X)\), \(\eta +\alpha >1\) and
$$\begin{aligned} y(t)=S(t)\psi +{{\mathscr {I}}}_{S(\sigma \circ y)}(0,t), \qquad \;\, t\in [0,T]. \end{aligned}$$(3.3)
Theorem 3.1
Let Hypotheses 2.1, 2.2 and 3.1 be satisfied, with \([a,b]=[0,T]\). Then, for every \(\psi \in X_\alpha \) such that \(\alpha \in (0,1/2)\) and \(\eta +\alpha >1\), there exists a unique mild solution \(y\in E_{\alpha }([0,T];X_\alpha )\) to equation (3.2). The solution y is actually smoother since for every \(a\in (0,T)\) and \(\gamma \in [\eta +\alpha -1,\eta +\alpha )\), y belongs to \(E_{\eta +\alpha -\gamma }([a,T];X_{\gamma })\). Moreover, for every \(\mu \in [0,\eta +\alpha -1)\) and \(\varepsilon >0\) there exists a positive constant \(c=c(\varepsilon ,\mu )\) such that
In particular, y(t) belongs to D(A) for every \(t\in (0,T]\) and \(y\in C_{\eta -\beta }([a,T]^2_{<};X_{\alpha +\beta })\) for every \(a\in (0,T)\) and \(\beta \in [0,\eta )\).
The proof follows the lines of [9, Theorem 4.3], but our assumptions are weaker. In particular, in [9] the authors assume that \(\eta >2\alpha \), while we do not need this condition.
Before proving Theorem 3.1, we state the following lemma, which is a straightforward consequence of Lemma 2.2.
Lemma 3.1
Suppose that y is a mild solution to (3.2). Then, for every \(\tau \in [0,T]\) it holds that
$$\begin{aligned} y(t)=S(t-\tau )y(\tau )+{{\mathscr {I}}}_{S(\sigma \circ y)}(\tau ,t), \qquad \;\, t\in (\tau ,T]. \end{aligned}$$(3.5)
Proof of Theorem 3.1
We split the proof into some steps.
Step 1. Here, we prove an a priori estimate. Namely, we show that if \(y\in E_{\alpha }([0,T];X_\alpha )\) is a mild solution to (3.2), then there exists a positive constant \({\mathfrak {R}}\), which depends only on \(\psi \), T, \(\alpha \), x, \(\eta \) and \(\sigma \), such that
$$\begin{aligned} \Vert y\Vert _{E_{\alpha }([0,T];X_{\alpha })}\le {\mathfrak {R}}. \end{aligned}$$(3.6)
Let us fix \(a,b\in [0,T]\), with \(a<b\). Taking Corollary 2.1 into account, it is easy to check that \((\delta _Sy)(s,t)=({{\mathscr {I}}}_{S(\sigma \circ y)})(s,t)\) for every \((s,t)\in [0,T]^2_{<}\). Hence, to estimate \(\Vert \delta _Sy\Vert _{C_{\alpha }([a,b]^2_{<};X_{\alpha })}\) we can take advantage of Lemma 2.1. For this purpose, let us prove that \(\sigma \circ y\) belongs to \(E_{\alpha }([a,b];X)\cap C([a,b];X_{\alpha })\). The condition \(\sigma \circ y\in C([a,b];X_\alpha )\) follows immediately from (3.1), which also shows that
Further, we note that the function \(\delta _S(\sigma \circ y)\) is continuous on \([a,b]^2_{<}\) with values in X. Indeed, fix \((t_0,s_0)\in [a,b]_{<}^2\). Then,
for every \((t,s)\in [0,T]^2_{<}\), where \(M_{0,0,T}\) and \(C_{\alpha ,0,T}\) are the constants in condition (2.1), L denotes the Lipschitz constant of \(\sigma \) on X, and the right-hand side of the previous chain of inequalities vanishes as (t, s) tends to \((t_0,s_0)\). Next, we split
and estimate separately the two terms. As far as the first one is considered, we observe that
for every \((s,t)\in [a,b]^2_{<}\), where \(L_{\sigma }\) denotes the Lipschitz constant of the function \(\sigma \). As far as the term \({\mathfrak {a}}(s,t)\sigma (y(s))\) is concerned, we use (3.1) to estimate
for every \((s,t)\in [a,b]^2_{<}\). We have so proved that \(\sigma \circ y\in E_{\alpha }([a,b];X)\) and
Thus, we can apply Lemma 2.1 as claimed, with \(k=\beta _1=\alpha \) and \(\beta =0\), to infer that \({{\mathscr {I}}}_{S(\sigma \circ y)}\) belongs to \(C_{\eta +\alpha -r}([a,b]_<^2;X_r)\) for every \(r\in [\alpha ,1)\) and
Since \(\alpha<1/2<\eta \), it follows that
so that, applying (3.11) with \(r=\alpha \), we conclude that
where \({\mathfrak {C}}:=C_{\alpha ,\eta ,r,\alpha }(L_\sigma +L_\sigma ^\alpha )(2+C_{\alpha ,0,T})\). Further, from (3.5) with \(\tau =a\), \(t\in [a,b]\) and Corollary 2.1, we get
Taking (3.12) and (3.13) into account, this gives
Let us set
If \(b-a\le {\overline{T}}\), then we get
Now, we introduce the function \(\phi :(0,\infty )\rightarrow (0,\infty )\), defined by \(\phi (r)=2M_{\alpha ,\alpha ,T} r+1\) for every \(r>0\) and split
where \(0=t_0<t_1<t_2<\ldots <t_N=T\) and \(t_{n+1}-t_n\le {\overline{T}}\) for every \(n=0,\ldots ,N-1\). From (3.15) it follows that
for every \(n=0,\ldots ,N-1\), where \(\phi ^k\) denotes the composition of \(\phi \) with itself k times. Since \(\phi (r)>r\) for every \(r>0\), from (3.16) we conclude that
In particular, for each interval \([s,t]\subset [0,T]\) whose length is less than or equal to \({\overline{T}}\) we get
Now we are able to estimate \(\Vert \delta _Sy\Vert _{C_{\alpha }([0,T]^2_{<};X_{\alpha })}\). We stress that, if \(|t-s|\le {\overline{T}}\), then from (3.15) we get
and if \(|t-s|>{\overline{T}}\) then
From (3.17), (3.18) and (3.19) it follows that
Step 2. Here, we prove that there exists a unique mild solution to Eq. (3.2). For this purpose, we introduce the operator \(\varGamma _1:E_\alpha ([0,T_*];X_\alpha )\rightarrow E_\alpha ([0,T_*];X_\alpha )\), defined by \((\varGamma _1(y))(t)=S(t)\psi +{{\mathscr {I}}}_{S(\sigma \circ y)}(0,t)\) for every \(t\in [0,T_*]\) and \((\varGamma _1(y))(0)=\psi \), where \(T_*\in (0,T]\) has to be properly chosen later on. We are going to prove that \(\varGamma _1\) is a contraction in \({\mathcal {B}}=\{y\in E_\alpha ([0,T_*];X_\alpha ):\Vert y\Vert _{E_{\alpha }([0,T_*];X_{\alpha })}\le 2M_{\alpha ,\alpha ,T}{\mathfrak {R}}\}\). To begin with, we fix \(y\in {\mathcal {B}}\) and observe that \(\delta _S\varGamma _1(y)={{\mathscr {I}}}_{S(\sigma \circ y)}\). Hence, from (3.14), we can estimate
We now choose \(T_*\le T\) such that \({\mathfrak {C}}T_*^{\eta -\alpha }(1+T^\alpha )\Vert x\Vert _{C^{\eta }([0,T])}(1+2M_{\alpha ,\alpha ,T}{\mathfrak {R}})\le M_{\alpha ,\alpha ,T}{{\mathfrak {R}}}\). With this choice of \(T_*\), we conclude that \(\varGamma _1(y)\) belongs to \({{\mathcal {B}}}\).
Let us prove that \(\varGamma _1\) is a 1/2-contraction. Fix \(y_1,y_2\in {\mathcal {B}}\). The linearity of the Young integral gives \((\varGamma _1(y_1))(t)-(\varGamma _1(y_2))(t)={{\mathscr {I}}}_{S(\sigma \circ y_1-\sigma \circ y_2)}(0,t)\) for every \(t\in [0,T_*]\), so that we can estimate
and, as in Step 1 (see the first inequality in (3.11)),
We set \(R:=2M_{\alpha ,\alpha ,T}{\mathfrak {R}}\ge \max \{\Vert y_1\Vert _{C([0,T_*];X_{\alpha })},\Vert y_2\Vert _{C([0,T_*];X_{\alpha })}\}\) and note that
where \(L_\sigma ^{\alpha ,r}\) denotes the Lipschitz constant of the restriction of \(\sigma \) to the ball \(B(0,r)\subset X_{\alpha }\) and we have used the condition (2.1)(b). Further, by taking advantage of the smoothness of \(\sigma \) we get
Since for every \(s,t\in [0,{\overline{T}}]\), with \(s<t\), and \(r\in (0,1)\), it holds that
and recalling that \(R\ge 1\), it follows that
where \(K_{\sigma '}^R\) denotes the Lipschitz constant of the restriction of function \(\sigma '\) to the ball \(B(3K_{\alpha ,0}R)\subset X\). As far as \(\Vert \sigma \circ y_1-\sigma \circ y_2\Vert _{C([0,T_*];X_{\alpha })}\) in (3.21) is concerned, it holds that
for every \(t\in [0,T_*]\). From (3.21), (3.22), (3.24) and (3.25) we get
where c is a positive constant which depends on \(x, \alpha ,R,\sigma ,\eta \) but not on \(T_*\) nor on \(\psi \).
Based on (3.20) and (3.26), we can now fix \(T_*>0\) such that \(\varGamma _1\) is a 1/2-contraction in \({\mathcal {B}}\). If \(T_*=T\), then we are done. Otherwise, we use a standard procedure to extend the solution of the Young equation (3.2): we introduce the operator \(\varGamma _2\) defined by
for every \(y\in {\mathcal {B}}_2:=\{z\in E_{\alpha }([T_*,T_{**}];X_\alpha ):\Vert z\Vert _{E_{\alpha }([T_*,T_{**}];X_{\alpha })}\le 2M_{\alpha ,\alpha ,T}{\mathfrak {R}}\}\). Since \(y_1\) is a mild solution to (3.2), from (3.6), which clearly holds true also with \(T_*<T\) and the same constant \({\mathfrak {R}}\), it follows that \(\Vert y_1(T_*)\Vert _{X_{\alpha }}\le {\mathfrak {R}}\). Then, by the same computations as above we show that \(\varGamma _2\) is a 1/2-contraction in \({\mathcal {B}}_2\). Denote by \(y_2\) its unique fixed point. Thanks to Lemma 2.2, the function y defined by \(y(t)=y_1(t)\) if \(t\in [0,T_*]\) and \(y(t)=y_2(t)\) if \(t\in [T_*, T_{**}]\) is a mild solution to Eq. (3.2) in \([0,T_{**}]\). Obviously, if \(T_{**}<T\), then we can repeat the same procedure and in a finite number of steps we extend y to the whole interval [0, T]. Estimate (3.6) also yields the uniqueness of the mild solution to Eq. (3.2).
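The fixed-point scheme of Step 2 can be visualised in a scalar toy model (all choices ours: \(S(t)=e^{-t}\), \(\sigma =\arctan \), and a smooth driver with small Hölder norm, so that the discretised map is a contraction in the sup norm): successive Picard iterates converge geometrically.

```python
import math

# Discrete Picard iteration for the scalar toy model (illustration only):
#   y(t) = e^{-t} psi + int_0^t e^{-(t-s)} sigma(y(s)) dx(s),
# with left-endpoint Young sums on a fixed grid.
n, T = 400, 1.0
ts = [k * T / n for k in range(n + 1)]
x = lambda t: 0.4 * math.sin(2 * t)      # driver with small norm
sigma = math.atan                        # bounded, Lipschitz nonlinearity
psi = 0.5

def gamma(y):
    """One application of the fixed-point map, discretised on the grid."""
    out = [psi]
    for k in range(1, n + 1):
        acc = math.exp(-ts[k]) * psi
        for i in range(k):
            acc += math.exp(-(ts[k] - ts[i])) * sigma(y[i]) * (x(ts[i + 1]) - x(ts[i]))
        out.append(acc)
    return out

y = [psi] * (n + 1)
gaps = []                                # sup-distances between successive iterates
for _ in range(6):
    y_new = gamma(y)
    gaps.append(max(abs(u - v) for u, v in zip(y, y_new)))
    y = y_new
```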
Step 3. From the arguments in the first part of Step 1 (see (3.11)), we deduce that \({{\mathscr {I}}}_{S(\sigma \circ y)}\) belongs to \(C_{\eta +\alpha -r}([0,T]_<^2;X_r)\) for every \(r\in [\alpha ,1)\). The smoothing properties of the semigroup \((S(t))_{t\ge 0}\) (see condition (2.1)(ii)(a)), estimates (3.6) and (3.11) show that \(y(t)\in X_r\) and
for every \(t\in (0,T]\) and some positive constant \(c_1\), which depends on \(\alpha \), \(\eta \), r, \(\Vert x\Vert _{C^{\eta }([0,T])}\), \(\Vert \psi \Vert _{X_\alpha }\), \(\sigma \), \({\mathfrak {R}}\), and is a continuous function of \(\Vert x\Vert _{C^{\eta }([0,T])}\) and \({\mathfrak {R}}\). Now, we observe that \(\Vert y(t)-y(s)\Vert _{X_r}\le \Vert \delta _Sy(s,t)\Vert _{X_r}+\Vert {\mathfrak {a}}(s,t)y(s)\Vert _{X_r}\). Since \(\delta _Sy={{\mathscr {I}}}_{S(\sigma \circ y)}\), from (3.11) it follows that
where \(c_2=c_2(\alpha ,\eta ,r,\Vert x\Vert _{C^{\eta }([0,T])},\Vert \psi \Vert _{X_\alpha },\sigma ,{{\mathfrak {R}}})\) is a positive constant, which depends in a continuous way on \(\Vert x\Vert _{C^{\eta }([0,T])}\) and \({{\mathfrak {R}}}\). Moreover, using condition (2.1)(b) and estimate (3.27) (with r being replaced by \(r+\beta \)), we get
where \(\beta >0\) is such that \(r+\beta <1\) (such \(\beta \) exists since we are assuming \(r\in [\alpha ,1)\)). From these two last estimates it follows immediately that \(y\in C((0,T];X_r)\). Moreover, for every \(\varepsilon \in (0,T]\) and \(r\in [\alpha ,1)\), there exists a positive constant \(c_3=c_3(\alpha ,\eta ,r,\Vert x\Vert _{C^{\eta }([0,T])},\Vert \psi \Vert _{X_\alpha },\sigma ,{\mathfrak {R}},T)\), which depends in a continuous way on \(\Vert x\Vert _{C^{\eta }([0,T])}\) and on \({\mathfrak {R}}\), such that
Next, we estimate \(\Vert (\delta _S(\sigma \circ y))(s,t)\Vert _{X_{\lambda }}\) when \(\eta +\alpha -\lambda >1\), i.e., \(\lambda \in [0,\eta +\alpha -1)\). As usual, we estimate separately \(\Vert (\delta (\sigma \circ y))(s,t)\Vert _{X_{\lambda }}\) and \(\Vert {\mathfrak {a}}(s,t)\sigma (y(s))\Vert _{X_{\lambda }}\). Note that \(\lambda <\alpha \) since \(\eta <1\). We fix \(\varepsilon >0\) and observe that the continuous embedding \(X_{\alpha }\hookrightarrow X_{\lambda }\), (3.11) and (3.27) (with \(r=2\alpha -\lambda \), which belongs to \([\alpha ,1)\) since \(\alpha <1/2\)) give
for every \((s,t)\in [\varepsilon ,T]^2_{<}\), where \(c_4=c_4(\alpha ,\eta ,\Vert x\Vert _{C^{\eta }([0,T])},{\mathfrak {R}},T,\lambda )\). Moreover,
for every \((s,t)\in [\varepsilon ,T]^2_{<}\). From (3.30) and (3.31), it follows that
Moreover, arguing as in the proof of (3.8) we can show that
for every \((t_0,s_0), (t,s)\in [\varepsilon ,T]^2_{<}\), where \(L^\alpha \) denotes the Lipschitz constant of \(\sigma \) on the subset \(\{y\in X_\alpha :\Vert y\Vert _{X_{\alpha }}\le \sup _{t\in [0,T]}\Vert y(t)\Vert _{X_{\alpha }}\}\) of \(X_\alpha \), and conclude that \(\delta _S(\sigma \circ y)\in C_{\alpha -\lambda }([\varepsilon ,T]_<^2;X_\lambda )\). Further, \(\sigma \circ y\) belongs to \(C([\varepsilon ,T];X_\alpha )\). From Lemma 2.1 with \(k=\alpha -\lambda \), \(\beta =\lambda \), \(\beta _1=\alpha \) and \(r=\gamma \), we infer that \({{\mathscr {I}}}_{S(\sigma \circ y)}\) belongs to \(C_{\eta +\alpha -\lambda -\gamma }([\varepsilon ,T]_<^2;X_{\gamma +\lambda })\) for every \(\gamma \in [\alpha -\lambda ,1)\) and
for some positive constant \(c_5=c_5(\alpha ,\eta ,\sigma ,\Vert x\Vert _{C^{\eta }([0,T])},{\mathfrak {R}},T,\lambda ,\gamma ,\Vert \psi \Vert _{X_{\alpha }})\), which does not depend on \(\varepsilon \). From (3.5), with \(\tau =\varepsilon \), we can write
and applying (3.27), with \(t=\varepsilon \) and \(r=\alpha \), (3.32) and (3.33) we infer that
for every \(t\in (\varepsilon ,T]\) and some positive constant \(c_6=c_6(\lambda ,\gamma ,\eta ,\alpha ,\sigma ,x,\psi ,{\mathfrak {R}}, T)\). In particular, since the range of the function \(\varrho :D\rightarrow {\mathbb {R}}\), defined by \(\varrho (\lambda ,\gamma )=\lambda +\gamma \) for every \((\lambda ,\gamma )\in D=\{ (\lambda ,\gamma )\in {\mathbb {R}}^2: \lambda \in [0,\eta +\alpha -1), \gamma \in [\alpha -\lambda ,1)\}\), contains the interval \([1,\eta +\alpha )\), for every \(\mu \in [0,\eta +\alpha -1)\) we can choose \(\lambda \) and \(\gamma \) such that \(1+\mu =\lambda +\gamma \). Then, from (3.34) we conclude that
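Concerning the choice of \(\lambda \) and \(\gamma \): given \(\mu \in [0,\eta +\alpha -1)\), an admissible pair (one choice among many) is obtained by taking any \(\gamma \in (2+\mu -\eta -\alpha ,1)\), an interval which is nonempty precisely because \(\mu <\eta +\alpha -1\), and setting \(\lambda =1+\mu -\gamma \); indeed,
$$\begin{aligned} \lambda =1+\mu -\gamma \in (\mu ,\eta +\alpha -1)\subset [0,\eta +\alpha -1), \qquad \lambda +\gamma =1+\mu \ge 1>\alpha , \end{aligned}$$
so that \(\gamma \ge \alpha -\lambda \) and \((\lambda ,\gamma )\) belongs to D.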
so that, for every \(\varepsilon \in (0,T/2)\),
and \(c_7=c_7(\lambda ,\mu ,\eta ,\alpha ,\sigma ,x,\psi ,{\mathfrak {R}}, T)\) is a positive constant, which depends in a continuous way on \(\Vert x\Vert _{C^{\eta }([0,T])}\) and on \({\mathfrak {R}}\) but not on \(\varepsilon \). From (3.35), estimate (3.4) follows at once. Finally, using again (3.33) and the smoothness properties of the semigroup \((S(t))_{t\ge 0}\), we conclude that \(y\in E_{\eta +\alpha -\mu }([2\varepsilon ,T];X_{\mu })\) for every \(\mu \in [\eta +\alpha -1,\eta +\alpha )\) and \(\varepsilon \in (0,T/2)\). \(\square \)
Remark 3.1
- (i)
- (ii) From the last part of Step 3 in the proof of Theorem 3.1 it follows that \(y\in C((0,T];X_\mu )\) for any \(\mu \in [0,\eta +\alpha )\).
- (iii)
In Step 3 of the proof of Theorem 3.1 we have proved that for each \(r\in [\alpha ,1)\) there exists a constant c such that
$$\begin{aligned} \Vert y(t)\Vert _{X_r}\le ct^{\alpha -r}, \qquad \;\, t\in (0,T], \end{aligned}$$ (3.36) where the constant c is independent of t. If \(\psi \in X_{\gamma }\) for some \(\gamma \in [\alpha ,1)\), then, arguing as in estimate (3.27), we can show that \(\alpha -r\) can be replaced by \((\gamma -r)\wedge 0\) in (3.36), with \(r\in [\alpha ,1)\). Based on this estimate, (3.28) and (3.29), we conclude that
$$\begin{aligned} \Vert y(t)-y(s)\Vert _{X_r}&\le \Vert (\delta _Sy)(s,t)\Vert _{X_r}+\Vert {\mathfrak {a}}(s,t)y(s)\Vert _{X_r}\nonumber \\&\le c_{*}(t-s)^{\eta +\gamma -r}+c_{**}s^{(\gamma -r-\beta )\wedge 0}|t-s|^{\beta } \end{aligned}$$ (3.37) for every \(\beta >0\) such that \(r+\beta <1\), every \(0<s<t\le T\) and some positive constants \(c_*\) and \(c_{**}\), independent of s and t. Since \(\beta <\eta +\gamma -r\), from (3.37) we conclude that
$$\begin{aligned} \Vert y(t)-y(s)\Vert _{X_r}\le c s^{(\gamma -r-\beta )\wedge 0}|t-s|^{\beta }, \qquad \;\, 0<s<t\le T. \end{aligned}$$ If \(\gamma -r-\beta \ge 0\), then the above estimate can be extended to \(s=0\). We will use these estimates in Sect. 5.
Remark 3.2
The result in Theorem 3.1 extends, with the same techniques, to the case of the Young equation
where the nonlinear terms \(\sigma _i\) \((i=1,\ldots ,m)\) satisfy Hypothesis 3.1 and the paths \(x_i\) \((i=1,\ldots ,m)\) belong to \(C^{\eta }([0,T])\).
3.1 The case when the semigroup has no smoothing effects
The proof of Theorem 3.1 strongly relies on the smoothing effects of the semigroup \((S(t))_{t\ge 0}\), i.e., on condition 2.1(a), which in general is not satisfied when the semigroup associated with the operator A is merely strongly continuous. For instance, one may think of the semigroup of left translations in the space of bounded and continuous functions over \({\mathbb {R}}^d\), or in the usual \(L^p({\mathbb {R}}^d)\)-space with respect to the Lebesgue measure: the function S(t)f has the same degree of smoothness as the function f.
In the proof of Theorem 3.1, condition 2.1(a) is heavily used to prove that the mild solution y to the nonlinear Young equation (3.2) takes values in D(A).
In this subsection we show that, partially removing condition 2.1(a), i.e., assuming that it holds true only when \(\zeta =\alpha \), and suitably choosing the intermediate spaces \(X_{\alpha }\), the existence and uniqueness of a mild solution to Eq. (3.2) can still be guaranteed.
Theorem 3.2
Let Hypotheses 2.1(i), 2.1(ii)(a) (with \(\zeta =\alpha \)), 2.1(ii)(b), 2.2 and 3.1 be satisfied, with \([a,b]=[0,T]\). Then, for every \(\psi \in X_\alpha \) with \(\alpha \in (0,1/2)\) and \(\eta +\alpha >1\), there exists a unique mild solution \(y\in E_\alpha ([0,T];X_\alpha )\) to Eq. (3.2).
Proof
The proof follows the same lines as the first two steps of the proof of Theorem 3.1. The only difference is that, under these weaker assumptions, Lemma 2.1 can be applied only with \(r=k\), so that estimate (3.11) now reads as follows:
From this point on, the proof proceeds exactly as that of Theorem 3.1. \(\square \)
We now provide an example of intermediate spaces \(X_{\alpha }\) for which any strongly continuous semigroup satisfies Hypotheses 2.1(ii)(b) and 2.1(ii)(a), the latter at least with \(\zeta =\alpha \).
Example 3.1
Let A be the generator of a strongly continuous semigroup \((S(t))_{t\ge 0}\) and for each \(\alpha \in (0,1)\) let us consider the Favard space
endowed with the norm
If \(\alpha =k+\beta \) for some \(k\in {\mathbb {N}}\) and \(\beta \in (0,1)\), then
endowed with the norm
Each space \(F_{\alpha }\) is a Banach space when endowed with the norm \(\Vert \cdot \Vert _{F_{\alpha }}\).
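For \(\alpha \in (0,1)\), the space and its norm read, following the standard definition in [4, Chapter 2, Section 5.b],
$$\begin{aligned} F_{\alpha }=\bigg \{x\in X:\sup _{t\in (0,1]}t^{-\alpha }\Vert S(t)x-x\Vert _X<+\infty \bigg \}, \qquad \Vert x\Vert _{F_{\alpha }}=\Vert x\Vert _X+\sup _{t\in (0,1]}t^{-\alpha }\Vert S(t)x-x\Vert _X, \end{aligned}$$
where the supremum is taken over \(t\in (0,1]\), consistently with the computations below.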
Fix \(\alpha >0\), \(x\in F_\alpha \) and \(t\in [0,+\infty )\). For any \(s\in (0,1]\), we can estimate
Hence, S(t)x belongs to \(F_\alpha \) and \(\Vert S(t)x\Vert _{F_\alpha }\le \Vert S(t)\Vert _{{\mathcal {L}}(X)}\Vert x\Vert _{F_\alpha }\), so that Hypothesis 2.1(ii)(a), with \(\zeta =\alpha \), holds true if we take \(X_\alpha =F_\alpha \).
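The computation behind this bound is the following: when \(\alpha \in (0,1)\) (the case \(\alpha =k+\beta \) is analogous, working with \(A^kx\)), since the operators of the semigroup commute, for every \(s\in (0,1]\) it holds that
$$\begin{aligned} s^{-\alpha }\Vert S(s)(S(t)x)-S(t)x\Vert _X=s^{-\alpha }\Vert S(t)(S(s)x-x)\Vert _X\le \Vert S(t)\Vert _{{\mathcal {L}}(X)}\,s^{-\alpha }\Vert S(s)x-x\Vert _X\le \Vert S(t)\Vert _{{\mathcal {L}}(X)}\Vert x\Vert _{F_\alpha }. \end{aligned}$$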
Let us prove that the semigroup \((S(t))_{t\ge 0}\) satisfies Hypothesis 2.1(ii)(b) with \(X_\alpha =F_\alpha \). Fix \(\mu ,\nu \in (0,1]\) with \(\mu >\nu \), \(x\in X_\mu \) and \(t\in (0,1]\). Then, for any \(s\in (0,t]\) it holds that \(s^{-\nu }\le s^{-\mu }t^{\mu -\nu }\), so that
On the other hand, if \(s\in (t,1]\), then \(s^{-\nu }<t^{-\nu }\) so that
We have thus proved that
and the estimate in Hypothesis 2.1(ii)(b) follows, with \(T=1\) and \(C_{\mu ,\nu }=1+M_{0,0,1}\). If \(T>1\) and \(t\in (1,T]\), then
so that the estimate in Hypothesis 2.1(ii)(b) holds true on any interval [0, T].
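For \(t\in (0,1]\), the two cases considered above combine into the estimate
$$\begin{aligned} \sup _{s\in (0,1]}s^{-\nu }\Vert (S(s)-I)(S(t)x-x)\Vert _X\le (1+M_{0,0,1})\,t^{\mu -\nu }\Vert x\Vert _{F_\mu }, \end{aligned}$$
where \(M_{0,0,1}\) denotes a bound of \(\Vert S(\cdot )\Vert _{{\mathcal {L}}(X)}\) on [0, 1] (we borrow this notation from Hypothesis 2.1).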
We refer the reader to [4, Chapter 2, Section 5.b] for further results on the Favard spaces.
Remark 3.3
Note that if \(X=C_b({\mathbb {R}})\) and A is the first-order derivative, with \(C^1_b({\mathbb {R}})\) as its domain, then \((S(t))_{t\ge 0}\) is the semigroup of left translations on \(C_b({\mathbb {R}})\). For every \(\alpha \in (0,+\infty )\setminus {\mathbb {N}}\), \(F_\alpha \) is the space of all functions \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) that are differentiable up to the \([\alpha ]\)-th order and such that the derivative of order \([\alpha ]\) is bounded and \((\alpha -[\alpha ])\)-Hölder continuous on \({\mathbb {R}}\).
4 The integral representation formula
Knowing that mild solutions take their values in D(A) we are in a position to prove that they solve equation (3.2) in a natural integral form.
Definition 4.1
Let y belong to \(E_\alpha ([0,T];X_\alpha )\cap L^1((0,T);D(A))\) for some \(\alpha \in (1-\eta ,1)\). We say that y solves equation (3.2) in the integral form if, for every \(t>0\), it satisfies the equation
Remark 4.1
To prove that mild solutions verify (4.1), we first need to check that the integral
is well defined as a Young integral when y is the unique mild solution to (3.2). In fact, if \(\sigma \) satisfies Hypothesis 3.1, then for every \(f\in E_{\alpha }([0,T];X_\alpha )\) and \(x\in C^{\eta }([0,T])\), where \(\eta \in (1/2,1)\) and \(\alpha \in (1-\eta ,1)\), the Young integral
is well defined. Indeed, arguing as in the proof of (3.9), it can be easily checked that \(\sigma \circ f\in C^{\alpha }([0,T];X)\). Therefore, Theorem 2.1 guarantees that the integral in (4.2) is well defined.
We can now prove that, under Hypotheses 2.1, 2.2 and 3.1, the mild solution y verifies (4.1). To prove this result, we first show that the mild solution to (3.2) can be approximated by mild solutions of classical problems.
Proposition 4.1
Let \((x_n)\subset C^1([0,T])\) be a sequence converging to x in \(C^{\eta }([0,T])\) for some \(\eta >1/2\) and fix \(\psi \in X_{\alpha }\) for some \(\alpha \in (0,1/2)\) such that \(\alpha +\eta >1\). For every \(n\in {\mathbb {N}}\), denote by \(y_n\) the mild solution to (3.2) with x replaced by \(x_n\), and let y be the mild solution to (3.2). Then, the following properties are satisfied:
- (i) \(y_n\) converges to y in \(E_{\alpha }([0,T];X_{\alpha })\) as n tends to \(+\infty \);
- (ii) if we set \(\displaystyle {\mathbb {J}}(t)=\int _0^t \sigma (y(u))\mathrm{d}x(u)\) and \({\mathbb {J}}_n(t)=\displaystyle \int _0^t \sigma (y_n(u))\mathrm{d}x_n(u)\) for every \(t\in [0,T]\) and \(n\in {\mathbb {N}}\), then \({\mathbb {J}}_n\) converges to \({\mathbb {J}}\) in \(C^{\eta }([0,T];X)\) as n tends to \(+\infty \).
Proof
(i) We split the proof into two steps. In the first one, we show the assertion when T is small enough and in the second step we remove this additional condition.
Step 1. Let us fix \(\tau ,{\widetilde{T}}\in [0,T]\) with \(\tau <{\widetilde{T}}\). To begin with, we observe that applying Lemma 2.1 (with \(k=r=\beta _1=\alpha \), \(\beta =0\) and \(a=\tau \), \(b={\widetilde{T}}\)) and noticing that, by Corollary 2.1 (with \(a=\tau \) and \(b={\widetilde{T}}\)), \((\delta _S{{\mathscr {I}}}_{Sf}(\tau ,\cdot ))(s,t)={{\mathscr {I}}}_{Sf}(s,t)\) for every \((s,t)\in [\tau ,{\widetilde{T}}]^2_{<}\), we can show that
for every \(x\in C^{\eta }([0,T])\) and \(f\in E_{\alpha }([0,T];X)\cap C([0,T];X_{\alpha })\) such that \(\alpha \in (0,1/2)\), \(\eta +\alpha >1\) and \(\tau , {\widetilde{T}}\in [0,T]\), with \(\tau <{\widetilde{T}}\), where \(C=C_{\alpha ,\eta ,\alpha ,\alpha }\) is the constant in Lemma 2.1.
Now, we fix \({T_*}\in (0,T]\) to be chosen later on. From (2.12) and (2.13), we get
for every \(t\in [0,{T_*}]\), where \({\overline{x}}_n:=x-x_n\). Taking (3.10) and (3.7) into account, we can estimate
where C is a positive constant which depends on \(\alpha \), \(\eta \), \(\sigma \) and T. An inspection of the proof of estimate (3.6) shows that the constant \({\mathfrak {R}}\) depends in a continuous way on the \(\eta \)-Hölder norm of the path. Since \(\sup _{n\in {\mathbb {N}}}\Vert x_n\Vert _{C^{\eta }([0,T])}<+\infty \), from (4.4) we can infer that
for some positive constant \({\mathfrak {M}}\), independent of n. As far as \({{\mathbb {I}}}_{2,n}\) is concerned, from (4.3), with f replaced by \(\sigma \circ y-\sigma \circ y_n\), and estimates (3.24), (3.25), we infer that
and \({\widetilde{c}}\) is a positive constant which depends on \(\alpha \), T, \(\sigma \), \({\mathfrak {M}}\), K, \(\eta \) and on the constant \(C_{\alpha ,0,T}\). We choose \({T_*}\le T\) such that \({\widetilde{c}}T_*^{\eta -\alpha }\Vert x\Vert _{C^{\eta }([0,T])}\le 1/2\) and use the previous estimate to conclude that
and, consequently, that \(y_n\) converges to y in \(E_{\alpha }([0,T_*];X_{\alpha })\) as n tends to \(+\infty \).
Step 2. If \(T_*= T\) then we are done. Otherwise, let us fix \({\widehat{T}}:=(2T_*)\wedge T\). For every \(t\in [ T_*,{\widehat{T}}]\), from (3.5) we can write
In Step 1 we have proved that \(y_n({T_*})\) converges to \(y({T_*})\) in \(X_\alpha \) as n tends to \(+\infty \). Moreover, for every \((s,t)\in [T_*,T]^2_{<}\) it holds that \(\delta _SS(\cdot -{T_*})(y({T_*})-y_n({T_*}))(s,t)=0\). Hence, \(\Vert S(\cdot -{T_*})(y({T_*})-y_n({T_*}))\Vert _{E_{\alpha }([{T_*},{\widehat{T}}];X_{\alpha })}\) vanishes as n tends to \(+\infty \). Repeating the same arguments as in Step 1, we conclude that
and therefore \(y_n\) converges to y in \(E_{\alpha }([{T_*},{\widehat{T}}];X_{\alpha })\) as n tends to \(+\infty \). If \({\widehat{T}}=T\) then the assertion follows. Otherwise by iterating this argument, we get the assertion in a finite number of steps.
(ii) As in the proof of property (i), we can write
From (2.5), (3.7) and (3.10), we infer that
for every \(t\in [0,T]\). As far as the term \({\mathbb {J}}_2^n(0,t)\) is concerned, we argue similarly, taking advantage of the computations in (3.23) and estimate (3.24), and get
From (4.5) and (4.6) it thus follows that
for a suitable constant \(C'\), independent of n. From the assumptions on x and \((x_n)\), and property (i), we conclude that \({\mathbb {J}}_n\) converges to \({\mathbb {J}}\) in C([0, T]; X) as n tends to \(+\infty \).
To prove that \({{\mathbb {J}}}_n\) converges to \({{\mathbb {J}}}\) in \(C^{\eta }([0,T];X)\), now it suffices to note that (see Remark 2.3)
and repeat the above computations to infer that
for every \(n\in {\mathbb {N}}\). \(\square \)
We are now ready to show that the mild solution y to (3.2) satisfies the integral representation formula (4.1).
Theorem 4.1
Let Hypotheses 2.1, 2.2 and 3.1 be satisfied and let \(\psi \in X_{\alpha }\) for some \(\alpha \in (0,1)\) such that \(\alpha +\eta >1\). Further, let y be the unique mild solution to Eq. (3.2). Then, y satisfies (4.1).
Proof
Let \((x_n)\subset C^1([0,T])\) be a sequence of smooth paths which converges to x in \(C^{\eta }([0,T])\) as n tends to \(+\infty \). For every \(n\in {\mathbb {N}}\), let \(y_n\) be the unique mild solution to (3.2) with x replaced by \(x_n\). The computations in Step 3 of the proof of Theorem 3.1 with x replaced by \(x_n\) and the fact that \(\sup _{n\in {\mathbb {N}}}\Vert x_n\Vert _{C^{\eta }([0,T])}<+\infty \) imply that \(y_n(t)\) belongs to D(A) for each \(t\in (0,T]\) and \(n\in {\mathbb {N}}\), and for every \(\lambda \in [0,\eta +\alpha -1)\) there exists a positive constant \(c=c(\lambda )\), independent of n, such that \(\Vert Ay(t)\Vert _X\le ct^{\lambda -1}\) and \(\Vert Ay_n(t)\Vert _X\le ct^{\lambda -1}\) for every \(t\in (0,T]\) and \(n\in {\mathbb {N}}\). From [12, Proposition 4.1.5] we infer that
Let us fix \(t\in (0,T]\). From Proposition 4.1 we know that \(y_n\) converges to y in C([0, T]; X) and \(\displaystyle \int _0^t\sigma (y_n(s))\mathrm{d}x_n(s)\) converges to \(\displaystyle \int _0^t\sigma (y(s))\mathrm{d}x(s)\) in X as n tends to \(+\infty \). Hence, \(\displaystyle \int _0^ty_n(s)\mathrm{d}s\) and
converge, as n tends to \(+\infty \), to \(\displaystyle \int _0^ty(s)\mathrm{d}s\) and \(\displaystyle y(t)-\psi -\int _0^t\sigma (y(s))\mathrm{d}x(s)\), respectively, for every \(t\in [0,T]\). Since A is a closed operator, it follows that
Finally, since \(\Vert Ay(t)\Vert _X\le ct^{\lambda -1}\) for every \(t\in (0,T]\) (see (3.4) with \(\mu =0\)), we conclude that Ay belongs to \(L^1((0,T);X)\). Hence, \(\displaystyle A\int _0^ty(s)\mathrm{d}s=\int _0^tAy(s)\mathrm{d}s\), which gives
The arbitrariness of \(t\in [0,T]\) yields the assertion. \(\square \)
Corollary 4.1
Let \(\sigma _i:X\rightarrow X\) \((i=1,\ldots ,m)\) satisfy Hypothesis 3.1 and let the paths \(x_i\) \((i=1,\ldots ,m)\) belong to \(C^{\eta }([0,T])\). Then, the unique mild solution y to (3.38) with \(\psi \in X_\alpha \), where \(\alpha +\eta >1\), satisfies the equation
Proof
The statement follows from Remark 3.2, and by repeating the computations in this section. \(\square \)
5 Chain rule for nonlinear Young equations
In this section we use the integral representation formula (4.1) of the unique mild solution y to problem (3.2) to prove a chain rule for \(F(\cdot ,y(\cdot ))\), where F is a smooth function.
Theorem 5.1
Let \(F\in C^1([0,T]\times X)\) be such that \(F_x\) is \(\alpha \)-Hölder continuous with respect to t, locally uniformly with respect to x, and locally \(\gamma \)-Hölder continuous with respect to x, uniformly with respect to t, for some \(\alpha , \gamma \in (0,1)\) such that \(\eta +\alpha \gamma >1\). Further, let y be the unique mild solution to (3.2). Then,
for every \((s,t)\in [0,T]^2_{<}\).
Proof
Let us fix \(0<s<t\le T\) and a sequence \((\varPi _n(s,t))\) of partitions \(\varPi _n(s,t)=\{s=s_0^n<s_1^n<\ldots <s_{m_n}^n=t\}\) of [s, t], whose mesh size converges to zero, and note that
where \(\varDelta y_j=y(s^n_j)-y(s^n_{j-1})\), \(\varDelta s^n_j=s^n_j-s^n_{j-1}\), \({\tilde{s}}^n_j=s^n_{j-1}+\theta _j^n(s^n_j-s^n_{j-1})\), \({\tilde{y}}_j=y(s^n_{j-1})+\eta _j^n(y(s^n_j)-y(s^n_{j-1}))\) and \(\theta _j^n,\eta _j^n\in (0,1)\) are obtained from the mean-value theorem, for every \(j=1,\ldots ,m_n\).
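Up to the precise location of the intermediate points \({\tilde{s}}^n_j\) and \({\tilde{y}}_j\), the four terms analyzed below are of the form
$$\begin{aligned} I_{1,n}&=\sum _{j=1}^{m_n}F_t(s^n_{j-1},y(s^n_{j-1}))\varDelta s^n_j,&I_{2,n}&=\sum _{j=1}^{m_n}[F_t({\tilde{s}}^n_j,y(s^n_j))-F_t(s^n_{j-1},y(s^n_{j-1}))]\varDelta s^n_j,\\ I_{3,n}&=\sum _{j=1}^{m_n}\langle F_x(s^n_{j-1},y(s^n_{j-1})),\varDelta y_j\rangle ,&I_{4,n}&=\sum _{j=1}^{m_n}\langle F_x(s^n_{j-1},{\tilde{y}}_j)-F_x(s^n_{j-1},y(s^n_{j-1})),\varDelta y_j\rangle , \end{aligned}$$
so that \(F(t,y(t))-F(s,y(s))=I_{1,n}+I_{2,n}+I_{3,n}+I_{4,n}\) for every \(n\in {\mathbb {N}}\).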
Analysis of the terms \(I_{1,n}\) and \(I_{2,n}\). Since the function \(s\mapsto F_t(s,y(s))\) is continuous in [0, T], \(I_{1,n}\) converges to \(\displaystyle \int _s^tF_t(u,y(u))\mathrm{d}u\) as n tends to \(+\infty \). Moreover, since y([0, T]) is a compact subset of X, the restriction of the function \(F_t\) to \([0,T]\times y([0,T])\) is uniformly continuous. Thus, for every \(\varepsilon >0\) there exists a positive constant \(\delta \) such that \(|F_t(t_2,x_2)-F_t(t_1,x_1)|\le \varepsilon \) if \(|t_2-t_1|^2+|x_2-x_1|^2\le \delta ^2\). As a byproduct, it follows that, if \(|\varPi _n(s,t)|\le \delta \), then \(|I_{2,n}|\le \varepsilon \sum _{j=1}^{m_n}\varDelta s^n_j=\varepsilon (t-s)\), and this shows that \(I_{2,n}\) converges to 0 as n tends to \(+\infty \).
Analysis of the term \(I_{3,n}\). Using (4.1) we can write (see Remark 2.3)
for \(j=1,\ldots ,m_n\). By assumption, the function \(s\mapsto F_x(s,y(s))\) is continuous with values in \(X'\). Similarly, by Theorem 3.1 the function Ay is continuous in (0, T]. Indeed, y belongs to \(E_{\eta +\alpha -\mu }([s,T];X_{\mu })\) for every \(\mu \in [\eta +\alpha -1,\eta +\alpha )\). Taking \(\mu =1\), we deduce that \(\Vert (\delta _Sy)(u,w)\Vert _{X_1}\le c|w-u|^{\eta +\alpha -1}\) for every \((u,w)\in [s,t]_<^2\) and some positive constant c, independent of u and w. Hence,
Choosing \(\mu =1+\rho \) for some \(\rho <\eta +\alpha -1\) and using (2.1)(b) we get
Therefore, Ay is \(\rho \)-Hölder continuous in \([\varepsilon ,T]\) for any \(\varepsilon \in (0,T)\). Since \(s>0\), it follows that \(u\mapsto Ay(u)\) is continuous in [s, T], and we thus conclude that
and
for every \(j=1,\ldots ,m_n\), so that
and the right-hand side of the previous inequality vanishes as n tends to \(+\infty \).
Let us consider the third term in the right-hand side of (5.1). From Theorem 2.1 and recalling that \(\alpha +\eta >1\), we infer that
for \(j=1,\ldots ,m_n\). Hence,
Letting n tend to \(+\infty \) gives
To conclude the study of \(I_{3,n}\) it remains to consider the term
For this purpose, we introduce the function \(g:[s,t]\rightarrow {\mathbb {R}}\), defined by \(g(\tau )=\langle F_x(\tau ,y(\tau )),\sigma (y(\tau ))\rangle \) for every \(\tau \in [s,t]\). Let us prove that \(g\in C^{\alpha \gamma }([s,t])\). To this aim, we recall that
Hence, we can estimate
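Splitting the increment of g into an increment of \(F_x\) and an increment of \(\sigma \), and using the Hölder continuity assumptions on \(F_x\), the local Lipschitz continuity of \(\sigma \) and the \(\alpha \)-Hölder continuity of y with values in X (here c denotes a constant that may change from line to line), the bound takes the form
$$\begin{aligned} |g(\tau _2)-g(\tau _1)|&\le |\langle F_x(\tau _2,y(\tau _2))-F_x(\tau _1,y(\tau _1)),\sigma (y(\tau _2))\rangle |+|\langle F_x(\tau _1,y(\tau _1)),\sigma (y(\tau _2))-\sigma (y(\tau _1))\rangle |\\&\le c\big (|\tau _2-\tau _1|^{\alpha }+\Vert y(\tau _2)-y(\tau _1)\Vert _X^{\gamma }+\Vert y(\tau _2)-y(\tau _1)\Vert _X\big )\le c|\tau _2-\tau _1|^{\alpha \gamma } \end{aligned}$$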
for every \(\tau _1,\tau _2\in [s,t]\), which shows that g is \(\alpha \gamma \)-Hölder continuous in [s, t]. Since \(\eta +\gamma \alpha >1\), we can apply Theorem 2.1 which implies that
where the integral is well-defined as Young integral. From (5.2)–(5.5) we conclude that
To complete the proof, we observe that \(I_{4,n}\) converges to 0 as n tends to \(+\infty \). This property can be checked by arguing as we did for the term \(I_{2,n}\), noting that \(F_x\) is uniformly continuous in \([0,T]\times y([0,T])\).
Summing up, we have proved that
for every \(0<s<t\le T\). As s tends to \(0^+\), the left-hand side of (5.6) converges to \(F(t,y(t))-F(0,y(0))\). As far as the right-hand side is concerned, the first and the third term converge to the corresponding integrals over [0, t] since the functions \(u\mapsto F_t(u,y(u))\) and \(u\mapsto F_x(u,y(u))\) are continuous in [0, T]. As far as the second term in the right-hand side of (5.6) is concerned, thanks to (3.4) with \(\mu =0\) we can apply the dominated convergence theorem which yields the convergence to the integral over (0, t). The assertion in its full generality follows. \(\square \)
The same arguments as in the proof of Theorem 5.1 and Corollary 4.1 give the following result.
Corollary 5.1
Let \(\sigma _i:X\rightarrow X\) \((i=1,\ldots ,m)\) satisfy Hypothesis 3.1, let the paths \(x_i\) \((i=1,\ldots ,m)\) belong to \(C^{\eta }([0,T])\), and let y be the unique mild solution to (3.38) with \(\psi \in X_\alpha \), where \(\alpha +\eta >1\). Then, for any function F satisfying the assumptions in Theorem 5.1 it holds that
for every \((s,t)\in [0,T]^2_{<}\).
As an immediate application of the chain rule, we provide necessary conditions, in the context of Hilbert spaces, for the invariance of the set \(K=\{x\in X: \langle x,\varphi \rangle \le 0\}\) for the mild solution y to (3.38), where invariance means that, if \(\psi \in X_\alpha \cap K\), then y(t) belongs to K for any \(t\in [0,T]\). For this purpose, we assume that \(A:D(A)\subset X\rightarrow X\) is a self-adjoint nonpositive closed operator which generates an analytic semigroup of bounded linear operators \((S(t))_{t\ge 0}\) on X, and that the results so far proved hold true with \(X_\zeta =D((-A)^\zeta )\) for any \(\zeta \ge 0\).
Proposition 5.1
Let Hypotheses 2.1, 2.2, 3.1 be fulfilled with \(\eta +\alpha >1\). Let \(\varphi \in X_\varepsilon \) for some \(\varepsilon \in [0,1)\), \(\psi \in X_\zeta \) for some \(\zeta \in [\alpha ,1)\) and let \(K:=\{x\in X:\langle x,\varphi \rangle \le 0\}\) be invariant for y. The following properties are satisfied.
- (i) If \(\psi \in \partial K\) and \(\eta \le \zeta +\varepsilon \le 1\), then
$$\begin{aligned} \limsup _{t\rightarrow 0^+}t^{-\beta }\sum _{i=1}^m\langle \varphi ,\sigma _i(\psi )\rangle (x_i(t)-x_i(0))\le 0, \end{aligned}$$ for \(\beta \in [\eta ,\zeta +\varepsilon )\), and
$$\begin{aligned} \sup _{\lambda >0}\limsup _{t\rightarrow 0^+}t^{-\beta }\displaystyle \bigg (&-\int _0^t\frac{(\lambda +\langle \varphi ,y(s)\rangle )_+}{\lambda }\langle (-A)^\varepsilon \varphi ,(-A)^{1-\varepsilon }y(s)\rangle \mathrm{d}s \\&\quad +\sum _{i=1}^m\langle \varphi ,\sigma _i(\psi )\rangle (x_i(t)-x_i(0))\bigg )\le 0, \end{aligned}$$ for \(\beta \in [\zeta +\varepsilon ,1]\).
- (ii) If \(y(t_0)\in \partial K\) for some \(t_0\in [0,T)\) and \(\zeta +\varepsilon >1\), then
$$\begin{aligned} \limsup _{t\rightarrow t_0^+}|t-t_0|^{-\beta }\sum _{i=1}^m\langle \varphi ,\sigma _i(y(t_0))\rangle (x_i(t)-x_i(t_0))\le 0, \end{aligned}$$ if \(\beta \in [\eta ,1)\), and
$$\begin{aligned}&\limsup _{t\rightarrow t_0^+}|t-t_0|^{-1}\sum _{i=1}^m\langle \varphi ,\sigma _i(y(t_0))\rangle (x_i(t)-x_i(t_0))\\&\quad -\langle (-A)^{\varepsilon }\varphi , (-A)^{1-\varepsilon }y(t_0)\rangle \le 0, \end{aligned}$$ if \(\beta =1\).
Remark 5.1
If \(t_0>0\) in (ii) then \(y(t)\in X_{1+\mu }\) for any \(\mu \in [0,\eta +\alpha -1)\) and \(t\in (0,T]\) (see Theorem 3.1). Hence, the condition \(\zeta +\varepsilon >1\) is automatically satisfied.
Proof
For any \(\lambda >0\) we introduce the function \(F_{\lambda }:X\rightarrow {\mathbb {R}}\), defined by \(F_\lambda (x):=(\lambda +\langle \varphi ,x\rangle )^2_+\) for any \(x\in X\). As is easily seen, each function \(F_\lambda \) belongs to \(C^{1,1}(X)\) and \(DF_\lambda (x)=2(\lambda +\langle \varphi ,x\rangle )_+\varphi \) for any \(x\in X\). Further, for any \(x\in K\) it holds that \(F_\lambda (x)\le \lambda ^2\), and \(F_\lambda (x)=\lambda ^2\) if and only if \(x\in \partial K\). Fix \(t_0\in [0,T)\). If \(y(t_0)\in \partial K\), then from (5.7) it follows that
for any \(t\in [t_0,T]\) and any \(\lambda >0\), where in the equality we have used (2.3) with \(f(t)=(\lambda +\langle \varphi ,y(t)\rangle )_+\langle \varphi ,\sigma _i(y(t))\rangle \) for \(t\in [0,T]\). We recall that from (2.4) it follows that \({{\mathscr {R}}}_f(t_0,t)=o(|t-t_0|)\) as \(t\rightarrow t_0^+\), and from Remark 3.1(ii) we know that \(y\in C((0,T];X_\mu )\) for any \(\mu \in [0,\eta +\alpha )\).
Now, we separately consider the cases (i) and (ii).
\(\mathbf{(i)}\). Fix \(\psi \in \partial K\cap X_\zeta \) with \(\eta \le \zeta +\varepsilon \le 1\). From (3.36) with \(r=1-\varepsilon \) we infer that there exists a positive constant c, independent of s, such that
Let \(\beta \in [\eta ,1]\). Dividing both sides of (5.8) (with \(t_0=0\)) by \(t^\beta \) and \(\lambda \), and taking (5.9) into account, the assertion follows easily.
\(\mathbf{(ii)}\). Fix \(t_0\in [0,T)\) with \(y(t_0)\in \partial K\) and \(\zeta +\varepsilon >1\). Since \(1-\varepsilon <\zeta \), y is continuous up to 0 with values in \(X_{1-\varepsilon }\). Indeed, from (2.1)(b) we get \(\Vert S(t)\psi -\psi \Vert _{X_{1-\varepsilon }}\le Ct^{\zeta +\varepsilon -1}\) for every \(t\in [0,T]\), and from (3.3) and the smoothness of \({\mathscr {I}}_{S(\sigma \circ y)}\) at \(t=0\) we infer that \(y\in C_b([0,T];X_{1-\varepsilon })\). As a consequence, the function \(s\mapsto (\lambda +\langle \varphi ,y(s)\rangle )_+\langle (-A)^\varepsilon \varphi ,(-A)^{1-\varepsilon }y(s)\rangle \) belongs to \(C_b([0,T])\). Let \(\beta \in [\eta ,1]\). Dividing (5.8) by \(|t-t_0|^\beta \) and \(\lambda \), and letting \(t\rightarrow t_0^+\), the assertion follows also in this case. \(\square \)
6 Examples
In this section, we provide two examples to which our results apply. We consider the second-order elliptic operator \({\mathcal {A}}\), defined by
Example 6.1
Let us assume that the coefficients of the operator \({\mathcal {A}}\) are bounded and \(\beta \)-Hölder continuous on \({\mathbb {R}}^d\), for some \(\beta \in (0,1)\), and \(\sum _{i,j=1}^dq_{ij}(x)\xi _i\xi _j\ge \mu |\xi |^2\) for every \(x,\xi \in {\mathbb {R}}^d\) and some positive constant \(\mu \).
Let A be the realization of operator \({\mathcal {A}}\) in \(X=C_b({\mathbb {R}}^d)\) with domain
For every \(\alpha \in (0,2)\setminus \{1/2,1\}\), we take \(X_{\alpha }=C^{2\alpha }_b({\mathbb {R}}^d)\) endowed with the classical norm of \(C^{2\alpha }_b({\mathbb {R}}^d)\). Moreover, we take as \(X_{1/2}\) the Zygmund space of all bounded functions \(g:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) such that
endowed with the norm \(\Vert g\Vert _{X_{1/2}}=\Vert g\Vert _{\infty }+[g]_{X_{1/2}}\). It is well known that A generates an analytic semigroup on \(C_b({\mathbb {R}}^d)\) and \(X_{\alpha }\) is the interpolation space of order \(\alpha \) between X and \(X_1=D(A)\). We refer the reader to, e.g., [11, Chapters 3 and 14]. Finally, we fix a function \({\hat{\sigma }}\in C^2_b({\mathbb {R}})\) and note that the function \(\sigma :X\rightarrow X\), defined by \(\sigma (f)={\hat{\sigma }}\circ f\), satisfies Hypothesis 3.1 for every \(\alpha \in (0,1/2)\), with \(L_{\sigma }^{\alpha }=\Vert \hat{\sigma } \Vert _{\mathrm {Lip}({\mathbb {R}})}\). Since the assumptions of Theorem 3.1 are satisfied, we conclude that, for every \(\psi \in C^{\alpha }_b({\mathbb {R}}^d)\) (\(\alpha \in (0,1)\)), there exists a unique mild solution y to problem (3.2), which takes values in D(A).
Example 6.2
Let \(\varOmega \subset {\mathbb {R}}^n\) be a bounded open domain with \(C^2\)-boundary and assume that the coefficients \(q_{ij}\) (\(i,j=1,\ldots ,d\)) of the operator \({\mathcal {A}}\) are uniformly continuous in \(\varOmega \), whereas the other coefficients are in \(L^{\infty }(\varOmega )\). We further assume that \(\sum _{i,j=1}^dq_{ij}(x)\xi _i\xi _j\ge \mu |\xi |^2\) for every \(x\in \varOmega \), \(\xi \in {\mathbb {R}}^d\) and some positive constant \(\mu \) and, for \(p\in (1,+\infty )\), we denote by \(A_p\) the realization of the operator \({\mathcal {A}}\) in \(X=L^p(\varOmega )\) with homogeneous Dirichlet boundary conditions, with domain \(D(A_p)=W^{2,p}(\varOmega )\cap W^{1,p}_0(\varOmega )\). It is well-known that \(A_p\) generates an analytic semigroup on \(L^p(\varOmega )\). We assume that \(p>n\). Hence, \(n/(2p)<1/2\). It is also well-known that, for any \(\alpha \in (1/(2p),1)\),
where with \(W^{2\alpha ,p}_0(\varOmega )\) we denote the fractional Sobolev space of order \(2\alpha \) with null trace on \(\partial \varOmega \). We set \(X_\alpha :=W^{2\alpha ,p}_0(\varOmega )\).
Let us fix \(f\in C^2_b({\mathbb {R}})\) with \(f(0)=0\) and let us define the function \(\sigma \) by setting \(\sigma \circ y(\cdot )=f(y(\cdot ))\) for any \(y\in L^p(\varOmega )\). It is not hard to show that \(\sigma :X\rightarrow X\) is Fréchet differentiable with bounded and locally Lipschitz Fréchet derivative. We claim that for \(\alpha \in (n/(2p),1/2)\) the function \(\sigma \) satisfies condition (3.1) and it is locally Lipschitz continuous in \(X_\alpha \). Let us notice that \(\sigma \circ y\in W^{2\alpha ,p}_0(\varOmega )\) for any \(y\in W^{2\alpha ,p}_0(\varOmega )\). Let us denote by \(\sigma '\) the Fréchet derivative of \(\sigma \). For any \(y,h\in X_\alpha \) it holds that \((( \sigma '\circ y)h)(\xi )=f'(y(\xi ))h(\xi )\) for almost every \(\xi \in \varOmega \). Hence,
Further, we recall that since \(2\alpha p>n\), it follows that \(W^{2\alpha ,p}(\varOmega )\subset C(\overline{\varOmega })\). Therefore,
where \(c_1\) is a positive constant which depends on the \(C^2_b({\mathbb {R}})\)-norm of f. It follows that \(\Vert (\sigma '\circ y)h\Vert _{X_\alpha }\le c_2\Vert h\Vert _{X_\alpha }(1+\Vert y\Vert _{X_\alpha })\) for every \(y,h\in X_\alpha \), and some positive constant \(c_2\), so that \(\sigma \) is locally Lipschitz continuous on \(X_\alpha \). Hence, the assumptions of Theorem 3.1 are satisfied, and problem (3.2) admits a unique mild solution y, for any \(\psi \in X_\alpha \), such that \(y(t)\in D(A_p)\) for every \(t\in (0,T]\).
References
[1] P. Cannarsa, G. Da Prato, Stochastic viability for regular closed sets in Hilbert spaces, Rend. Lincei Mat. Appl. 22, 337–346 (2011).
[2] L. Coutin, N. Marie, Invariance for rough differential equations, Stochastic Process. Appl. 127, 2373–2395 (2017).
[3] A. Deya, M. Gubinelli, S. Tindel, Non-linear rough heat equations, Probab. Theory Related Fields 153, 97–147 (2012).
[4] K.-J. Engel, R. Nagel, One-parameter semigroups for linear evolution equations, Springer, New York (2000).
[5] P. Friz, M. Hairer, A course on rough paths. With an introduction to regularity structures, Universitext, Springer, Cham (2014).
[6] M. Gubinelli, Controlling rough paths, J. Funct. Anal. 216, 86–140 (2004).
[7] M. Gubinelli, A panorama of singular SPDEs, in: Proceedings of the International Congress of Mathematicians (ICM 2018), World Scientific (2019).
[8] M. Gubinelli, A. Lejay, S. Tindel, Young integrals and SPDEs, Potential Anal. 25, 307–326 (2006).
[9] M. Gubinelli, S. Tindel, Rough evolution equations, Ann. Probab. 38, 1–75 (2010).
[10] A. Lejay, An introduction to rough paths, in: Séminaire de Probabilités XXXVII, Lecture Notes in Mathematics 1832, 1–59, Springer, Berlin (2003).
[11] L. Lorenzi, A. Rhandi, Semigroups of bounded operators and second-order elliptic and parabolic partial differential equations, CRC Press (2021).
[12] A. Lunardi, Analytic semigroups and optimal regularity in parabolic problems, Modern Birkhäuser Classics, Birkhäuser/Springer Basel AG, Basel (1995).
[13] T.J. Lyons, Differential equations driven by rough signals, Rev. Mat. Iberoam. 14, no. 2, 215–310 (1998).
[14] B. Maslowski, D. Nualart, Evolution equations driven by a fractional Brownian motion, J. Funct. Anal. 202, 277–305 (2003).
[15] L.C. Young, An inequality of the Hölder type, connected with Stieltjes integration, Acta Math. 67, no. 1, 251–282 (1936).
[16] M. Zähle, Integration with respect to fractal functions and stochastic calculus. I, Probab. Theory Related Fields 111, 333–374 (1998).
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Addona, D., Lorenzi, L. & Tessitore, G. Regularity results for nonlinear Young equations and applications. J. Evol. Equ. 22, 3 (2022). https://doi.org/10.1007/s00028-022-00757-y
Keywords
- Nonlinear Young equations
- Mild solutions and their smoothness
- Integral representation formula
- Semigroups of bounded operators
- Invariance property