Regularisation by regular noise

We show that perturbing ill-posed differential equations with (potentially very) smooth random processes can restore well-posedness, even if the perturbation is (potentially much) more regular than the drift component of the solution. The noise considered is of fractional Brownian type, and the familiar regularity condition α > 1 − 1/(2H) is recovered for all non-integer H > 1.


Introduction
Consider the stochastic differential equation ( . ). For Hurst parameter H ∈ (0, 1), fractional Brownian motions can be defined via the Mandelbrot–van Ness representation [MvN ] ( . ), where W is a two-sided d-dimensional standard Brownian motion on some probability space (Ω, F, ℙ). The complete filtration generated by the increments of W is denoted by 𝔽 = (F_t)_{t∈ℝ}, with respect to which the notion of adaptedness is understood in the sequel, unless otherwise specified. Since ( . ) is a fractional integration, it is natural to extend this scale of random processes to parameters H ∈ (1, ∞) \ ℤ by ( . ). It is a well-studied phenomenon that adding a noise term to an ill-posed differential equation can provide regularisation effects. That is, while for α < 1 and b ∈ C^α the equation x′ = b(x) is in general not well-posed, upon perturbing it with some random process (Brownian, fractional Brownian, or Lévy-type), one obtains a well-posed equation thanks to the oscillations of the noise. It might seem counterintuitive to expect similar effects from the processes B^H for large H. Indeed, in the regime H > 2 this would mean that the noise component of ( . ) is more regular (or, less oscillatory) than the drift component. Therefore it may be surprising that nontrivial regularisation effects occur for all H ∈ (1, ∞) \ ℤ.
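To make the objects above concrete, here is a small numerical sketch (our own illustration, not part of the paper's argument; all function names and parameters are ours): fractional Brownian motion on a grid is sampled exactly in law from its covariance R(s, u) = ½(s^{2H} + u^{2H} − |s − u|^{2H}) via a Cholesky factorisation, and a representative of the extended, smoother scale is obtained by integrating an fBm path in time, mirroring the fractional-integration extension.

```python
import numpy as np

def fbm_cov(t, H):
    # Covariance R(s, u) = 0.5 * (s^{2H} + u^{2H} - |s - u|^{2H}) on the grid t
    s, u = np.meshgrid(t, t, indexing="ij")
    return 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))

def fbm_sample(n, H, T=1.0, seed=None):
    # Exact-in-law sample of (B^H_{T/n}, ..., B^H_T) via Cholesky factorisation;
    # O(n^3), fine for small grids. A tiny jitter keeps the factorisation stable.
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)  # B^H_0 = 0 is omitted from the grid
    L = np.linalg.cholesky(fbm_cov(t, H) + 1e-10 * np.eye(n))
    return t, L @ rng.standard_normal(n)

# Extended scale: integrating an fBm of index H in time produces a smoother
# process, in the spirit of the fractional-integration definition for H + 1.
t, b_path = fbm_sample(256, H=0.6, seed=1)
smooth = np.concatenate([[0.0], np.cumsum(b_path[:-1] * np.diff(t))])
```

The Cholesky method is exact but cubic in the grid size; it is only meant to make the covariance structure tangible, not to be an efficient sampler.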
Before stating the main result, let us clarify the solution concept: given a stopping time τ, a strong solution of ( . ) up to time τ is an adapted process x such that the equality ( . ) holds almost surely for all t ∈ [0, τ]. Two solutions are understood to be equivalent if they are equal almost surely for all t ∈ [0, τ]. We then have the following theorem: assume H ∈ (0, ∞) \ ℤ and b ∈ C^α, where ( . ) holds. Then ( . ) has a unique strong solution up to time 1.
The well-posedness of ( . ) for H ∈ (0, 1) under the condition ( . ) is well known in the literature. This condition first appeared in [NO ] in the scalar, H ∈ (1/2, 1) case and later in [CG ] in the multidimensional, H ∈ (0, 1) case. In the multidimensional, H ∈ (1/2, 1) case a simpler proof was given in [BDG ], relying on various sewing lemmas. The present approach is simpler still (for example, Girsanov transformation is fully avoided), and uses only the stochastic sewing lemma introduced in [Lê ]. In the regime H ∈ (0, 1/2), the condition ( . ) allows for negative α, that is, distributional drift, which makes the interpretation of the integral in ( . ) nontrivial. While this is also possible with stochastic sewing (see e.g. [BDG , Cor . ] for a step towards this), since the novelty of Theorem . is in the other direction (H > 1), we do not pursue this here.
Instead of thinking of ( . ) as an equation driven by "regular" noise, one can equivalently view it as a "degenerate" SDE ( . ). For H = n + 1/2, n ∈ ℕ, the driving noise in ( . ) is standard Brownian, and therefore techniques based on the Zvonkin–Veretennikov transformation [Zvo , Ver ] apply. This approach was explored recently in [Cha , CHM ], where the authors prove well-posedness under the condition ( . ) with an elaborate PDE-based proof.
Let us conclude the introduction by spelling out some trivial equivalences. First, ( . ) can be rewritten as ( . ). Second, a solution x of ( . ) can be written as x = ψ + B^H, where ψ is a fixed point of the map T given by ( . ). Unlike the classical notion of uniqueness considered here, [CG ] proves uniqueness in the so-called path-by-path sense, which is a stronger notion and does not easily follow from stochastic sewing. Avoiding the Girsanov transformation is not mere convenience but a necessity: one simply cannot expect it to work, since the law of the solution is singular with respect to the law of B^H as soon as H > 1 + α.

In Section , Theorem . is proved by showing that (a stopped version of) T is a contraction on some space. The main tools for the proof are set up in Section , more precisely in Lemmas . and . .

Remark .
A heuristic power counting argument for ( . ) can be given as follows. For strong uniqueness one would like the map ψ ↦ T(ψ) to be Lipschitz with a small constant.
A more modest goal would be to try to show that x ↦ T(x) is Lipschitz in x ∈ ℝ^d. For starters, it is trivial that this map is Lipschitz in x. Thanks to the presence of B^H one can use stochastic sewing to trade time and space regularity at the exchange rate x ∼ t^H given by the scaling of B^H. However, we can only trade as long as we remain with more than 1/2 regularity in time (this is the "1/2 condition" of the stochastic sewing lemma), which means that the available gain in regularity is 1/(2H).

Notation and basics
The conditional expectation given F_s is denoted by E_s. We often use the conditional Jensen's inequality (CJI) ‖E_s X‖_{L^p(Ω)} ≤ ‖X‖_{L^p(Ω)} for p ∈ [1, ∞], and the following elementary inequality for any p ∈ [1, ∞], X ∈ L^p(Ω), and F_s-measurable Y: ( . ). For 0 ≤ S < T ≤ 1 and ℝ^d-valued functions on [S, T] we introduce the (semi-)norms ( . ). When the domain is ℝ instead of [S, T], we simply drop the domain from the notation. When considering functions with values in L^p(Ω) instead of ℝ^d, p ∈ [1, ∞], we denote the corresponding spaces and (semi-)norms analogously. If ψ is an adapted process, γ > 0, 0 ≤ s ≤ t ≤ 1, one may choose Y in ( . ) to be the value at t of the Taylor expansion of ψ at s up to order ⌊γ⌋, yielding the bound ( . ). Combining with CJI (or with the triangle inequality) one also gets, for 0 ≤ s < u < t ≤ 1, ( . ). We will also use the following simple property. Let ψ be some random process and ψ̂ a stopped version of it, that is, ψ̂_t = ψ_{t∧τ} for some stopping time τ. Then, for any γ ∈ (0, 1), ε ∈ (0, 1 − γ), and p ∈ (1/ε, ∞), there exists a constant depending on γ, ε, and p, such that for all 0 < T ≤ 1 one has ( . ). Indeed, the first two inequalities are trivial, and the third follows from Kolmogorov's continuity theorem.
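For orientation, the (semi-)norms in question are presumably the standard Hölder-type ones; in the notation used below (our reconstruction, not verbatim from the original):

```latex
[f]_{\mathscr{C}^{\gamma}_{[S,T]}}
  = \sup_{S \le s < t \le T} \frac{|f_t - f_s|}{|t-s|^{\gamma}},
\qquad
\|f\|_{\mathscr{C}^{\gamma}_{[S,T]}}
  = \sup_{t \in [S,T]} |f_t| + [f]_{\mathscr{C}^{\gamma}_{[S,T]}},
```

with |·| replaced by the L^p(Ω)-norm in the L^p(Ω)-valued case.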

The "local nondeterminism" property of the processes B^H for H ∈ (0, 1) is a key feature behind their regularisation properties, and it also holds in the extended scale.

Proposition . . For any H ∈ (0, ∞) \ ℤ there exists a constant c = c(H) such that for all ( . ).
Proof. Since the coordinates of B^H are independent, we may and will assume d = 1. For H ∈ (0, 1) this is well known, so we assume ⌊H⌋ ≥ 1. From ( . ) and ( . ) we have ( . ). Therefore, by Itô's isometry, ( . ). □
For any ε > 0, the C^{H−ε}-seminorm of B^H on [0, 1] is finite almost surely. Indeed, this is well known for H ∈ (0, 1) and immediately follows for H > 1 from the definition ( . ), which is possible thanks to ( . ). For any K > 0, define the stopping times ( . ). From now on we consider the parameter K to be fixed and prove the well-posedness of ( . ) up to τ. Correspondingly, we take the following modification of the map T from ( . ): ( . ). Denote by S_0 the set of Lipschitz continuous and adapted processes, and for n ∈ ℕ introduce the set of Picard iterates S_n = T^n(S_0). We denote by P_t the convolution with the Gaussian density whose covariance matrix is t times the identity. In light of ( . ), it is natural to use the reparametrisation 𝒫_t := P_{c(H) t^{2H}}. One then has the identity E_s f(B^H_t) = (𝒫_{t−s} f)(E_s B^H_t). Let us recall two well-known heat kernel estimates: for β ∈ [0, 1], δ ≤ 1, t ∈ (0, 1], one has the bounds, with some constant depending only on β, δ, d, ( . ).
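For H ∈ (0, 1), the local nondeterminism statement of the proposition above can be checked directly from the Mandelbrot–van Ness representation; the following one-line sketch (in standard notation, with c_H the normalisation constant of that representation) is our own rendering of the well-known computation:

```latex
B^H_t - \mathbb{E}_s B^H_t = c_H \int_s^t (t-r)^{H-1/2}\, dW_r,
\qquad\text{so by the It\^o isometry}\qquad
\mathbb{E}_s \bigl| B^H_t - \mathbb{E}_s B^H_t \bigr|^2
  = c_H^2 \int_s^t (t-r)^{2H-1}\, dr
  = \frac{c_H^2}{2H}\,(t-s)^{2H}.
```

The conditional variance over [s, t] thus depends only on t − s, exactly as in the statement.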

In the sequel we write A ≲ B to denote that there exists a constant C such that A ≤ CB, with C depending only on (a subset of) the parameters appearing in the respective statement (such as H, α, d, and p).

Stochastic sewing lemma
A main tool in the proof is the stochastic sewing lemma introduced in [Lê ], which however needs to be suitably tweaked to avoid some singular integrals in the application. Let us first briefly recall the strategy of [Lê ]. Given a filtration 𝔾 = (G_k)_{k≥0} and a 𝔾-adapted sequence of random variables (Z_k)_{k≥1}, one can write, for p ∈ [2, ∞), the estimate ( . ), where the second sum was estimated via the Burkholder–Davis–Gundy and Minkowski inequalities. The former can be applied, since the sequence (Z_k − E[Z_k | G_{k−1}])_{k≥1} is one of martingale differences. Now assume that (A_{s,t})_{0≤s<t≤1} is a family of random variables such that A_{s,t} is F_t-measurable. If one is interested in the convergence of the Riemann sums Σ_i A_{t_i, t_{i+1}}, then the classical sewing ideas suggest to consider ( . ), where δA_{s,u,t} = A_{s,t} − A_{s,u} − A_{u,t}. This sum fits into the context of ( . ), with the filtration given by the σ-algebras F at the dyadic partition points. This leads to the requirement in the stochastic sewing lemma of bounding E_s δA_{s,u,t}, see [Lê , Eq ( . )]. It can be very useful to shift the conditioning backwards in time. In many applications A_{s,t} is actually F_s-measurable, which by the above considerations leads to bounding E_{s−(t−s)/2} δA_{s,u,t}. Even if this is not the case, one can achieve such a shift by bounding the "even" and "odd" parts of the sum in ( . ) separately, noting that both are sequences of martingale differences. To make the formulation cleaner, we assume an extended filtration (G_k)_{k≥−1}, with which one has the following variant of ( . ). Since separating the sum of the Z_k's into 2 was of course arbitrary, one can get the same bound with G_{k−ℓ} in place of G_{k−2} above for any ℓ, as long as the filtration extends to negative indices up to −ℓ + 1. Another technical modification that we will need is to restrict the triples (s, u, t) on which δA_{s,u,t} is considered to triples where the distances between each pair are comparable. To this end define, for 0 ≤ S < T ≤ 1 and M ≥ 0, ( . ). The version of the stochastic sewing lemma suitable for our purposes then reads as follows. Let M ≥ 0, and let (A_{s,t}) be a family of random variables in L^p(Ω, ℝ^d) indexed by pairs (s, t) in [S, T]^2 such that A_{s,t} is F_t-measurable. Suppose that for some ε_1, ε_2 > 0 and C_1, C_2 the bounds ( . ) hold for all admissible pairs (s, t) and triples (s, u, t) in [S, T].
Proof. The proof essentially follows [Lê ], keeping in mind the preceding remarks, and making sure that one only ever uses suitably regular partitions. For the present proof we understand the proportionality constant in ≲ to depend on ε_1, ε_2, p, d, M. Take (s, t) ∈ [S, T]^2 and notice that ( . ). We claim the following.

The first term is simply bounded via ( . ). For the sum we use ( . ) and the succeeding remark. Note that A_{t_i, t_{i+1}} is F_{t_{i+1}}-measurable, and that for large enough ℓ = ℓ(M) (uniform in π ∈ Π, i, and n) the required shift of the conditioning is available. Indeed, this is a consequence of the regularity of the partitions in Π. Therefore, the sum can be decomposed as the sum of ℓ different sums of martingale differences, each of which can be bounded as in ( . ). From the bounds ( . )–( . ) one can conclude that ( . )–( . ) hold for π′ = π^{(k)}. Now suppose that π′ is the trivial partition {s, t} and define k as the smallest integer such that π^{(k)} = π′. One can write a telescoping sum; this yields ( . ) (and one gets similarly ( . )) for π′ = {s, t}. Finally, if π′ = {t_0, . . ., t_m} ∈ Π is arbitrary and π ∈ Π is a refinement of it, define π_i to be the restriction of π to [t_i, t_{i+1}]; each term in the sum is of the form that fits the previous case, and so admits bounds of the form ( . )–( . ). Using these bounds, the sum is treated just like the one in ( . ), using ( . ). Hence, (i) is proved.
Remark . . The usefulness of these small shifts can be illustrated with the following analogy. To have ∫_s^t (r − s)^β dr ≲ (t − s)^{β+1}, one requires β > −1. However, upon shifting the integrand, ∫_s^t (r − (s − (t − s)))^β dr ≲ (t − s)^{β+1} is true for all β ∈ ℝ. This example is actually more than an analogy: in ( . ) below we encounter such integrals, and the above formulation allows one to bypass the condition H(α − 2) > −1.
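The Riemann-sum mechanism recalled above can also be tried numerically. The sketch below is entirely illustrative (the germ A_{u,v} = sin(W_u)(W_v − W_u) and all names are our own choices, not objects from the paper): it evaluates the germ's Riemann sums along refining dyadic partitions of [0, 1], which sewing predicts should stabilise towards the corresponding Itô integral.

```python
import numpy as np

# One fixed Brownian path on a fine grid (seed fixed for reproducibility).
rng = np.random.default_rng(42)
N = 2 ** 14
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) * np.sqrt(1.0 / N))])

def riemann_sum(level):
    # Sum of the germs A_{u,v} = sin(W_u) * (W_v - W_u) over the dyadic
    # partition of [0, 1] with 2**level intervals (indices into the fine grid).
    idx = np.arange(0, N + 1, N >> level)
    u, v = idx[:-1], idx[1:]
    return float(np.sum(np.sin(W[u]) * (W[v] - W[u])))

sums = [riemann_sum(k) for k in range(4, 14)]          # refining partitions
gaps = [abs(a - b) for a, b in zip(sums, sums[1:])]    # successive differences
```

The successive differences `gaps` play the role of the δA-terms: their decay along refinements is exactly what the sewing-type estimates quantify.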

Higher order expansion of solutions
As the final ingredient, let us discuss how well a solution (and more generally, an element of S_n) can be approximated at time t by an F_s-measurable random variable, for s < t. From the differentiability of ψ, one immediately sees that taking ψ_s gives an approximation of order |t − s|, or, slightly more cleverly, taking ψ_s + (t − s)ψ′_s gives an approximation of slightly higher order. Since ψ is certainly not twice differentiable, it might be surprising that one can push this expansion much further and obtain approximations of substantially higher order.

Lemma . . Assume the setting and notations of Theorem . and Section . , and in addition assume α < 1. Then there exists n_0 = n_0(H, α) ∈ ℕ such that for all ψ ∈ S_{n_0} and for all 0 ≤ s ≤ 1 one has, almost surely for all t ∈ [s, 1], ( . ).

Proof. Clearly, for 0 ≤ s ≤ t ≤ 1, one has |ψ_t − ψ_{s,t}| ≲ |t − s|. Define maps ψ^{(n)} on S_n as follows. For n = 0 and ψ ∈ S_0 set ψ^{(0)}_{s,t} = ψ_s. For n > 0 and ψ ∈ S_n, take φ ∈ S_{n−1} such that ψ = T(φ) and define inductively ( . ). We aim to get almost sure bounds of the form ( . ), with exponents γ_n to be determined. We proceed by induction. Clearly ( . ) holds for n = 0 with γ_0 = 1. In the inductive step, from ( . ) one deduces ( . ), where in the last inequality we used the induction hypothesis. Therefore, ( . ) holds for all n ∈ ℕ, with γ_n defined by the recursion ( . ). For n < n_0 := inf{ℓ ∈ ℕ : γ_ℓ > H − 1} + 1, one has γ_{n−1} ≤ H − 1 and therefore ( . ). Hence, n_0 is finite. One then has ( . ). Rewriting ( . ) in an equivalent form, one has, almost surely for all 0 ≤ s ≤ t ≤ 1, ( . ). For fixed s and t, apply ( . ) with p = ∞, t replaced by t ∧ τ, and Y = ψ^{(n_0)}_{s,t}, since the latter is certainly measurable with respect to F_s. Therefore, one has almost surely ( . ). From the continuity of ψ, this bound holds for all 0 ≤ s ≤ 1 almost surely for all t ∈ [s, 1], which readily implies the claim.
Remark . . One can easily check that as far as ( . ) is concerned, condition ( . ) is overkill: indeed, the above argument works as long as H > 1 and ( . ) holds. Condition ( . ) is also equivalent to the existence of n > 0 such that the exponent produced by n steps of the above recursion exceeds H.
In other words, ( . ) is precisely the condition that guarantees that the drift components of sufficiently high order Picard iterates (and in particular, of solutions, if any exist) are "more regular", in a stochastic sense, than their noise components. While this is only a heuristic indication of regularisation effects, it is interesting to note that several works in the literature connect the condition ( . ) to weak well-posedness.
It is therefore natural to conjecture that ( . ) guarantees weak well-posedness also in the non-Markovian case, but to the best of our knowledge this question is still wide open.

Proof of Theorem .
Take p ∈ [2, ∞), to be specified large enough later. Recall that it suffices to prove the well-posedness of ( . ) up to τ. To this end, we will show that T is a contraction on S_{n_0} in a suitable metric. For any process ψ, we define the stopped process ψ̄ by ψ̄_t = ψ_{t∧τ}. We want to apply Lemma . with M = 1. Let ψ, φ ∈ S_{n_0}. Fix 0 ≤ S < T ≤ 1, and for (s, t) ∈ [S, T]^2_1 denote t_1 = s − (t − s), t_2 = s, t_3 = t. Then by ( . ) and CJI one has ( . ). From ( . ), the condition ( . ) is satisfied with ( . ). Concerning the second condition of Lemma . , for (s, u, t) ∈ [S, T]^3_1 we write ( . ). The two terms are treated in exactly the same way, so we only detail I_1. By ( . ), ( . ). From CJI, ( . ), and ( . ), we have the bounds ( . ). Substituting this into ( . ), and recalling the bound on the resulting integral, we obtain ( . ). From ( . ) and ( . ), we see that both exponents of |t − s| above are strictly bigger than 1. With the analogous bound on I_2, one concludes that ( . ) holds with some ε_2 > 0, and therefore Lemma . applies. We claim that the process 𝒜 is given by ( . ). Indeed, first notice that since A, ψ̄, and φ̄ are all trivially Lipschitz continuous with their Lipschitz constants having p-th moments, 𝒜̃ belongs to the required space. Therefore, by ( . ) we can write ( . ). On the other hand, by CJI and ( . ) again, we have ( . ). Hence ( . ) is satisfied, and 𝒜̃ = 𝒜. The bound ( . ) then yields, with some ε′ > 0, for all S ≤ s < t ≤ T, ( . ). Let us first choose S = 0, in which case the bound simplifies accordingly. Therefore, upon dividing by |t − s|^{1/2 + ε′/2} and taking the supremum over 0 ≤ s < t ≤ T, one gets ( . ). We can apply ( . ) with f = T(ψ) − T(φ), γ = 1/2, and ε = ε′/2. This is the only point where the choice of p matters: we need to take p > 2/ε′. We get a bound on T(ψ) − T(φ), and therefore, for sufficiently small T > 0, ( . ). We conclude that T is a contraction on S_{n_0} with respect to [(•)]^{1/2}_{[0,T]}.
To obtain a solution beyond time T, let S_{0,*} be the set of adapted Lipschitz continuous processes agreeing with ψ̂ on [0, T], and set S_{n,*} = T^n(S_{0,*}), Ŝ_{n,*} = T̂^n(S_{0,*}). Note that elements of S_{n,*} agree with ψ̂ on [0, T ∧ τ], while elements of Ŝ_{n,*} agree with ψ̂ on [0, T]. Also, trivially, S_{n,*} ⊂ S_n, and therefore the bound ( . ) holds for ψ, φ ∈ S_{n_0,*}. Moreover, notice that on S_{n,*} one has ( . ). The uniqueness is immediate from the above construction: any two fixed points of T necessarily lie in Ŝ_{n_0} and therefore they agree on [0, T] by ( . ). One can then proceed iteratively as above. Therefore, x = ψ★ + B^H is the unique solution of ( . ) up to τ, finishing the proof.
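Finally, the fixed-point scheme underlying the proof can be sketched numerically. This toy discretisation is our own (it uses standard Brownian noise, i.e. H = 1/2, and an illustrative Hölder-but-not-Lipschitz drift b(x) = sign(x)√|x|, neither of which comes from the paper): it iterates the map ψ ↦ T(ψ)_t = ∫_0^t b(ψ_s + B_s) ds on a grid and reads off the candidate solution x = ψ + B from the decomposition used in the introduction.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 2000
dt = 1.0 / N
B = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) * np.sqrt(dt))])

def b(x):
    # Holder-continuous but non-Lipschitz drift (alpha = 1/2), our toy choice
    return np.sign(x) * np.sqrt(np.abs(x))

psi = np.zeros(N + 1)        # psi^0 = 0 is Lipschitz and adapted
diffs = []
for _ in range(30):          # Picard iterates psi^{n+1} = T(psi^n)
    integrand = b(psi[:-1] + B[:-1])
    psi_new = np.concatenate([[0.0], np.cumsum(integrand * dt)])
    diffs.append(float(np.max(np.abs(psi_new - psi))))
    psi = psi_new

x = psi + B                  # candidate solution of dx = b(x) dt + dB
```

The shrinking of `diffs` along the iteration is the numerical shadow of the contraction property established in the proof; of course, on a fixed grid this is only suggestive, not a proof.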