1 Motivation and main result

Since the development of Itô’s stochastic calculus in the 1940s, quantitative properties of stochastic differential equations (SDEs) have played a central role in stochastic analysis. The present article is concerned with one particular aspect: tail probabilities of solutions. Consider a multidimensional SDE (written in Stratonovich form)

$$\begin{aligned} {dU_{t}=\sum _{\alpha =1}^{d}V_{\alpha }(U_{t})\circ dB_{t}^{\alpha },\ \ \ U_{0}=x\in {\mathbb {R}}^{N}} \end{aligned}$$
(1.1)

driven by a d-dimensional Brownian motion, where the vector fields \(V_{1},\cdots ,V_{d}:{\mathbb {R}}^{N}\rightarrow {\mathbb {R}}^{N}\) are assumed to be of class \(C_{b}^{\infty }\) (bounded, smooth with uniformly bounded derivatives of all orders). It is classical, by martingale methods, that the solution \(U_{t}\) has a Gaussian tail for each fixed time t. Here we say that a random variable Z has a Gaussian tail if there exist positive constants \(C_{1},C_{2}\) such that

$$\begin{aligned} {\mathbb {P}}(|Z|>\lambda )\leqslant C_{1}e^{-C_{2}\lambda ^{2}}\ \ \ \forall \lambda >0. \end{aligned}$$

As an application, the existence of a Gaussian tail for the solution \(U_{t}\) can be used to obtain Gaussian-type upper bounds on the fundamental solution of the heat equation associated with the generator of (1.1). We refer the reader to the seminal works [11, 12] for a quantitative study of these and other related questions.

Our main interest lies in understanding the tail behaviour of solutions beyond the diffusion case. A typical extension of SDEs to a non-semimartingale setting, where Itô’s calculus breaks down in an essential way, is to consider the situation where \(B_{t}\) is a Gaussian process, or even more specifically, a fractional Brownian motion with Hurst parameter \(H\ne 1/2\). When \(H>1/2\), Young’s integration theory gives a natural meaning to solutions of the SDE (1.1) (cf. [13]). When \(H<1/2\), the SDE (1.1) can no longer be defined in the classical sense of Young. A solution theory, commonly known as the theory of rough paths, was developed by Lyons [14] in 1998 to deal with this more singular regime. If \(H\in (1/4,1/2)\), one can establish the well-posedness of the SDE (1.1) within the framework of rough paths (cf. [6]).

Now let us consider a general SDE driven by fBM with Hurst parameter \(H\in (1/4,1)\). Since the driving process itself is Gaussian (and thus has a Gaussian tail), under the \(C_{b}^{\infty }\)-assumption on the vector fields it is not entirely unreasonable to believe that the solution should also have a Gaussian tail. This turns out to be true in the case of \(H>1/2\), which was proved by Baudoin–Ouyang–Tindel [2] using Gaussian concentration techniques.

However, the situation becomes drastically more subtle in the rough regime of \(H<1/2\). Under the \(C_{b}^{\infty }\)-assumption on the vector fields, it is a remarkable theorem of Cass–Litterer–Lyons [5] from 2013 that the following tail estimate of \(U_{t}\) holds true. For any \(\gamma <2H+1,\) there exist positive constants \(C_{1},C_{2}\) depending only on the vector fields, H and \(\gamma \), such that

$$\begin{aligned} {\mathbb {P}}(|U_{t}-x|>\lambda )\leqslant C_{1}e^{-C_{2}\lambda ^{\gamma }}\ \ \ \forall \lambda >0. \end{aligned}$$
(1.2)

As pointed out in [3], a more careful application of the Cameron–Martin embedding allows one to achieve \(\gamma =2H+1\) in the estimate (1.2). This result, which is the best one available, is a Weibull-type sub-Gaussian estimate since \(1+2H<2\).

While there are no available lower bounds in this rough regime, in view of the \(H\geqslant 1/2\) case it is tempting to ask whether the Cass–Litterer–Lyons estimate (1.2) (referred to as the CLL estimate from now on) could be further improved to show that \(U_{t}\) does have a Gaussian tail. The main goal of this article is to provide a negative answer to this question. Our main result is summarised as follows.

Theorem 1.1

Let \((X_{t},Y_{t})_{0\leqslant t\leqslant 1}\) be a two-dimensional fractional Brownian motion with Hurst parameter \(H\in (1/4,1/2)\). Let \(\phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a \(C_{b}^{\infty }\)-function such that

$$\begin{aligned} \sup _{r>0}\inf _{x\in {\mathbb {R}}}\sup _{y\in [x,x+r]}|\phi '(y)|>0. \end{aligned}$$
(1.3)

Then for any \(\gamma >2H+1\), there exist positive constants \(C_{1},C_{2}\) depending only on \(H,\phi \) and \(\gamma \), such that

$$\begin{aligned} {\mathbb {P}}\big (\big |\int _{0}^{1}\phi (X_{t})dY_{t}\big |>\lambda \big )\geqslant C_{1}e^{-C_{2}\lambda ^{\gamma }} \end{aligned}$$
(1.4)

for all \(\lambda >0.\)

Remark 1.2

Heuristically, the condition (1.3) means that there is a length \(r>0\) such that every interval of length r contains at least one point at which \(|\phi '|\) is bounded below by a fixed positive constant. For instance, this condition is satisfied whenever \(\phi \) is non-constant and periodic.

To see how the rough line integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is related to an SDE, one simply observes that it is the value at time one of the Z-component of the following SDE:

$$\begin{aligned} d\left( \begin{array}{c} W_{t}\\ Z_{t} \end{array}\right) =\left( \begin{array}{cc} 1 &{}\quad 0\\ 0 &{}\quad \phi (W_{t}) \end{array}\right) \cdot \left( \begin{array}{c} dX_{t}\\ dY_{t} \end{array}\right) . \end{aligned}$$
(1.5)

In particular, Theorem 1.1 provides a simple class of examples of SDEs driven by fBM with \(C_{b}^{\infty }\)-vector fields whose solutions do not possess a Gaussian tail.

Corollary 1.3

Let \((B^{\alpha })_{\alpha =1}^{d}\) be a d-dimensional fractional Brownian motion with Hurst parameter \(H\in (1/4,1/2)\). For all \(\gamma >1+2H\), there are \(C_{b}^{\infty }\)-vector fields \((V_{\alpha })_{\alpha =1}^{d}\) on \({\mathbb {R}}^{N}\) such that if \((U_{t})_{t\geqslant 0}\) is the solution to

$$\begin{aligned} dU_{t}=\sum _{\alpha =1}^{d}V_{\alpha }(U_{t})\circ dB_{t}^{\alpha },\ \ \ U_{0}=x\in {\mathbb {R}}^{N}, \end{aligned}$$

in the sense of rough paths, then there exist \(C_{1},C_{2}>0\) such that

$$\begin{aligned} {\mathbb {P}}\big (\big |U_{1}-x\big |>\lambda \big )\geqslant C_{1}e^{-C_{2}\lambda ^{\gamma }}\qquad \forall \lambda >0. \end{aligned}$$

In particular, the \(1+2H\) exponent in the CLL estimate (1.2)

$$\begin{aligned} {\mathbb {P}}\big (\big |U_{1}-x\big |>\lambda \big )\leqslant C_{1}e^{-C_{2}\lambda ^{1+2H}}\qquad \forall \lambda >0 \end{aligned}$$

cannot be improved if it is to hold for all \(C_{b}^{\infty }\)-vector fields.

The lack-of-Gaussian-tail phenomenon appears surprising at first glance. Since the driving process is Gaussian, it suggests that, in a probabilistic sense, the solution travels much faster than the driving process itself despite the \(C_{b}^{\infty }\)-assumption. Even in the uniformly elliptic case, the intuition that “the solution process should behave more or less like the driving process” is simply not true. If one removes the \(C_{b}^{\infty }\)-assumption, it is of course possible for the tail of \(U_{t}\) to be as heavy as one wants (even with explosion in finite time). On the other hand, if the vector fields decay fast enough at infinity, one can make the tail of \(U_{t}\) as light as one wants (as an extreme example, the solution is uniformly bounded if the vector fields have compact supports). It is the case of suitably non-degenerate \(C_{b}^{\infty }\)-vector fields (e.g. uniformly elliptic) that makes the lack-of-Gaussian-tail phenomenon counterintuitive. It is not hard to construct examples of \(\phi \) satisfying the condition (1.3) of Theorem 1.1 such that the associated SDE (1.5) is uniformly elliptic (i.e. its coefficient matrix is uniformly positive definite with a \(C_{b}^{\infty }\)-inverse).

Organisation In Sect. 2, we recall some basic properties of fBM. In Sect. 3, we develop the main ingredients for proving Theorem 1.1. In Sect. 4, we conclude with a few further questions.

2 Basic properties of fractional Brownian motion

In this section, we collect a minimal set of standard notions about fractional Brownian motion that are needed for our study. The reader is referred to [7, 15] (and the references therein) for more detailed discussions. We begin with its definition.

Definition 2.1

A one-dimensional fractional Brownian motion (fBM) with Hurst parameter \(H\in (0,1)\) is a mean-zero Gaussian process \(\{X_{t}:t\geqslant 0\}\) with covariance function

$$\begin{aligned} R(s,t)\triangleq {\mathbb {E}}[X_{s}X_{t}]=\frac{1}{2}(s^{2H}+t^{2H}-|t-s|^{2H}),\ \ \ s,t\geqslant 0. \end{aligned}$$
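
For readers who wish to experiment numerically, the following minimal Python sketch (not part of the original text) samples a discretised fBM path directly from the covariance function above via a Cholesky factorisation; the Hurst parameter, grid size and random seed are arbitrary illustrative choices.

```python
import numpy as np

def fbm_sample(n=512, H=0.35, seed=0):
    """Sample a one-dimensional fBM at t_k = k/n, k = 1, ..., n, via a
    Cholesky square root of the covariance matrix R(t_j, t_k)."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1) / n
    S, T = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (S ** (2 * H) + T ** (2 * H) - np.abs(T - S) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))   # tiny jitter for numerical stability
    return t, L @ rng.standard_normal(n)

t, X = fbm_sample()
print(X[:5])
```

The Cholesky approach costs \(O(n^{3})\) and is only meant for moderate grid sizes, but it is exact in distribution at the grid points.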

Throughout the rest of the article, the time horizon is fixed to be [0, 1]. A crucial object associated with fBM (indeed, with any continuous Gaussian process) is its Cameron–Martin space. There are two canonical (and non-identical) ways of defining it. Let \(\{X_{t}:t\in [0,1]\}\) be an fBM defined on some probability space \((\Omega ,{{{\mathcal {F}}}},{\mathbb {P}})\).

Definition 2.2

The non-intrinsic Cameron-Martin space, denoted as \({{{\mathcal {H}}}},\) is defined to be the Hilbert space completion of linear combinations of indicator functions \(\{\textbf{1}_{[0,t]}:t\in [0,1]\}\) with respect to the inner product

$$\begin{aligned} \langle \textbf{1}_{[0,s]},\textbf{1}_{[0,t]}\rangle _{{{{\mathcal {H}}}}}\triangleq R(s,t). \end{aligned}$$

The intrinsic Cameron–Martin space, denoted as \(\bar{{{{\mathcal {H}}}}}\), is the space of continuous paths \(h:[0,1]\rightarrow {\mathbb {R}}\) that admit the representation

$$\begin{aligned} h_{t}={\mathbb {E}}[ZX_{t}],\ \ \ t\in [0,1] \end{aligned}$$

for some Z belonging to the \(L^{2}\)-completion of linear combinations of \(\{X_{t}:t\in [0,1]\}\). It is also a Hilbert space with respect to the inner product

$$\begin{aligned} \langle h_{1},h_{2}\rangle _{\bar{{{{\mathcal {H}}}}}}\triangleq {\mathbb {E}}[Z_{1}Z_{2}], \end{aligned}$$

where \(Z_{i}\) is the unique \(L^{2}\)-element associated with \(h_{i}\) (\(i=1,2\)).

Note that \({{{\mathcal {H}}}}\) and \(\bar{{{{\mathcal {H}}}}}\) are different spaces (as sets). Nonetheless, they are isometrically isomorphic through the important notion of Paley–Wiener integral which we now define. For indicator functions, the map \({{{\mathcal {I}}}}_{1}:\textbf{1}_{[0,t]}\mapsto X_{t}\) is clearly an isometric embedding into \(L^{2}({\mathbb {P}})\). As a result, it extends to an isometric embedding \({{{\mathcal {I}}}}_{1}:{{{\mathcal {H}}}}\rightarrow L^{2}({\mathbb {P}})\) in a canonical way.

Definition 2.3

The embedding \({{{\mathcal {I}}}}_{1}:{{{\mathcal {H}}}}\rightarrow L^{2}({\mathbb {P}})\) is called the Paley-Wiener integral map associated with the fBM.

The following classical result gives a canonical identification between the two spaces \({{{\mathcal {H}}}}\) and \(\bar{{{{\mathcal {H}}}}}\).

Theorem 2.4

There is a canonical isometric isomorphism \({{{\mathcal {R}}}}:{{{\mathcal {H}}}}\rightarrow \bar{{{{\mathcal {H}}}}}\) defined by

$$\begin{aligned}{}[{{{\mathcal {R}}}}(h)]_{t}\triangleq {\mathbb {E}}[{{{\mathcal {I}}}}_{1}(h)X_{t}],\ \ \ t\in [0,1] \end{aligned}$$

for each \(h\in {{{\mathcal {H}}}}\).

We now recall a basic representation of the \({{{\mathcal {H}}}}\)-norm that plays a central role in our analysis (cf. [1, Theorem 2.5]).

Theorem 2.5

For every \(f\in {{{\mathcal {H}}}},\) one has

$$\begin{aligned} \Vert f\Vert _{{{{\mathcal {H}}}}}^{2}=\frac{1}{2}H(1-2H)\int _{{\mathbb {R}}^{2}}\frac{({\bar{f}}(x)-{\bar{f}}(y))^{2}}{|x-y|^{2-2H}}dxdy, \end{aligned}$$
(2.1)

where \({\bar{f}}(x)\triangleq f(x)\textbf{1}_{[0,1]}(x).\) In addition, \({{{\mathcal {H}}}}\) coincides with the space of \(f\in L^{2}([0,1])\) such that the right hand side of (2.1) is finite.
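
As a hedged numerical illustration of (2.1) (again not part of the argument), the sketch below compares two independent discretisations of \(\Vert f\Vert _{{{{\mathcal {H}}}}}^{2}\) for a smooth test function: the right-hand side of (2.1), with the contribution of the region where one variable lies outside [0, 1] evaluated in closed form, and the quadratic form obtained by approximating f by a step function and using \(\langle \textbf{1}_{[0,s]},\textbf{1}_{[0,t]}\rangle _{{{{\mathcal {H}}}}}=R(s,t)\). The choices H = 0.35, n = 400 and \(f(t)=\sin \pi t\) are arbitrary; the two numbers should agree up to discretisation error.

```python
import numpy as np

H, n = 0.35, 400                       # illustrative choices
tm = (np.arange(n) + 0.5) / n          # midpoints of the sub-intervals of [0, 1]
f = np.sin(np.pi * tm)                 # smooth test function with f(0) = f(1) = 0

# (a) Discretise the right-hand side of (2.1).  The part of the dx dy-integral where
#     one variable lies outside [0, 1] can be integrated out explicitly, leaving an
#     integral over [0, 1]^2 plus a weighted boundary term.
S, T = np.meshgrid(tm, tm, indexing="ij")
FS, FT = np.meshgrid(f, f, indexing="ij")
off = np.abs(T - S) > 1e-12
integrand = np.where(off, (FT - FS) ** 2 / np.where(off, np.abs(T - S), 1.0) ** (2 - 2 * H), 0.0)
rhs_21 = 0.5 * H * (1 - 2 * H) * integrand.sum() / n ** 2 \
         + H * np.sum(f ** 2 * (tm ** (2 * H - 1) + (1 - tm) ** (2 * H - 1))) / n

# (b) Approximate f by a step function; the covariance of the increments of X is
#     known exactly from R, so the squared H-norm becomes a quadratic form.
R = lambda a, b: 0.5 * (a ** (2 * H) + b ** (2 * H) - np.abs(a - b) ** (2 * H))
tr, tl = np.arange(1, n + 1) / n, np.arange(n) / n           # right / left endpoints
cov_incr = R(tr[:, None], tr[None, :]) - R(tr[:, None], tl[None, :]) \
           - R(tl[:, None], tr[None, :]) + R(tl[:, None], tl[None, :])
norm_isometry = f @ cov_incr @ f

print(rhs_21, norm_isometry)           # the two numbers should roughly agree
```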

We stop the list of fBM properties here; further properties will be quoted or proved in later sections whenever they become relevant.

We conclude this section by recalling how the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is defined. Let (X, Y) be a two-dimensional fBM (i.e. X and Y are i.i.d. one-dimensional fBMs) with Hurst parameter \(H\in (1/4,1).\) For each \(m\in {\mathbb {N}}\), let \(X^{(m)}\) be the m-th dyadic piecewise linear interpolation of X, i.e. \(X_{k/2^{m}}^{(m)}=X_{k/2^{m}}\) for all \(k=0,1,\ldots ,2^{m}\) and \(X_{t}^{(m)}\) is linear on each sub-interval \([(k-1)/2^{m},k/2^{m}]\). Define \(Y^{(m)}\) in the same way. Note that \(\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}^{(m)}\) is well-defined as a Riemann–Stieltjes integral for each m. It is shown in rough path theory that the limit of \(\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}^{(m)}\) exists a.s. and in \(L^{p}\) (for all \(p\geqslant 1\)) as \(m\rightarrow \infty \). The resulting random variable is the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}.\)
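
The following hedged Python sketch illustrates this approximation procedure for one sample path: it draws a discretised pair (X, Y) from the fBM covariance, builds the dyadic piecewise linear interpolations \(X^{(m)},Y^{(m)}\), and evaluates the Riemann–Stieltjes sums approximating \(\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}^{(m)}\) on a finer reference grid. The choices H = 0.35, \(\phi =\sin \), the grid levels and the seed are illustrative and not taken from the text; the a.s. convergence itself is the cited rough-path fact, and the printed values are only expected to stabilise visually as m increases.

```python
import numpy as np

H, M = 0.35, 11                        # Hurst parameter and level of the fine reference grid
n = 2 ** M
rng = np.random.default_rng(1)
t = np.arange(1, n + 1) / n
S, T = np.meshgrid(t, t, indexing="ij")
R = 0.5 * (S ** (2 * H) + T ** (2 * H) - np.abs(T - S) ** (2 * H))
L = np.linalg.cholesky(R + 1e-12 * np.eye(n))
X = np.concatenate([[0.0], L @ rng.standard_normal(n)])   # X_0 = 0
Y = np.concatenate([[0.0], L @ rng.standard_normal(n)])   # independent copy
grid = np.concatenate([[0.0], t])
phi = np.sin                           # a C_b^infinity function satisfying (1.3)

for m in range(2, 9):
    nodes = np.arange(2 ** m + 1) / 2 ** m
    Xm = np.interp(grid, nodes, np.interp(nodes, grid, X))   # dyadic piecewise linear X^{(m)}
    Ym = np.interp(grid, nodes, np.interp(nodes, grid, Y))   # dyadic piecewise linear Y^{(m)}
    approx = np.sum(phi(Xm[:-1]) * np.diff(Ym))              # Riemann-Stieltjes sum on the fine grid
    print(m, approx)
```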

Remark 2.6

For the sake of conciseness and readability, we have chosen not to go into any substantial definitions of rough paths or rough integration. This will not affect the main discussion, as rough path analysis is, for the most part, not essentially needed for our purposes.

3 Proof of Theorem 1.1

Our main strategy of proving Theorem 1.1 consists of the following three steps:

Step one By conditioning on X, the integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) becomes Gaussian with (random) variance denoted as I(X). The tail probability on the left hand side of (1.4) is easily related to a suitable expectation involving I(X).

Step two By using the fractional Sobolev-norm representation (2.1), one can obtain a lower estimate of the tail probability \({\mathbb {P}}(I(X)>\lambda )\).

Step three The lower tail estimate of I(X) translates into a corresponding lower tail estimate of the integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\), in view of the relation obtained in the first step.

In the sequel, we develop the above three ingredients precisely. Throughout the rest of this section, unless otherwise stated, (X, Y) is a two-dimensional fBM with Hurst parameter \(H\in (1/4,1/2)\) and \(\phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a given fixed \(C_{b}^{\infty }\)-function satisfying the condition (1.3).

3.1 A fractional Sobolev-norm representation of the conditional variance

Our entire argument relies critically on the following fractional Sobolev-norm representation of the conditional variance of \(\int _{0}^{1}\phi (X_{t})dY_{t}\).

Proposition 3.1

Conditional on X,  the random variable \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is Gaussian with mean zero and (random) variance

$$\begin{aligned}{} & {} {\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]\nonumber \\{} & {} \quad =\frac{H(1-2H)}{2}\int _{{\mathbb {R}}^{2}}\frac{\big (\phi (X_{t})\textbf{1}_{[0,1]}(t)-\phi (X_{s})\textbf{1}_{[0,1]}(s)\big ){}^{2}}{|t-s|^{2-2H}}dsdt. \end{aligned}$$
(3.1)

In particular, the integral on the right hand side is finite a.s.

Such a representation is clear at least at a formal level. Indeed, when the X-path is frozen, the integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) resembles a Paley–Wiener integral with respect to Y. The relation (3.1) then simply becomes (2.1) with \(f=\phi (X)\). However, some care is needed to make this precise, since \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is by definition a rough path integral. We break down the proof of (3.1) into a couple of basic lemmas.

The first lemma shows that the conditional variance can be computed through piecewise linear approximation.

Lemma 3.2

Let \(X^{(m)}\) denote the m-th dyadic piecewise linear interpolation of X. Then one has

$$\begin{aligned} {\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]=\lim _{m\rightarrow \infty }{\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]\ \ \ \text {a.s.} \end{aligned}$$
(3.2)

Proof

For each fixed m, it is clear that the conditional distribution of \(\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\) given X is Gaussian, more precisely,

$$\begin{aligned} {\mathbb {E}}\Bigg [\exp \Bigg (i\theta \int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\Bigg )\Bigg |X\Bigg ]=\exp \Bigg (-\frac{\theta ^{2}}{2}{\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]\Bigg )\ \ \ \forall \theta \in {\mathbb {R}}. \end{aligned}$$

In addition, from the continuity of rough integrals (cf. [7, Theorem 10.50]) one has

$$\begin{aligned} \int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\rightarrow \int _{0}^{1}\phi (X_{t})dY_{t}\ \ \ \text {a.s.} \end{aligned}$$

as \(m\rightarrow \infty \) (see for example Lemma 6 in [8], or [16] for the \(H>\frac{1}{3}\) case). By the conditional dominated convergence theorem, one concludes that

$$\begin{aligned} {\mathbb {E}}\Bigg [\exp \Bigg (i\theta \int _{0}^{1}\phi (X_{t})dY_{t}\Bigg )\Bigg |X\Bigg ]=\exp \Bigg (-\frac{\theta ^{2}}{2}\lim _{m\rightarrow \infty }{\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]\Bigg ).\nonumber \\ \end{aligned}$$
(3.3)

This relation indicates that, conditionally on X, \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is Gaussian and that the limit inside the above exponential must exist a.s. The finiteness of moments of \(\int _{0}^{1}\phi (X_{t})dY_{t}\) (which is easily implied by e.g. Fernique’s lemma for the fractional Brownian rough path; cf. [7, Theorem 15.33]) allows one to differentiate the relation (3.3) with respect to \(\theta \). The desired identity (3.2) drops out after differentiating twice at \(\theta =0\). \(\square \)

In order to relate the conditional rough integral (X being frozen) to a Paley–Wiener integral, one needs an embedding property of the Cameron-Martin space as well as a simple fact about piecewise linear approximations under Hölder norm. We summarise them in the next two lemmas.

Lemma 3.3

Let \(H\in (1/4,1/2)\). Then the inclusion \({{{\mathcal {C}}}}_{0}^{\gamma }\subseteq {{{\mathcal {H}}}}\) is a continuous embedding for every \(\gamma >1/2-H\), where \({{{\mathcal {C}}}}_{0}^{\gamma }\) denotes the space of \(\gamma \)-Hölder continuous paths \(h:[0,1]\rightarrow {\mathbb {R}}\) with \(h_{0}=0\).

Remark 3.4

The set inclusion \({{{\mathcal {C}}}}_{0}^{\gamma }\subseteq {{{\mathcal {H}}}}\) is stated in [15, p. 284], but its continuity is not made explicit there.

Proof

In what follows, \(c_{H}\) denotes a constant depending only on H whose value may change from line to line but is of no importance. Let \(h\in {{{\mathcal {C}}}}_{0}^{\gamma }\). According to [15, p. 284], its \({{{\mathcal {H}}}}\)-norm is computed as \(\Vert h\Vert _{{{{\mathcal {H}}}}}=\Vert K^{*}h\Vert _{L^{2}([0,1])}\), where

$$\begin{aligned} (K^{*}h)(t)\triangleq c_{H}t^{1/2-H}\big (D_{1-}^{1/2-H}(s^{H-1/2}h_{s})\big )(t) \end{aligned}$$

and \(D_{1-}^{1/2-H}\) denotes the fractional derivative operator. Unwinding its definition explicitly, one has

$$\begin{aligned} \Vert h\Vert _{{{{\mathcal {H}}}}}^{2}&=c_{H}\int _{0}^{1}t^{1-2H}\Bigg [\frac{t^{H-1/2}h_{t}}{(1-t)^{1/2-H}}+\Bigg (\frac{1}{2}-H\Bigg )\int _{t}^{1}\frac{t^{H-1/2}h_{t}-s^{H-1/2}h_{s}}{(s-t)^{3/2-H}}ds\Bigg ]^{2}dt\\&\leqslant c_{H}\Bigg [\int _{0}^{1}\frac{h_{t}^{2}}{(1-t)^{1-2H}}dt+\int _{0}^{1}t^{1-2H}\Bigg (\int _{t}^{1}\frac{t^{H-1/2}h_{t}-s^{H-1/2}h_{s}}{(s-t)^{3/2-H}}ds\Bigg )^{2}dt\Bigg ]\\&=:c_{H}(A+B). \end{aligned}$$

It is straightforward to see that (using \(h_{0}=0\))

$$\begin{aligned} A\leqslant C_{H,\gamma }\cdot \Vert h\Vert _{\gamma }^{2}\ \ \ \text {where }\Vert h\Vert _{\gamma }\triangleq \sup _{s\ne t\in [0,1]}\frac{|h_{t}-h_{s}|}{|t-s|^{\gamma }}. \end{aligned}$$

To estimate B, one further writes

$$\begin{aligned} B&=\int _{0}^{1}t^{1-2H}\Bigg (\int _{t}^{1}\frac{t^{H-1/2}(h_{t}-h_{s})+(t^{H-1/2}-s^{H-1/2})h_{s}}{(s-t)^{3/2-H}}ds\Bigg )^{2}dt\\&\leqslant 2\int _{0}^{1}t^{1-2H}\Bigg [\Bigg (\int _{t}^{1}\frac{t^{H-1/2}(h_{t}-h_{s})}{(s-t)^{3/2-H}}ds\Bigg )^{2}\\&\quad +\,\Bigg (\int _{t}^{1}\frac{(t^{H-1/2}-s^{H-1/2})(h_{s}-h_{0})}{(s-t)^{3/2-H}}ds\Bigg )^{2}\Bigg ]dt\\&=:2(B_{1}+B_{2}). \end{aligned}$$

For the \(B_{1}\)-integral, note that

$$\begin{aligned}&\Bigg |\int _{t}^{1}\frac{t^{H-1/2}(h_{t}-h_{s})}{(s-t)^{3/2-H}}ds\Bigg | \leqslant \Vert h\Vert _{\gamma }\cdot \int _{t}^{1}\frac{t^{H-1/2}(s-t)^{\gamma }}{(s-t)^{3/2-H}}ds\\&=C_{H,\gamma }t^{H-1/2}(1-t)^{\gamma +H-1/2}\cdot \Vert h\Vert _{\gamma }. \end{aligned}$$

This easily implies

$$\begin{aligned} B_{1}\leqslant C_{H,\gamma }\Vert h\Vert _{\gamma }^{2}. \end{aligned}$$

Similarly, for the \(B_{2}\)-integral, one has

$$\begin{aligned}&\Bigg |\int _{t}^{1}\frac{(t^{H-1/2}-s^{H-1/2})(h_{s}-h_{0})}{(s-t)^{3/2-H}}ds\Bigg | \leqslant \Bigg |\int _{t}^{1}\frac{(t^{H-1/2}-s^{H-1/2})s^{\gamma }}{(s-t)^{3/2-H}}ds\Bigg |\cdot \Vert h\Vert _{\gamma }\\&=t^{2H+\gamma -1}\Bigg (\int _{1}^{1/t}\frac{(1-\rho ^{H-1/2})\rho ^{\gamma }}{(\rho -1)^{3/2-H}}d\rho \Bigg )\cdot \Vert h\Vert _{\gamma }\\&\leqslant C_{H,\gamma }t^{2H+\gamma -1}\cdot \Bigg (\frac{1}{t}\Bigg )^{\gamma +H-1/2}\Vert h\Vert _{\gamma }\\&=C_{H,\gamma }t^{H-1/2}\cdot \Vert h\Vert _{\gamma }. \end{aligned}$$

As a result, one also finds that

$$\begin{aligned} B_{2}\leqslant C_{H,\gamma }\Vert h\Vert _{\gamma }^{2}. \end{aligned}$$

Combining the estimates for A, \(B_{1}\) and \(B_{2}\) yields \(\Vert h\Vert _{{{{\mathcal {H}}}}}\leqslant C_{H,\gamma }\Vert h\Vert _{\gamma }\), which proves the continuity of the embedding. \(\square \)

Lemma 3.5

Let \(x:[0,1]\rightarrow {\mathbb {R}}\) be a \(\gamma \)-Hölder continuous path. For each \(m\geqslant 1\), let \(x^{(m)}\) denote the m-th dyadic piecewise linear interpolation of x. Then one has

$$\begin{aligned} \Vert x^{(m)}-x\Vert _{\beta }\leqslant 4(2^{-m})^{\gamma -\beta }\Vert x\Vert _{\gamma } \end{aligned}$$
(3.4)

for all \(\beta <\gamma \).

Proof

This is a straightforward calculation. Let \(s<t\) be given. We only consider the case when s, t belong to different dyadic sub-intervals, say \(s\in [t_{k-1},t_{k}]\) and \(t\in [t_{l},t_{l+1}]\) with \(k\leqslant l\) (the case when s, t belong to the same sub-interval is easier and left to the reader). Since \(x^{(m)}\) and x agree on the dyadic points, it is obvious that

$$\begin{aligned} x_{s,t}^{(m)}-x_{s,t}=\Bigg (x_{t_{l},t}^{(m)}-x_{t_{l},t}\Bigg )-\Bigg (x_{s,t_{k}}^{(m)}-x_{s,t_{k}}\Bigg ), \end{aligned}$$

where \(x_{s,t}\triangleq x_{t}-x_{s}\) and similarly for \(x_{s,t}^{(m)}.\) We simply use the triangle inequality to bound each of these four terms, i.e. one has

$$\begin{aligned} \Bigg |\frac{x_{t_{l},t}}{(t-s)^{\beta }}\Bigg |\leqslant \frac{(t-t_{l})^{\gamma }}{(t-s)^{\beta }}\Vert x\Vert _{\gamma }=\frac{(t-t_{l})^{\beta }(t-t_{l})^{\gamma -\beta }}{(t-s)^{\beta }}\Vert x\Vert _{\gamma }\leqslant (2^{-m})^{\gamma -\beta }\Vert x\Vert _{\gamma } \end{aligned}$$

and

$$\begin{aligned} \Bigg |\frac{x_{t_{l},t}^{(m)}}{(t-s)^{\beta }}\Bigg |=\frac{t-t_{l}}{2^{-m}}\cdot \frac{\Bigg |x_{t_{l},t_{l+1}}\Bigg |}{(t-s)^{\beta }}\leqslant \frac{t-t_{l}}{2^{-m}}\cdot \frac{(2^{-m})^{\gamma }}{(t-s)^{\beta }}\Vert x\Vert _{\gamma }\leqslant (2^{-m})^{\gamma -\beta }\Vert x\Vert _{\gamma }. \end{aligned}$$

The other two terms \(x_{s,t_{k}}\) and \(x_{s,t_{k}}^{(m)}\) are estimated in the same way. The desired inequality (3.4) thus follows. \(\square \)
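
As a quick numerical sanity check of (3.4) (illustrative only, not part of the proof), the sketch below evaluates the discrete \(\beta \)-Hölder seminorms of \(x^{(m)}-x\) on a fixed fine grid for a rescaled random-walk test path; the exponents \(\gamma =0.45\), \(\beta =0.25\), the grid sizes and the seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 10
t = np.arange(n + 1) / n
x = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n))]) / np.sqrt(n)   # random-walk test path

def seminorm(path, beta):
    """Discrete beta-Hoelder seminorm over all pairs of grid points."""
    P, Q = np.meshgrid(path, path, indexing="ij")
    A, B = np.meshgrid(t, t, indexing="ij")
    mask = A < B
    return np.max(np.abs(Q - P)[mask] / (B - A)[mask] ** beta)

gamma, beta = 0.45, 0.25
norm_gamma = seminorm(x, gamma)
for m in range(2, 9):
    nodes = np.arange(2 ** m + 1) / 2 ** m
    xm = np.interp(t, nodes, np.interp(nodes, t, x))     # m-th dyadic piecewise linear interpolation
    print(m, seminorm(xm - x, beta), 4 * 2.0 ** (-m * (gamma - beta)) * norm_gamma)   # lhs vs rhs of (3.4)
```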

We are now able to justify the representation (3.1) precisely.

Proof of Proposition 3.1

In view of (2.1), it suffices to show that

$$\begin{aligned} {\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]={\mathbb {E}}\Bigg [{{{\mathcal {I}}}}_{1}(h)^{2}\Bigg ]\Bigg |_{h=\phi (X)}\ \ \ \text {a.s.} \end{aligned}$$
(3.5)

It is clear that \({{{\mathcal {I}}}}_{1}(\phi (z))=\int _{0}^{1}\phi (z_{t})dY_{t}\) when z is a deterministic piecewise linear path. By Lemma 3.2, one has

$$\begin{aligned} {\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]=\lim _{m\rightarrow \infty }{\mathbb {E}}\Bigg [\Bigg (\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}\Bigg )^{2}\Bigg |X\Bigg ]=\lim _{m\rightarrow \infty }{\mathbb {E}}[{{{\mathcal {I}}}}_{1}(h)^{2}]\Bigg |_{h=\phi (X^{(m)})}. \end{aligned}$$

To reach the relation (3.5), one first observes that X has \(\gamma \)-Hölder sample paths a.s. for any \(\gamma <H\). Since \(H>1/4\), it is possible to choose \(\gamma \in (1/2-H,H)\). According to Lemma 3.5, by sacrificing \(\gamma \) a little bit one can ensure that \(\phi (X^{(m)})\rightarrow \phi (X)\) a.s. under \(\gamma \)-Hölder norm. It then follows from Lemma 3.3 that

$$\begin{aligned} \phi (X^{(m)})\rightarrow \phi (X)\ \text {a.s. in }{{{\mathcal {H}}}}. \end{aligned}$$

Consequently, by the Paley–Wiener isometry one finds that

$$\begin{aligned} \lim _{m\rightarrow \infty }{\mathbb {E}}[{{{\mathcal {I}}}}_{1}(h)^{2}]\Bigg |_{h=\phi (X^{(m)})}={\mathbb {E}}[{{{\mathcal {I}}}}_{1}(h)^{2}]\Bigg |_{h=\phi (X)}\ \ \ \text {a.s.} \end{aligned}$$

The relation (3.5) thus follows. \(\square \)
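
As a rough numerical cross-check of Proposition 3.1 (illustrative only, with H = 0.35, \(\phi =\sin \), grid levels and seed chosen arbitrarily), one can compare a discretisation of the right-hand side of (3.1) with the exactly computable conditional variance of the level-m approximation \(\int _{0}^{1}\phi (X_{t}^{(m)})dY_{t}^{(m)}\): given X, the latter is a linear functional of the dyadic increments of Y, so its variance is a quadratic form in the covariance of those increments. Both quantities are crude approximations of I(X) and should be of comparable size, drifting towards each other as the resolutions increase.

```python
import numpy as np

H, Mf = 0.35, 10                                      # illustrative choices
n = 2 ** Mf
rng = np.random.default_rng(2)
R = lambda a, b: 0.5 * (a ** (2 * H) + b ** (2 * H) - np.abs(a - b) ** (2 * H))

# One sample path of X on the dyadic grid of level Mf.
t = np.arange(1, n + 1) / n
A, B = np.meshgrid(t, t, indexing="ij")
Lc = np.linalg.cholesky(R(A, B) + 1e-12 * np.eye(n))
X = np.concatenate([[0.0], Lc @ rng.standard_normal(n)])
grid = np.concatenate([[0.0], t])
f = np.sin(X)                                         # phi(X) on the grid, with phi = sin

# (i) Discretise the right-hand side of (3.1): the ds dt-integral is reduced to [0,1]^2,
#     the contribution of the region where one variable lies outside [0,1] being known
#     in closed form (the same reduction appears in the proof of Lemma 3.7 below).
ti, fi = grid[1:-1], f[1:-1]
S, T = np.meshgrid(ti, ti, indexing="ij")
FS, FT = np.meshgrid(fi, fi, indexing="ij")
off = np.abs(T - S) > 1e-12
inner = np.where(off, (FT - FS) ** 2 / np.where(off, np.abs(T - S), 1.0) ** (2 - 2 * H), 0.0)
I_formula = 0.5 * H * (1 - 2 * H) * inner.sum() / n ** 2 \
            + H * np.sum(fi ** 2 * (ti ** (2 * H - 1) + (1 - ti) ** (2 * H - 1))) / n

# (ii) Exact conditional variance of int phi(X^{(m)}) dY^{(m)} given X: it equals w^T C w,
#      where w_k is the average of phi(X^{(m)}) over the k-th dyadic interval (closed form
#      for phi = sin, as X^{(m)} is linear there) and C is the covariance matrix of the
#      dyadic increments of Y.
for m in range(4, Mf):
    d = np.arange(2 ** m + 1) / 2 ** m
    Xd = np.interp(d, grid, X)
    a, b = Xd[:-1], Xd[1:]
    safe = np.where(np.abs(b - a) > 1e-12, b - a, 1.0)
    w = np.where(np.abs(b - a) > 1e-12, (np.cos(a) - np.cos(b)) / safe, np.sin(0.5 * (a + b)))
    C = R(d[1:, None], d[None, 1:]) - R(d[1:, None], d[None, :-1]) \
        - R(d[:-1, None], d[None, 1:]) + R(d[:-1, None], d[None, :-1])
    print(m, w @ C @ w)
print("discretised (3.1):", I_formula)               # the level-m values should settle near this
```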

3.2 Lower tail estimate of the conditional variance

In what follows, we write \(I(X)\triangleq {\mathbb {E}}\big [\big (\int _{0}^{1}\phi (X_{t})dY_{t}\big )^{2}\big |X\big ]\) for the conditional variance computed in Proposition 3.1. It is not hard to convince oneself that the tail behaviour of the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is closely related to that of I(X). We shall make this point quantitatively precise in Sect. 3.3, where we also complete the proof of Theorem 1.1. In this subsection, we establish the following key lower estimate on the tail probability of I(X).

Proposition 3.6

For any \(\alpha >H+1/2\), there exist positive constants \(C_{1},C_{2}\) depending only on \(H,\alpha \) and \(\phi \), such that

$$\begin{aligned} {\mathbb {P}}(I(X)>\lambda )\geqslant C_{1}\exp \Bigg (-C_{2}\lambda ^{\frac{2\alpha }{1-2H}}\Bigg ) \end{aligned}$$
(3.6)

for all large \(\lambda \).

Before developing the details, it is helpful to first explain the key idea behind the proof. One can think of the conditional variance \(I(\cdot )\) as a positive functional on paths. For each given path \(h:[0,1]\rightarrow {\mathbb {R}}\), the function \(\lambda \mapsto I(\lambda h)\) possesses a suitable growth property as \(\lambda \rightarrow \infty \). The essential point in the argument is to construct a Cameron–Martin path h for which this function achieves a “maximal” growth rate. It turns out that such an h should have essentially the “worst” regularity within the Cameron–Martin space.

In what follows, we develop the major steps for proving Proposition 3.6.

3.2.1 Reduction of the double integral

Let us begin with a simple reduction of the problem. We introduce the following path functional

$$\begin{aligned} I(x)\triangleq \frac{H(1-2H)}{2}\int _{{\mathbb {R}}^{2}}\frac{\big (\phi (x_{t})\textbf{1}_{[0,1]}(t)-\phi (x_{s})\textbf{1}_{[0,1]}(s)\big ){}^{2}}{|t-s|^{2-2H}}dsdt \end{aligned}$$

where \(x:[0,1]\rightarrow {\mathbb {R}}\) is any continuous path, provided that the right hand side is finite. We also set

$$\begin{aligned} J(x)\triangleq \int _{0}^{1}\int _{0}^{t}\frac{\big (\phi (x_{t})-\phi (x_{s})\big ){}^{2}}{|t-s|^{2-2H}}dsdt. \end{aligned}$$
(3.7)

The lemma below reduces our problem to the study of the J-functional.

Lemma 3.7

One has

$$\begin{aligned} \sup _{x:[0,1]\rightarrow {\mathbb {R}}}\big |I(x)-H(1-2H)J(x)\big |<\infty . \end{aligned}$$

Proof

The integral I(x) can be decomposed into

$$\begin{aligned} I(x)=H(1-2H)\Bigg (J(x)+\int _{0}^{1}\Bigg (\int _{-\infty }^{0}+\int _{1}^{\infty }\Bigg )\frac{|\phi (x_{t})|^{2}}{|t-s|^{2-2H}}dsdt\Bigg ). \end{aligned}$$
(3.8)

It is easily seen that

$$\begin{aligned} \int _{0}^{1}\int _{-\infty }^{0}\frac{|\phi (x_{t})|^{2}}{|t-s|^{2-2H}}dsdt=\frac{1}{1-2H}\int _{0}^{1}t^{2H-1}|\phi (x_{t})|^{2}dt\leqslant C_{H}\Vert \phi \Vert _{\infty }^{2}<\infty . \end{aligned}$$

Similarly, the \(\int _{0}^{1}\int _{1}^{\infty }\)-integral in (3.8) is also bounded by a constant independent of x. The result thus follows. \(\square \)

Remark 3.8

As a consequence of Lemma 3.7, in order to prove Proposition 3.6 it is sufficient to establish the same inequality for J(X).

3.2.2 A Weierstrass-type Cameron–Martin path

The critical point in the argument is to construct a Cameron–Martin path \(h\in \bar{{{{\mathcal {H}}}}}\) such that the function \(\lambda \mapsto J(\lambda h)\) achieves the fastest possible growth rate as \(\lambda \rightarrow \infty .\) To motivate its construction, our ansatz is that h should have essentially the worst regularity within the Cameron–Martin space \(\bar{{{{\mathcal {H}}}}}\). It is a standard fact that \(\bar{{{{\mathcal {H}}}}}\) contains all \(\alpha \)-Hölder continuous paths h (with \(h(0)=0\)) whenever \(\alpha >H+1/2\) (cf. [4, Lemma 6.2]). In addition, there is a continuous embedding \(\bar{{{{\mathcal {H}}}}}\hookrightarrow C^{q\text {-var}}\) for all \(q>(H+1/2)^{-1}\) (cf. [9, Corollary 1]). These two properties together suggest that a natural candidate h should have Hölder regularity just slightly better than \(H+1/2\). A typical way of explicitly constructing Hölder continuous functions is through a Weierstrass-type series.

Let \(\alpha \in (0,1)\) be a given fixed exponent. We define the path \(h_{\alpha }:{\mathbb {R}}\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} h_{\alpha }(t)\triangleq \sum _{n=-\infty }^{\infty }2^{-n\alpha }\sin 2^{n}\pi t. \end{aligned}$$
(3.9)
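
A hedged numerical aside (not part of the argument): the series (3.9) can be explored by truncating it symmetrically, and the scaling property \(h_{\alpha }(2^{m}t)=2^{m\alpha }h_{\alpha }(t)\) recorded in Lemma 3.9 below can then be observed up to a small truncation error coming from the omitted low-frequency terms. In the sketch, \(\alpha =0.86\) (slightly above \(H+1/2\) for H = 0.35), the truncation level K and the evaluation grid are arbitrary choices.

```python
import numpy as np

def h_alpha(t, alpha, K=120):
    """Symmetric truncation of the series (3.9): sum_{n=-K}^{K} 2^{-n*alpha} sin(2^n*pi*t)."""
    n = np.arange(-K, K + 1)[:, None]
    return np.sum(2.0 ** (-n * alpha) * np.sin(2.0 ** n * np.pi * t[None, :]), axis=0)

alpha = 0.86                                   # e.g. H = 0.35 and alpha slightly above H + 1/2
t = np.linspace(0.0, 1.0, 1001)
lhs = h_alpha(2 * t, alpha)                    # h_alpha(2t)
rhs = 2 ** alpha * h_alpha(t, alpha)           # 2^alpha * h_alpha(t)
print(np.max(np.abs(lhs - rhs)), np.max(np.abs(rhs)))   # truncation error vs. size of the path
```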

The main properties of \(h_{\alpha }(t)\) that are essential to us are summarised in the following lemma.

Lemma 3.9

We write \(h_{\alpha }(t)=f_{\alpha }(t)+g_{\alpha }(t),\) where

$$\begin{aligned} f_{\alpha }(t)\triangleq \sum _{n=-\infty }^{0}2^{-n\alpha }\sin 2^{n}\pi t,\ g_{\alpha }(t)\triangleq \sum _{n=1}^{\infty }2^{-n\alpha }\sin 2^{n}\pi t. \end{aligned}$$

Then \(f_{\alpha }(t)\) is Lipschitz continuous and \(g_{\alpha }(t)\) is \(\alpha \)-Hölder continuous, more precisely,

$$\begin{aligned} \big |f_{\alpha }(t)-f_{\alpha }(s)\big |\leqslant L|t-s|,\ \big |g_{\alpha }(t)-g_{\alpha }(s)\big |\leqslant L'|t-s|^{\alpha }\ \ \ \forall s,t\in {\mathbb {R}} \end{aligned}$$
(3.10)

with some constants \(L,L'>0.\) In addition, \(g_{\alpha }(t)\) is 1-periodic on \({\mathbb {R}}\) and \(h_{\alpha }(t)\) satisfies the following scaling property:

$$\begin{aligned} h_{\alpha }(2^{m}t)= 2^{m\alpha }h_{\alpha }(t)\ \ \ \forall m\in {\mathbb {Z}},t\in {\mathbb {R}}. \end{aligned}$$
(3.11)

Proof

The 1-periodicity of \(g_{\alpha }(t)\) as well as the scaling property of \(h_{\alpha }(t)\) are straightforward from the definition. To check the \(\alpha \)-Hölder continuity of \(g_{\alpha }(t),\) due to periodicity we may assume that \(s<t\) with \(s,t\in [0,1].\) Then one has

$$\begin{aligned} g_{\alpha }(t)-g_{\alpha }(s)=2\sum _{n=1}^{\infty }2^{-n\alpha }\sin (2^{n-1}\pi (t-s))\cos (2^{n-1}\pi (t+s)), \end{aligned}$$
(3.12)

and in particular,

$$\begin{aligned} \big |g_{\alpha }(t)-g_{\alpha }(s)\big |&\leqslant 2\sum _{n=1}^{\infty }2^{-n\alpha }\big |\sin (2^{n-1}\pi (t-s))\big |. \end{aligned}$$

Let N be the unique non-negative integer such that \(2^{-(N+1)}<t-s\leqslant 2^{-N}\). It follows that

$$\begin{aligned} \big |g_{\alpha }(t)-g_{\alpha }(s)\big |\leqslant 2\Bigg (\sum _{n=1}^{N}2^{-n\alpha }\cdot 2^{n-1}\pi (t-s)+\sum _{n=N+1}^{\infty }2^{-n\alpha }\Bigg )\leqslant C2^{-N\alpha }\leqslant C'|t-s|^{\alpha }. \end{aligned}$$

Therefore, \(g_{\alpha }(t)\) is \(\alpha \)-Hölder continuous. The Lipschitz property of \(f_{\alpha }(t)\) follows from a similar estimate:

$$\begin{aligned} \big |f_{\alpha }(t)-f_{\alpha }(s)\big |\leqslant \sum _{n=-\infty }^{0}2^{-n\alpha }2^{n}\pi |t-s|=C''|t-s|, \end{aligned}$$

where \(C''\triangleq \pi \sum _{n=-\infty }^{0}2^{n(1-\alpha )}<\infty \) since \(\alpha \in (0,1)\). \(\square \)

We also need the following property, which quantifies the failure of Lipschitz continuity of \(g_{\alpha }(t)\) at \(t=0.\)

Lemma 3.10

One has

$$\begin{aligned} \underset{t\rightarrow 0^{+}}{{\overline{\lim }}}\frac{|g_{\alpha }(t)|}{t}=+\infty . \end{aligned}$$
(3.13)

Proof

We first recall the following elementary inequality (Jordan’s inequality)

$$\begin{aligned} \sin x\geqslant \frac{2}{\pi }x\ \ \ \forall x\in \big [0,\frac{\pi }{2}\big ], \end{aligned}$$
(3.14)

whose proof is straightforward. By taking \(t=2^{-m}\) (\(m\geqslant 1\)), it follows from (3.14) that

$$\begin{aligned} g_{\alpha }(2^{-m})&=\sum _{n=1}^{\infty }2^{-n\alpha }\sin 2^{n-m}\pi =\sum _{n=1}^{m-1}2^{-n\alpha }\sin 2^{n-m}\pi \\&\geqslant 2\sum _{n=1}^{m-1}2^{-n\alpha }2^{n-m}\geqslant 2\cdot 2^{-(m-1)\alpha }2^{(m-1)-m}=2^{-(m-1)\alpha }. \end{aligned}$$

As a consequence, one has

$$\begin{aligned} \frac{g_{\alpha }(2^{-m})}{2^{-m}}\geqslant \frac{2^{-(m-1)\alpha }}{2^{-m}}=2^{m(1-\alpha )+\alpha }\nearrow \infty \end{aligned}$$

as \(m\rightarrow \infty .\) The result thus follows. \(\square \)

Remark 3.11

To ensure that \(h_{\alpha }\in \bar{{{{\mathcal {H}}}}}\) (which is needed for the application of the Cameron–Martin transformation later on), one can only choose the exponent \(\alpha \) to be arbitrarily close to (but not exactly equal to) \(H+1/2\). This is the reason why the CLL exponent \(1+2H\) has to be sacrificed by an arbitrarily small amount in Theorem 1.1.

We will now apply Lemmas 3.9 and 3.10 to prove the following property, which serves as the key non-degeneracy property of \(h_{\alpha }\).

Lemma 3.12

$$\begin{aligned} \inf _{k\in {\mathbb {N}}}\sup _{v_{1},v_{2}\in [0,1]}|h_{\alpha }(v_{1}+k)-h_{\alpha }(v_{2}+k)|>0. \end{aligned}$$

Proof

Since \(g_{\alpha }\) is 1-periodic, one has

$$\begin{aligned} h_{\alpha }(v+k)-h_{\alpha }(k)=f_{\alpha }(v+k)-f_{\alpha }(k)+g_{\alpha }(v). \end{aligned}$$

According to Lemma 3.10, there exists \(v_{0}\in (0,1]\) such that

$$\begin{aligned} \big |g_{\alpha }(v_{0})\big |\geqslant 2Lv_{0}. \end{aligned}$$

Here L is the Lipschitz constant for \(f_{\alpha }\) (cf. (3.10)). As a result, one has

$$\begin{aligned} \big |h_{\alpha }(v_{0}+k)-h_{\alpha }(k)\big |&\geqslant \big |g_{\alpha }(v_{0})\big |-\big |f_{\alpha }(v_{0}+k)-f_{\alpha }(k)\big |\\&\geqslant 2Lv_{0}-Lv_{0}\geqslant Lv_{0}. \end{aligned}$$

The Lemma now follows by noting that L and \(v_{0}\) are independent of k. \(\square \)

3.2.3 Composition of \(\phi \) with a Weierstrass path

We will in fact need to apply the above lemmas to the composition of \(h_{\alpha }\) with \(\phi \). We first state an elementary fact.

Lemma 3.13

Suppose that \(\phi :[a,b]\rightarrow {\mathbb {R}}\) has two continuous derivatives. Then for all \(x,y_{1},y_{2}\) such that \(|y_{i}-x|\leqslant \frac{|\phi ^{\prime }(x)|}{4(\Vert \phi ^{\prime \prime }\Vert _{\infty }+1)}\) for \(i=1,2\), one has

$$\begin{aligned} |\phi (y_{1})-\phi (y_{2})|\geqslant \frac{1}{4}|\phi ^{\prime }(x)||y_{1}-y_{2}|. \end{aligned}$$

Proof

The Lemma is trivial if \(\phi '(x)=0,\) so we assume \(\phi ^{\prime }(x)\ne 0\). Taylor’s theorem with second order remainder gives

$$\begin{aligned} \phi (y_{1})-\phi (y_{2})-\phi ^{\prime }(y_{2})(y_{1}-y_{2})&=\int _{y_{2}}^{y_{1}}\left( \int _{y_{2}}^{u}\phi ^{\prime \prime }(v)\textrm{d}v\right) \textrm{d}u\\ \phi ^{\prime }(y_{2})-\phi ^{\prime }(x)&=\int _{x}^{y_{2}}{\phi ''}(v)\textrm{d}v \end{aligned}$$

It follows that

$$\begin{aligned} |\phi (y_{1})-\phi (y_{2})-\phi ^{\prime }(y_{2})(y_{1}-y_{2})|&\leqslant \frac{\Vert \phi ^{\prime \prime }\Vert _{\infty }}{2}|y_{1}-y_{2}|^{2}\\ |\phi ^{\prime }(y_{2})-\phi ^{\prime }(x)|&\leqslant ||\phi ^{\prime \prime }||_{\infty }|y_{2}-x|. \end{aligned}$$

In addition, the reverse triangle inequality implies

$$\begin{aligned} |\phi (y_{1})-\phi (y_{2})|&\geqslant |\phi ^{\prime }(y_{2})|\cdot |y_{1}-y_{2}|-\frac{\Vert \phi ^{\prime \prime }\Vert _{\infty }}{2}|y_{1}-y_{2}|^{2}\\ |\phi ^{\prime }(y_{2})|&\geqslant |\phi ^{\prime }(x)|-||\phi ^{\prime \prime }||_{\infty }|y_{2}-x|. \end{aligned}$$

If \(|y_{2}-x|\leqslant \frac{|\phi ^{\prime }(x)|}{2(\Vert \phi ^{\prime \prime }\Vert _{\infty }+1)}\), from the second inequality one obtains that

$$\begin{aligned} |\phi ^{\prime }(y_{2})|\geqslant \frac{1}{2}|\phi ^{\prime }(x)|, \end{aligned}$$

and therefore

$$\begin{aligned} |\phi (y_{1})-\phi (y_{2})|\geqslant \frac{|\phi ^{\prime }(x)|}{2}|y_{1}-y_{2}|-\frac{\Vert \phi ^{\prime \prime }\Vert _{\infty }}{2}|y_{1}-y_{2}|^{2}. \end{aligned}$$

We now apply the inequality \(|y_{1}-y_{2}|\leqslant \frac{|\phi ^{\prime }(x)|}{2(\Vert \phi ^{\prime \prime }\Vert _{\infty }+1)}\) to get that

$$\begin{aligned} |\phi (y_{1})-\phi (y_{2})|\geqslant \frac{|\phi ^{\prime }(x)|}{4}|y_{1}-y_{2}|. \end{aligned}$$

\(\square \)

Recall from (1.3) that there exists \(r>0\) such that

$$\begin{aligned} \eta :=\inf _{x\in {\mathbb {R}}}\sup _{y\in [x,x+r]}|\phi ^{\prime }(y)|>0 \end{aligned}$$

From now on we will set

$$\begin{aligned} \rho =\frac{2^{\alpha }r}{\inf _{k\in {\mathbb {N}}}\sup _{v_{1},v_{2}\in [0,1]}|h_{\alpha }(v_{1}+k)-h_{\alpha }(v_{2}+k)|}. \end{aligned}$$
(3.15)

Lemma 3.14

Let the constants L and \(L'\) be as in Lemma 3.9. Let \(\varepsilon \) be such that

$$\begin{aligned} L\varepsilon +L'\varepsilon ^{\alpha }<\frac{\eta }{4(\Vert \phi ^{\prime \prime }\Vert _{\infty }+1)\rho }. \end{aligned}$$

For all \(k\in {\mathbb {N}}\), there exists \(v_{k}\in [0,1]\) such that for all \(u\in (0,\varepsilon /2)\), \(v\in (v_{k}-\varepsilon /2,v_{k}+\varepsilon /2)\) and \(\mu \in (2^{-\alpha },1]\), one has

$$\begin{aligned} \big |\phi (\rho \mu h_{\alpha }(v+k))-\phi (\rho \mu h_{\alpha }(v+k-u))\big |\geqslant&\frac{\eta }{4}\big |\rho \mu h_{\alpha }(v+k)-\rho \mu h_{\alpha }(v+k-u)\big |. \end{aligned}$$

Proof

Note that for all \(k\in {\mathbb {N}}\), there exist \({\hat{v}}_{k},{\tilde{v}}_{k}\in [0,1]\) such that

$$\begin{aligned} \big |h_{\alpha }({\hat{v}}_{k}+k)-h_{\alpha }({\tilde{v}}_{k}+k)\big |=\sup _{v_{1},v_{2}\in [0,1]}\big |h_{\alpha }(v_{1}+k)-h_{\alpha }(v_{2}+k)\big |. \end{aligned}$$

From the definition of \(\rho ,\) one has

$$\begin{aligned} \rho \mu |h_{\alpha }({\hat{v}}_{k}+k)-h_{\alpha }({\tilde{v}}_{k}+k)|\geqslant r. \end{aligned}$$

By our non-degeneracy assumption (1.3) on \(\phi \), there must exist \(v_{k}\) between \({\hat{v}}_{k}\) and \({\tilde{v}}_{k},\) such that

$$\begin{aligned} |\phi ^{\prime }(\rho \mu h_{\alpha }(v_{k}+k))|\geqslant \eta . \end{aligned}$$

It follows from the Hölder continuity of \(h_{\alpha }\) (cf. Lemma 3.9) and the definitions of v, u and \(\varepsilon \) that

$$\begin{aligned} |\rho \mu h_{\alpha }(v_{k}+k)-\rho \mu h_{\alpha }(v+k-u)|&\leqslant \frac{\eta }{4(\Vert \phi ^{\prime \prime }\Vert _{\infty }+1)}\\ |\rho \mu h_{\alpha }(v_{k}+k)-\rho \mu h_{\alpha }(v+k)|&\leqslant \frac{\eta }{4(\Vert \phi ^{\prime \prime }\Vert _{\infty }+1)}. \end{aligned}$$

By Lemma 3.13, one arrives at

$$\begin{aligned} \big |\phi (\rho \mu h_{\alpha }(v+k))-\phi (\rho \mu h_{\alpha }(v+k-u))\big |\geqslant&\frac{\eta }{4}\big |\rho \mu h_{\alpha }(v+k)-\rho \mu h_{\alpha }(v+k-u)\big |. \end{aligned}$$

\(\square \)

3.2.4 The core step: growth of \(\lambda \mapsto J(\lambda h_{\alpha })\)

Now we assume that \(\alpha \in (H+1/2,1)\) is fixed and define the path \(h_{\alpha }(t)\) by (3.9). Recall from Lemma 3.9 that \(h_{\alpha }=f_{\alpha }+g_{\alpha }\), where \(f_{\alpha }\) is Lipschitz continuous and \(g_{\alpha }\) is 1-periodic and \(\alpha \)-Hölder continuous. In particular, \(h_{\alpha }|_{[0,1]}\in \bar{{{{\mathcal {H}}}}}\). We also recall that J is the path functional defined by (3.7). Below is the key lemma for the proof of Proposition 3.6.

Lemma 3.15

Let \(\rho \) be the constant defined by (3.15). There exists \(C>0\) depending on \(H,\phi \) and \(\alpha ,\) such that

$$\begin{aligned} J(\lambda \rho h_{\alpha })\geqslant C\lambda ^{\frac{1-2H}{\alpha }} \end{aligned}$$

for all \(\lambda >1\).

The rest of this subsection is devoted to the proof of Lemma 3.15. In what follows, \(\lambda >1\) is fixed. Let N be the unique positive integer such that \(2^{(N-1)\alpha }<\lambda \leqslant 2^{N\alpha }.\) Set \(\mu \triangleq 2^{-N\alpha }\lambda \) and note that \(2^{-\alpha }<\mu \leqslant 1.\)

First of all, by the definition of J and the scaling property (3.11) of \(h_{\alpha }\), one can write

$$\begin{aligned} J(\lambda \rho h_{\alpha })&=\int _{0}^{1}\int _{0}^{t}\frac{\big |\phi (\rho \mu 2^{N\alpha }h_{\alpha }(t))-\phi (\rho \mu 2^{N\alpha }h_{\alpha }(s))\big |^{2}}{(t-s)^{2-2H}}dsdt\\&=\int _{0}^{1}\int _{0}^{t}\frac{\big |\phi (\rho \mu h_{\alpha }(2^{N}t))-\phi (\rho \mu h_{\alpha }(2^{N}s))\big |^{2}}{(t-s)^{2-2H}}dsdt\\&=\int _{0}^{1}\int _{0}^{t}\frac{\big |\phi (\rho \mu h_{\alpha }(2^{N}t))-\phi (\rho \mu h_{\alpha }(2^{N}(t-w)))\big |^{2}}{w^{2-2H}}dwdt. \end{aligned}$$

By applying the change of variables \(v=2^{N}t,u=2^{N}w\) and using Fubini’s theorem, one further has

$$\begin{aligned} J(\lambda \rho h_{\alpha })&=2^{-2NH}\int _{0}^{2^{N}}\int _{0}^{v}\frac{\big |\phi (\rho \mu h_{\alpha }(v))-\phi (\rho \mu h_{\alpha }(v-u))\big |^{2}}{u^{2-2H}}dudv\\&=2^{-2NH}\int _{0}^{2^{N}}\frac{du}{u^{2-2H}}\int _{u}^{2^{N}}\big |\phi (\rho \mu h_{\alpha }(v))-\phi (\rho \mu h_{\alpha }(v-u))\big |^{2}dv. \end{aligned}$$

Due to positivity of the integrand, it follows that

$$\begin{aligned}&J(\lambda \rho h_{\alpha }) \geqslant 2^{-2NH}\int _{0}^{1}\frac{du}{u^{2-2H}}\int _{1}^{2^{N}}\big |\phi (\rho \mu h_{\alpha }(v))-\phi (\rho \mu h_{\alpha }(v-u))\big |^{2}dv\nonumber \\&=2^{-2NH}\int _{0}^{1}\frac{du}{u^{2-2H}}\sum _{k=1}^{2^{N}-1}\int _{k}^{k+1}\big |\phi (\rho \mu h_{\alpha }(v))-\phi (\rho \mu h_{\alpha }(v-u))\big |^{2}dv\nonumber \\&=2^{-2NH}\sum _{k=1}^{2^{N}-1}\int _{0}^{1}\frac{du}{u^{2-2H}}\int _{0}^{1}\big |\phi (\rho \mu h_{\alpha }(v+k))-\phi (\rho \mu h_{\alpha }(v+k-u))\big |^{2}dv. \end{aligned}$$
(3.16)

The crucial point is to demonstrate that the above double integral is uniformly positive with respect to k. By Lemma 3.14, there exist an \(\varepsilon >0\) and, for each \(k\in {\mathbb {N}},\) a \(v_{k}\in [0,1]\) such that for all \(u\in (0,\varepsilon /2)\), \(v\in (v_{k}-\varepsilon /2,v_{k}+\varepsilon /2)\) and \(\mu \in (2^{-\alpha },1]\), one has

$$\begin{aligned} \big |\phi (\rho \mu h_{\alpha }(v+k))-\phi (\rho \mu h_{\alpha }(v+k-u))\big |\geqslant&\frac{\eta }{4}\big |\rho \mu h_{\alpha }(v+k)-\rho \mu h_{\alpha }(v+k-u)\big |. \end{aligned}$$

For later purposes, we shall make \(\varepsilon \) smaller if necessary so that \(\varepsilon =2^{-M}\) for a suitable positive integer M. With such a choice of \(\varepsilon ,\) it follows that

$$\begin{aligned} J(\lambda \rho h_{\alpha })\geqslant \frac{\eta ^{2}\rho \mu }{16}2^{-2NH}\sum _{k=1}^{2^{N}-1}\int _{0}^{\varepsilon }\frac{du}{u^{2-2H}}\int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\big |h_{\alpha }(v+k)-h_{\alpha }(v+k-u)\big |{}^{2}dv.\nonumber \\ \end{aligned}$$
(3.17)

In order to estimate the dv-integral, we consider the decomposition \(g_{\alpha }=g_{\alpha }^{0}+g_{\alpha }^{M}\) where M is the number in the definition of \(\varepsilon \) and

$$\begin{aligned} g_{\alpha }^{0}(v)\triangleq \sum _{n=1}^{M-1}2^{-n\alpha }\sin 2^{n}\pi v,\ g_{\alpha }^{M}(v)\triangleq \sum _{n=M}^{\infty }2^{-n\alpha }\sin 2^{n}\pi v. \end{aligned}$$

Under this decomposition, one can write

$$\begin{aligned}&h_{\alpha }(v+k)-h_{\alpha }(v+k-u) =f_{\alpha }(v+k)-f_{\alpha }(v+k-u)\\&\quad +g_{\alpha }^{0}(v)-g_{\alpha }^{0}(v-u)+g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u). \end{aligned}$$

Note that both \(f_{\alpha }\) and \(g_{\alpha }^{0}\) are Lipschitz:

$$\begin{aligned} \big |f_{\alpha }(v+k)-f_{\alpha }(v+k-u)+g_{\alpha }^{0}(v)-g_{\alpha }^{0}(v-u)\big |\leqslant C_{2}u. \end{aligned}$$

By using the simple inequality \((a+b)^{2}\geqslant a^{2}/2-b^{2}\), it follows that

$$\begin{aligned} J(\lambda \rho h_{\alpha })&\geqslant \frac{\eta ^{2}\rho \mu }{4}2^{-2NH}\sum _{k=1}^{2^{N}-1}\int _{0}^{\varepsilon }\frac{du}{u^{2-2H}}\int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\Bigg (\frac{1}{2}(g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u))^{2}-C_{2}^{2}u^{2}\Bigg )dv\nonumber \\&=\frac{\eta ^{2}\rho \mu }{4}2^{-2NH}\sum _{k=1}^{2^{N}-1}\int _{0}^{\varepsilon }\frac{du}{u^{2-2H}}\Bigg (\frac{1}{2}\int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\big (g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u)\big ){}^{2}dv-2\varepsilon C_{2}^{2}u^{2}\Bigg ). \end{aligned}$$
(3.18)

To proceed further, one needs to lower bound the inner dv-integral on the right hand side of (3.18). This is done in the lemma below.

Lemma 3.16

Let \(v_{k}\) and \(\varepsilon =2^{-M}\) be chosen as before. Then one has

$$\begin{aligned} \int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\big (g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u)\big ){}^{2}dv\geqslant 4^{\alpha }\varepsilon u^{2\alpha } \end{aligned}$$
(3.19)

for all \(u\in (0,\varepsilon )\).

Proof

By simple trigonometric identities, one has

$$\begin{aligned} g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u)=2\sum _{n=M}^{\infty }2^{-n\alpha }\sin (2^{n-1}\pi u)\cos 2^{n}\pi \big (v-\frac{u}{2}\big ). \end{aligned}$$

It follows that

$$\begin{aligned}&\big (g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u)\big )^{2}\nonumber \\&\quad =4\sum _{n\geqslant M}2^{-2n\alpha }\sin ^{2}(2^{n-1}\pi u)\cos ^{2}2^{n}\pi \big (v-\frac{u}{2}\big )\nonumber \\&\qquad +\,4\sum _{m\ne n\geqslant M}2^{-(m+n)\alpha }\sin (2^{m-1}\pi u)\sin (2^{n-1}\pi u)\cos 2^{m}\pi \big (v-\frac{u}{2}\big )\cos 2^{n}\pi \big (v-\frac{u}{2}\big )\nonumber \\&\quad =2\sum _{n\geqslant M}2^{-2n\alpha }\sin ^{2}2^{n-1}\pi u+2\sum _{n\geqslant M}2^{-2n\alpha }\sin ^{2}(2^{n-1}\pi u)\cos 2^{n}\pi (2v-u)\nonumber \\&\qquad +\,4\sum _{m\ne n\geqslant M}2^{-(m+n)\alpha }\sin (2^{m-1}\pi u)\sin (2^{n-1}\pi u)\cos 2^{m}\pi \big (v-\frac{u}{2}\big )\cos 2^{n}\pi \big (v-\frac{u}{2}\big ). \end{aligned}$$
(3.20)

When one integrates the v-variable over \((v_{k}-\varepsilon ,v_{k}+\varepsilon )\), the second and last summations vanish. Indeed,

$$\begin{aligned} \int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\cos 2^{n}\pi (2v-u)dv=\frac{1}{2^{n+1}\pi }\Bigg (\sin 2^{n+1}\pi \Bigg (v_{k}-\frac{u}{2}+\varepsilon \Bigg )-\sin 2^{n+1}\pi \Bigg (v_{k}-\frac{u}{2}-\varepsilon \Bigg )\Bigg ). \end{aligned}$$

Recall that \(\varepsilon =2^{-M}\) and \(n\geqslant M\). It is easily seen that \(2^{n+1}\pi \cdot 2\varepsilon \in 2\pi {\mathbb {Z}}\) and thus the last expression is zero. Similarly, the dv-integral of the last summation on the right hand side of (3.20) vanishes as well. As a consequence, one has

$$\begin{aligned} \int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\big (g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u)\big )^{2}dv=4\varepsilon \sum _{n\geqslant M}2^{-2n\alpha }\sin ^{2}2^{n-1}\pi u. \end{aligned}$$
(3.21)

To lower bound the last expression, given any \(u\in (0,\varepsilon )\) let \(K\geqslant M\) be the unique positive integer such that \(2^{-K-1}\leqslant u<2^{-K}.\) Then one has

$$\begin{aligned}&\sum _{n\geqslant M}2^{-2n\alpha }\sin ^{2}2^{n-1}\pi u \geqslant 2^{-2K\alpha }\sin ^{2}2^{K-1}\pi u\geqslant 2^{-2K\alpha }(2^{K}u)^{2}\ \ \ (\text {by Jordan's inequality})\\&=2^{2K(1-\alpha )}u^{2}\geqslant \frac{1}{4^{1-\alpha }}u^{2\alpha }. \end{aligned}$$

By substituting this back into (3.21), one obtains that

$$\begin{aligned} \int _{v_{k}-\varepsilon }^{v_{k}+\varepsilon }\big (g_{\alpha }^{M}(v)-g_{\alpha }^{M}(v-u)\big )^{2}dv\geqslant 4^{\alpha }\varepsilon u^{2\alpha }. \end{aligned}$$

This gives the desired estimate. \(\square \)

Returning to the main estimate, let \(\delta <\varepsilon \) be another parameter to be chosen. By substituting the estimate (3.19) into (3.18) and further restricting the du-integral to \((0,\delta )\), one finds that

$$\begin{aligned} J(\lambda \rho h_{\alpha })&\geqslant \frac{\eta ^{2}\rho \mu }{4}2^{-2NH}\sum _{k=1}^{2^{N}-1}\int _{0}^{\delta }\frac{du}{u^{2-2H}}\Bigg (\frac{1}{2}4^{\alpha }\varepsilon u^{2\alpha }-2\varepsilon C_{2}^{2}u^{2}\Bigg )\\&=\frac{\eta ^{2}\rho \mu }{4}2^{-2NH}\cdot (2^{N}-1)\cdot \Bigg (\frac{4^{\alpha }\varepsilon }{2(2\alpha +2H-1)}\delta ^{2\alpha +2H-1}-\frac{2\varepsilon C_{2}^{2}}{2H+1}\delta ^{2H+1}\Bigg ). \end{aligned}$$

Since \(\alpha <1\), it is clear that one can choose \(\delta \) to be small enough (and then fixed) so that

$$\begin{aligned} \frac{4^{\alpha }\varepsilon }{2(2\alpha +2H-1)}\delta ^{2\alpha +2H-1}-\frac{2\varepsilon C_{2}^{2}}{2H+1}\delta ^{2H+1}=:C_{3}>0. \end{aligned}$$

As a consequence,

$$\begin{aligned} J(\lambda \rho h_{\alpha })\geqslant \frac{\eta ^{2}\rho \mu }{4}2^{-2NH}\cdot (2^{N}-1)\cdot C_{3}\geqslant C_{4}2^{N(1-2H)}, \end{aligned}$$

where \(C_{4}\) is a constant independent of N. Recalling the definition of N at the very beginning, one concludes that

$$\begin{aligned} J(\lambda \rho h_{\alpha })\geqslant C_{4}\lambda ^{\frac{1-2H}{\alpha }}. \end{aligned}$$

The proof of Lemma 3.15 is now complete.
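
Before moving on, here is a hedged numerical illustration of the growth rate just proved (not part of the argument): it evaluates a crude Riemann-sum approximation of \(J(\lambda h_{\alpha })\) for a truncated version of \(h_{\alpha }\) (the factor \(\rho \) only rescales \(\lambda \) and is dropped) and prints the ratio \(J(\lambda h_{\alpha })/\lambda ^{\frac{1-2H}{\alpha }}\), which should remain of roughly constant order, in line with the lower bound. The values H = 0.35, \(\alpha =0.86\), \(\phi =\sin \), the grid size and the range of \(\lambda \) are arbitrary choices, and the quadrature is coarse, so only orders of magnitude are meaningful.

```python
import numpy as np

H, alpha, n = 0.35, 0.86, 2000                 # illustrative choices; alpha slightly above H + 1/2
phi = np.sin                                   # a C_b^infinity function satisfying (1.3)

def h_alpha(t, K=120):
    """Symmetric truncation of the Weierstrass-type series (3.9)."""
    m = np.arange(-K, K + 1)[:, None]
    return np.sum(2.0 ** (-m * alpha) * np.sin(2.0 ** m * np.pi * t[None, :]), axis=0)

t = (np.arange(n) + 0.5) / n                   # midpoint grid on [0, 1]
S, T = np.meshgrid(t, t, indexing="ij")
lower = S < T                                  # the integration region 0 <= s < t <= 1 in (3.7)
kernel = np.where(lower, np.where(lower, T - S, 1.0) ** (2 * H - 2), 0.0)
h = h_alpha(t)

for lam in [1, 2, 4, 8, 16, 32]:
    f = phi(lam * h)
    num = (f[None, :] - f[:, None]) ** 2       # (phi(lam h_t) - phi(lam h_s))^2 on the grid
    J = np.sum(num * kernel) / n ** 2          # Riemann-sum approximation of J(lam h_alpha)
    print(lam, J, J / lam ** ((1 - 2 * H) / alpha))   # the last ratio should stay bounded below
```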

3.2.5 Localisation of J(X)

To ease notation, we simply rewrite the conclusion of Lemma 3.15 as

$$\begin{aligned} J(\lambda h_{\alpha })\geqslant C\lambda ^{\frac{1-2H}{\alpha }}. \end{aligned}$$
(3.22)

This is seen by absorbing \(\rho \) into the parameter \(\lambda \) and adjusting the constant C. In order to establish the main estimate (3.6) for J(X), we shall localise X in a tubular neighbourhood of \(\lambda h_{\alpha }\) and make use of the Cameron–Martin transformation. To this end, we first establish the following continuity estimate for J(x). Recall that \(\alpha \in (H+1/2,1)\) is fixed.

Lemma 3.17

Let \(\delta ,\sigma ,\beta \) be three parameters such that

$$\begin{aligned} 0<\delta<2H-\frac{1}{2},\ \max \Bigg \{\frac{1+\delta -3H}{\alpha },0\Bigg \}<\sigma<1,\ \frac{1-2H}{2\alpha }<\beta <1. \end{aligned}$$
(3.23)

Then there exists a constant \(C>0\) depending on \(H,\alpha ,\delta ,\sigma ,\beta \) and \(\phi \), such that

$$\begin{aligned} \big |J(u)-J(v)\big |\leqslant C\big (\Vert u-v\Vert _{H-\delta }^{2}+\Vert u-v\Vert _{H-\delta }\Vert v\Vert _{\alpha }^{\sigma }+\Vert u-v\Vert _{H-\delta }^{\beta }\Vert v\Vert _{\alpha }^{2\beta }\big )\nonumber \\ \end{aligned}$$
(3.24)

for all continuous paths \(u,v:[0,1]\rightarrow {\mathbb {R}}\) satisfying \(\Vert u-v\Vert _{H-\delta }\leqslant 1.\)

Proof

Given two paths \(u,v:[0,1]\rightarrow {\mathbb {R}}\), one has

$$\begin{aligned} J(u)-J(v)&=\int _{0}^{1}\int _{0}^{t}\frac{1}{(t-s)^{2-2H}}\big ((\phi (u_{t})-\phi (u_{s}))-(\phi (v_{t})-\phi (v_{s}))\big )\\&\quad \times \big ((\phi (u_{t})-\phi (u_{s}))+(\phi (v_{t})-\phi (v_{s}))\big )dsdt \end{aligned}$$

By writing

$$\begin{aligned} \phi (u_{t})-\phi (u_{s})=\int _{0}^{1}\phi '(u_{s}+\theta u_{s,t})u_{s,t}d\theta \ \ \ (u_{s,t}\triangleq u_{t}-u_{s}) \end{aligned}$$

and similarly for \(\phi (v_{t})-\phi (v_{s})\), it is easily checked that

$$\begin{aligned} J(u)-J(v)=\int _{0}^{1}\int _{0}^{t}\frac{(A_{s,t}+B_{s,t})(A_{s,t}+B_{s,t}+2C_{s,t})}{(t-s)^{2-2H}}dsdt, \end{aligned}$$
(3.25)

where

$$\begin{aligned} A_{s,t}&\triangleq \Bigg (\int _{0}^{1}\phi '(u_{s}+\theta u_{s,t})d\theta \Bigg )(u_{s,t}-v_{s,t}),\\ B_{s,t}&\triangleq \Bigg (\int _{0}^{1}\phi '(u_{s}+\theta u_{s,t})d\theta -\int _{0}^{1}\phi '(v_{s}+\theta v_{s,t})d\theta \Bigg )v_{s,t},\\ C_{s,t}&\triangleq \Bigg (\int _{0}^{1}\phi '(v_{s}+\theta v_{s,t})d\theta \Bigg )v_{s,t} \end{aligned}$$

respectively. We now expand the product inside the integral in (3.25) and estimate each term separately.

The AA-term. One has

$$\begin{aligned} J_{AA}&\triangleq \int _{0}^{1}\int _{0}^{t}\frac{A_{s,t}^{2}}{(t-s)^{2-2H}}dsdt\leqslant \Vert \phi '\Vert _{\infty }^{2}\int _{0}^{1}\int _{0}^{t}\frac{|u_{s,t}-v_{s,t}|^{2}}{(t-s)^{2-2H}}dsdt\\&\leqslant \Vert \phi '\Vert _{\infty }^{2}\Vert u-v\Vert _{H-\delta }^{2}\int _{0}^{1}\int _{0}^{t}(t-s)^{4H-2-2\delta }dsdt\\&=C_{H,\delta }\Vert \phi '\Vert _{\infty }^{2}\Vert u-v\Vert _{H-\delta }^{2}, \end{aligned}$$

provided that \(\delta \) is chosen such that \(H>\frac{1+2\delta }{4}\).

The AB- and AC-terms. Here we make use of the assumption that \(\Vert u-v\Vert _{H-\delta }\leqslant 1\). In particular,

$$\begin{aligned} |u_{s,t}-v_{s,t}|\leqslant \Vert u-v\Vert _{H-\delta }|t-s|^{H-\delta }\leqslant 1\ \ \ \forall s,t\in [0,1] \end{aligned}$$

and thus \(A_{s,t}\) is uniformly bounded. Since

$$\begin{aligned} (\phi (u_{t})-\phi (u_{s}))-(\phi (v_{t})-\phi (v_{s}))=A_{s,t}+B_{s,t}, \end{aligned}$$

it follows that \(B_{s,t}\) is also uniformly bounded (say, by \(C_{1}\)). Let \(\sigma \) be a fixed number such that

$$\begin{aligned} \max \big \{\frac{1+\delta -3H}{\alpha },0\big \}<\sigma <1. \end{aligned}$$
(3.26)

Then one has

$$\begin{aligned} J_{AB}&\triangleq \int _{0}^{1}\int _{0}^{t}\frac{|A_{s,t}|\cdot |B_{s,t}|}{(t-s)^{2-2H}}dsdt\nonumber \\&\leqslant \int _{0}^{1}\int _{0}^{t}\frac{\Vert \phi '\Vert _{\infty }|u_{s,t}-v_{s,t}|\cdot C_{1}^{1-\sigma }|B_{s,t}|^{\sigma }}{(t-s)^{2-2H}}dsdt\nonumber \\&\leqslant C_{1}^{1-\sigma }\int _{0}^{1}\int _{0}^{t}\frac{\Vert \phi '\Vert _{\infty }|u_{s,t}-v_{s,t}|\cdot (2\Vert \phi '\Vert _{\infty })^{\sigma }|v_{s,t}|^{\sigma }}{(t-s)^{2-2H}}dsdt\nonumber \\&\leqslant C_{2}\Vert u-v\Vert _{H-\delta }\Vert v\Vert _{\alpha }^{\sigma }\int _{0}^{1}\int _{0}^{t}(t-s)^{H-\delta +\sigma \alpha +2H-2}dsdt\nonumber \\&=C_{3}\Vert u-v\Vert _{H-\delta }\Vert v\Vert _{\alpha }^{\sigma }, \end{aligned}$$
(3.27)

where the last integral is convergent due to the constraint (3.26) on \(\sigma \). The estimate of the AC-term is the same as (3.27).

The BB- and BC-terms. By the definitions of \(B_{s,t}\) and \(C_{s,t}\), one has

$$\begin{aligned} \big |B_{s,t}\big |&\leqslant C\Vert \phi ''\Vert _{\infty }\Vert u-v\Vert _{H-\delta }\Vert v\Vert _{\alpha }|t-s|^{\alpha },\\ \big |C_{s,t}\big |&\leqslant C\Vert \phi '\Vert _{\infty }\Vert v\Vert _{\alpha }|t-s|^{\alpha }. \end{aligned}$$

We again observe that \(B_{s,t}\) and \(C_{s,t}\) are both uniformly bounded under the assumption \(\Vert u-v\Vert _{H-\delta }\leqslant 1\). Let \(\beta \) be a fixed number such that

$$\begin{aligned} \frac{1-2H}{2\alpha }<\beta <1. \end{aligned}$$
(3.28)

By an argument similar to the one leading to (3.27), one has

$$\begin{aligned} J_{BB}+J_{BC}&\triangleq \int _{0}^{1}\int _{0}^{t}\frac{|B_{s,t}|\cdot (|B_{s,t}|+|C_{s,t}|)}{(t-s)^{2-2H}}dsdt\\&\leqslant C_{1}\Vert u-v\Vert _{H-\delta }^{\beta }\Vert v\Vert _{\alpha }^{2\beta }\int _{0}^{1}\int _{0}^{t}(t-s)^{2\beta \alpha +2H-2}dsdt\\&=C_{2}\Vert u-v\Vert _{H-\delta }^{\beta }\Vert v\Vert _{\alpha }^{2\beta }, \end{aligned}$$

where the finiteness of the last integral follows from the constraint (3.28) on \(\beta .\)

By putting the above results together, one obtains the desired continuity estimate (3.24). \(\square \)

As an application of Lemma 3.17, the next result enables one to localise the tail event for J(X) to a tubular neighbourhood of \(\lambda h_{\alpha }\).

Lemma 3.18

Let \(\delta ,\varepsilon ,\sigma ,\beta \) be given parameters such that \(\delta \in (0,2H-1/2)\), \(\varepsilon \in (0,1)\) and

$$\begin{aligned} \max \Bigg \{\frac{1+\delta -3H}{\alpha },0\Bigg \}<\sigma<\frac{1-2H}{\alpha },\ \frac{1-2H}{2\alpha }<\beta <\frac{1-2H}{(2-\varepsilon )\alpha }. \end{aligned}$$
(3.29)

Then there exists a positive constant C independent of \(\lambda \), such that

$$\begin{aligned} \big \{\Vert X-\lambda h_{\alpha }\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big \}\subseteq \big \{ J(X)>C\lambda ^{\frac{1-2H}{\alpha }}\big \} \end{aligned}$$
(3.30)

for all large \(\lambda \).

Proof

We apply Lemma 3.17 to the case when \(u=X,v=\lambda h_{\alpha }\). Let \(\delta ,\sigma ,\beta \) be parameters satisfying the constraint (3.23). Suppose that \(\Vert X-\lambda h_{\alpha }\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\) (in particular, \(\leqslant 1\)). It follows from (3.22) and (3.24) that

$$\begin{aligned} J(X)&\geqslant J(\lambda h_{\alpha })-\big |J(X)-J(\lambda h_{\alpha })\big |\nonumber \\&\geqslant C_{1}\lambda ^{\frac{1-2H}{\alpha }}-C_{2}\big (1+\lambda ^{\sigma }\Vert h_{\alpha }\Vert _{\alpha }^{\sigma }+\lambda ^{-\varepsilon \beta }\lambda ^{2\beta }\Vert h_{\alpha }\Vert _{\alpha }^{2\beta }\big ). \end{aligned}$$
(3.31)

To ensure that (3.31) is bounded from below by \(C_{3}\lambda ^{\frac{1-2H}{\alpha }},\) one only needs to further impose that

$$\begin{aligned} \sigma<\frac{1-2H}{\alpha },\ (2-\varepsilon )\beta <\frac{1-2H}{\alpha }, \end{aligned}$$

which leads to the constraint (3.29) (in combination with (3.23)). The relation (3.30) thus follows. \(\square \)

In order to prove the lower estimate (3.6) for J(X), we also need the following small ball inequality for fBM under Hölder norm (cf. [10, Theorem 2.2]).

Lemma 3.19

Given any \(\delta \in (0,H)\), there exists a positive constant \(C_{\delta }>0\) such that

$$\begin{aligned} {\mathbb {P}}\big (\Vert X\Vert _{H-\delta }\leqslant x^{\delta }\big )\geqslant e^{-C_{\delta }/x}\ \ \ \forall x\in (0,1]. \end{aligned}$$

We are now in a position to prove Proposition 3.6.

Proof of Proposition 3.6

Let \(\delta ,\varepsilon ,\sigma ,\beta \) be given parameters satisfying the constraints in Lemma 3.18. According to the relation (3.30),

$$\begin{aligned} {\mathbb {P}}\big (J(X)>C\lambda ^{\frac{1-2H}{\alpha }}\big )\geqslant {\mathbb {P}}\big (\Vert X-\lambda h_{\alpha }\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big ) \end{aligned}$$

for all large \(\lambda .\) On the other hand, since \(h_{\alpha }\in \bar{{{{\mathcal {H}}}}}\), by the Cameron-Martin theorem (cf. [17, Theorem 1.3]) one has

$$\begin{aligned} {\mathbb {P}}\big (\Vert X-\lambda h_{\alpha }\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big )&={\mathbb {E}}\big [\exp \big (\lambda {{{\mathcal {I}}}}_{1}(l_{\alpha })-\frac{1}{2}\lambda ^{2}\Vert l_{\alpha }\Vert _{{{{\mathcal {H}}}}}^{2}\big );\Vert X\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big ]\\&\geqslant e^{-\lambda ^{2}\Vert l_{\alpha }\Vert _{{{{\mathcal {H}}}}}^{2}/2}{\mathbb {P}}\big (\Vert X\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon },{{{\mathcal {I}}}}_{1}(l_{\alpha })>0\big ), \end{aligned}$$

where \(l_{\alpha }\in {{{\mathcal {H}}}}\) is the element corresponding to the intrinsic Cameron-Martin path \(h_{\alpha }.\) Since

$$\begin{aligned} (\Vert X\Vert _{H-\delta },{{{\mathcal {I}}}}_{1}(l_{\alpha })){\mathop {=}\limits ^{\text {law}}}(\Vert X\Vert _{H-\delta },-{{{\mathcal {I}}}}_{1}(l_{\alpha })), \end{aligned}$$

and since \({\mathbb {P}}\big ({{{\mathcal {I}}}}_{1}(l_{\alpha })=0\big )=0\) (\({{{\mathcal {I}}}}_{1}(l_{\alpha })\) being a centred Gaussian random variable with variance \(\Vert l_{\alpha }\Vert _{{{{\mathcal {H}}}}}^{2}>0\)), one has

$$\begin{aligned}&{\mathbb {P}}\big (\Vert X\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon },{{{\mathcal {I}}}}_{1}(l_{\alpha })>0\big ) \\&\quad ={\mathbb {P}}\big (\Vert X\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon },{{{\mathcal {I}}}}_{1}(l_{\alpha })<0\big )\\&\quad =\frac{1}{2}{\mathbb {P}}\big (\Vert X\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big ). \end{aligned}$$

In addition, according to Lemma 3.19 applied with \(x=\lambda ^{-\varepsilon /\delta }\) (so that \(x^{\delta }=\lambda ^{-\varepsilon }\) and \(x\in (0,1]\) for \(\lambda \geqslant 1\)),

$$\begin{aligned} {\mathbb {P}}\big (\Vert X\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big )\geqslant e^{-C_{\delta }\lambda ^{\varepsilon /\delta }}\ \ \ \forall \lambda \geqslant 1. \end{aligned}$$

As a consequence, one finds that

$$\begin{aligned} {\mathbb {P}}\big (\Vert X-\lambda h_{\alpha }\Vert _{H-\delta }\leqslant \lambda ^{-\varepsilon }\big )\geqslant \frac{1}{2}e^{-\lambda ^{2}\Vert l_{\alpha }\Vert _{{{{\mathcal {H}}}}}^{2}/2}e^{-C_{\delta }\lambda ^{\varepsilon /\delta }}. \end{aligned}$$

Now we further choose \(\varepsilon \) such that \(\varepsilon /\delta <2\); such a choice is possible since \(\delta >0\) and since the admissible window for \(\beta \) in (3.29) is nonempty for every \(\varepsilon \in (0,1)\). It follows that

$$\begin{aligned} {\mathbb {P}}\big (J(X)>C\lambda ^{\frac{1-2H}{\alpha }}\big )\geqslant C_{1}e^{-C_{2}\lambda ^{2}} \end{aligned}$$

for all large \(\lambda .\) By renaming \(C\lambda ^{\frac{1-2H}{\alpha }}\) as \(\lambda \) (that is, substituting \(\lambda \mapsto (\lambda /C)^{\frac{\alpha }{1-2H}}\) in the previous estimate), one arrives at the tail estimate

$$\begin{aligned} {\mathbb {P}}\big (J(X)>\lambda \big )\geqslant C_{1}e^{-C_{3}\lambda ^{\frac{2\alpha }{1-2H}}}, \end{aligned}$$

which then leads to (3.6) in view of Remark 3.8.\(\square \)

3.3 Lower tail estimate of the rough line integral

We now complete the last step of the proof of Theorem 1.1. The point is to see how the tail estimate of I(X) in Proposition 3.6 translates into a corresponding tail estimate for the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\).

Proof of Theorem 1.1

We begin by conditioning on X:

$$\begin{aligned} {\mathbb {P}}\Bigg (\Bigg |\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg |>\lambda \Bigg )={\mathbb {E}}\Bigg [{\mathbb {P}}\Bigg (\Bigg |\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg |>\lambda \Bigg |X\Bigg )\Bigg ]. \end{aligned}$$

As seen in Sect. 3.1, conditional on X the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) is Gaussian with mean zero and variance I(X). As a result, one has

$$\begin{aligned} {\mathbb {E}}\Bigg [{\mathbb {P}}\Bigg (\Bigg |\int _{0}^{1}\phi (X_{t})dY_{t}\Bigg |>\lambda \Bigg |X\Bigg )\Bigg ]={\mathbb {E}}^{X}\Bigg [{\mathbb {P}}^{Z}\Bigg (|Z|>\frac{\lambda }{\sqrt{I(X)}}\Bigg )\Bigg ]\geqslant C_{1}{\mathbb {E}}^{X}\Bigg [e^{-C_{2}\frac{\lambda ^{2}}{I(X)}}\Bigg ]. \end{aligned}$$

Here Z is a standard Gaussian random variable that is independent of X and \({\mathbb {P}}^{Z}\) denotes the probability with respect to the randomness of Z. To reach the last inequality, we have used the following simple lower bound for the Gaussian tail:

$$\begin{aligned} {\mathbb {P}}(|Z|>r)\geqslant C_{1}e^{-C_{2}r^{2}}\ \ \ \forall r>0. \end{aligned}$$
(3.32)
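
One admissible (by no means optimal) choice of constants in (3.32) is \(C_{2}=1\) and \(C_{1}={\mathbb {P}}(|Z|>1)\). Indeed, for \(r\geqslant 1\) the standard lower bound for the Gaussian tail gives

$$\begin{aligned} {\mathbb {P}}(|Z|>r)\geqslant \sqrt{\frac{2}{\pi }}\cdot \frac{r}{1+r^{2}}e^{-r^{2}/2}\geqslant \frac{1}{\sqrt{2\pi }}\cdot \frac{1}{r}e^{-r^{2}/2}\geqslant \frac{1}{\sqrt{2\pi }}e^{-r^{2}}, \end{aligned}$$

while for \(r\in (0,1)\) one simply uses monotonicity: \({\mathbb {P}}(|Z|>r)\geqslant {\mathbb {P}}(|Z|>1)\geqslant {\mathbb {P}}(|Z|>1)e^{-r^{2}}\).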

By restricting the expectation to the event \(\{I(X)>r\}\), one has

$$\begin{aligned} {\mathbb {E}}^{X}\big [e^{-C_{2}\frac{\lambda ^{2}}{I(X)}}\big ]&\geqslant e^{-C_{2}\frac{\lambda ^{2}}{r}}{\mathbb {P}}^{X}(I(X)>r)\geqslant e^{-C_{2}\frac{\lambda ^{2}}{r}}e^{-C_{4}r^{\frac{2\alpha }{1-2H}}}, \end{aligned}$$
(3.33)

where the last inequality follows from Proposition 3.6.

To proceed further, let us define

$$\begin{aligned} f(r)\triangleq C_{2}\frac{\lambda ^{2}}{r}+C_{4}r^{\frac{2\alpha }{1-2H}},\ \ \ r>0. \end{aligned}$$

A simple calculation shows that f(r) is decreasing on \((0,r_{*}]\) and is increasing on \([r_{*},\infty )\), where

$$\begin{aligned} r_{*}\triangleq \Bigg (\frac{C_{2}(1-2H)\lambda ^{2}}{2\alpha C_{4}}\Bigg )^{\frac{1-2H}{1+2\alpha -2H}}. \end{aligned}$$
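
To spell out the above calculation: writing \(\kappa \triangleq \frac{2\alpha }{1-2H}\) (a shorthand used only here), one has \(f'(r)=-C_{2}\lambda ^{2}r^{-2}+\kappa C_{4}r^{\kappa -1}\), and solving \(f'(r_{*})=0\) yields the displayed formula for \(r_{*}\). Moreover, both terms of \(f(r_{*})\) carry the same power of \(\lambda \):

$$\begin{aligned} r_{*}=\Bigg (\frac{C_{2}\lambda ^{2}}{\kappa C_{4}}\Bigg )^{\frac{1}{\kappa +1}}=c\lambda ^{\frac{2(1-2H)}{1+2\alpha -2H}},\ \ \ \frac{C_{2}\lambda ^{2}}{r_{*}}=c'\lambda ^{\frac{4\alpha }{1+2\alpha -2H}},\ \ \ C_{4}r_{*}^{\kappa }=c''\lambda ^{\frac{4\alpha }{1+2\alpha -2H}}, \end{aligned}$$

where \(c,c',c''\) are positive constants depending only on \(C_{2},C_{4},\alpha \) and H. In particular, \(f(r_{*})=C_{6}\lambda ^{\frac{4\alpha }{1+2\alpha -2H}}\) for some constant \(C_{6}>0\).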

Substituting this \(r_{*}\) for r in (3.33) yields

$$\begin{aligned} {\mathbb {E}}^{X}\big [e^{-C_{2}\frac{\lambda ^{2}}{I(X)}}\big ]\geqslant C_{5}e^{-C_{6}\lambda ^{\frac{4\alpha }{1+2\alpha -2H}}}. \end{aligned}$$
(3.34)

Note that

$$\begin{aligned} \alpha>H+\frac{1}{2}\iff \frac{4\alpha }{1+2\alpha -2H}>1+2H. \end{aligned}$$
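
Indeed, since \(1-2H>0\) and \(1+2\alpha -2H>0\), this equivalence follows from an elementary rearrangement:

$$\begin{aligned} \frac{4\alpha }{1+2\alpha -2H}>1+2H&\iff 4\alpha>(1+2H)(1+2\alpha -2H)\\&\iff 2\alpha (1-2H)>(1-2H)(1+2H)\iff \alpha>H+\frac{1}{2}. \end{aligned}$$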

Given \(\gamma >1+2H,\) by taking \(\alpha >H+1/2\) to be such that

$$\begin{aligned} \gamma =\frac{4\alpha }{1-2H+2\alpha }, \end{aligned}$$

the desired lower tail estimate (1.4) follows immediately from (3.34).

The proof of Theorem 1.1 is now complete.\(\square \)

4 Further questions

As we mentioned in the introduction, the CLL upper estimate (1.2) indeed holds with \(\gamma =1+2H\). However, our lower estimate (1.4) for the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) only holds with a Weibull exponent \(\gamma \) that can be taken arbitrarily close to, but strictly larger than, \(1+2H\). It is not clear whether one could achieve the critical exponent \(\gamma =1+2H\) for the lower estimate under the current methodology. The main issue is that we do not know whether \(h_{H+1/2}\) belongs to \(\bar{{{{\mathcal {H}}}}}\) (cf. (3.9) for the definition of \(h_{H+1/2}\)).

On the other hand, the current analysis relies crucially on the decoupling between X and Y in order to exploit conditional Gaussianity. It is tempting to ask how the present result could be extended to the more general SDE setting (e.g. still driven by fBM). In the rough regime of \(H\in (1/4,1/2)\), we conjecture that the lack of Gaussian tail is a “generic” phenomenon for elliptic, non-commuting, \(C_{b}^{\infty }\)-vector fields.

Last but not least, in the setting of Theorem 1.1, it is easily seen that any non-constant, periodic, \(C_{b}^{\infty }\)-function \(\phi \) satisfies the condition (1.3). On the other hand, the rough integral \(\int _{0}^{1}\phi (X_{t})dY_{t}\) trivially has Gaussian tail if \(\phi \) is constant. In other words, in the periodic setting with a fixed driving fBM, there is a clear dichotomy: the rough integral has either a Gaussian tail or the CLL Weibull tail, with no other possibilities. It would be interesting to see, at least for periodic \(C_{b}^{\infty }\)-vector fields, whether such a dichotomy continues to hold in the SDE context.