1 Introduction

Recent years have seen a notable development in the statistical analysis of time-continuous stochastic processes beyond the somewhat classical case of an Itô semimartingale driven by a Brownian motion. Generalisations of that class of processes are in fact manifold, and one can mention for example the analysis of integrals with respect to fractional Brownian motion (e.g. Bibinger 2020; Brouste and Fukasawa 2018), the discussion of Lévy-driven moving averages (e.g. Basse-O’Connor et al. 2017, 2018), inference on the solution of stochastic PDEs (e.g. Bibinger and Trabs 2020; Chong 2020; Kaino and Uchida 2021) and the behaviour of integrals with respect to stable pure-jump processes (e.g. Heiny and Podolskij 2021; Todorov 2015). All of the above results are concerned with high-frequency observations of the respective processes, always under the assumption of regularly spaced observation times.

On the other hand, it is well understood that the underlying assumption of a regular spacing constitutes an ideal setting that simplifies the theoretical statistical analysis but is typically not met in practical applications. For this reason, there has always been a lot of interest in understanding the impact of irregular sampling schemes on the proposed statistical methods. For semimartingales driven by Brownian motion one can mention Hayashi et al. (2011) and Mykland and Zhang (2012) among others, with an almost complete treatment in Chapter 14 of Jacod and Protter (2012). The case of Brownian semimartingales with jumps is treated for example in Bibinger and Vetter (2015) and Martin and Vetter (2019). All of the aforementioned papers deal with exogenous sampling schemes, i.e. when the observation times are essentially independent of the underlying processes. There is also limited research on endogenous observation times, mostly modelled via hitting times. See for example Fukasawa and Rosenbaum (2012) in the continuous case or Vetter and Zwingmann (2017) when additional jumps are present.

In this paper, we discuss the case of a near-stable jump semimartingale observed at irregular times, i.e. the underlying process is given by

$$\begin{aligned} X_t=X_0+\int _{0}^{t}\alpha _s \textrm{d}s+\int _{0}^{t}\sigma _{s-}\textrm{d}L_s+Y_t, \quad t>0, \end{aligned}$$
(1.1)

and the observation times follow a version of the restricted discretisation scheme from Jacod and Protter (2012), essentially providing observation times independent of X. Loosely speaking and made precise below, L is driven by a \(\beta \)-stable process and Y comprises the residual jumps while \(\alpha \) and \(\sigma \) are appropriately chosen adapted processes. Our goal in this work is to provide a consistent estimator for \(\beta \) and to establish an associated central limit theorem. Statistical inference on \(\beta \) has already been conducted in Todorov (2015, 2017) for regular observations while Jacod and Todorov (2018) provides the theory in a general model where microstructure noise is present and dominates the statistical analysis.

At first glance, our strategy to estimate \(\beta \) somewhat resembles the procedure from Todorov (2015), but there are some notable challenges that might occur in other situations as well. First, we compute an empirical characteristic function \(\widetilde{L}^n(p,u)\) which is constructed from local increments of X (and with an auxiliary parameter p), but it is important here to rescale each of these increments relative to the length of the underlying time period. Secondly, one can show convergence of this empirical characteristic function to a function \(L(p,u,\beta )\) which not only is a function of u, p and the unknown \(\beta \) but also depends specifically on the distribution of the discretisation scheme. Unlike in Todorov (2015), where a consistent estimator for \(\beta \) is obtained via a suitable functional of empirical characteristic functions computed at arbitrary values u and v, we have to use sequences \(u_n\) and \(v_n\) converging to zero together with an asymptotic expansion to obtain a consistent estimator. This procedure also leads to a drop in the rate of convergence in the associated central limit theorem.

The remainder of this work is as follows: Sect. 2 deals with the assumptions on X as well as on the discretisation scheme. In Sect. 3 we establish our statistical method and we also present the main results on the asymptotic properties both of \(\widetilde{L}^n(p,u)\) and of \(\hat{\beta }(p,u_n,v_n)\). A thorough simulation study is provided in Sect. 4 where we also discuss issues connected with the estimation of the asymptotic variance in the normal approximation. All proofs are gathered in Sect. 5.

2 Setting

Throughout this work, we adopt the setting from Todorov (2015) and assume that we are given a univariate pure-jump semimartingale as defined in (1.1), i.e. that we observe

$$\begin{aligned} X_t=X_0+\int _{0}^{t}\alpha _s \textrm{d}s+\int _{0}^{t}\sigma _{s-}\textrm{d}L_s+Y_t \end{aligned}$$

where L and Y are pure-jump Itô semimartingales and \(\alpha \) and \(\sigma \) are càdlàg. All processes are defined on some filtered probability space \((\Omega ,\mathcal {F},(\mathcal {F}_t)_{t\ge 0},\mathbb {P})\).

Specific assumptions on these processes will be given below, and we start with conditions on the jump processes L and Y. Below, \(\kappa (x)\) denotes a truncation function, i.e. it is the identity in a neighbourhood around zero, odd, bounded and equals zero for large values of |x|. We also set \(\kappa '(x)=x-\kappa (x)\), and whenever we discuss the characteristic triplet of a Lévy process it is to be understood as with respect to this choice of the truncation function.

Condition 2.1

We impose the following conditions on the processes L and Y:

(a)

    L is a Lévy process with characteristic triplet (0, 0, F) where the Lebesgue density of the Lévy measure \(F(\textrm{d}x)\) is given by

    $$\begin{aligned} h(x)=\frac{A}{|x|^{1+\beta }}+\tilde{h}(x) \end{aligned}$$

    for some \(\beta \in (1,2)\) and some \(A > 0\). The function \(\tilde{h}(x)\) satisfies

    $$\begin{aligned} |\tilde{h}(x)|\le \frac{C}{|x|^{1+\beta '}} \end{aligned}$$

    for some \(\beta ' < 1\) and all \(|x|\le x_0\), for some \(x_0 > 0\).

(b)

    Y is a finite variation jump process of the form

    $$\begin{aligned} Y_t=\int _0^t \int _{\mathbb {R}} x \mu ^Y(\textrm{d}s,\textrm{d}x) \end{aligned}$$

    where \(\mu ^Y(\textrm{d}s,\textrm{d}x)\) denotes the jump measure of Y and its compensator is given by \(\textrm{d}s \otimes \nu _s^Y(\textrm{d}x)\). The process

    $$\begin{aligned} \left( \int _{\mathbb {R}}\left( |x|^{\beta '}\wedge 1\right) \nu _t^Y(\textrm{d}x)\right) _{t\ge 0} \end{aligned}$$

    is locally bounded for the parameter \(\beta '\) from (a).

Condition 2.1 should be read in such a way that the pure-jump Lévy process L is essentially \(\beta \)-stable while all other jumps (both in L and in Y) are of much smaller activity and will be dominated by the \(\beta \)-stable part at high frequency. Note that dependence between L and Y is possible, and this will hold for the jump parts of \(\alpha \) and \(\sigma \) as well.

Condition 2.2

The processes \(\alpha \) and \(\sigma \) are Itô semimartingales of the form

$$\begin{aligned} \alpha _t&=\alpha _0 +\int _{0}^{t}b_s^\alpha \textrm{d}s+\int _{0}^{t}\eta _s^\alpha \textrm{d}W_s+\int _{0}^{t}\widetilde{\eta }_s^\alpha \textrm{d}\widetilde{W}_s+\int _{0}^{t}\overline{\eta }_s^\alpha \textrm{d}\overline{W}_s\\&\quad +\int _{0}^{t}\int _E\kappa (\delta ^\alpha (s,x)) \underline{\widetilde{\mu }}(\textrm{d}s,\textrm{d}x)+\int _{0}^{t}\int _E\kappa '(\delta ^\alpha (s,x))\underline{\mu }(\textrm{d}s,\textrm{d}x),\\ \sigma _t&=\sigma _0 +\int _{0}^{t}b_s^\sigma \textrm{d}s+\int _{0}^{t}\eta _s^\sigma \textrm{d}W_s+\int _{0}^{t}\widetilde{\eta }_s^\sigma \textrm{d}\widetilde{W}_s+\int _{0}^{t}\overline{\eta }_s^\sigma \textrm{d}\overline{W}_s\\&\quad +\int _{0}^{t}\int _E\kappa (\delta ^\sigma (s,x))\underline{\widetilde{\mu }}(\textrm{d}s,\textrm{d}x) +\int _{0}^{t}\int _E\kappa '(\delta ^\sigma (s,x))\underline{\mu }(\textrm{d}s,\textrm{d}x) \end{aligned}$$

where

(a)

    \(|\sigma _t|\) and \(|\sigma _{t-}|\) are strictly positive;

(b)

    W, \(\widetilde{W}\) and \({\overline{W}}\) are independent Brownian motions, \(\underline{\mu }\) is a Poisson random measure on \(\mathbb {R}_+\times E\) with compensator \(\textrm{d}t\otimes \lambda (\textrm{d}x)\) for some \(\sigma \)-finite measure \(\lambda \) on a Polish space E and \(\underline{\widetilde{\mu }}\) is the compensated jump measure;

(c)

    \(\delta ^\alpha (t,x)\) and \(\delta ^\sigma (t,x)\) are predictable with \(|\delta ^\alpha (t,x)|+|\delta ^\sigma (t,x)|\le \gamma _k(x)\) for all \(t\le T_k\), where \(\gamma _k(x)\) is a deterministic function on \(\mathbb {R}\) with \(\int _E\left( |\gamma _k(x)|^r\wedge 1\right) \lambda (\textrm{d}x)<\infty \) for some \(0\le r < 2 \) and \(T_k\) is a sequence of stopping times increasing to \(+\infty \);

(d)

    \(b^\alpha , b^\sigma \) are locally bounded while \(\eta ^\alpha , \eta ^\sigma , \widetilde{\eta }^\alpha , \widetilde{\eta }^\sigma , \overline{\eta }^\alpha \) and \(\overline{\eta }^\sigma \) are càdlàg.

These assumptions on \(\alpha \) and \(\sigma \) are extremely mild and are satisfied by most processes used in the literature.

Our goal in the following is to estimate \(\beta \) based on irregular observations over the finite time interval [0, 1], say, and we will work in a setting where the observation times are typically random. To incorporate this additional randomness into the model we assume that the probability space contains a larger \(\sigma \)-field \(\mathcal {G}\), and we keep using \(\mathcal {F}\) to denote the \(\sigma \)-field with respect to which X is measurable. The following condition is loosely connected with the restricted discretisation schemes introduced in Chapter 14.1 of Jacod and Protter (2012) but with a slightly different predictability assumption and additional moment conditions. It allows for some dependence between the observation times and X (via \(\lambda \)) but usually depends on external randomness through the \(\phi _i^n\) as well.

Condition 2.3

For each \(n\in \mathbb {N}\) we observe the process X at stopping times \(0=\tau ^n_0<\tau _1^n<\tau _2^n<\cdots \) with \(\tau _0^n=0, \tau _1^n=\Delta _n\phi _1^n\) and

$$\begin{aligned} \tau _i^n=\tau _{i-1}^n+\Delta _n \phi _{i}^n\lambda _{\tau _{i-2}^n}\quad \text {for all } i \ge 2 \end{aligned}$$

where \(\Delta _n \rightarrow 0\) and

(a)

    \(\lambda _t\) is a strictly positive Itô semimartingale w.r.t. the filtration \((\mathcal {F}_t)_{t\ge 0}\) and fulfills the same conditions as \(\sigma _t\) stated in Condition 2.2;

(b)

    \((\phi _i^n)_{i\ge 1}\) is a family of i.i.d. random variables with respect to the \(\sigma \)-field \(\mathcal {G}\), and it is independent of \(\mathcal {F}\);

(c)

    \(\phi _i^n\sim \phi \) for a strictly positive random variable \(\phi \) with \(\mathbb {E}[\phi ]=1\), and for all \(p > -2\) the moments \(\mathbb {E}\left[ \phi ^p\right] \) exist.

For each \(n\in \mathbb {N}\) we define \(({\mathcal {F}}_t^n)_{t \ge 0}\) to be the smallest filtration containing \(({\mathcal {F}}_t)_{t \ge 0}\) and with respect to which all \(\tau _i^n\) are stopping times. We also let \(N_n(t)\) denote the number of observation times up to time t, i.e.

$$\begin{aligned} N_n(t) = \sum _{i\ge 1}\mathbbm {1}_{\{\tau _i^n \le t \}}, \end{aligned}$$

and of particular importance for us is the case \(t=1\) because \(N_n(1)\) is the (random) number of observations over the trading day [0, 1] from which we construct the relevant statistics later on. Note that due to \(\Delta _n \rightarrow 0\) we are in a high-frequency situation where the time between two observations converges to zero while \(N_n(1)\) diverges to infinity (both in a probabilistic sense).
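The recursion in Condition 2.3 is straightforward to simulate. The following sketch generates the \(\tau _i^n\); the deterministic stand-in for \(\lambda _t\) and the truncated-exponential choice of \(\phi \) (taken from (4.1) below) are illustrative assumptions, not part of the condition itself.

```python
import numpy as np

def sample_phi(size, rng):
    # phi = (phi' v 0.1) / E[phi' v 0.1] with phi' ~ Exp(1), as in (4.1):
    # strictly positive with E[phi] = 1 and finite negative moments.
    a = 0.1
    norm = a * (1.0 - np.exp(-a)) + (1.0 + a) * np.exp(-a)  # E[max(Exp(1), a)]
    return np.maximum(rng.exponential(size=size), a) / norm

def observation_times(delta_n, lam, rng, t_max=1.0):
    # tau_0 = 0, tau_1 = delta_n * phi_1 and, for i >= 2,
    # tau_i = tau_{i-1} + delta_n * phi_i * lambda(tau_{i-2}).
    taus = [0.0, delta_n * sample_phi(1, rng)[0]]
    while taus[-1] <= t_max:
        taus.append(taus[-1] + delta_n * sample_phi(1, rng)[0] * lam(taus[-2]))
    return np.array(taus[:-1])  # keep only the times in [0, t_max]

rng = np.random.default_rng(1)
taus = observation_times(1e-3, lam=lambda t: 1.0 + 4.0 * t, rng=rng)
```

Note how the drift in the stand-in \(\lambda \) makes the spacings grow over the interval, so \(N_n(1)\) is random and smaller than \(\Delta _n^{-1}\) here.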

3 Results

The essential idea from Todorov (2015) is to base the estimation of the unknown activity index on the estimation of the characteristic function of a certain stable distribution. We will essentially proceed in a similar way but with some subtle changes because the underlying sampling scheme is not regular anymore. On one hand, we have to account for the fact that the time between successive observations is not constant, while on the other hand the characteristic function not only involves this particular stable distribution but also the unknown distribution of \(\phi \) from Condition 2.3.

Let us become more specific here. We assume that the probability space is large enough to allow for a representation as in Todorov and Tauchen (2012), namely that the pure-jump Lévy process L can be decomposed as

$$\begin{aligned} L_t = S_t + \acute{S}_t- \grave{S}_t \end{aligned}$$
(3.1)

where all processes on the right-hand side are (possibly dependent) Lévy processes with a characteristic triplet of the form (0, 0, F) for a Lévy measure of the form \(F(\textrm{d}x) = F(x) \textrm{d}x\). For S the Lévy density satisfies \(F(x) = {A}|x|^{-(1+\beta )}\) while the Lévy densities of \(\acute{S}\) and \(\grave{S}\) are \(F(x)=|\tilde{h}(x)|\) and \(F(x) = 2|\tilde{h}(x)|\mathbbm {1}_{\{\tilde{h}(x)<0\}}\), respectively. Then S is strictly \(\beta \)-stable, and its characteristic function satisfies

$$\begin{aligned} \mathbb {E}[\cos (uS_t)]=\mathbb {E}[\exp (iuS_t)]=\exp (-A_\beta u^\beta t) \end{aligned}$$

for some constant \(A_\beta >0\) and any \(u,t > 0.\)
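Assuming the normalisation \(A_\beta = 1\) for illustration, this identity can be checked by Monte Carlo; the sampler below is a sketch of the Chambers–Mallows–Stuck method (Proposition 1.7.1 in Samorodnitsky and Taqqu 1994, which is also used in Sect. 4).

```python
import numpy as np

def stable_sym(beta, size, rng):
    # Chambers-Mallows-Stuck sampler for a standard symmetric beta-stable
    # random variable with characteristic function exp(-|u|^beta);
    # beta != 1 is assumed throughout.
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(size=size)
    return (np.sin(beta * U) / np.cos(U) ** (1 / beta)
            * (np.cos((1 - beta) * U) / W) ** ((1 - beta) / beta))

rng = np.random.default_rng(0)
beta, u, t = 1.5, 1.0, 1.0
S_t = t ** (1 / beta) * stable_sym(beta, 200_000, rng)  # self-similarity of S
mc = np.cos(u * S_t).mean()       # Monte Carlo estimate of E[cos(u S_t)]
exact = np.exp(-u ** beta * t)    # exp(-A_beta u^beta t) with A_beta = 1
```

Since \(\cos \) is bounded, the Monte Carlo error here is small despite the heavy tails of \(S_t\).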

As a result of the previous decomposition (3.1) and since

$$\begin{aligned} |\tilde{h}(x)|\le \frac{C}{|x|^{1+\beta '}} \end{aligned}$$

for some \(\beta ' < \beta \) and all \(|x|\le x_0\) holds due to Condition 2.1, it is clear that the jump behaviour of L and thus of X is governed by the \(\beta \)-stable process S for small time intervals. This observation is the key to our following estimation procedure: Based on the high-frequency observations of X we will first estimate a function \(L(p,u,\beta )\) which, as noted before, is related to the characteristic function of S but involves the unknown distribution of \(\phi \) as well. Here, p and u are additional parameters that can be chosen by the statistician. In the second step we will essentially use a Taylor expansion of L (as a function of u) around zero to finally come up with an estimator for \(\beta \).

In the following, we denote by

$$\begin{aligned} \widetilde{\Delta _i^nX}=\frac{\Delta _n}{\tau _i^n-\tau _{i-1}^{n}}\left( X_{\tau _i^n}-X_{\tau _{i-1}^{n}}\right) \end{aligned}$$

the ith increment of the process X, but where we have included an additional rescaling to account for the different lengths of the intervals in an irregular sampling scheme, and we occasionally also use \(\widetilde{\Delta _i^n S}\) for the rescaled increment of the \(\beta \)-stable S. For any \(p > 0\) and \(u > 0\) we then set

$$\begin{aligned} \widetilde{L}^n(p,u)=\frac{1}{N_n(1)-k_n-2}\sum _{i=k_n+3}^{N_n(1)}\cos \left( u\frac{\widetilde{\Delta _i^n X}-\widetilde{\Delta _{i-1}^nX} }{(\widetilde{V}_i^n(p))^{1/p}}\right) \end{aligned}$$

where the auxiliary sequence \(k_n\) satisfies \(k_n \rightarrow \infty \) and \(k_n \Delta _n \rightarrow 0\) and where

$$\begin{aligned} \widetilde{V}_i^n(p)=\frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2}|\widetilde{\Delta _j^n X}-\widetilde{\Delta _{j-1}^n X}|^p, \quad i=k_n+3,\ldots ,N_n(1), \end{aligned}$$

is used to estimate the unknown local volatility \(\sigma \).

At first sight, it seems somewhat odd to include \(\Delta _n\) in the definition of \(\widetilde{\Delta _i^nX}\) because this quantity cannot be observed in practice. We will base our statistical procedure in the following on \(\widetilde{L}^n(p,u)\), however, and it is obvious from its definition that it is in fact independent of \(\Delta _n\) as the latter appears as a factor both in the numerator and in the denominator. Thus we are safe to work with \(\widetilde{\Delta _i^nX}\), and its definition makes it easier to compare it with the standard increment \(\Delta _i^n X = X_{\tau _i^n}-X_{\tau _{i-1}^{n}}\). These obviously coincide in the case of a regular sampling scheme. Note also that, even though its asymptotic condition is stated in terms of \(\Delta _n\), the choice of \(k_n\) can in practice be based on the size of \(N_n(1)\) which essentially grows as \(\Delta _n^{-1}\).
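For concreteness, \(\widetilde{L}^n(p,u)\) can be computed from the observation times and values alone (as just noted, \(\Delta _n\) cancels). The following sketch, with hypothetical input arrays taus and X of equal length, mirrors the definitions above.

```python
import numpy as np

def L_tilde(taus, X, p, u, k_n):
    # Empirical characteristic function L~^n(p, u): increments are rescaled
    # by the interval lengths (Delta_n cancels), differenced, and normalised
    # by the local power variation V~_i^n(p)^{1/p}.
    incr = np.diff(X) / np.diff(taus)    # rescaled increments, up to Delta_n
    d = incr[1:] - incr[:-1]             # d[m] = Delta~_{m+2} X - Delta~_{m+1} X
    vals = []
    for m in range(k_n + 1, len(d)):     # m = i - 2 for i = k_n + 3, ..., N_n(1)
        V = np.mean(np.abs(d[m - k_n - 1:m - 1]) ** p)   # V~_i^n(p), k_n terms
        vals.append(np.cos(u * d[m] / V ** (1.0 / p)))
    return np.mean(vals)
```

The window for \(\widetilde{V}_i^n(p)\) only uses differenced increments strictly before index i, matching the indices \(j=i-k_n-1,\ldots ,i-2\) in the display above.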

The main part of the upcoming analysis is devoted to the study of the asymptotic behaviour of \(\widetilde{L}^n(p,u)\). Its definition together with the previous discussion suggests that its limit should involve the characteristic function of S, but it also becomes apparent that the limit cannot be independent of \(\phi \). We will prove in the following that the first-order limit is

$$\begin{aligned} L(p,u,\beta )=\mathbb {E}\left[ \exp \left( -u^\beta C_{p,\beta }((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })\right) \right] \end{aligned}$$

with the constant \(C_{p,\beta }\) being defined via

$$\begin{aligned} \mu _{p,\beta }=\mathbb {E}[|S_1|^p]^{\frac{\beta }{p}},\quad \kappa _{p,\beta }=\mathbb {E}\left[ \left( (\phi ^{(1)})^{1-\beta } +(\phi ^{(2)})^{1-\beta }\right) ^\frac{p}{\beta }\right] ^\frac{\beta }{p}, \quad C_{p,\beta }=\frac{A_\beta }{\mu _{p,\beta }\kappa _{p,\beta }}>0, \end{aligned}$$
(3.2)

and where \(\phi ^{(1)}\) and \(\phi ^{(2)}\) denote two independent copies with the same distribution as \(\phi \), defined on an appropriate probability space. For simplicity, we still use \(\mathbb {E}[\cdot ]\) to denote the expectation on this generic space. The first main theorem then reads as follows:

Theorem 3.1

Suppose that Conditions 2.1–2.3 are in place and let \(k_n\sim C_1 \Delta _n^{-\varpi }\) for some \(C_1> 0\) and some \(\varpi \in (0,1)\). Then we have

$$\begin{aligned} \widetilde{L}^n(p,u) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}L(p,u,\beta ) \end{aligned}$$

for any fixed \(u > 0\) and any choice of \(0< p < \beta /2\).

While this result is interesting in itself, at first glance it does not help much for the estimation of \(\beta \) because \(L(p,u,\beta )\) depends in a complicated way on the unknown distribution of \(\phi \). If we utilise the familiar approximation \(\exp (y) = 1+y+ o(y)\) for \(y \rightarrow 0\), however, it seems reasonable to hope that the approximation

$$\begin{aligned} L(p,u,\beta ) \approx 1 + \mathbb {E}\left[ -u^\beta C_{p,\beta }((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })\right] = 1 - u^\beta C_{p,\beta }\kappa _{\beta ,\beta } \end{aligned}$$
(3.3)

holds for any choice of a small \(u > 0\); the right-hand side is now much easier to handle. Namely, an estimator for \(\beta \) is then based on an appropriate combination of two estimators \(\widetilde{L}^n(p,u_n)\) and \(\widetilde{L}^n(p,v_n)\) with \(u_n \rightarrow 0\) and \(v_n = \rho u_n\) for some \(\rho > 0\). Precisely, we set

$$\begin{aligned} \hat{\beta }(p,u_n,v_n)=\frac{\log (- (\widetilde{L}^n(p,u_n)-1))-\log (- (\widetilde{L}^n(p,v_n)-1))}{\log (u_n/v_n)} \end{aligned}$$

which obviously is symmetric upon exchanging \(u_n\) and \(v_n\).
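The estimator is a direct transcription of this formula; note that \(\log (-(\widetilde{L}^n(p,u)-1)) = \log (1-\widetilde{L}^n(p,u))\), so both empirical values must lie strictly below 1. A minimal sketch:

```python
import numpy as np

def beta_hat(L_u, L_v, u_n, v_n):
    # hat-beta(p, u_n, v_n); requires L_u < 1 and L_v < 1 so that
    # the logarithms are well defined.
    return (np.log(1.0 - L_u) - np.log(1.0 - L_v)) / np.log(u_n / v_n)
```

Plugging in the exact first-order values \(1 - c\,u^\beta \) from (3.3) (with c playing the role of \(C_{p,\beta }\kappa _{\beta ,\beta }\)) reproduces \(\beta \) exactly, which illustrates why the approximation drives consistency.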

Remark 3.2

We will often choose \(\rho = 1/2\), in which case \(\hat{\beta }(p,u_n,v_n) \le 2\) can be shown. This is of course a desirable property as it resembles the bound for the stability index \(\beta \) itself, but it also entails some restrictions on the quality of a limiting normal approximation for values of \(\beta \) close to 2. See Figs. 1 and 2 below.

Before we discuss the asymptotic behaviour of the estimator \(\hat{\beta }(p,u_n,v_n)\) we will state a bivariate central limit theorem for \(\widetilde{L}^n(p,u_n) - L(p,u_n,\beta )\) and \(\widetilde{L}^n(p,v_n) - L(p,v_n,\beta )\) with \(u_n\) and \(v_n\) chosen as above.

Theorem 3.3

Suppose that Conditions 2.1–2.3 are in place and let \(k_n\sim C_1 \Delta _n^{-\varpi }\) for some \(C_1> 0\) and some \(\varpi \in (0,1)\) as well as \(u_n\sim C_2 \Delta _n^{\varrho }\) for some \(C_2 > 0\) and \(\varrho \in (0,1)\). Suppose further that

$$\begin{aligned} \beta '<\frac{\beta }{2}, \quad \frac{1}{3}\vee \frac{1}{8\varrho }<p<\frac{\beta }{2}, \quad \varpi \ge \frac{2}{3}, \quad \frac{1}{3\beta }<\varrho< \frac{1}{\beta }, \quad \frac{1}{\beta }<\frac{\varpi }{p}-\varrho , \quad 2\varpi -\varrho \beta <1 \end{aligned}$$

hold. Then

$$\begin{aligned} \left( \frac{\sqrt{N_n(1)}}{u_n^{\beta /2}}(\widetilde{L}^n(p,u_n)- L(p,u_n,\beta )),\frac{\sqrt{N_n(1)}}{v_n^{\beta /2}}(\widetilde{L}^n(p,v_n)-L(p,v_n,\beta ))\right) \end{aligned}$$

converges \({\mathcal {F}}\)-stably in law to a limit \((X',Y')\) which is jointly normally distributed (independent of \({\mathcal {F}}\)) with mean 0 and covariance matrix \(\mathcal {C}'\) given by

$$\begin{aligned} \mathcal {C}_{11}'=\mathcal {C}_{22}'=C_{p,\beta }\kappa _{\beta ,\beta }(4-2^\beta ), \quad \mathcal {C}_{12}'=\mathcal {C}_{21}'=C_{p,\beta }\kappa _{\beta ,\beta }\frac{2+2\rho ^\beta - (1+\rho )^\beta -|1-\rho |^\beta }{\rho ^{\beta /2}}. \end{aligned}$$

Remark 3.4

The above choice of the parameters \(\varrho \), \(\varpi \) and p is feasible even if we do not know \(\beta \). It can easily be seen that e.g. \(\varrho =\frac{1}{3}, \varpi =\frac{2}{3}\) and any \(p \in (\frac{3}{8},\frac{1}{2})\) satisfy the conditions in Theorem 3.3.

The following result is the main theorem of this work and it provides the central limit theorem for the estimator \(\hat{\beta }(p,u_n,v_n)\). Its proof builds heavily on Theorem 3.3.

Theorem 3.5

Under the conditions of Theorem 3.3 we have the \({\mathcal {F}}\)-stable convergence in law

$$\begin{aligned} u_n^{\beta /2}\sqrt{N_n(1)}(\hat{\beta }(p,u_n,v_n)-\beta )~{\mathop {\longrightarrow }\limits ^{{\mathcal {L}}-(s)}}~X \end{aligned}$$

where X is a normally distributed random variable (independent of \({\mathcal {F}}\)) with mean 0 and variance

$$\begin{aligned} v^2 = \frac{(\rho ^\beta +1)(4-2^\beta )-2(2+2\rho ^\beta - (1+\rho )^\beta -|1-\rho |^\beta )}{\kappa _{\beta ,\beta }\rho ^\beta \log (1/\rho )^2 C_{p,\beta }}. \end{aligned}$$
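For later use in the simulation study, \(v^2\) can be evaluated numerically once \(\kappa _{\beta ,\beta }\) and \(C_{p,\beta }\) are known or estimated; the following is a direct transcription (the argument names are ours).

```python
import numpy as np

def limit_var(beta, rho, kappa_bb, C_pb):
    # v^2 from Theorem 3.5; kappa_bb and C_pb stand for kappa_{beta,beta}
    # and C_{p,beta}, which depend on the (unknown) distribution of phi.
    num = ((rho ** beta + 1) * (4 - 2 ** beta)
           - 2 * (2 + 2 * rho ** beta - (1 + rho) ** beta - abs(1 - rho) ** beta))
    den = kappa_bb * rho ** beta * np.log(1 / rho) ** 2 * C_pb
    return num / den

v2 = limit_var(beta=1.5, rho=0.5, kappa_bb=2.0, C_pb=1.0)  # hypothetical inputs
```

In particular, the numerator is strictly positive for \(\rho \ne 1\), while the formula degenerates as \(\rho \rightarrow 1\) because of the factor \(\log (1/\rho )^2\).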

A simple corollary is the consistency of \(\hat{\beta }(p,u_n,v_n)\) as an estimator for \(\beta \).

Corollary 3.6

Under the conditions of Theorem 3.3 we have

$$\begin{aligned} \hat{\beta }(p,u_n,v_n) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\beta . \end{aligned}$$

For a feasible application of Theorem 3.5 we need a consistent estimator for the variance of the limiting normal distribution which essentially boils down to the estimation of \(\kappa _{\beta , \beta }\). This problem will be discussed in the next section, alongside a thorough analysis of the finite sample properties of \(\hat{\beta }(p,u_n,v_n)\).

Table 1 Results for \(\rho =1/2\) and \(\Delta _n^{-1}=1000\) are shown

4 Simulation study

Table 2 Results for \(\rho =1/2\) and \(\Delta _n^{-1}=10{,}000\) are shown

This section deals with the numerical assessment of the finite sample properties of \(\hat{\beta }(p,u_n,v_n)\), and we also include a discussion regarding the estimation of the variance in the central limit theorem in order to obtain a feasible result. In the following, let W be a standard Brownian motion and L a symmetric stable process with a Lévy density

$$\begin{aligned} h(x)=\frac{1}{|x|^{1+\beta }} \end{aligned}$$

for some \(\beta \in (1,2)\). We then set

$$\begin{aligned} \alpha _t = \int _{0}^{t} 2(1-\alpha _s) \textrm{d}s + 2 \int _{0}^{t} \textrm{d}{W}_s, \quad \sigma _t = \int _{0}^{t} \alpha _s \textrm{d}{W}_s, \end{aligned}$$

and assume that we observe

$$\begin{aligned} X_t=X_0+\int _{0}^{t}\alpha _s \textrm{d}s+\int _{0}^{t}\sigma _{s-}\textrm{d}L_s \end{aligned}$$

which obviously fulfills Conditions 2.1 and 2.2. For the observation scheme we choose

$$\begin{aligned} \lambda _t = \int _{0}^{t} (5-\lambda _s) \textrm{d}s + \int _{0}^{t} \textrm{d}\widetilde{W}_s, \quad \phi = \frac{\phi ' \vee 0.1}{\mathbb {E}\left[ \phi ' \vee 0.1\right] } , \end{aligned}$$
(4.1)

with \(\phi ' \sim \text {Exp}(1)\) and with the starting values of the processes being \(\alpha _0 = \sigma _0 = X_0 = \lambda _0 = 1\). We assume that \(\widetilde{W}\) is a standard Brownian motion as well, independent of W. The purpose of the lower truncation at 0.1 in the definition of \(\phi \) in (4.1) is to ensure that the (negative) moment condition from Condition 2.3 (c) holds, which (as additional simulations show) is relevant not only in theory but also in practice. Note also that the choice of \(\lambda _0=1\) combined with the mean reversion of \(\lambda \) to 5 leads to pronounced changes in the distribution of the \(\tau _i^n\) over time. For the simulation of X we use a standard Euler scheme, utilising Proposition 1.7.1 in Samorodnitsky and Taqqu (1994) to obtain symmetric stable random variables.
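A simplified version of this simulation can be sketched as follows. For illustration we assume constant \(\alpha \) and \(\sigma \) (instead of the stochastic coefficients above) and use the self-similarity of the stable part, i.e. increments over an interval of length h scale as \(h^{1/\beta }\).

```python
import numpy as np

def stable_sym(beta, size, rng):
    # Chambers-Mallows-Stuck sampler (Samorodnitsky and Taqqu 1994,
    # Prop. 1.7.1) for standard symmetric beta-stable variables.
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(size=size)
    return (np.sin(beta * U) / np.cos(U) ** (1 / beta)
            * (np.cos((1 - beta) * U) / W) ** ((1 - beta) / beta))

def euler_X(taus, beta, alpha=1.0, sigma=1.0, x0=1.0, rng=None):
    # Euler scheme on the observation grid with constant alpha and sigma
    # (a simplification of the stochastic coefficients used in Sect. 4);
    # stable increments scale as (tau_i - tau_{i-1})^{1/beta}.
    rng = rng if rng is not None else np.random.default_rng(0)
    dt = np.diff(taus)
    steps = alpha * dt + sigma * dt ** (1 / beta) * stable_sym(beta, len(dt), rng)
    return x0 + np.concatenate(([0.0], np.cumsum(steps)))

taus = np.linspace(0.0, 1.0, 1001)   # a regular grid for illustration only
X = euler_X(taus, beta=1.7)
```

Replacing the regular grid by the irregular \(\tau _i^n\) from Condition 2.3 requires no change to euler_X, since only the spacings enter.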

4.1 Consistency and normal approximation in finite samples

We begin the assessment of the finite sample properties with a discussion of the consistency result \(\hat{\beta }(p,u_n,v_n)-\beta {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0\) and the quality of the associated normal approximation. For this we set \(u_n = N_n(1)^{-1/3}\), \(k_n = N_n(1)^{2/3}\), \(p=1/2\), essentially in accordance with Remark 3.4. Below, we present the results for \(\beta \in \{1.1,1.3,1.5,1.7,1.9\}\) and \(\rho \in \{1/2,2\}\), generating \(N = 1000\) samples, and we discuss both \(\Delta _n^{-1} = 1000\) and \(\Delta _n^{-1} = 10{,}000\). Note that our choice of \(\lambda \) in (4.1) yields about \(N_n(1)\approx 520\) observations in the first case, whereas \(N_n(1)\approx 5200\) in the second one.

Fig. 1

QQ-plots in the case \(N=1000\), \(\Delta _n^{-1}=1000\) and \(\beta \in \{1.1,1.3,1.5,1.7,1.9\}\) are shown, with \(\beta \) increasing from the top to the bottom. The left-hand side provides the plots for \(\rho =0.5\), the right-hand side the ones for \(\rho =2\)

Fig. 2

QQ-plots in the case \(N=1000\), \(\Delta _n^{-1}=10{,}000\) and \(\beta \in \{1.1,1.3,1.5,1.7,1.9\}\) are shown, with \(\beta \) increasing from the top to the bottom. The left-hand side provides the plots for \(\rho =0.5\), the right-hand side the ones for \(\rho =2\)

Table 1 shows mixed results regarding consistency but nevertheless allows us to draw some conclusions: First, we see for a choice of \(\rho =1/2\) that the estimator for \(\beta \) behaves correctly on average whereas the (relative) difference between the true and the estimated variances grows with \(\beta \). For \(\rho =2\) note first that \(\hat{\beta }(p,u_n,v_n)\) is symmetric in \(u_n\) and \(v_n\). Thus, choosing \(\rho =2\) and keeping \(u_n\) fixed is the same as choosing \(u_n\) twice as large and keeping \(\rho =1/2\). Hence Table 1 confirms empirically what is already known from the construction of the estimator: It relies on \(u_n \rightarrow 0\), so a larger choice of \(\rho \) induces an additional bias. On the other hand, the rate of convergence to the normal distribution improves as \(u_n\) becomes larger, and this may explain why the empirical variance seems to be much closer to the theoretical one if we choose the larger \(\rho = 2\).

We follow this discussion with Table 2 which is constructed in the same manner as above for \(\Delta _n^{-1}=10{,}000\), and we basically see improvement across the board. Now, the bias for both \(\rho =1/2\) and \(\rho =2\) is very small, even for large values for \(\beta \), and we can also observe that the approximated variance specifically for \(\beta \in \{1.5,1.7,1.9\}\) is much closer to the theoretical one than previously.

In a second step, we present some QQ-plots to visualise the quality of the approximating normal distribution from Theorem 3.5 for which we use the same configuration of parameters as discussed earlier. Due to the choice of \(\rho =1/2\) and \(\rho =2\), Remark 3.2 applies. As noted before, this upper bound prevents the normal approximation from working well in the right tails, and as expected Fig. 1 shows that this effect becomes more pronounced as \(\beta \) gets closer to the upper bound. For \(\Delta _n^{-1}=1000\), this deficient tail behaviour already appears for \(\beta =1.5\). Nevertheless, it should be noted that the quality of the distributional approximation increases visibly with the higher sample size \(\Delta _n^{-1}=10{,}000\) for both choices of \(\rho \). Figure 2 shows for instance that \(\beta =1.5\) is not really critical anymore, i.e. an increasing sample size allows for an accurate approximation of larger values of \(\beta \). Also, a slight improvement from \(\rho =1/2\) to \(\rho =2\) can be noted, in line with the previous discussion regarding the rate of convergence.

A natural way to circumvent the problem in the upper tail is to apply a transformation such as \(x \mapsto \log (2-x)\) which maps \(\hat{\beta }(p,u_n,v_n)\) onto the entire real line, as there is no obvious lower bound for \(\hat{\beta }(p,u_n,v_n)\). From Theorem 3.5 and the delta method one obtains

$$\begin{aligned} u_n^{\beta /2}\sqrt{N_n(1)}(\log (2-\hat{\beta }(p,u_n,v_n))- \log (2-\beta )){\mathop {\longrightarrow }\limits ^{{\mathcal {L}}}}{\mathcal {N}}(0,v^2/(2-\beta )^2), \end{aligned}$$

with \(v^2\) as above, and Fig. 3 shows a clear improvement compared with the original normal approximation. Here and in the section below, we focus specifically on the case \(\beta =1.7\) to illustrate the benefits and shortcomings of the proposed methods in a situation where the original estimator does not behave too well.

Fig. 3

QQ-plot for \(\log (2-\hat{\beta }(p,u_n,v_n))- \log (2-\beta )\) in the case \(N=10{,}000\), \(\Delta _n^{-1}=1000\), \(\beta =1.7\) and \(\rho = 0.5\)

4.2 Bias correction and variance estimation

A natural way to further improve the finite sample properties is to conduct a bias correction for the higher-order terms. Our current approach to estimate \(\beta \) relies on the approximation (3.3), while a more precise one would e.g. be a third-order expansion of the form

$$\begin{aligned} L(p,u_n,\beta ) -1&= C_{p,\beta } u_n^\beta \bigg (-\kappa _{\beta ,\beta }+\mathbb {E}\bigg [ \frac{C_{p,\beta }u_n^\beta ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })^2}{2}\bigg ] \\&\quad - \mathbb {E}\bigg [ \frac{(C_{p,\beta }u_n^\beta )^2 ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })^3}{6}\bigg ] \bigg ). \end{aligned}$$

An estimator for \(\beta \) is then given by

$$\begin{aligned} \overline{\beta }(p,u_n,v_n)=\frac{\log (-(\widetilde{L}^n(p,u_n)-1 - \widehat{deb}(u_n)))-\log (-(\widetilde{L}^n(p,v_n)-1 - \widehat{deb}(v_n)))}{\log (u_n/v_n)} \end{aligned}$$

where e.g. \(\widehat{deb}(u_n)\) estimates

$$\begin{aligned} \mathbb {E}\left[ \frac{(C_{p,\beta }u_n^\beta ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }))^2}{2}\right] - \mathbb {E}\left[ \frac{(C_{p,\beta }u_n^\beta ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }))^3}{6}\right] . \end{aligned}$$

Ideally, such a correction would allow for a bigger choice of \(u_n\) in finite samples, thus leading to a better rate of convergence.

To define a consistent estimator for \(\kappa _{\zeta ,\beta }^{\zeta /\beta }=\mathbb {E}\left[ \left( (\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }\right) ^{\zeta /\beta }\right] \), \(\zeta > 0\), we use \(\tau _i^n-\tau _{i-1}^n = \Delta _n \phi _{i}^n\lambda _{\tau _{i-2}^n}\) and the càdlàg property of \(\lambda \). A natural way to construct an estimator for \(\kappa _{\zeta , \beta }^{\zeta /\beta }\) is then to build it from sums of adjacent increments of the \(\tau _i^n\), rescaled by the length of the total time interval relative to the number of increments, and using an appropriate power. Whenever \(\beta \) needs to be included, it is replaced by its consistent estimator \(\hat{\beta }_n = {\hat{\beta }}(p, u_n, v_n)\). The consistency of such an estimator for \(\kappa _{\zeta , \beta }^{\zeta /\beta }\) is formally given in the following lemma. Its proof is rather straightforward but lengthy and therefore omitted.

Lemma 4.1

Let \(r_n \sim C_3 \Delta _n^{-\psi }\) for some \(C_3>0\) and some \(\psi \in (0,1)\). Then, for \(\zeta >0\), we have

$$\begin{aligned} \widehat{\kappa }_n=\frac{1}{N_n(1)-r_n-2}\sum _{i=r_n+3}^{N_n(1)} \chi _i^n {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\kappa _{\zeta , \beta }^{\zeta /\beta } \end{aligned}$$

where

$$\begin{aligned} \chi _i^n = \left( \left( \frac{r_n}{\tau _{i-2}^n-\tau _{i-2-r_n}^n}\right) ^{1-\hat{\beta }_n} \left( \left( \tau _{i}^n-\tau _{i-1}^n\right) ^{1-\hat{\beta }_n}+\left( \tau _{i-1}^n -\tau _{i-2}^n\right) ^{1-\hat{\beta }_n}\right) \right) ^{\zeta /\hat{\beta }_n}. \end{aligned}$$
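In code, the estimator of Lemma 4.1 reads as follows (a sketch assuming the observation times on [0, 1] are available as an increasing array; the function name and interface are ours):

```python
def kappa_hat(taus, r_n, beta_hat, zeta):
    """Estimate kappa_{zeta,beta}^{zeta/beta} from observation times.

    taus: increasing times tau_0^n, ..., tau_{N_n(1)}^n.
    r_n:  length of the window used for the local rescaling.
    """
    N = len(taus) - 1  # N_n(1), the number of increments
    total = 0.0
    for i in range(r_n + 3, N + 1):
        # reciprocal of the average spacing over the r_n increments ending at i-2
        scale = (r_n / (taus[i - 2] - taus[i - 2 - r_n])) ** (1.0 - beta_hat)
        incs = ((taus[i] - taus[i - 1]) ** (1.0 - beta_hat)
                + (taus[i - 1] - taus[i - 2]) ** (1.0 - beta_hat))
        total += (scale * incs) ** (zeta / beta_hat)
    return total / (N - r_n - 2)
```

As a sanity check, for perfectly equidistant times every summand collapses to \(2^{\zeta /\hat{\beta }_n}\), which matches \(\kappa _{\zeta ,\beta }^{\zeta /\beta }\) in the degenerate case \(\phi ^{(1)}=\phi ^{(2)}=1\).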

As an example, we discuss the case of \(\Delta _n^{-1}=1000\) with \(\beta = 1.7\), \(\rho = 0.5\) and \(r_n = N_n(1)^{4/5}\), but we now choose \(u_n = N_n(1)^{-0.28}\) and keep all other variables unchanged. In this case, mean and empirical variance become 1.7042 and 1.7970, both improving on the corresponding values from Table 1. Also, the corresponding QQ-plot in Fig. 4 clearly shows a better approximation of the limiting normal distribution, though the original problems in the right tail remain.

Fig. 4

QQ-plots in the case \(N=1000\), \(\Delta _n^{-1}=1000\), \(\beta =1.7\) and \(\rho = 0.5\). On the left: The standard estimator \(\hat{\beta }(p,u_n,v_n)\). On the right: The bias-corrected estimator \(\overline{\beta }(p,u_n,v_n)\)

So far, we have used the theoretical variance to draw the QQ-plots, and we have seen that the boundedness of \(\hat{\beta }(p,u_n,v_n)\) causes the empirical distribution not to match the limiting normal distribution in the upper tails. In practice, however, the true limiting variance

$$\begin{aligned} v^2 = \frac{(\rho ^\beta +1)(4-2^\beta )-2(2+2\rho ^\beta - (1+\rho )^\beta -|1-\rho |^\beta )}{\kappa _{\beta ,\beta }\rho ^\beta \log (1/\rho )^2 C_{p,\beta }} \end{aligned}$$

in Theorem 3.5 is unknown and needs to be estimated. We will present two ways to estimate it. For a full plug-in estimator note from eq. (3.3) in Todorov (2015) that

$$\begin{aligned} \frac{A_\beta }{\mu _{p,\beta }} = \left( \frac{2^p\Gamma ((1+p)/2)\Gamma (1-p/\beta )}{\sqrt{\pi }\Gamma (1-p/2)}\right) ^{-\beta /p} \end{aligned}$$

holds. Hence, the only unknown quantities in the variance are \(\beta \) and \(\kappa _{\zeta , \beta }\) (with \(\zeta =p, \beta \)), so Corollary 3.6 and Lemma 4.1 provide us with a consistent plug-in estimator for the limiting variance.
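Since the ratio involves gamma functions only, it can be evaluated directly (a sketch; the function name is ours):

```python
import math

def a_over_mu(p, beta):
    """A_beta / mu_{p,beta} via the gamma identity from eq. (3.3) in Todorov (2015)."""
    inner = (2.0 ** p * math.gamma((1.0 + p) / 2.0) * math.gamma(1.0 - p / beta)
             / (math.sqrt(math.pi) * math.gamma(1.0 - p / 2.0)))
    return inner ** (-beta / p)
```

As a quick check of the implementation, for \(p=1\) and \(\beta =2\) the bracket reduces to \(2/\sqrt{\pi }\), so the function returns \(\pi /4\).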

An alternative estimator can be obtained without an actual need to directly estimate \(\kappa _{\zeta , \beta }\). To this end, note that Theorem 3.5 proves the variance of \(\hat{\beta }(p,u_n,v_n)-\beta \) to be proportional to the reciprocal of \(u_n^{\beta } \kappa _{\beta , \beta } C_{p, \beta }\), and we see from (3.3) and Theorem 3.3 that the latter can be consistently estimated by \(1-\widetilde{L}^n(p,u_n)\). Plugging in \(\hat{\beta }(p,u_n,v_n)\) for the unknown \(\beta \) then gives a second estimator for the unknown limiting variance.

Figure 5 contains the QQ-plots for the bias-corrected estimator \(\overline{\beta }(p,u_n,v_n)\) when standardised with the two different estimators for the limiting variance. For a direct comparison we have chosen the same setting as in Fig. 4, i.e. we have \(\Delta _n^{-1}=1000\), \(\beta = 1.7\), \(\rho = 0.5\) and \(u_n = N_n(1)^{-0.28}\), with the only difference being that we have run \(N=10{,}000\) simulations. We have further chosen \(r_n = N_n(1)^{4/5}\) for the full plug-in estimator with a variance estimation using Lemma 4.1. Both normal approximations work well, with a slight edge for the estimator based on \(1-\widetilde{L}^n(p,u_n)\), which performs better in the lower tails. Note that the clear improvement in comparison to Fig. 4 can be explained by the fact that for values of \(\hat{\beta }(p,u_n,v_n)\) very close to the upper bound of 2 the estimator for the variance also takes values very close to 0.

Fig. 5

QQ-plots in the case \(N=10{,}000\), \(\Delta _n^{-1}=1000\), \(\beta =1.7\) and \(\rho = 0.5\). On the left: The empirical distribution of the bias-corrected \(\overline{\beta }(p,u_n,v_n)- \beta \), standardised using the full plug-in estimator from Lemma 4.1. On the right: The empirical distribution of the bias-corrected \(\overline{\beta }(p,u_n,v_n)- \beta \), with a variance estimation based on \(1-\widetilde{L}^n(p,u_n)\)

5 Proofs

5.1 Prerequisites on localisation

As usual one starts with localisation results, i.e. with results that allow us to prove the main theorems under conditions which are slightly stronger than Conditions 2.1 and 2.2 for the processes involved, and also stronger than Condition 2.3 on the sampling scheme. We begin with the additional assumptions on the processes.

Condition 5.1

In addition to Conditions 2.1 and 2.2 we assume that

  (a)

    \(|\sigma _t|\) and \(|\sigma _t|^{-1}\) are uniformly bounded;

  (b)

    \(|\delta ^\alpha (t,x)|+|\delta ^\sigma (t,x)|\le \gamma (x)\) for all \(t>0\), where \(\gamma (x)\) is a deterministic bounded function on \(\mathbb {R}\) with \(\int _E|\gamma (x)|^r\lambda (\textrm{d}x)<\infty \) for some \(0\le r<2\);

  (c)

    \(b^\alpha , b^\sigma , \eta ^\alpha , \eta ^\sigma , \widetilde{\eta }^\alpha , \widetilde{\eta }^\sigma , \overline{\eta }^\alpha \) and \(\overline{\eta }^\sigma \) are bounded;

  (d)

    the process \(\left( \int _{\mathbb {R}}\left( |x|^{\beta '}\wedge 1\right) \nu _t^Y(\textrm{d}x)\right) _{t\ge 0}\) is bounded and the jumps of Y are bounded;

  (e)

    the jumps of \(\acute{S}\) and \(\grave{S}\) are bounded.

Similar properties are assumed to hold for \(\lambda \) from Condition 2.3.

The following lemma gives the formal reason why we may assume in the following that the strengthened Condition 5.1 holds: we are interested in X on the bounded interval [0, 1] only, and for a localising sequence we eventually have \(E_p > 1\), at least with a probability converging to 1. Its proof closely resembles the one of Lemma 4.4.9 in Jacod and Protter (2012), which is why we refer the reader to part 3) of their proof.

Lemma 5.2

Let X be a process fulfilling Conditions 2.1 and 2.2. Then, for each \(p>0\) there exists a stopping time \(E_p\) and a process X(p) such that X(p) and its components, \(\alpha (p)\), \(\sigma {(p)}\) and Y(p), fulfill Condition 5.1, and it also holds that \(X(p)_t = X_t\) for all \(t<E_p\). The sequence of stopping times can be chosen such that \(E_p\nearrow \infty \) almost surely as \(p\rightarrow \infty \).

For all proofs concerning the asymptotics of \(\widetilde{L}^n(p,u)\) and \(\hat{\beta }(p,u_n,v_n)\) it is crucial that the process \(\lambda _t\) driving the observation times \(\tau _i^n\) is bounded from above and below. This means that we need a stronger assumption than just Condition 2.3 as well, and we also need to assume that for a given n the number of observations up to any fixed T is bounded by a constant times \(\Delta _n^{-1} T\).

Condition 5.3

In addition to Condition 2.3 there exists some \(C>1\) such that

  (a)

    The process \(\lambda \) fulfills the same assumptions as \(\sigma \) in Condition 5.1, and in particular we have for all \(t>0\)

    $$\begin{aligned} \frac{1}{C} \le \lambda _t \le C. \end{aligned}$$
  (b)

    For any given n and any \(T > 0\) we have

    $$\begin{aligned} N_n(T) \le C \Delta _n^{-1} T. \end{aligned}$$

Strengthening Condition 2.3 ultimately results in changing the entire observation scheme, which makes it somewhat harder to formally prove that such an assumption is indeed adequate. We begin with a result on the boundedness of \(\lambda \) as in part (a) above, and for every n we let \(F_n\) be a random variable which depends not only on n but also on the process X and on the discretisation scheme via \(\lambda \) and the variables \(\phi _i^n\). Likewise, a possible stable limit F of \(F_n\) is assumed to depend on the same factors and is realised on an extension \((\widetilde{\Omega },\widetilde{\mathcal {G}},\widetilde{\mathbb {P}})\) of the original probability space \((\Omega ,\mathcal {G},\mathbb {P})\). Furthermore, for each \(C > 1\) we define \(\lambda ^{(C)}_t\) in such a way that \(\frac{1}{C} \le \lambda ^{(C)}_t \le C\) holds. We then let \(E_C\) be the stopping time from Lemma 5.2 with C replacing p; this lemma is applicable because, by Condition 2.3, \(\lambda \) satisfies the same structural properties as \(\sigma \).

Lemma 5.4

Assume that Condition 2.3 holds and construct, for each \(C>1\), each stopping time \(E_C\) and each process \(\lambda ^{(C)}\), a new discretisation scheme, i.e. new stopping times \(\{\tau _i^{n,C}:i \ge 0\}\) and a new \(N_n^C(T)\) as in Condition 2.3 but with the process \(\lambda ^{(C)}\) instead of \(\lambda \). Define a sequence of associated random variables \(F_n(C)\) similarly to \(F_n\) but with the process \(\lambda ^{(C)}\) replacing \(\lambda \), \(\{\tau _i^{n,C}:i \ge 0\}\) replacing \(\{\tau _i^{n}:i \ge 0\}\) and \({N}_n^C(T)\) replacing \(N_n(T)\), and likewise for F(C) on \((\widetilde{\Omega },\widetilde{\mathcal {G}},\widetilde{\mathbb {P}})\). If for each \(C>1\) it holds that

$$\begin{aligned} F_n(C)~{\mathop {\longrightarrow }\limits ^{{\mathcal {L}}-(s)}}~F(C) \end{aligned}$$
(5.1)

and if furthermore

$$\begin{aligned} F_n(C)\mathbbm {1}_{\{E_C>T \}} = F_n \mathbbm {1}_{\{E_C>T \}} \text {~~ and ~~} F(C)\mathbbm {1}_{\{E_C>T \}} = F \mathbbm {1}_{\{E_C>T \}} \end{aligned}$$
(5.2)

then we have \(F_n~{\mathop {\longrightarrow }\limits ^{{\mathcal {L}}-(s)}}~F\).

Proof

Let \(\widetilde{\mathbb {E}}\) be the expectation w.r.t. \(\widetilde{\mathbb {P}}\). We need to prove

$$\begin{aligned}&\limsup _{n \rightarrow \infty }\left| \mathbb {E}\left[ Y f(F_n)\right] - \widetilde{\mathbb {E}}\left[ Yf(F)\right] \right| = 0 \end{aligned}$$

where Y is any bounded random variable on \((\Omega ,\mathcal {G})\) and f is any bounded continuous function, and using

$$\begin{aligned}&\limsup _{n \rightarrow \infty }\left| \mathbb {E}\left[ Y f(F_n)\right] - \widetilde{\mathbb {E}}\left[ Yf(F)\right] \right| = \limsup _{C \rightarrow \infty } \limsup _{n \rightarrow \infty } \left| \mathbb {E}\left[ Y f(F_n)\right] - \widetilde{\mathbb {E}}\left[ Yf(F)\right] \right| \\&\quad \le \limsup _{C \rightarrow \infty } \limsup _{n \rightarrow \infty }\left| \mathbb {E}\left[ Y f(F_n)\right] - \mathbb {E}\left[ Yf(F_n(C))\right] \right| \\&\qquad + \limsup _{C \rightarrow \infty } \limsup _{n \rightarrow \infty }\left| \mathbb {E}\left[ Y f(F_n(C))\right] - \widetilde{\mathbb {E}}\left[ Yf(F(C))\right] \right| \\&\qquad + \limsup _{C \rightarrow \infty } \limsup _{n \rightarrow \infty } \left| \widetilde{\mathbb {E}}\left[ Y f(F(C))\right] - \widetilde{\mathbb {E}}\left[ Yf(F)\right] \right| \end{aligned}$$

it is sufficient to prove that each of the three summands vanishes. For the first one, by boundedness of Y and f and using (5.2), it is obvious that

$$\begin{aligned} \left| \mathbb {E}\left[ Y (f(F_n)-f(F_n(C)))\right] \right| = \left| \mathbb {E}\left[ Y (f(F_n)-f(F_n(C))) \mathbbm {1}_{\{ E_C\le T \}}\right] \right| \le K \mathbb {P}\left( E_C\le T\right) . \end{aligned}$$

Here and below, K always denotes a generic positive constant. Thus,

$$\begin{aligned} \limsup _{C \rightarrow \infty } \limsup _{n \rightarrow \infty }\left| \mathbb {E}\left[ Y f(F_n)\right] - \mathbb {E}\left[ Yf(F_n(C))\right] \right| \le K \limsup _{C \rightarrow \infty } \mathbb {P}\left( E_C\le T\right) = 0, \end{aligned}$$

and the same proof applies for the third term. Finally, note that

$$\begin{aligned} \limsup _{n \rightarrow \infty }\left| \mathbb {E}\left[ Y f(F_n(C))\right] - \widetilde{\mathbb {E}}\left[ Yf(F(C))\right] \right| = 0 \end{aligned}$$

for each fixed C is an immediate consequence of (5.1). \(\square \)

By construction \(\lambda _t\) and \(\lambda ^{(C)}_t\) coincide for all \(0\le t \le T\) on the set \(\{E_C> T\}\). As our estimators only deal with observations up to a fixed time horizon T (in our specific case the convenient but arbitrary \(T=1\)), it is clear that condition (5.2) is indeed met. Therefore we may assume for the following proofs that part (a) of Condition 5.3 is in force and only prove (5.1) under this strengthened assumption.

Finally, we need to explain why we can assume that part (b) of Condition 5.3 holds as well. Here we refer to part 2) of the proof of Lemma 9 in Jacod and Todorov (2018), where a family of discretisation schemes with the desired properties is constructed and where each member of the family coincides with the original sampling scheme up to some random time \(S^n_{\ell _n}\). As these times are shown to converge to infinity almost surely, the same argument as before allows us to assume part (b) of Condition 5.3 without loss of generality.

For further information on random discretisation schemes one can consult Section 14.1 in Jacod and Protter (2012), where a slightly different version of Lemma 5.4 and other important properties of objects connected to these schemes are proven. We highlight one of those properties in particular because we will use it repeatedly in the following sections: (14.1.10) in Jacod and Protter (2012) shows that for all \(t\ge 0\) we have

$$\begin{aligned} \Delta _n N_n(t) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t}\frac{1}{\lambda _s}\textrm{d}s \end{aligned}$$
(5.3)

which essentially allows us to treat the random \(N_n(t)\) like the deterministic \(\Delta _n^{-1}\) in most asymptotic considerations.
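Relation (5.3) is easy to confirm in a toy simulation (a sketch under the simplifying assumptions of a constant intensity \(\lambda \equiv \lambda _0\) and i.i.d. standard exponential \(\phi _i^n\); then \(\Delta _n N_n(t)\) should approach \(t/\lambda _0\)):

```python
import random

random.seed(0)
delta_n, lam0, t_max = 1e-4, 2.0, 1.0

# Generate tau_i - tau_{i-1} = Delta_n * phi_i * lambda and count
# the observations N_n(t_max) that fall into [0, t_max].
tau, n_obs = 0.0, 0
while True:
    tau += delta_n * random.expovariate(1.0) * lam0
    if tau > t_max:
        break
    n_obs += 1

approx = delta_n * n_obs   # Delta_n * N_n(t_max)
target = t_max / lam0      # integral of 1/lambda_s over [0, t_max]
```

With roughly \(t/(\Delta _n \lambda _0) = 5000\) observations, the two quantities agree up to Monte Carlo error of about one percent.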

5.2 A crucial decomposition

The proofs of Theorems 3.1 and 3.3 rely on a simple decomposition which allows us to identify the terms that play a dominant role in the asymptotic treatment. Precisely, we have

$$\begin{aligned} \widetilde{L}^n(p,u)-L(p,u,\beta )=\frac{1}{N_n(1)-k_n-2} \left( R_1^n(u)+R_2^n(u)+Z^n(u)+R_3^n(u)+R_4^n(u)\right) \end{aligned}$$
(5.4)

where

$$\begin{aligned} z_i(u)&=\cos \left( u\frac{\sigma _{\tau _{i-2}^n} \left( \widetilde{\Delta _{i}^nS}-\left( \frac{\lambda _{\tau _{i-2}^n}}{\lambda _{\tau _{i-3}^n}}\right) ^{\frac{1}{\beta }-1}\widetilde{\Delta _{i-1}^nS} \right) }{\widetilde{V}_i^n(p)^{1/p}}\right) \\&\quad -\mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{A_\beta u^\beta {|\sigma _{\tau _{i-2}^n}|}^\beta |\lambda _{\tau _{i-2}^n}|^{1-\beta } ((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{\Delta _n^{-1}\widetilde{V}_i^n(p)^{\beta /p}}\right) \right] ,\\ Z^n(u)&=\sum _{i=k_n+3}^{N_n(1)}z_i(u), \end{aligned}$$

drives the asymptotics while the residual terms are given by

$$\begin{aligned} r_i^1(u)&=\cos \left( u\frac{\widetilde{\Delta _i^nX}-\widetilde{\Delta _{i-1}^nX}}{\widetilde{V}_i^n(p)^{1/p}}\right) -\cos \left( u\frac{\sigma _{\tau _{i-2}^n} (\widetilde{\Delta _{i}^nS}-\widetilde{\Delta _{i-1}^nS})}{\widetilde{V}_i^n(p)^{1/p}}\right) ,\\ r_i^2(u)&=\cos \left( u\frac{\sigma _{\tau _{i-2}^n}(\widetilde{\Delta _{i}^nS} -\widetilde{\Delta _{i-1}^nS})}{\widetilde{V}_i^n(p)^{1/p}}\right) -\cos \left( u\frac{\sigma _{\tau _{i-2}^n}\left( \widetilde{\Delta _{i}^nS}- \left( \frac{\lambda _{\tau _{i-2}^n}}{\lambda _{\tau _{i-3}^n}}\right) ^{\frac{1}{\beta }-1} \widetilde{\Delta _{i-1}^nS}\right) }{\widetilde{V}_i^n(p)^{1/p}}\right) \\ r_i^3(u)&=\mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{A_\beta u^\beta {|\sigma _{\tau _{i-2}^n}|}^\beta |\lambda _{\tau _{i-2}^n}|^{1-\beta } ((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{\Delta _n^{-1}\widetilde{V}_i^n(p)^{\beta /p}}\right) \right] \\&\quad -\mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{C_{p,\beta } u^\beta {|\sigma _{\tau _{i-2}^n}|}^\beta |\lambda _{\tau _{i-2}^n}|^{1-\beta } ((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{(|\overline{\sigma \lambda } |_i^p)^{\beta /p}}\right) \right] ,\\ r_i^4(u)&=\mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{C_{p,\beta } u^\beta {|\sigma _{\tau _{i-2}^n}|}^\beta |\lambda _{\tau _{i-2}^n}|^{1-\beta } ((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{ (|\overline{\sigma \lambda }|_i^p)^{\beta /p}}\right) \right] -L(p,u,\beta ),\\ R_j^n(u)&=\sum _{i=k_n+3}^{N_n(1)}r_i^j(u) \text { for } j\in \{1,2,3,4\}. \end{aligned}$$

Here we have set

$$\begin{aligned} |\overline{\sigma \lambda }|_i^p=\frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2} |\sigma _{\tau _{j-2}^n}|^p|\lambda _{\tau _{j-2}^n}|^{\frac{p}{\beta }-p} \end{aligned}$$

and we use the short hand notation \(\mathbb {E}_i^n[\cdot ]\) in place of \(\mathbb {E}[\cdot |{\mathcal {F}}^n_{\tau _i^n}]\). We also introduce the notation

$$\begin{aligned}&\overline{V}_i^n(p)=\frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2}\mathbb {E}^n_{j-2} \left[ |\widetilde{\Delta _j^nX}-\widetilde{\Delta _{j-1}^nX}|^p\right] , \quad k_n+3\le i \le N_n(1), \end{aligned}$$

and set

$$\begin{aligned}&\overline{z}_i(u)=\cos \left( u\frac{\lambda _{\tau _{i-2}^n}^{1-1/\beta }\widetilde{\Delta _i^n S}-\lambda _{\tau _{i-3}^n}^{1-1/\beta }\widetilde{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta } \mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) -L(p,u,\beta ), \qquad \overline{Z}^n(u)=\sum _{i=k_n+3}^{N_n(1)}\overline{z}_i(u). \end{aligned}$$

We will start with a discussion of the asymptotic orders of the residuals, for which we always assume that Conditions 5.1 and 5.3 hold and that \(k_n\sim C_1 \Delta _n^{-\varpi }\) for some \(C_1> 0\) and some \(\varpi \in (0,1)\). Naturally, we need some preparation to obtain asymptotic negligibility, and we start with a lemma containing a series of bounds for moments of certain increments of (often integrated and rescaled) processes. We do not give a proof of this result but refer to Todorov (2015, 2017); in fact, the techniques used for the proof for the most part resemble the ones given therein. The main difference is that our arguments often involve the additional process \(\lambda \), which sometimes complicates matters considerably.

Recall the decomposition (3.1); we further write

$$\begin{aligned} S_t=S_t^{(1)}+S_t^{(2)} \end{aligned}$$
(5.5)

with \(S_t^{(1)}=\int _{0}^{t}\int _\mathbb {R}\kappa (x)\widetilde{\mu }(\textrm{d}s,\textrm{d}x)\), where \(\widetilde{\mu }(\textrm{d}s,\textrm{d}x)\) denotes the compensated jump measure of S, and \(S_t^{(2)}=\int _{0}^{t}\int _\mathbb {R}\kappa '(x){\mu }(\textrm{d}s,\textrm{d}x)\).

Lemma 5.5

Let \(i \ge 2\) be arbitrary.

  (a)

    For every \(p \in (-1,\beta )\) we have

    $$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ \left| \Delta _n^{-1/\beta }(\widetilde{\Delta _{i}^nS}-\widetilde{\Delta _{i-1}^nS})\right| ^p\right] \le K, \\&\mathbb {E}_{i-2}^n\left[ \left| \Delta _n^{-1/\beta }(\lambda _{\tau _{i-2}^n}^{1-1/\beta }\widetilde{\Delta _{i}^nS} -\lambda _{\tau _{i-3}^n}^{1-1/\beta }\widetilde{\Delta _{i-1}^nS})\right| ^p\right] = \kappa _{p,\beta }^{p/\beta }\mu _{p,\beta }^{p/\beta }, \end{aligned}$$

    with the constants from (3.2).

  (b)

    For every \(p\in (0,\beta )\) we have

    $$\begin{aligned} \left| \mathbb {E}_{i-2}^n\left[ \left| \Delta _n^{-1/\beta }(\widetilde{\Delta _{i}^nS}-\widetilde{\Delta _{i-1}^nS})\right| ^p\right] - \lambda _{\tau _{i-2}^n}^{p/\beta - p}\kappa _{p,\beta }^{p/\beta }\mu _{p,\beta }^{p/\beta } \right| \le K \Delta _n^{1/2}. \end{aligned}$$
  (c)

    For every \(q\in (0,2]\), every \(\iota > 0\) and every \(l \in \{0,1\}\) we have

    $$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| \frac{\Delta _n}{\tau _{i-1+l}^n-\tau _{i-2+l}^n}\int _{\tau _{i-2+l}^n}^{\tau _{i-1+l}^n}(\sigma _{u-} -\sigma _{\tau _{i-2}^n})\textrm{d}S_u^{(1)}\right| ^q\right] \le K \Delta _n^{q/2+(q/\beta )\wedge 1-\iota }. \end{aligned}$$
  (d)

    For every \(q\in (0,2]\) and every \(l \in \{0,1\}\) we have

    $$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| \frac{\Delta _n}{\tau _{i-1+l}^n-\tau _{i-2+l}^n}\int _{\tau _{i-2+l}^n}^{\tau _{i-1+l}^n}(\sigma _{u-} -\sigma _{\tau _{i-2}^n})\textrm{d}S_u^{(2)}\right| ^q\right] \le K \Delta _n^{q/2+1}. \end{aligned}$$
  (e)

    For every \(q> 0\), every \(\iota > 0\) and every \(l \in \{0,1\}\) we have

    $$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| \frac{\Delta _n}{\tau _{i-1+l}^n-\tau _{i-2+l}^n}\int _{\tau _{i-2+l}^n}^{\tau _{i-1+l}^n}(\sigma _{u-} -\sigma _{\tau _{i-2}^n}) \textrm{d}\acute{S}_u\right| ^q\right] \le K\Delta _n^{q/2+{(q/\beta ')\wedge 1}-\iota }, \end{aligned}$$

    and the same relation holds with \(\grave{S}\) instead of \(\acute{S}\).

  (f)

    For every \(q > 0\) and every \(l \in \{0,1\}\) we have

    $$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| \frac{\Delta _n}{\tau _{i-1+l}^n-\tau _{i-2+l}^n}\Delta _{i-1+l}^nY\right| ^q\right] \le K\Delta _n^{(q/\beta ')\wedge 1}. \end{aligned}$$
  (g)

    For every \(q\in (0,2]\) we have

    $$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| \frac{\Delta _n}{\tau _{i}^n-\tau _{i-1}^n}\int _{\tau _{i-1}^n}^{\tau _{i}^n}\alpha _{u}\textrm{d}u-\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n}\int _{\tau _{i-2}^n}^{\tau _{i-1}^n}\alpha _{u}\textrm{d}u\right| ^q\right] \le K \Delta _n^{3q/2}. \end{aligned}$$

A second lemma, again without proof, discusses bounds for moments of increments of semimartingales. Again, it has some resemblance to results in Todorov (2015, 2017) but its proof is slightly more involved due to the random observation scheme.

Lemma 5.6

Let A be a semimartingale satisfying the same properties as \(\sigma \) in Condition 5.1. Then, for any \(-1<p<1\) and any \(y>0\), and with K possibly depending on y, we have

$$\begin{aligned} \left| \mathbb {E}_i^n\left[ |A_{\tau ^n_{i+j}}|^p-|A_{\tau ^n_{i}}|^p\right] \right|&\le K j \Delta _n, \\ \mathbb {E}_i^n\left[ \left| |A_{\tau ^n_{i+j}}|^p-|A_{\tau ^n_{i}}|^p\right| ^y\right]&\le K (j\Delta _n)^{y/2\wedge 1}. \end{aligned}$$

After the presentation of these auxiliary claims, we focus on results that directly simplify the discussion of the asymptotic negligibility of the residual terms. We begin with a result that helps in the treatment of \(R_1^n\).

Lemma 5.7

Let \(\iota >0\) and \(0<p<\frac{\beta }{2}\) be arbitrary. Then, for any \(i \ge 2\), we have

$$\begin{aligned} \Delta _n^{-p/\beta }\mathbb {E}_{i-2}^n\left[ \left| |\widetilde{\Delta _i^nX} -\widetilde{\Delta _{i-1}^nX}|^p-|\sigma _{\tau _{i-2}^n}|^p| \widetilde{\Delta _{i}^nS}-\widetilde{\Delta _{i-1}^nS}|^p\right| \right] \le K\alpha _n \end{aligned}$$

with \(\alpha _n=\Delta _n^{\frac{\beta }{2}\frac{p+1}{\beta +1} \wedge ((\frac{p}{\beta '}\wedge 1)-\frac{p}{\beta })\wedge \frac{1}{2}-\iota }\).

Proof

The proof relies on bounds for moments of several stochastic integrals, mostly connected with the jump process \(L_t\) and its parts. Using (3.1) and (5.5) we may write \(\widetilde{\Delta _i^nX} -\widetilde{\Delta _{i-1}^nX} = \chi _i^{(n,1)} + \chi _i^{(n,2)} + \chi _i^{(n,3)}\) with

$$\begin{aligned} \chi _i^{(n,1)}&=\sigma _{\tau _{i-2}^n}\left( \widetilde{\Delta _i^n S} -\widetilde{\Delta _{i-1}^n S} \right) ,\\ \chi _i^{(n,2)}&=\frac{\Delta _n}{\tau _{i}^n-\tau _{i-1}^n}\int _{\tau _{i-1}^n}^{\tau _{i}^n}(\sigma _{u-} -\sigma _{\tau _{i-2}^n})\textrm{d}S_u^{(1)}-\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n} \int _{\tau _{i-2}^n}^{\tau _{i-1}^n}(\sigma _{u-}-\sigma _{\tau _{i-2}^n})\textrm{d}S_u^{(1)}\\&\quad +\frac{\Delta _n}{\tau _{i}^n-\tau _{i-1}^n}\int _{\tau _{i-1}^n}^{\tau _{i}^n} \alpha _{u}\textrm{d}u-\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n}\int _{\tau _{i-2}^n}^{\tau _{i-1}^n}\alpha _{u}\textrm{d}u,\\ \chi _i^{(n,3)}&=\frac{\Delta _n}{\tau _{i}^n-\tau _{i-1}^n} \int _{\tau _{i-1}^n}^{\tau _{i}^n}(\sigma _{u-}-\sigma _{\tau _{i-2}^n})\textrm{d}S_u^{(2)} -\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n}\int _{\tau _{i-2}^n}^{\tau _{i-1}^n} (\sigma _{u-}-\sigma _{\tau _{i-2}^n})\textrm{d}S_u^{(2)}\\&\quad +\frac{\Delta _n}{\tau _{i}^n-\tau _{i-1}^n}\Delta _i^nY-\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n}\Delta _{i-1}^nY\\&\quad +\frac{\Delta _n}{\tau _{i}^n -\tau _{i-1}^n}\int _{\tau _{i-1}^n}^{\tau _{i}^n}\sigma _{u-}\textrm{d}\acute{S}_u -\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n}\int _{\tau _{i-2}^n}^{\tau _{i-1}^n}\sigma _{u-}\textrm{d}\acute{S}_u \\&\quad -\frac{\Delta _n}{\tau _{i}^n-\tau _{i-1}^n}\int _{\tau _{i-1}^n}^{\tau _{i}^n} \sigma _{u-}\textrm{d}\grave{S}_u +\frac{\Delta _n}{\tau _{i-1}^n-\tau _{i-2}^n} \int _{\tau _{i-2}^n}^{\tau _{i-1}^n}\sigma _{u-}\textrm{d}\grave{S}_u. \end{aligned}$$

We obviously have

$$\begin{aligned}&\Delta _n^{-p/\beta } \mathbb {E}_{i-2}^n\left[ \left| |\chi _i^{(n,1)} + \chi _i^{(n,2)} + \chi _i^{(n,3)}|^p - |\chi _i^{(n,1)}|^p \right| \right] \\&\quad \le \Delta _n^{-p/\beta } \mathbb {E}_{i-2}^n\left[ \left| |\chi _i^{(n,1)} + \chi _i^{(n,2)} + \chi _i^{(n,3)}|^p - |\chi _i^{(n,1)} + \chi _i^{(n,2)}|^p \right| \right] \\&\qquad + \Delta _n^{-p/\beta } \mathbb {E}_{i-2}^n\left[ \left| | \chi _i^{(n,1)} + \chi _i^{(n,2)}|^p - |\chi _i^{(n,1)}|^p \right| \right] , \end{aligned}$$

and since \(p< \beta /2 < 1\) holds, the inequality

$$\begin{aligned} \Delta _n^{-p/\beta } \mathbb {E}_{i-2}^n\left[ |\chi _i^{(n,3)} |^p\right] \le K \Delta _n^{p/\beta '\wedge 1-p/\beta }, \end{aligned}$$
(5.6)

which is an easy consequence of parts (d), (e) and (f) of Lemma 5.5, is enough to fully treat the first term on the right-hand side. For the second term, we will use parts (a), (c) and (g) of Lemma 5.5 plus the algebraic inequality

$$\begin{aligned} \left| |a+b|^p-|a|^p\right| \le K|a|^{p-1}|b|\mathbbm {1}_{\{|a|>\epsilon ,| b|<\frac{1}{2}\epsilon \}}+|b|^p(\mathbbm {1}_{\{|a|\le \epsilon \}}+\mathbbm {1}_{\{|b|> \frac{1}{2}\epsilon \}}), \end{aligned}$$

which holds for any \(\epsilon >0\) and \(p\in (0,1]\) with a constant K that does not depend on \(\epsilon \), and which we apply with \(a = \Delta _n^{-1/\beta } \chi _i^{(n,1)}\) and \(b=\Delta _n^{-1/\beta } \chi _i^{(n,2)}\). We start with the latter two terms and let \(0< \epsilon < 1\) be arbitrary. Then the Markov inequality in combination with the Hölder inequality first gives

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ |\Delta _n^{-1/\beta }\chi _i^{(n,2)}|^p \mathbbm {1}_{\{|\Delta _n^{-1/\beta } \chi _i^{(n,1)}|\le \epsilon \}}\right] \\&\quad \le \left( \mathbb {E}_{i-2}^n\left[ |\Delta _n^{-1/\beta } \chi _i^{(n,1)}|^{-1+\iota }\right] \epsilon ^{1-\iota }\right) ^{1-\frac{p}{\beta }} \left( \mathbb {E}_{i-2}^n\left[ |\Delta _n^{-1/\beta } \chi _i^{(n,2)}|^\beta \right] \right) ^\frac{p}{\beta } \\&\quad \le K \epsilon ^{1-p/\beta -\iota } (\Delta _n^{\beta /2-\iota })^{p/\beta } \le K \epsilon ^{1-p/\beta -\iota }\Delta _n^{p/2-\iota } \end{aligned}$$

(with a slight abuse of notation; recall that \(\iota > 0\) can be chosen arbitrarily small) and then

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ |\Delta _n^{-1/\beta }\chi _i^{(n,2)}|^p\mathbbm {1}_{\{|\Delta _n^{-1/\beta }\chi _i^{(n,2)}|> \frac{1}{2}\epsilon \}}\right] \\&\quad \le \left( \frac{\mathbb {E}_{i-2}^n\left[ |\Delta _n^{-1/\beta }\chi _i^{(n,2)}|^\beta \right] }{(\frac{1}{2}\epsilon )^\beta }\right) ^{1-\frac{p}{\beta }} \left( \mathbb {E}_{i-2}^n\left[ |\Delta _n^{-1/\beta }\chi _i^{(n,2)}|^\beta \right] \right) ^{\frac{p}{\beta }} \le K \epsilon ^{-(\beta -p)} \Delta _n^{\beta /2-\iota } \end{aligned}$$

as well. Setting \(\epsilon =\Delta _n^{\frac{1}{2}\frac{\beta }{\beta +1}}\) gives \(K \Delta _n^{\frac{\beta }{2}\frac{p+1}{\beta +1}-\iota }\) as the upper bound in both terms.

Finally, we have to distinguish between the cases \(p > 1/\beta \) and \(p \le 1/\beta \). In the first case a simple application of the Hölder inequality gives

$$\begin{aligned} \Delta _n^{-p/\beta } \mathbb {E}_{i-2}^n\left[ |\chi _i^{(n,1)}|^{p-1} |\chi _i^{(n,2)}| \right]&\le \Delta _n^{-p/\beta } \left( \mathbb {E}_{i-2}^n \left[ |\chi _i^{(n,1)}|^{\frac{(p-1)\beta }{\beta -1}}\right] \right) ^{\frac{\beta -1}{\beta }} \left( \mathbb {E}_{i-2}^n| \chi _i^{(n,2)}|^\beta \right) ^{\frac{1}{\beta }} \\&\le K \Delta _n^{-1/\beta + 1/2+1/\beta -\iota } = K \Delta _n^{1/2-\iota } \end{aligned}$$

for our specific choice of \(\iota > 0\). Note that (a) in Lemma 5.5 was indeed applicable, as \(p > 1/\beta \) ensures \((p-1)\beta /(\beta -1) > -1\). In the second case, we set \(\epsilon \) as above and use the Markov inequality with \(r = \frac{1+\iota }{\beta } - p\). Then

$$\begin{aligned}&\Delta _n^{-p/\beta } \mathbb {E}_{i-2}^n\left[ |\chi _i^{(n,1)}|^{p-1} |\chi _i^{(n,2)} | \mathbbm {1}_{\{|\Delta _n^{-1/\beta } \chi _i^{(n,1)}|> \epsilon \}}\right] \\&\quad \le \Delta _n^{-(p+r)/\beta } \mathbb {E}_{i-2}^n\left[ |\chi _i^{(n,1)} |^{p+r-1} |\chi _i^{(n,2)}| \right] \epsilon ^{-r}, \end{aligned}$$

and as now \(p+r > 1/\beta \) by construction, the same proof as in the first case proves this term to be of the order \(\Delta _n^{1/2-\iota } \epsilon ^{-r}\). Then

$$\begin{aligned} \frac{1}{2}-\iota - r \frac{\beta }{2(\beta +1)} = \frac{1}{2}- \frac{1}{2(\beta +1)} + \frac{p\beta }{2(\beta +1)} -\iota = \frac{\beta }{2}\frac{p+1}{\beta +1}-\iota \end{aligned}$$

(with the same abuse of notation) ends the proof. \(\square \)

We also need to control the denominators in \(R_1^n\) and \(R_2^n\) to make sure that they are bounded away from zero with high probability. To this end, we need two auxiliary results, and in both cases we let \(i \ge k_n + 3\) and \(0<p<\frac{\beta }{2}\) be arbitrary. The first result deals with the variables \(\overline{V}_i^n(p)\) which we introduced before, and it will be used for the treatment of \(R_3^n\) later on as well. Its proof is omitted as it is essentially the same as the one for equation (9.4) in Todorov (2015) and exploits standard inequalities for discrete martingales.

Lemma 5.8

Let \(k_n \sim C_1 \Delta _n^{-\varpi }\) for some \(C_1 > 0\) and \(\varpi \in (0,1)\). Then for all \(1\le x<\frac{\beta }{p}\) we have

$$\begin{aligned} \Delta _n^{-xp/\beta }\mathbb {E}\left[ {|\widetilde{V}_i^n(p)-\overline{V}_i^n(p)|}^x \right] \le K_xk_n^{-x/2} \end{aligned}$$

where the constant \(K_x\) might depend on x.

The second result deals with the set

$$\begin{aligned} \mathcal {C}_i^n=\left\{ |\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta } \kappa _{p,\beta }^{p/\beta }|>\frac{1}{2}|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta } \right\} \end{aligned}$$

and bounds its probability.

Lemma 5.9

For every fixed \(\iota > 0\) we have

$$\begin{aligned} \mathbb {P}(\mathcal {C}_i^n)\le K_\iota k_n^{-\beta /2p+\iota } \end{aligned}$$

where the constant \(K_\iota \) might depend on \(\iota \).

Proof

Lemmas 5.5 and 5.7 give

$$\begin{aligned}&\left| \Delta _n^{-p/\beta }\mathbb {E}_{i-2}^n\left[ \left| \widetilde{\Delta _i^nX} -\widetilde{\Delta _{i-1}^nX}\right| ^p\right] -|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n} |^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \nonumber \\&\quad \le \Delta _n^{-p/\beta }\mathbb {E}_{i-2}^n\left[ \left| \left| \widetilde{\Delta _i^nX} -\widetilde{\Delta _{i-1}^nX}\right| ^p- |\sigma _{\tau _{i-2}^n}|^p\left| \widetilde{\Delta _i^nS} -\widetilde{\Delta _{i-1}^nS}\right| ^p\right| \right] \nonumber \\&\qquad +\left| \mathbb {E}_{i-2}^n\left[ \Delta _n^{-p/\beta }|\sigma _{\tau _{i-2}^n}|^p \left| \widetilde{\Delta _i^nS}-\widetilde{\Delta _{i-1}^nS}\right| ^p\right] -|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p} \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \nonumber \\&\quad \le K\alpha _n + K |\sigma _{\tau _{i-2}^n}|^p \Delta _n^{1/2}, \end{aligned}$$
(5.7)

so

$$\begin{aligned} |\Delta _n^{-p/\beta }\overline{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }| \le K\alpha _n + K |\overline{\sigma }|^p_i \Delta _n^{1/2}, \end{aligned}$$
(5.8)

with \(|\overline{\sigma }|^p_i\) being defined as \(|\overline{\sigma \lambda }|^p_i\) but with \(\lambda \equiv 1\). Now, it is a simple consequence of Conditions 5.1 and 5.3 that both \(|\overline{\sigma \lambda }|^p_i\) and \(|\overline{\sigma }|^p_i\) are uniformly bounded from above and below. Since \(\alpha _n\rightarrow 0\) and \(\Delta _n^{1/2}\rightarrow 0\), there exists some \(n_0\in \mathbb {N}\) such that

$$\begin{aligned} \mathbb {P}\left( \left| \Delta _n^{-p/\beta }\overline{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta } \kappa _{p,\beta }^{p/\beta }\right| >\frac{1}{4}|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta } \kappa _{p,\beta }^{p/\beta } \right) = 0 \end{aligned}$$

for all \(n\ge n_0\). For these n,

$$\begin{aligned} \mathbb {P}({\mathcal {C}_i^n})&\le \mathbb {P}\left( \Delta _n^{-p/\beta }|\widetilde{V}_i^n (p)-\overline{V}_i^n(p)|>\frac{1}{4}|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta } \kappa _{p,\beta }^{p/\beta }\right) \\&~~~~+\mathbb {P}\left( |\Delta _n^{-p/\beta }\overline{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|>\frac{1}{4}|\overline{\sigma \lambda }|^p_i \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right) \le K_\iota k_n^{-\beta /2p+\iota } \end{aligned}$$

from Lemma 5.8, with a choice of x arbitrarily close to \(\frac{\beta }{p}\). The claim follows. \(\square \)
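As an aside, the decomposition of \(\mathbb {P}({\mathcal {C}_i^n})\) above rests on the elementary inclusion \(\{|x+y|>t\}\subseteq \{|x|>t/2\}\cup \{|y|>t/2\}\), which follows from the triangle inequality. A minimal numerical sketch of this inclusion, with an illustrative grid and threshold (outside the formal argument):

```python
import numpy as np

# Inclusion behind the union bound: |x + y| > t implies |x| > t/2 or |y| > t/2,
# checked pointwise on an illustrative grid.
t = 1.0
g = np.linspace(-3.0, 3.0, 121)
X, Y = np.meshgrid(g, g)
event = np.abs(X + Y) > t
union = (np.abs(X) > t / 2) | (np.abs(Y) > t / 2)
assert np.all(union[event])  # every point of the event lies in the union
```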

Finally, we provide two lemmas that simplify the discussion of \(R_4^n\). The first one gives an alternative representation for the limiting variable \(L(p,u,\beta )\).

Lemma 5.10

It holds that

$$\begin{aligned} \mathbb {E}^n_{i-2}\left[ \cos \left( u\frac{\lambda _{\tau _{i-2}^n}^{1-1/\beta }\widetilde{\Delta _i^n S}-\lambda _{\tau _{i-3}^n}^{1-1/\beta }\widetilde{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta } \mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \right] =L(p,u,\beta ). \end{aligned}$$

Proof

For the sake of simplicity, let us assume that the probability space can be even further enlarged to accommodate three independent random variables \(S^{(1)}\), \(S^{(2)}\) and \(S^{(3)}\), independent of \({\mathcal {G}}\), all with the same distribution as \(S_1\), i.e. distributed as a Lévy process with characteristic triplet (0, 0, F) at time 1, \(F(\textrm{d}x) = F(x) \textrm{d}x\) with \(F(x) = {A}|x|^{-(1+\beta )}\). Using standard properties of stable processes (see e.g. Section 1.2 in Samorodnitsky and Taqqu 1994), for constants \(\sigma _1, \sigma _2 \in \mathbb {R}\) we have

$$\begin{aligned} \sigma _1 S^{(1)} + \sigma _2 S^{(2)} \sim (|\sigma _1|^\beta +|\sigma _2|^\beta )^{1/\beta } S^{(3)}, \end{aligned}$$

and for our original process \(S_t\) the stability relation

$$\begin{aligned}&(S_{t+r}-S_{t})\sim r^{1/\beta }S_1 \text { for all } r, t \ge 0 \end{aligned}$$

holds as well. Because the increments of the process \((S_t)_{t\ge \tau _{i-2}^n}\) are independent of \(\tau _{i}^n-\tau _{i-1}^n= \Delta _n \phi _i^n \lambda _{\tau _{i-2}^n}\), the conditional distribution of \(\Delta _{i}^nS\) given \((\tau _{i}^n-\tau _{i-1}^n)=a\) equals that of \(a^{1/\beta }S^{(1)}\). Thus, for all Borel sets M we obtain e.g.

$$\begin{aligned} \mathbb {E}\left[ \mathbbm {1}_M\left( \frac{\Delta _{i}^nS}{\tau _{i}^n-\tau _{i-1}^n}\right) \right]&=\int _\mathbb {R}\mathbb {E}\left[ \mathbbm {1}_M\left( \frac{\Delta _{i}^nS}{\tau _{i}^n-\tau _{i-1}^n}\right) \Bigg \vert \left( \tau _{i}^n-\tau _{i-1}^n\right) =a\right] \mathbb {P}^{(\tau _{i}^n-\tau _{i-1}^n)}(\textrm{d}a)\\&=\mathbb {E}\left[ \mathbbm {1}_M\left( (\tau _{i}^n-\tau _{i-1}^n)^{1/\beta -1}S^{(1)}\right) \right] \end{aligned}$$

using that, by assumption, the moments of \((\phi _i^n)^{q}\) exist for all \(q\in (-2,0)\), together with \(1/\beta -1>-1\). Put differently,

$$\begin{aligned}&\Delta _n^{-1/\beta }\widetilde{\Delta _{i}^nS}\sim \Delta _n^{1-1/\beta } (\tau _{i}^n-\tau _{i-1}^n)^{1/\beta -1}S^{(1)}\sim (\lambda _{\tau _{i-2}^n} \phi _i^n)^{1/\beta -1}S^{(1)}, \\&\Delta _n^{-1/\beta }\widetilde{\Delta _{i-1}^nS}\sim \Delta _n^{1-1/\beta } (\tau _{i-1}^n-\tau _{i-2}^n)^{1/\beta -1}S^{(2)}\sim (\lambda _{\tau _{i-3}^n}\phi _{i-1}^n)^{1/\beta -1}S^{(2)}, \end{aligned}$$

where \(\phi _{i-1}^n\), \(\phi _i^n\), \(S^{(1)}\), \(S^{(2)}\) are all independent of \(\mathcal {F}_{\tau _{i-2}^n}\) and of each other. Thus

$$\begin{aligned}&\mathbb {E}^n_{i-2}\left[ \cos \left( u\frac{\lambda _{\tau _{i-2}^n}^{1-1/\beta }\widetilde{\Delta _i^n S}-\lambda _{\tau _{i-3}^n}^{1-1/\beta }\widetilde{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \right] \nonumber \\&\quad =\mathbb {E}_{i-2}^n\left[ \cos \left( u\frac{((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })^{1/\beta }S^{(3)}}{\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \right] \nonumber \\&\quad =\mathbb {E}\left[ \exp \left( -u^\beta \frac{A_{\beta }}{\mu _{p,\beta }\kappa _{p,\beta }}((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] \nonumber \\&\quad =\mathbb {E}\left[ \exp \left( -u^\beta C_{p,\beta }((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] =L(p,u,\beta ) \end{aligned}$$
(5.9)

after successive conditioning. \(\square \)
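As a sanity check outside the formal argument, the stability relation used in the proof can be verified at the level of characteristic functions, since a symmetric \(\beta \)-stable variable with scale c has characteristic function \(\exp (-|cu|^\beta )\). A short sketch with illustrative constants:

```python
import numpy as np

# For a symmetric beta-stable S with E[exp(i u c S)] = exp(-|c u|^beta), the relation
#   sigma1*S1 + sigma2*S2 ~ (|sigma1|^beta + |sigma2|^beta)^(1/beta) * S3
# is equality of characteristic functions (beta, sigma1, sigma2 are illustrative).
beta = 1.5
sigma1, sigma2 = 0.7, -1.3
u = np.linspace(-5.0, 5.0, 201)
cf_sum = np.exp(-np.abs(sigma1 * u) ** beta) * np.exp(-np.abs(sigma2 * u) ** beta)
scale = (abs(sigma1) ** beta + abs(sigma2) ** beta) ** (1.0 / beta)
cf_scaled = np.exp(-np.abs(scale * u) ** beta)
assert np.allclose(cf_sum, cf_scaled)
```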

Using Lemma 5.10 it is clear that the treatment of \(R_4^n\) hinges on how well \(|\overline{\sigma \lambda }|^p_i\) can be approximated by \(|\sigma _{\tau _{i-2}^n}|^p |\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\).

Lemma 5.11

Let \(k_n \sim C_1 \Delta _n^{-\varpi }\) for some \(C_1 > 0\) and \(\varpi \in (0,1)\). Then for all \(y>1\) we have

$$\begin{aligned} \left| \mathbb {E}_{i-k_n-3}^n\left[ |\overline{\sigma \lambda }|^p_i-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n} |^{\frac{p}{\beta }-p}\right] \right|&\le K k_n\Delta _n,\\ \mathbb {E}_{i-k_n-3}^n\left[ \left| |\overline{\sigma \lambda }|^p_i-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n} |^{\frac{p}{\beta }-p}\right| ^y\right]&\le K(k_n\Delta _n)^{y/2\wedge 1}. \end{aligned}$$

Proof

We first prove

$$\begin{aligned}&\left| \mathbb {E}_{i}^n\left[ |\sigma _{\tau ^n_{i+j}}|^p|\lambda _{\tau _{i+j}^n} |^{\frac{p}{\beta }-p}-|\sigma _{\tau ^n_{i}}|^p|\lambda _{\tau _{i}^n} |^{\frac{p}{\beta }-p}\right] \right| \le K j\Delta _n \end{aligned}$$

and

$$\begin{aligned}&\mathbb {E}_{i}^n\left[ \left| |\sigma _{\tau ^n_{i+j}}|^p|\lambda _{\tau _{i+j}^n} |^{\frac{p}{\beta }-p}-|\sigma _{\tau ^n_{i}}|^p|\lambda _{\tau _{i}^n}|^{\frac{p}{\beta }-p} \right| ^y\right] \le K (j\Delta _n)^{\frac{y}{2}\wedge 1}. \end{aligned}$$
(5.10)

For the first claim, note that the decomposition

$$\begin{aligned}&|\sigma _{\tau ^n_{i+j}}|^p|\lambda _{\tau _{i+j}^n}|^{\frac{p}{\beta }-p} -|\sigma _{\tau ^n_{i}}|^p|\lambda _{\tau _{i}^n}|^{\frac{p}{\beta }-p}\\&\quad = |\sigma _{\tau ^n_{i}}|^p(|\lambda _{\tau _{i+j}^n}|^{\frac{p}{\beta }-p} - |\lambda _{\tau _{i}^n}|^{\frac{p}{\beta }-p}) + |\lambda _{\tau _{i}^n}|^{\frac{p}{\beta }-p} (|\sigma _{\tau ^n_{i+j}}|^p - |\sigma _{\tau ^n_{i}}|^p) \\&\qquad + (|\lambda _{\tau _{i+j}^n}|^{\frac{p}{\beta }-p} - |\lambda _{\tau _{i}^n}|^{\frac{p}{\beta }-p}) (|\sigma _{\tau ^n_{i+j}}|^p - |\sigma _{\tau ^n_{i}}|^p) \end{aligned}$$

holds. Now we can apply Lemma 5.6 to each of the three terms, for the third one in combination with the Cauchy–Schwarz inequality; the boundedness of \(\sigma \) and \(\lambda \) from above and below plus the fact that \(1< \beta < 2\) and \(p< \beta /2 < 1\) guarantee that the exponents lie between \(-1\) and 1. A similar reasoning works for the second claim.

We then obtain

$$\begin{aligned}&\left| \mathbb {E}_{i-k_n-3}^n[|\overline{\sigma \lambda }|^p_i-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}]\right| \\&\quad =\left| \mathbb {E}_{i-k_n-3}^n\left[ \frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2}(|\sigma _{\tau _{j-2}^n}|^p|\lambda _{\tau _{j-2}^n}|^{\frac{p}{\beta }-p}-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] \right| \\&\quad \le \frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2}\left| \mathbb {E}_{i-k_n-3}^n\left[ |\sigma _{\tau _{j-2}^n}|^p|\lambda _{\tau _{j-2}^n}|^{\frac{p}{\beta }-p}-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\right] \right| \\&\quad \le \frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2}K(i-j)\Delta _n \le Kk_n\Delta _n \end{aligned}$$

easily, and by convexity of \(x\mapsto x^y\) on \(\mathbb {R}_+\)

$$\begin{aligned}&\mathbb {E}_{i-k_n-3}^n\left[ \left| |\overline{\sigma \lambda }|^p_i-|\sigma _{\tau _{i-2}^n}|^p| \lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\right| ^y\right] \\&\quad =\mathbb {E}_{i-k_n-3}^n\left[ \left| \frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2} (|\sigma _{\tau _{j-2}^n}|^p|\lambda _{\tau _{j-2}^n}|^{\frac{p}{\beta }-p}-|\sigma _{\tau _{i-2}^n} |^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right| ^y\right] \\&\quad \le \frac{1}{k_n}\sum _{j=i-k_n-1}^{i-2}\mathbb {E}_{i-k_n-3}^n\left[ \left| |\sigma _{\tau _{j-2}^n}|^p| \lambda _{\tau _{j-2}^n}|^{\frac{p}{\beta }-p}-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\right| ^y\right] \end{aligned}$$

gives the claim. \(\square \)
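The convexity step above is Jensen's inequality, \(|k_n^{-1}\sum _j a_j|^y\le k_n^{-1}\sum _j |a_j|^y\) for \(y\ge 1\). A quick numerical sanity check with illustrative data (outside the formal argument):

```python
import numpy as np

# Jensen's inequality for x -> x^y on the positive half-line:
# |mean(a)|^y <= mean(|a|^y) for y >= 1 (illustrative data).
rng = np.random.default_rng(0)
a = rng.normal(size=50)
for y in (1.0, 1.5, 2.0, 3.0):
    assert np.abs(a.mean()) ** y <= np.mean(np.abs(a) ** y) + 1e-12
```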

5.3 Bounding the residual terms

In what follows, let \(u > 0\), \(0<p<\frac{\beta }{2}\) and \(\iota > 0\) be arbitrary but fixed, and we always assume that \(k_n\sim C_1 \Delta _n^{-\varpi }\) for some \(C_1 > 0\) and \(\varpi \in (0,1)\). We also use the notation \(n = \Delta _n^{-1}\) for convenience.

Lemma 5.12

We have

$$\begin{aligned} \frac{1}{n-k_n-2}\mathbb {E}\left[ \left| R_1^n(u)\right| \right] \le K \left( k_n^{-\beta /2p+\iota }\vee u^{\beta '}\Delta _n^{(\beta -\beta ')/\beta } \vee u\Delta _n^{1/2-\iota } \right) \end{aligned}$$

where the constant K might depend on p, \(\beta \) and \(\iota \) but not on u.

Proof

We decompose \(r_i^1(u)=r_i^1(u) \mathbbm {1}_{\mathcal {C}_i^n}+r_i^1(u)\mathbbm {1}_{({\mathcal {C}_i^n})^C}\), and as \(\cos (x)\) is bounded, Lemma 5.9 gives for any \(i \ge k_n+3\)

$$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| r_i^1(u)\mathbbm {1}_{{\mathcal {C}_i^n}}\right| \right] \le K \mathbb {P}({\mathcal {C}_i^n}) \le K k_n^{-\beta /2p+\iota }. \end{aligned}$$

Thus, using Assumption 5.3 we obtain

$$\begin{aligned} \frac{1}{n-k_n-2}\mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)}\left| r_i^1(u) \mathbbm {1}_{{\mathcal {C}_i^n}}\right| \right]&= \frac{1}{n-k_n-2}\mathbb {E}\left[ \sum _{i=k_n+3}^{\lfloor C \Delta _n^{-1} \rfloor }\left| \mathbbm {1}_{\{N_n(1)\ge i\}}r_i^1(u) \mathbbm {1}_{{\mathcal {C}_i^n}}\right| \right] \nonumber \\&\le \frac{1}{n-k_n-2}\sum _{i=k_n+3}^{\lfloor C \Delta _n^{-1} \rfloor }\mathbb {E}\left[ |r_i^1(u) \mathbbm {1}_{{\mathcal {C}_i^n}}|\right] \le K k_n^{-\beta /2p+\iota }. \end{aligned}$$
(5.11)

On the other hand, on \(({\mathcal {C}_i^n})^C\) the relation

$$\begin{aligned} \frac{1}{2}|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta } \le \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\le \frac{3}{2}|\overline{\sigma \lambda }|^p_i \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta } \end{aligned}$$

holds. Since \(|\overline{\sigma \lambda }|^p_i\) is bounded from above and below by Assumption 5.1, \(\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\) is likewise bounded, with constants possibly depending on p and \(\beta \). Let us use the notation from the proof of Lemma 5.7 and write

$$\begin{aligned} \widetilde{\Delta _i^nX} -\widetilde{\Delta _{i-1}^nX} = \chi _i^{(n,1)} + \chi _i^{(n,2)} + \chi _i^{(n,3)}, \quad \sigma _{\tau _{i-2}^n}(\widetilde{\Delta _{i}^nS}-\widetilde{\Delta _{i-1}^nS})=\chi _i^{(n,1)}. \end{aligned}$$

Using the boundedness of \(\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\) on \(({\mathcal {C}_i^n})^C\) and the inequality \(|\cos (x)-\cos (y)|\le 2|x-y|^p\) for all \(x,y\in \mathbb {R}\) and \(p \in (0,1]\) we have

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ \left| r_i^1(u)\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right| \right] \\&\quad \le \mathbb {E}_{i-2}^n\left[ \left| \cos \left( u \frac{\chi _i^{(n,1)} + \chi _i^{(n,2)} + \chi _i^{(n,3)}}{(\widetilde{V}_i^n(p))^{1/p}}\right) -\cos \left( u\frac{\chi _i^{(n,1)} + \chi _i^{(n,2)}}{(\widetilde{V}_i^n(p))^{1/p}}\right) \right| \mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] \\ {}&\qquad +\mathbb {E}_{i-2}^n\left[ \left| \cos \left( u \frac{\chi _i^{(n,1)} + \chi _i^{(n,2)}}{(\widetilde{V}_i^n(p))^{1/p}}\right) -\cos \left( u\frac{\chi _i^{(n,1)}}{(\widetilde{V}_i^n(p))^{1/p}}\right) \right| \mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] \\&\quad \le K \left( \mathbb {E}_{i-2}^n\left[ \left| u \Delta _n^{-1/\beta } \chi _i^{(n,3)}\right| ^{\beta '}\right] +\mathbb {E}_{i-2}^n \left[ \left| u \Delta _n^{-1/\beta } \chi _i^{(n,2)}\right| \right] \right) . \end{aligned}$$

We then get

$$\begin{aligned} \mathbb {E}_{i-2}^n\left[ \left| u \Delta _n^{-1/\beta } \chi _i^{(n,2)}\right| \right] \le K u \Delta _n^{1/2-\iota }, \quad \mathbb {E}_{i-2}^n\left[ \left| u \Delta _n^{-1/\beta } \chi _i^{(n,3)}\right| ^{\beta '}\right] \le K u^{\beta '} \Delta _n^{\frac{\beta -\beta '}{\beta }}, \end{aligned}$$

using parts (c) and (g) of Lemma 5.5 as well as (5.6), which holds for any \(0< p < 2\). The claim now follows from the same reasoning as in (5.11), with an additional step of successive conditioning. \(\square \)
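The elementary bound \(|\cos (x)-\cos (y)|\le 2|x-y|^p\) for \(p\in (0,1]\) used above follows from \(|\cos (x)-\cos (y)|\le \min (2,|x-y|)\); a grid-based sanity check with illustrative values (outside the formal argument):

```python
import numpy as np

# |cos(x) - cos(y)| <= 2 |x - y|^p for p in (0, 1]:
# if |x - y| <= 1 then |cos x - cos y| <= |x - y| <= |x - y|^p,
# and if |x - y| > 1 then the left side is at most 2 <= 2 |x - y|^p.
g = np.linspace(-10.0, 10.0, 401)
X, Y = np.meshgrid(g, g)
for p in (0.3, 0.7, 1.0):
    assert np.all(np.abs(np.cos(X) - np.cos(Y)) <= 2 * np.abs(X - Y) ** p + 1e-12)
```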

Lemma 5.13

We have

$$\begin{aligned} \frac{1}{n-k_n-2}\mathbb {E}\left[ |R_2^n(u)|\right] \le K ( k_n^{-\beta /2p+\iota } \vee u\Delta _n^{1/2}) \end{aligned}$$

where the constant K might depend on p, \(\beta \) and \(\iota \) but not on u.

Proof

We get

$$\begin{aligned}&\frac{1}{n-k_n-2}\mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)}\left| r_i^2(u) \mathbbm {1}_{{\mathcal {C}_i^n}}\right| \right] \le K k_n^{-\beta /2p+\iota } \end{aligned}$$

with the same arguments that led to (5.11). Arguments similar to those in the previous proof, combined with Assumption 5.1, the boundedness of \(\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\) on \(({\mathcal {C}_i^n})^C\) and \(\beta > 1\) (which ensures the existence of moments), give

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ \left| r_i^2(u)\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right| \right] \\&\quad =\mathbb {E}_{i-2}^n\left[ \left| \cos \left( u\frac{\sigma _{\tau _{i-2}^n}(\widetilde{\Delta _{i}^nS} -\widetilde{\Delta _{i-1}^nS})}{\widetilde{V}_i^n(p)^{1/p}}\right) -\cos \left( u \frac{\sigma _{\tau _{i-2}^n}\left( \widetilde{\Delta _{i}^nS}-\left( \frac{\lambda _{ \tau _{i-2}^n}}{\lambda _{\tau _{i-3}^n}}\right) ^{\frac{1}{\beta }-1}\widetilde{ \Delta _{i-1}^nS}\right) }{\widetilde{V}_i^n(p)^{1/p}}\right) \right| \mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] \\&\quad \le K u\left| \frac{\sigma _{\tau _{i-2}^n}}{\Delta _n^{-1/\beta }\widetilde{V}_i^n (p)^{1/p}}\right| \mathbb {E}_{i-2}^n\left[ \left| \Delta _n^{-1/\beta }\widetilde{\Delta _{i-1}^n S}-\left( \frac{\lambda _{\tau _{i-2}^n}}{\lambda _{\tau _{i-3}^n}}\right) ^{\frac{1}{\beta }-1} \Delta _n^{-1/\beta }\widetilde{\Delta _{i-1}^nS}\right| \mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] \\&\quad \le Ku \left| \left| \lambda _{\tau _{i-3}^n}\right| ^{\frac{1}{\beta }-1} -\left| \lambda _{\tau _{i-2}^n}\right| ^{\frac{1}{\beta }-1} \right| . \end{aligned}$$

The expectation of the right-hand side is bounded by \(K u \Delta _n^{1/2}\), using (5.10) and successive conditioning. We then obtain

$$\begin{aligned}&\frac{1}{n-k_n-2}\mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)}\left| r_i^2(u) \mathbbm {1}_{({\mathcal {C}_i^n})^C} \right| \right] \le K u \Delta _n^{1/2} \end{aligned}$$

as in the previous proof. \(\square \)

Lemma 5.14

We have

$$\begin{aligned} \frac{1}{n-k_n-2}\mathbb {E}\left[ |R_3^n(u)|\right] \le K (u^\beta \alpha _n \vee u^\beta k_n^{-1/2} \vee k_n^{-\beta /2p+\iota }) \end{aligned}$$

where the constant K might depend on p, \(\beta \) and \(\iota \) but not on u.

Proof

Again we obtain

$$\begin{aligned}&\frac{1}{n-k_n-2}\mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)}\left| r_i^3(u) \mathbbm {1}_{{\mathcal {C}_i^n}}\right| \right] \le K k_n^{-\beta /2p+\iota } \end{aligned}$$

as in (5.11), this time using the boundedness of \(x \mapsto \exp (-x)\) on the positive half-line. We then use a first-order Taylor expansion of the (random) function

$$\begin{aligned} f^n_{i,u}(x)=\exp \left( -\frac{A_{\beta }u^\beta |\sigma _{\tau _{i-2}^n}|^\beta |\lambda _{\tau _{i-2}^n} |^{1-\beta }((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{x^{\beta /p}}\right) \mathbbm {1}_{({\mathcal {C}_i^n})^C} \end{aligned}$$
(5.12)

(defined for \(x > 0\)) and get

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{A_\beta u^\beta |\sigma _{\tau _{i-2}^n}|^\beta | \lambda _{\tau _{i-2}^n}|^{1-\beta } ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{\Delta _n^{-1}\widetilde{V}_i^n(p)^{\beta /p}}\right) \right] \mathbbm {1}_{({\mathcal {C}_i^n})^C}\\&\qquad -\mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{C_{p,\beta } u^\beta |\sigma _{\tau _{i-2}^n}|^\beta |\lambda _{\tau _{i-2}^n}|^{1-\beta } ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{(|\overline{\sigma \lambda }|^p_i)^{\beta /p}}\right) \right] \mathbbm {1}_{({\mathcal {C}_i^n})^C}\\&\quad =\left( \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta } \kappa _{p,\beta }^{p/\beta }\right) \mathbb {E}_{i-2}^n\left[ (f^{n}_{i,u})'(\epsilon _i^n)\right] \mathbbm {1}_{({\mathcal {C}_i^n})^C} \end{aligned}$$

for some \({\mathcal {F}}_{\tau _{i-2}^n}\)-measurable \(\epsilon _i^n\) between \(\Delta _n^{-p/\beta } \widetilde{V}_i^n(p)\) and \(|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\). An easy computation proves

$$\begin{aligned} \mathbb {E}_{i-2}^n\left[ |(f^n_{i,u})'(X)|\right] \le K \frac{ u^\beta }{X^{\beta /p+1}} \quad \text {and} \quad \mathbb {E}_{i-2}^n\left[ |(f^n_{i,u})''(X)|\right] \le K \frac{ u^\beta }{X^{\beta /p+2}} \end{aligned}$$
(5.13)

for any positive \({\mathcal {F}}_{\tau _{i-2}^n}\)-measurable random variable X where the constant K does not depend on u. Using the boundedness of \(\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\) and \(|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\) again we get

$$\begin{aligned} \mathbb {E}\left[ |r_i^3(u)\mathbbm {1}_{({\mathcal {C}_i^n})^C}|\right]&\le K u^\beta \mathbb {E}\left[ \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \right] \\&\le K u^\beta \mathbb {E}\left[ \left| \Delta _n^{-p/\beta }(\widetilde{V}_i^n(p)-\overline{V}_i^n(p))\right| +\left| \Delta _n^{-p/\beta }\overline{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \right] \\&\le K u^\beta (k_n^{-1/2}\vee \alpha _n \vee \Delta _n^{1/2}) \le K u^\beta (k_n^{-1/2}\vee \alpha _n ) \end{aligned}$$

where the last line holds by Lemma 5.8 and (5.8) plus the definition of \(\alpha _n\). \(\square \)

Lemma 5.15

We have

$$\begin{aligned} \frac{1}{n-k_n-2}\mathbb {E}\left[ |R_4^n(u)|\right] \le K u^\beta \Delta _n k_n \end{aligned}$$

where the constant K might depend on p and \(\beta \) but not on u.

Proof

Recall the function \(f^n_{i,u}\) from (5.12) and set \(\widetilde{r}_{i,n}=(|\overline{\sigma \lambda }|^p_i-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\). Then

$$\begin{aligned}&\mathbb {E}\left[ \left| R_4^n(u)\right| \right] \nonumber \\&\quad \le \mathbb {E}\left[ \left| R_4^n(u)-\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\sum _{i=k_n+3}^{N_n(1)}\mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] \widetilde{r}_{i,n} \right| \right] \end{aligned}$$
(5.14)
$$\begin{aligned}&\qquad + \mathbb {E}\left[ \left| \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\sum _{i=k_n+3}^{N_n(1)}(\widetilde{r}_{i,n}-\mathbb {E}_{i-k_n-3}^n\left[ \widetilde{r}_{i,n}\right] )\mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] \right| \right] \end{aligned}$$
(5.15)
$$\begin{aligned}&\qquad +\mathbb {E}\left[ \left| \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\sum _{i=k_n+3}^{N_n(1)}\mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] \mathbb {E}_{i-k_n-3}^n\left[ \widetilde{r}_{i,n}\right] \right| \right] . \end{aligned}$$
(5.16)

In the sequel, we prove the same rate of convergence for all three terms on the right-hand side. Starting with (5.14), from the definition of \(r_i^4(u)\) we have

$$\begin{aligned} r_i^4(u)=\mathbb {E}_{i-2}^n\left[ f^n_{i,u}(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\overline{\sigma \lambda }|^p_i) -f^n_{i,u}(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n}|^p| \lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] , \end{aligned}$$

using the independence of \(\phi _{i-1}^n\) and \(\phi _{i}^n\) from \({\mathcal {F}}_{\tau _{i-2}^n}\). A second-order Taylor expansion, possible by the usual boundedness assumptions, now gives

$$\begin{aligned}&\mathbb {E}\left[ \left| R_4^n(u)-\sum _{i=k_n+3}^{N_n(1)} \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta } \widetilde{r}_{i,n} \mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] \right| \right] \\&\quad = \mathbb {E}\left[ \left| \sum _{i=k_n+3}^{N_n(1)}\frac{1}{2}(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta })^2(\widetilde{r}_{i,n})^2\mathbb {E}_{i-2}^n\left[ (f^n_{i,u})''(\delta _{i,n})\right] \right| \right] \end{aligned}$$

for some \(\delta _{i,n}\) between \(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\overline{\sigma \lambda }|^p_i\) and \(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }| \sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\). Now it is an easy consequence of (5.13) and Lemma 5.11 together with the reasoning from (5.11) that the expectation in (5.14) is bounded by \(K u^\beta k_n\) with K as in the statement of the lemma.

For (5.16), boundedness of all processes involved gives

$$\begin{aligned} \left| \mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n} |^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] \right| \le Ku^\beta \end{aligned}$$

by (5.13), whereas Lemma 5.11 proves

$$\begin{aligned} \left| \mathbb {E}_{i-k_n-3}^n\left[ \widetilde{r}_{i,n}\right] \right| \le Kk_n\Delta _n. \end{aligned}$$
(5.17)

Thus

$$\begin{aligned} \mathbb {E}\left[ \mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\sum _{i=k_n+3}^{N_n(1)}\left| \mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n} |^{\frac{p}{\beta }-p})\right] \right| \left| \mathbb {E}_{i-k_n-3}^n\left[ \widetilde{r}_{i,n}\right] \right| \right] \le K u^\beta k_n. \end{aligned}$$

Finally, for the treatment of (5.15) we have to be a little more specific. A simple computation proves

$$\begin{aligned} \mathbb {E}_{i-2}^n\left[ (f^n_{i, u})'(\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta } |\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p})\right] = K u^\beta L(p,u,\beta ) |\sigma _{\tau _{i-2}^n}|^{-p}|\lambda _{\tau _{i-2}^n}|^{p-\frac{p}{\beta }} \end{aligned}$$

for some K as above. Thus, setting \(\Xi _i=\widetilde{r}_{i,n}-\mathbb {E}_{i-k_n-3}^n\left[ \widetilde{r}_{i,n}\right] \) we can bound (5.15) by

$$\begin{aligned}&K u^\beta L(p,u,\beta ) \mathbb {E}\left[ \left| \sum _{i=k_n+3}^{N_n(1)} \Xi _i \left( |\sigma _{\tau _{i-2}^n}|^{-p}|\lambda _{\tau _{i-2}^n}|^{p-\frac{p}{\beta }} - |\sigma _{\tau _{i-k_n-3}^n}|^{-p}|\lambda _{\tau _{i-k_n-3}^n}|^{p-\frac{p}{\beta }} \right) \right| \right] \end{aligned}$$
(5.18)
$$\begin{aligned}&\quad + K u^\beta L(p,u,\beta ) \mathbb {E}\left[ \left| \sum _{i=k_n+3}^{N_n(1)} \Xi _i |\sigma _{\tau _{i-k_n-3}^n}|^{-p}|\lambda _{\tau _{i-k_n-3}^n} |^{p-\frac{p}{\beta }}\right| \right] . \end{aligned}$$
(5.19)

An application of the Cauchy–Schwarz inequality bounds (5.18) by the product of

$$\begin{aligned} K u^\beta L(p,u,\beta ) \mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)} \Xi _i^2 \right] ^{1/2} \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)} \left( |\sigma _{\tau _{i-2}^n}|^{-p}| \lambda _{\tau _{i-2}^n}|^{p-\frac{p}{\beta }} - |\sigma _{\tau _{i-k_n-3}^n}|^{-p} |\lambda _{\tau _{i-k_n-3}^n}|^{p-\frac{p}{\beta }}\right) ^2\right] ^{1/2}. \end{aligned}$$

Lemma 5.11 together with (5.17) proves

$$\begin{aligned} \mathbb {E}_{i-k_n-3}^n[|\Xi _i|^2]\le K k_n\Delta _n, \end{aligned}$$
(5.20)

and from Lemma 5.6 we have

$$\begin{aligned} \mathbb {E}_{i-k_n-3}^n\left[ \left( |\sigma _{\tau _{i-2}^n}|^{-p}|\lambda _{\tau _{i-2}^n} |^{p-\frac{p}{\beta }} - |\sigma _{\tau _{i-k_n-3}^n}|^{-p}|\lambda _{\tau _{i-k_n-3}^n} |^{p-\frac{p}{\beta }}\right) ^2\right] \le K k_n \Delta _n \end{aligned}$$

with the same reasoning as when establishing (5.10). Boundedness of \(L(p,u,\beta )\) and \(N_n(1)\le C \Delta _n^{-1}\) now prove that (5.18) is bounded by \(K u^\beta k_n\).

For (5.19) we use an argument involving discrete martingales, and we first change the upper summation bound from \(N_n(1)\) to \(N_n(1)+ 2k_n +5\), as the corresponding error term is of order \(K u^\beta k_n\) by boundedness of \(\sigma \) and \(\lambda \), similar to the one from (5.18). The martingale argument is most easily explained if we first pretend that the factors \(K u^\beta L(p,u,\beta ) |\sigma _{\tau _{i-k_n-3}^n}|^{-p}|\lambda _{\tau _{i-k_n-3}^n}|^{p-\frac{p}{\beta }}\) were not present. We write

$$\begin{aligned} \sum _{i=k_n+3}^{N_n(1)+2k_n+5}\Xi _i = \sum _{j=1}^{k_n+3}A_j+ \sum _{i=\lfloor N_n(1)/(k_n+3)\rfloor (k_n+3) +2k_n +6}^{N_n(1)+2k_n +5}\Xi _i \end{aligned}$$
(5.21)

with

$$\begin{aligned} A_j=\sum _{i=1}^{\lfloor N_n(1)/(k_n+3)\rfloor +1}\Xi _{i(k_n+3)+(j-1)} =\sum _{i=1}^{\infty }\Xi _{i(k_n+3)+(j-1)} \mathbbm {1}_{\{i-1 \le \lfloor N_n(1)/(k_n+3)\rfloor \}}, \end{aligned}$$

\(j=1, \ldots , k_n+3.\) It can be shown that

$$\begin{aligned}&\mathbb {E}[\Xi _{i(k_n+3)+(j-1)} \mathbbm {1}_{\{i-1 \le \lfloor N_n(1)/(k_n+3)\rfloor \}} \Xi _{\ell (k_n+3)+(j-1)} \mathbbm {1}_{\{\ell -1 \le \lfloor N_n(1)/(k_n+3)\rfloor \}}] = 0 \end{aligned}$$

holds for every \(1 \le \ell < i\), using the fact that by construction one knows at time \(\tau _{(i-1)(k_n+3)+(j-1)}^n\) whether the event \(\{\tau _{(i-1)(k_n+3)}^n \le 1 \}\) has happened or not. The latter event is equivalent to \(\{i-1 \le \lfloor N_n(1)/(k_n+3)\rfloor \}\), so after conditioning on \({\mathcal {F}}_{(i-1)(k_n+3)+(j-1)}^n\) the claim follows from \(\mathbb {E}_{r-k_n-3}^n[\Xi _{r}] = 0\) for every r.

Thus, by (5.20), the Cauchy–Schwarz inequality and \(\lfloor N_n(1)/(k_n+3)\rfloor +1 \le Kn/k_n\) we obtain

$$\begin{aligned} \mathbb {E}\left[ \left| A_j\right| \right] \le K\mathbb {E}\left[ \left| \sum _{i=1}^{\lfloor N_n(1)/(k_n+3)\rfloor +1}|\Xi _{k_n+3+(j-1)+(i-1)(k_n+3)}|^2\right| \right] ^{1/2} \le K. \end{aligned}$$

As the sum over the residual terms in (5.21) has at most \(k_n+2\) elements, we obtain

$$\begin{aligned} \mathbb {E}\left[ \left| \sum _{i=k_n+3}^{N_n(1)+2k_n+5}\Xi _i \right| \right] \le K k_n \end{aligned}$$

as desired. If we now include the factors \(K u^\beta L(p,u,\beta ) |\sigma _{\tau _{i-k_n-3}^n}|^{-p}|\lambda _{\tau _{i-k_n-3}^n}|^{p-\frac{p}{\beta }}\), we just obtain an additional factor \(u^{\beta }\) as usual. This is due to the boundedness of \(\sigma \) and \(\lambda \) again (and of the function L) plus the fact that measurability with respect to \({\mathcal {F}}_{\tau _{i-k_n-3}^n}\) keeps the martingale property from above intact. \(\square \)
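The index bookkeeping behind the blocking decomposition (5.21) can be checked mechanically: the \(k_n+3\) strided blocks \(A_j\) and the residual sum partition the full index range, and the residual contains at most \(k_n+2\) terms. A small sketch with illustrative values for \(N_n(1)\) and \(k_n\) (outside the formal argument):

```python
# Blocking decomposition (5.21): the indices k_n+3, ..., N_n(1)+2k_n+5 split
# into k_n+3 strided blocks A_j plus a residual of at most k_n+2 terms
# (N and kn below stand for N_n(1) and k_n; the values are illustrative).
def block_partition(N, kn):
    m = kn + 3                      # stride length
    q = N // m                      # = floor(N_n(1)/(k_n+3))
    blocks = [[i * m + (j - 1) for i in range(1, q + 2)] for j in range(1, m + 1)]
    residual = list(range(q * m + 2 * kn + 6, N + 2 * kn + 6))
    return blocks, residual

N, kn = 203, 7
blocks, residual = block_partition(N, kn)
covered = sorted(x for b in blocks for x in b) + residual
assert sorted(covered) == list(range(kn + 3, N + 2 * kn + 6))
assert len(residual) <= kn + 2
```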

Lemma 5.16

We have

$$\begin{aligned}&\frac{1}{n-k_n-2}\mathbb {E}\left[ |Z^n(u)-\overline{Z}^n(u)|\right] \\ {}&\quad \le K \left( k_n^{-\beta /2p+\iota }\vee \Delta _n^{1/2}(u^{\beta /2-\iota } + u^{\beta /2})\left( k_n^{-1/2} \vee \alpha _n \vee (k_n\Delta _n)^{1/2}\right) ^{1/2}\right) \end{aligned}$$

where the constant K might depend on p, \(\beta \) and \(\iota \) but not on u.

Proof

As usual we have

$$\begin{aligned}&\frac{1}{n-k_n-2}\mathbb {E}\left[ \sum _{i=k_n+3}^{N_n(1)}\left| (z_i^n(u) - {\overline{z}}_i^n(u)) \mathbbm {1}_{{\mathcal {C}_i^n}}\right| \right] \le K k_n^{-\beta /2p+\iota }, \end{aligned}$$

and for the analogous sum involving \(\mathbbm {1}_{({\mathcal {C}_i^n})^C}\), as in the previous proof, we may change the upper summation index to \(N_n(1)+2\) without loss of generality. Now, note that by Lemma 5.10, and using the same arguments for \(z_i(u)\), we have

$$\begin{aligned} \mathbb {E}_{i-2}^n\left[ z_i(u)\right] =0=\mathbb {E}_{i-2}^n\left[ \overline{z}_i(u)\right] . \end{aligned}$$

Thus for all \(i,j \ge k_n +3\) with \(j-i \ge 2\) we have

$$\begin{aligned} \mathbb {E}\left[ (z_i(u)-\overline{z}_i(u))(z_j(u)-\overline{z}_j(u))\mathbbm {1}_{({\mathcal {C}_i^n})^C}\mathbbm {1}_{(\mathcal {C}_j^n)^C}\mathbbm {1}_{\{N_n(1)+2\ge j\}}\mathbbm {1}_{\{N_n(1)+2\ge i\}}\right] =0 \end{aligned}$$

where we have used that \(\mathbbm {1}_{\{N_n(1)+2\ge j\}},\mathbbm {1}_{\{N_n(1)+2\ge i\}},\mathbbm {1}_{({\mathcal {C}_i^n})^C}\) and \(\mathbbm {1}_{(\mathcal {C}_j^n)^C}\) are all \(\mathcal {F}_{\tau _{j-2}^n}^n\)-measurable. Using \(2|xy|\le x^2+y^2\) and \(N_n(1)\le C \Delta _n^{-1}\) we then get

$$\begin{aligned}&\mathbb {E}\left[ \left( \sum _{i=k_n+3}^{N_n(1)+2}(z_i(u)-\overline{z}_i(u))\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right) ^2\right] \\&\quad =\mathbb {E}\bigg [\sum _{i=k_n+3}^{\infty }(z_i(u)-\overline{z}_i(u))\mathbbm {1}_{(\mathcal {C}_i^n)^C}\mathbbm {1}_{\{N_n(1)+2\ge i\}}\big ((z_i(u)-\overline{z}_i(u))\mathbbm {1}_{(\mathcal {C}_i^n)^C}\mathbbm {1}_{\{N_n(1)+2\ge i\}}\\&\qquad +(z_{i-1}(u)-\overline{z}_{i-1}(u))\mathbbm {1}_{(\mathcal {C}_{i-1}^n)^C}\mathbbm {1}_{\{N_n(1)+2\ge i-1\}}+(z_{i+1}(u)-\overline{z}_{i+1}(u))\mathbbm {1}_{(\mathcal {C}_{i+1}^n)^C}\mathbbm {1}_{\{N_n(1)+2\ge i+1\}}\big )\bigg ]\\&\quad \le 3 \sum _{i=k_n+3}^{\lfloor C \Delta _n^{-1} \rfloor +2} \mathbb {E}\left[ (z_i(u)-\overline{z}_i(u))^2\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] . \end{aligned}$$

Now, with (5.9) plus the standard inequalities \(|\cos (x)-\cos (y)|^2\le 4|x-y|^p\) and \(|\exp (-x)-\exp (-y)|^2\le |x-y|^p\), which hold for all \(p \in (0,2]\), we obtain

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ |(z_i(u)-\overline{z}_i(u))\mathbbm {1}_{({\mathcal {C}_i^n})^C}|^2\right] \\&\quad \le 2\mathbb {E}_{i-2}^n\left[ \left| \cos \left( u\frac{\sigma _{\tau _{i-2}^n}\left( \widetilde{\Delta _{i}^nS} -\left( \frac{\lambda _{\tau _{i-2}^n}}{\lambda _{\tau _{i-3}^n}}\right) ^{\frac{1}{\beta }-1} \widetilde{\Delta _{i-1}^nS}\right) }{\widetilde{V}_i^n(p)^{1/p}}\right) \right. \right. \\&\qquad \left. \left. -\cos \left( u\frac{\lambda _{\tau _{i-2}^n}^{1-1/\beta } \widetilde{\Delta _i^n S}-\lambda _{\tau _{i-3}^n}^{1-1/\beta } \widetilde{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta } \kappa _{p,\beta }^{1/\beta }}\right) \right| ^2\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] \\&\qquad + 2\mathbb {E}_{i-2}^n\left[ \left| \mathbb {E}_{i-2}^n\left[ \exp \left( -\frac{A_\beta u^\beta {|\sigma _{\tau _{i-2}^n}|}^\beta |\lambda _{\tau _{i-2}^n} |^{1-\beta } ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })}{\Delta _n^{-1} \widetilde{V}_i^n(p)^{\beta /p}}\right) \right] \right. \right. \\&\qquad -\left. \left. 
\mathbb {E}_{i-2}^n\left[ \exp \left( -u^\beta C_{p,\beta }((\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] \right| ^2\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right] \\&\quad \le Ku^{\beta -\iota }\mathbb {E}_{i-2}^n\left[ \left| \frac{|\sigma _{\tau _{i-2}^n} ||\lambda _{\tau _{i-2}^n}|^{\frac{1}{\beta }-1}}{\widetilde{V}_i^n(p)^{1/p}}-\frac{1}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }} \right| ^{\beta -\iota }\mathbbm {1}_{({\mathcal {C}_i^n})^C}|\lambda _{\tau _{i-2}^n}^{1-1/\beta }\widetilde{\Delta _i^n S} -\lambda _{\tau _{i-3}^n}^{1-1/\beta }\widetilde{\Delta _{i-1}^nS}|^{\beta -\iota }\right] \\&\qquad + Ku^\beta \mathbb {E}_{i-2}^n\left[ \left| \frac{A_\beta |\sigma _{\tau _{i-2}^n}|^\beta |\lambda _{\tau _{i-2}^n} |^{1-\beta }}{\Delta _n^{-1}\widetilde{V}_i^n(p)^{\beta /p}}-\frac{A_\beta }{\mu _{p,\beta } \kappa _{p,\beta }}\right| \mathbbm {1}_{({\mathcal {C}_i^n})^C}|(\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta }| \right] , \end{aligned}$$

and part (a) of Lemma 5.5 together with \(\mathbb {E}\left| (\phi _{i}^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta }\right| <\infty \) and the \({\mathcal {F}}_{i-2}^n\)-measurability of the other terms proves that the term above is bounded by

$$\begin{aligned}&K u^{\beta -\iota } \left| \frac{|\sigma _{\tau _{i-2}^n}||\lambda _{\tau _{i-2}^n}|^{\frac{1}{\beta }-1} \mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }-\Delta _n^{-1/\beta }\widetilde{V}_i^n (p)^{1/p}}{\Delta _n^{-1/\beta }\widetilde{V}_i^n(p)^{1/p}\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right| ^{\beta -\iota } \mathbbm {1}_{({\mathcal {C}_i^n})^C}\\&\quad + Ku^{\beta } \left| \frac{|\sigma _{\tau _{i-2}^n}|^\beta |\lambda _{\tau _{i-2}^n} |^{1-\beta }\mu _{p,\beta }\kappa _{p,\beta }-\Delta _n^{-1}\widetilde{V}_i^n(p)^{\beta /p}}{\Delta _n^{-1}\widetilde{V}_i^n(p)^{\beta /p}\mu _{p,\beta }\kappa _{p,\beta }}\right| \mathbbm {1}_{({\mathcal {C}_i^n})^C}. \end{aligned}$$

Let us for the moment discuss only the first term. On \(({\mathcal {C}_i^n})^C\) and using Condition 5.1, \(\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\) as well as all quantities involving \(\sigma \) and \(\lambda \) are bounded from above and below by \(K\). Thus, together with \(|x^q-y^q| \le q|\max (x,y)^{q-1}||x-y|\) for \(q\ge 1\), we get

$$\begin{aligned}&\left| \frac{|\sigma _{\tau _{i-2}^n}||\lambda _{\tau _{i-2}^n}|^{\frac{1}{\beta }-1}\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }-\Delta _n^{-1/\beta }\widetilde{V}_i^n(p)^{1/p}}{\Delta _n^{-1/\beta }\widetilde{V}_i^n(p)^{1/p}\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right| ^{\beta -\iota } \mathbbm {1}_{({\mathcal {C}_i^n})^C}\\&\quad \le K \left| \frac{1}{p}\max \left( |\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta },\Delta _n^{-p/\beta }\widetilde{V}_i^n(p)\right) ^{1/p-1}\mathbbm {1}_{({\mathcal {C}_i^n})^C}\right. \\&\qquad \times \left. \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \right| ^{\beta -\iota }\\&\quad \le K \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| ^{\beta -\iota } \mathbbm {1}_{({\mathcal {C}_i^n})^C}. \end{aligned}$$

With a similar argument for the second term we then obtain

$$\begin{aligned}&\mathbb {E}_{i-2}^n\left[ |(z_i(u)-\overline{z}_i(u))\mathbbm {1}_{({\mathcal {C}_i^n})^C}|^2\right] \\ {}&\quad \le \mathbbm {1}_{({\mathcal {C}_i^n})^C} K u^{\beta -\iota } \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| ^{\beta -\iota } \\ {}&\qquad + \mathbbm {1}_{({\mathcal {C}_i^n})^C} K u^{\beta } \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| . \end{aligned}$$

Now, we have

$$\begin{aligned}&\mathbb {E}\left[ \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-|\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \right] \\ {}&\quad \le \mathbb {E}\left[ \left| \Delta _n^{-p/\beta }\widetilde{V}_i^n(p)-\Delta _n^{-p/\beta }\overline{V}_i^n(p)\right| +\left| \Delta _n^{-p/\beta }\overline{V}_i^n(p)-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \right. \\&\quad \quad +\left. \left| |\sigma _{\tau _{i-2}^n}|^p|\lambda _{\tau _{i-2}^n}|^{\frac{p}{\beta }-p}\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }-|\overline{\sigma \lambda }|^p_i\mu _{p,\beta }^{p/\beta }\kappa _{p,\beta }^{p/\beta }\right| \right] \\&\quad \le K\left( k_n^{-1/2} \vee \alpha _n \vee \Delta _n^{1/2} \vee (k_n\Delta _n)^{1/2}\right) \end{aligned}$$

using Lemma 5.8, (5.7) and Lemma 5.11. A similar result holds with the exponent replaced by \(\beta - \iota > 1\). The claim now follows easily. \(\square \)

5.4 Proof of the main theorems

5.4.1 Proof of Theorem 3.1

As discussed before, we may assume Conditions 5.1 and 5.3 to hold. First, we have

$$\begin{aligned} \widetilde{L}^n(p,u) - L(p,u,\beta ) = \frac{1}{N_n(1)-k_n-2}\sum _{i=k_n+3}^{N_n(1)} \left( \cos \left( u\frac{\widetilde{\Delta _i^n X}-\widetilde{\Delta _{i-1}^nX} }{(\widetilde{V}_i^n(p))^{1/p}}\right) - L(p,u,\beta ) \right) , \end{aligned}$$

and it is a simple consequence of (5.3), the decomposition in (5.4) and Lemmas 5.12 to 5.16 that

$$\begin{aligned} \widetilde{L}^n(p,u) - L(p,u,\beta ) = \frac{1}{N_n(1)-k_n-2} {\overline{Z}}^{n}(u) + o_\mathbb {P}(1) \end{aligned}$$

holds. Note that all bounds in Lemmas 5.12 to 5.16 converge to zero as \(\Delta _n \rightarrow 0\), \(k_n \rightarrow \infty \) and \(k_n \Delta _n \rightarrow 0\), all of which hold by assumption.

The proof of

$$\begin{aligned} \frac{1}{N_n(1)-k_n-2} {\overline{Z}}^{n}(u) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0 \end{aligned}$$

follows along the lines of Lemma 5.16. We may first change the upper summation index to \(N_n(1)+2\), which does not change anything asymptotically because of (5.3), and we then have

$$\begin{aligned} \mathbb {E}\left[ \overline{z}_i(u) \overline{z}_j(u) \mathbbm {1}_{\{N_n(1)+2\ge j\}}\mathbbm {1}_{\{N_n(1)+2\ge i\}}\right] =0 \end{aligned}$$

for all \(i,j \ge k_n +3\) with \(j-i \ge 2\) using Lemma 5.10. Thus, we obtain

$$\begin{aligned} \mathbb {E}\left[ \left( \sum _{i=k_n+3}^{N_n(1)+2}\overline{z}_i(u) \right) ^2\right] \le 3 \sum _{i=k_n+3}^{\lfloor C \Delta _n^{-1} \rfloor +2} \mathbb {E}\left[ (\overline{z}_i(u))^2\right] \le K \Delta _n^{-1}, \end{aligned}$$

and the claim follows from (5.3) again. \(\square \)

5.4.2 Proof of Remark 3.2

From the definition of \({\hat{\beta }}(p, u_n, v_n)\) it follows easily that the claim \({\hat{\beta }}(p, u_n, v_n) \le 2\) is equivalent to

$$\begin{aligned} \sum _{i=k_n+3}^{N_n(1)}\rho ^2(1-\cos \left( u_na_i\right) ) \le \sum _{i=k_n+3}^{N_n(1)} (1-\cos \left( \rho u_na_i\right) ) \end{aligned}$$
(5.22)

where we have used the shorthand notation \(a_i=\frac{\widetilde{\Delta _i^n X}-\widetilde{\Delta _{i-1}^nX} }{(\widetilde{V}_i^n(p))^{1/p}}\). A sufficient condition for (5.22) is \(\rho ^2(1-\cos (x)) \le 1-\cos (\rho x)\) for all \(x\in \mathbb {R}\), which is equivalent to

$$\begin{aligned} g_\rho (x)= 1 - \cos \left( \rho x\right) - \rho ^2(1-\cos (x)) \ge 0 \quad \text {for all }x\in \mathbb {R}. \end{aligned}$$
(5.23)

Using properties of the cosine and inserting \(\rho =1/2\) we note \(g_{\frac{1}{2}}(x)=g_{\frac{1}{2}}(-x)\) and \(g_{\frac{1}{2}}(x) = g_{\frac{1}{2}}(x+4\pi )\). For (5.23) to hold it then suffices to show \(g_{\frac{1}{2}}(x)\ge 0\) for all \(x\in [0,2\pi ]\). So let \(x\in [0,2\pi ]\). Then

$$\begin{aligned} g_{\frac{1}{2}}'(x)&= \frac{1}{2}\sin \left( \frac{x}{2}\right) - \frac{1}{4}\sin (x)\\&=\frac{1}{2}\sin \left( \frac{x}{2}\right) - \frac{1}{2} \sin \left( \frac{x}{2}\right) \cos \left( \frac{x}{2}\right) = \frac{1}{2}\sin \left( \frac{x}{2}\right) \left( 1-\cos \left( \frac{x}{2}\right) \right) \ge 0 \end{aligned}$$

by properties of the trigonometric functions. The claim follows from \(g_{\frac{1}{2}}(0) = 0\). \(\square \)
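The monotonicity argument lends itself to a quick numerical sanity check. The following sketch (illustrative only, not part of the proof) evaluates \(g_{\frac{1}{2}}\) and its derivative on a grid covering \([0,2\pi ]\):

```python
import math

# Sanity check for the proof of Remark 3.2 (illustrative, not part of the
# proof): with rho = 1/2,
#   g(x)  = 1 - cos(x/2) - (1/4) * (1 - cos(x)),
#   g'(x) = (1/2) * sin(x/2) * (1 - cos(x/2)),
# and both should be nonnegative on [0, 2*pi].

def g(x):
    return 1.0 - math.cos(x / 2.0) - 0.25 * (1.0 - math.cos(x))

def g_prime(x):
    return 0.5 * math.sin(x / 2.0) * (1.0 - math.cos(x / 2.0))

grid = [2.0 * math.pi * k / 10000.0 for k in range(10001)]
assert min(g(x) for x in grid) >= -1e-12
assert min(g_prime(x) for x in grid) >= -1e-12
```

Since \(g_{\frac{1}{2}}\) is even and \(4\pi \)-periodic, nonnegativity on this grid is in line with (5.23) for \(\rho =1/2\).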

5.4.3 Proof of Theorem 3.3

We will assume throughout that Conditions 5.1 and 5.3 are in place, and we set \(k_n\sim C_1 \Delta _n^{-\varpi }\) for some \(C_1> 0\) and some \(\varpi \in (0,1)\) as well as \(u_n\sim C_2 \Delta _n^{\varrho }\) for some \(C_2 > 0\) and \(\varrho \in (0,1)\).

Lemma 5.17

Under the conditions

$$\begin{aligned} \beta '<\frac{\beta }{2}, \quad \frac{1}{3}\vee \frac{1}{8\varrho }<p<\frac{\beta }{2}, \quad \varpi \ge \frac{2}{3}, \quad \frac{1}{3\beta }<\varrho< \frac{1}{\beta }, \quad \frac{1}{\beta }<\frac{\varpi }{p}-\varrho , \quad 2\varpi -\varrho \beta <1, \end{aligned}$$

we have

$$\begin{aligned} \frac{\sqrt{N_n(1)}}{u_n^{\beta /2}}\left( (\widetilde{L}^n(p,u_n)-L(p,u_n,\beta )) - \frac{1}{N_n(1)-k_n-2} {\overline{Z}}^n(u_n)\right) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0. \end{aligned}$$

Proof

The claim follows from a tedious but straightforward computation, using (5.3) and Lemmas 5.12 to 5.16 as well as the conditions on \(k_n\) and \(u_n\). \(\square \)
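To see that the inequalities of Lemma 5.17 can be satisfied simultaneously, one can check a sample parameter configuration numerically (the specific values below are our own illustrative choice, not taken from the text):

```python
# Feasibility check for the constraints of Lemma 5.17 (illustrative values).
beta, beta_prime = 1.8, 0.8       # stability indices, beta' < beta/2
p, varpi, varrho = 0.6, 0.7, 0.3  # tuning parameters

assert beta_prime < beta / 2
assert max(1.0 / 3.0, 1.0 / (8.0 * varrho)) < p < beta / 2
assert varpi >= 2.0 / 3.0
assert 1.0 / (3.0 * beta) < varrho < 1.0 / beta
assert 1.0 / beta < varpi / p - varrho
assert 2.0 * varpi - varrho * beta < 1.0
```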

Lemma 5.18

Let \(u_n\) be as above and set \(v_n=\rho u_n\) for some \(\rho > 0\). Then, for every fixed \(i > 2\) we have

$$\begin{aligned} \frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \overline{z}_i(u_n) \overline{z}_i(v_n)\right]&\rightarrow C_{p,\beta }\kappa _{\beta ,\beta } \frac{2+2\rho ^\beta -|1-\rho |^\beta -(1+\rho )^\beta }{2\rho ^{\beta /2}},\\ \frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \overline{z}_i(u_n) \overline{z}_{i-1}(v_n)\right]&\rightarrow C_{p,\beta }\kappa _{\beta ,\beta } \frac{2+2\rho ^\beta -|1-\rho |^\beta -(1+\rho )^\beta }{4\rho ^{\beta /2}}, \end{aligned}$$

and the same result holds with interchanged roles of \(u_n\) and \(v_n\).

Proof

Using the shorthand notation \(\widehat{\Delta _i^nS} = \lambda _{\tau _{i-2}^n}^{-1/\beta +1}\widetilde{\Delta _{i}^nS}\) and the equality \(\cos (x)\cos (y)=\frac{1}{2}\left( \cos (x-y)+\cos (x+y)\right) \) we have

$$\begin{aligned}&\cos \left( u_n\frac{\widehat{\Delta _i^nS}-\widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta } \mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \cos \left( v_n\frac{\widehat{\Delta _i^nS} -\widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \\&\quad =\frac{1}{2} \left( \cos \left( \frac{(u_n-v_n)\widehat{\Delta _i^nS}+(-u_n+v_n)\widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) +\cos \left( \frac{(u_n+v_n) \widehat{\Delta _i^nS}+(-u_n-v_n)\widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta } \mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \right) ,\\&\qquad \cos \left( u_n\frac{\widehat{\Delta _i^nS}-\widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta } \mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \cos \left( v_n\frac{\widehat{\Delta _{i-1}^nS} -\widehat{\Delta _{i-2}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \\&\quad =\frac{1}{2} \left( \cos \left( \frac{u_n\widehat{\Delta _i^nS}+(-u_n-v_n)\widehat{\Delta _{i-1}^nS}+v_n \widehat{\Delta _{i-2}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) \right. \\&\qquad \left. +\cos \left( \frac{u_n\widehat{\Delta _i^nS}+(-u_n+v_n) \widehat{\Delta _{i-1}^nS}-v_n\widehat{\Delta _{i-2}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta } \kappa _{p,\beta }^{1/\beta }}\right) \right) . \end{aligned}$$

With the same notation as in the proof of Lemma 5.10 we obtain

$$\begin{aligned}&(u_n-v_n)\Delta _n^{-1/\beta }\widehat{\Delta _{i}^nS}+(-u_n+v_n)\Delta _n^{-1/\beta }\widehat{\Delta _{i-1}^nS}\\&\quad \sim u_n(1-\rho )\lambda _{\tau _{i-2}^n}^{-1/\beta +1}((\phi _i^n\lambda _{\tau _{i-2}^n})^{1-\beta })^{1/\beta }S^{(1)}+u_n(\rho -1)\lambda _{\tau _{i-3}^n}^{-1/\beta +1}((\phi _{i-1}^n\lambda _{\tau _{i-3}^n})^{1-\beta })^{1/\beta }S^{(2)}\\&\quad \sim u_n|1-\rho |S^{(3)}((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })^{1/\beta } \end{aligned}$$

where \(\sim \) denotes equality of the \(\mathcal {F}_{\tau _{i-2}^n}\)-conditional distributions, and in the same manner

$$\begin{aligned} (u_n+v_n)\Delta _n^{-1/\beta }\widehat{\Delta _{i}^nS}+&(-u_n-v_n)\Delta _n^{-1/\beta }\widehat{\Delta _{i-1}^nS} \sim u_n(1+\rho )S^{(3)}((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })^{1/\beta }\\ u_n\Delta _n^{-1/\beta }\widehat{\Delta _{i}^nS}+&(-u_n-v_n)\Delta _n^{-1/\beta }\widehat{\Delta _{i-1}^nS}+v_n\Delta _n^{-1/\beta }\widehat{\Delta _{i-2}^nS}\\ {}&\sim u_nS^{(3)}((\phi _i^n)^{1-\beta }+(1+\rho )^\beta (\phi _{i-1}^n)^{1-\beta }+ \rho ^\beta (\phi _{i-2}^n)^{1-\beta })^{1/\beta },\\ u_n\Delta _n^{-1/\beta }\widehat{\Delta _{i}^nS}+&(-u_n+v_n)\Delta _n^{-1/\beta }\widehat{\Delta _{i-1}^nS}-v_n\Delta _n^{-1/\beta }\widehat{\Delta _{i-2}^nS}\\&\sim u_nS^{(3)}((\phi _i^n)^{1-\beta }+|1-\rho |^\beta (\phi _{i-1}^n)^{1-\beta }+ \rho ^\beta (\phi _{i-2}^n)^{1-\beta })^{1/\beta }. \end{aligned}$$

We see in particular that exchanging the roles of \(u_n\) and \(v_n\) does not change these distributions. Then, with Lemma 5.10 and its proof,

$$\begin{aligned}&\frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \overline{z}_i(u_n)\overline{z}_i(v_n)\right] \\&\quad =\frac{1}{2u_n^{\beta }\rho ^{\beta /2}}\mathbb {E}\left[ \exp \left( -C_{p,\beta }u_n^\beta |1-\rho |^\beta ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right. \\&\qquad \left. +\exp \left( -C_{p,\beta }u_n^\beta (1+\rho )^\beta ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] \\&\qquad -\frac{1}{u_n^{\beta }\rho ^{\beta /2}}\mathbb {E}\left[ \exp \left( -u_n^\beta C_{p,\beta }((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] \\ {}&\qquad \times \mathbb {E}\left[ \exp \left( -u_n^\beta \rho ^\beta C_{p,\beta }((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] ,\\&\frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \overline{z}_i(u_n)\overline{z}_{i-1}(v_n)\right] \\&\quad =\frac{1}{2u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \exp \left( -C_{p,\beta }u_n^\beta ((\phi _i^n)^{1-\beta }+(1+\rho )^\beta (\phi _{i-1}^n)^{1-\beta }+ \rho ^\beta (\phi _{i-2}^n)^{1-\beta })\right) \right. \\&\qquad \left. +\exp \left( -C_{p,\beta }u_n^\beta ((\phi _i^n)^{1-\beta }+|1-\rho |^\beta (\phi _{i-1}^n)^{1-\beta }+ \rho ^\beta (\phi _{i-2}^n)^{1-\beta })\right) \right] \\&\qquad -\frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \exp \left( -u_n^\beta C_{p,\beta }((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] \\ {}&\qquad \times \mathbb {E}\left[ \exp \left( -u_n^\beta \rho ^\beta C_{p,\beta }((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] . \end{aligned}$$

We now have for example

$$\begin{aligned}&\mathbb {E}\left[ \exp \left( -C_{p,\beta }u_n^\beta |1-\rho |^\beta ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right) \right] \\&\quad =-C_{p,\beta }u_n^\beta |1-\rho |^\beta \mathbb {E}\left[ \exp (-\epsilon _{i}^n)((\phi _i^n)^{1-\beta } +(\phi _{i-1}^n)^{1-\beta })\right] +1 \end{aligned}$$

for some \(\epsilon _{i}^n\in [0,u_n^\beta |1-\rho |^\beta C_{p,\beta } ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })]\), and since \(u_n \rightarrow 0\) and \(\mathbb {E}[(\phi _i^n)^{1-\beta }] = M < \infty \) hold, dominated convergence gives

$$\begin{aligned} \mathbb {E}\left[ \exp (-\epsilon _i^n)((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right] \rightarrow \mathbb {E}\left[ ((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta })\right] =\kappa _{\beta ,\beta }. \end{aligned}$$

The same argument for the other terms, additionally using \(u_n \rightarrow 0\) when dealing with the product of the two expectations, gives

$$\begin{aligned}&\frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \overline{z}_i(u_n)\overline{z}_i(v_n)\right] \\&\quad \rightarrow \frac{-C_{p,\beta }|1-\rho |^\beta \kappa _{\beta ,\beta }+1-C_{p,\beta }(1+\rho )^\beta \kappa _{\beta ,\beta }+1-2\left( - C_{p,\beta }\kappa _{\beta ,\beta }- \rho ^\beta C_{p,\beta } \kappa _{\beta ,\beta }+1\right) }{2\rho ^{\beta /2}}\\&\quad = C_{p,\beta }\kappa _{\beta ,\beta }\frac{2+2\rho ^\beta -|1-\rho |^\beta -(1+\rho )^\beta }{2\rho ^{\beta /2}}. \end{aligned}$$

Similar arguments lead to

$$\begin{aligned} \frac{1}{u_n^{\beta /2}v_n^{\beta /2}}\mathbb {E}\left[ \overline{z}_i(u_n)\overline{z}_{i-1}(v_n)\right]&\rightarrow C_{p,\beta }\frac{\kappa _{\beta ,\beta }}{2}\frac{4+4\rho ^\beta -(1+(1+\rho )^\beta +\rho ^\beta ) -(1+|1-\rho |^\beta +\rho ^\beta )}{2\rho ^{\beta /2}}\\&=C_{p,\beta }\kappa _{\beta ,\beta }\frac{2+2\rho ^\beta -|1-\rho |^\beta -(1+\rho )^\beta }{4\rho ^{\beta /2}}. \end{aligned}$$

\(\square \)
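The limits of Lemma 5.18 can be illustrated numerically in a simplified toy setting in which \((\phi _i^n)^{1-\beta }+(\phi _{i-1}^n)^{1-\beta }\) is replaced by a deterministic constant and \(C_{p,\beta }\) is set to one (both simplifications are ours, purely for illustration):

```python
import math

# Toy check of the first limit in Lemma 5.18: with a deterministic factor K
# in place of (phi_i)^(1-beta) + (phi_{i-1})^(1-beta) and C_{p,beta} = 1,
#   (1/2) * (exp(-u^b*|1-r|^b*K) + exp(-u^b*(1+r)^b*K))
#     - exp(-u^b*K) * exp(-u^b*r^b*K),
# rescaled by u^b * r^(b/2), should approach
#   K * (2 + 2*r^b - |1-r|^b - (1+r)^b) / (2 * r^(b/2))   as u -> 0.

def rescaled(u, b, r, K=1.0):
    first = 0.5 * (math.exp(-u**b * abs(1.0 - r)**b * K)
                   + math.exp(-u**b * (1.0 + r)**b * K))
    second = math.exp(-u**b * K) * math.exp(-u**b * r**b * K)
    return (first - second) / (u**b * r**(b / 2.0))

def limit(b, r, K=1.0):
    return K * (2.0 + 2.0 * r**b - abs(1.0 - r)**b - (1.0 + r)**b) \
        / (2.0 * r**(b / 2.0))

b, r = 1.5, 0.7
assert abs(rescaled(1e-4, b, r) - limit(b, r)) < 1e-3 * abs(limit(b, r))
```

Setting \(\rho =1\) gives \((4-2^\beta )/2\) for the first limit and \((4-2^\beta )/4\) for the second, and \(\frac{4-2^\beta }{2}+2\cdot \frac{4-2^\beta }{4}=4-2^\beta \) matches the diagonal constant appearing in Lemma 5.19.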

Lemma 5.19

Let \(u_n\) be as above and set \(v_n=\rho u_n\) for some \(\rho > 0\). Then the \({\mathcal {F}}\)-stable convergence in law

$$\begin{aligned} \left( \frac{\sqrt{\Delta _n}}{u_n^{\beta /2}}\overline{Z}^n(u_n), \frac{\sqrt{\Delta _n}}{v_n^{\beta /2}}\overline{Z}^n(v_n)\right) ~{\mathop {\longrightarrow }\limits ^{{\mathcal {L}}-(s)}}~(X,Y) \end{aligned}$$

holds, where \((X,Y)\) is mixed normally distributed with mean 0 and covariance matrix \(\mathcal {C}\) consisting of the elements \(\mathcal {C}_{ij}(1)\) given by

$$\begin{aligned}&\mathcal {C}_{11}(t)=\mathcal {C}_{22}(t)=\int _{0}^{t} \frac{1}{\lambda _s} \textrm{d}s~C_{p,\beta }\kappa _{\beta ,\beta }(4-2^\beta ),\\&\mathcal {C}_{12}(t)=\mathcal {C}_{21}(t)=\int _{0}^{t} \frac{1}{\lambda _s} \textrm{d}s~C_{p,\beta }\kappa _{\beta ,\beta }\frac{2+2\rho ^\beta - (1+\rho )^\beta -|1-\rho |^\beta }{\rho ^{\beta /2}}, \quad t>0. \end{aligned}$$

Proof

We set

$$\begin{aligned} \zeta _i^n&=\frac{\sqrt{\Delta _n}}{u_n^{\beta /2}}\left( \overline{z}_i(u_n),\overline{z}_i(v_n)\right) \\&=\left( \frac{\sqrt{\Delta _n}}{u_n^{\beta /2}}\left( \cos \left( u_n\frac{\widehat{\Delta _{i}^nS}- \widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) -L(p,u_n,\beta )\right) ,\frac{\sqrt{\Delta _n}}{v_n^{\beta /2}}\left( \cos \left( v_n\frac{\widehat{\Delta _{i}^nS} -\widehat{\Delta _{i-1}^nS}}{\Delta _n^{1/\beta }\mu _{p,\beta }^{1/\beta }\kappa _{p,\beta }^{1/\beta }}\right) -L(p,v_n,\beta )\right) \right) , \end{aligned}$$

and we can see by the uniform boundedness of \(\overline{z}_i(\cdot )\) and because of \(\Delta _nu_n^{-\beta } \rightarrow 0\) that

$$\begin{aligned} \sum _{i=k_n+3}^{N_n(t)}\zeta _i^n \quad \text {and} \quad \sum _{i=k_n+3}^{N_n(t)+1}\left( \zeta _i^n-\mathbb {E}^n_{i-1} \left[ \zeta _i^n\right] +\mathbb {E}^n_{i}\left[ \zeta _{i+1}^n\right] \right) , \quad t > 0, \end{aligned}$$

are asymptotically equivalent. We note that \(N_n(t)+1\) is an \((\mathcal {F}_{\tau _{i}^n})_{i\ge 1}\)-stopping time for each \(t > 0\), and, therefore, to apply Theorem 2.2.15 in Jacod and Protter (2012) it is sufficient to show that for \(q=\frac{2}{1-\varrho \beta }+2>2\), \(\eta _i^n=\zeta _i^n-\mathbb {E}^n_{i-1}\left[ \zeta _i^n\right] +\mathbb {E}^n_{i}\left[ \zeta _{i+1}^n\right] \) and any fixed \(t > 0\)

$$\begin{aligned}&\sum _{i=k_n+3}^{N_n(t)+1}\mathbb {E}^n_{i-1}\left[ \eta _i^n\right] \xrightarrow {\mathbb {P}}(0,0), \end{aligned}$$
(5.24)
$$\begin{aligned}&\sum _{i=k_n+3}^{N_n(t)+1}\left( \mathbb {E}^n_{i-1}\left[ \eta _{i}^{n,j}\eta _{i}^{n,k}\right] -\mathbb {E}^n_{i-1} \left[ \eta _{i}^{n,j}\right] \mathbb {E}^n_{i-1}\left[ \eta _{i}^{n,k}\right] \right) \xrightarrow {\mathbb {P}} \mathcal {C}_{jk}(t), \end{aligned}$$
(5.25)
$$\begin{aligned}&\sum _{i=k_n+3}^{N_n(t)+1}\mathbb {E}^n_{i-1}\left[ \Vert \zeta _i^n\Vert ^q\right] \xrightarrow {\mathbb {P}}0, \end{aligned}$$
(5.26)
$$\begin{aligned}&\sum _{i=k_n+3}^{N_n(t)+1}\mathbb {E}^n_{i-1}\left[ \zeta _i^n(M_{\tau _{i}^n}-M_{\tau _{i-1}^n})\right] \xrightarrow {\mathbb {P}}0, \end{aligned}$$
(5.27)

hold, where \(M\) is either one of the Brownian motions \(W\), \(\widetilde{W}\) or \({\overline{W}}\) or a bounded martingale orthogonal to all of these Brownian motions. Note that Theorem 2.2.15 in Jacod and Protter (2012) is stated as a functional result, and our claim then simply follows by specifically choosing \(t=1\).

First note that Lemma 5.10 gives \(\mathbb {E}^n_{i-1}\left[ \zeta _{i+1}^n\right] =(0,0)\) and therefore \(\mathbb {E}^n_{i-1}\left[ \eta _i^n\right] = (0,0)\) by definition as well. (5.24) then holds. Also, \(\varrho <\frac{1}{\beta }\) gives

$$\begin{aligned} \frac{1}{1-\varrho \beta } - \varrho \beta \frac{1+1-\varrho \beta }{1-\varrho \beta }=\frac{(1-\varrho \beta )^2}{1-\varrho \beta }>0. \end{aligned}$$

Thus, the uniform boundedness of \(\overline{z}_i(\cdot )\) and Condition 5.3 give

$$\begin{aligned} \sum _{i=k_n+3}^{N_n(t)+1}\mathbb {E}^n_{i-1}\left[ \left| \sqrt{\frac{\Delta _n}{u_n^{\beta }}} \overline{z}_i(u_n)\right| ^q\right] \le N_n(t) u_n^{-\beta \frac{1+1-\varrho \beta }{1-\varrho \beta }}\Delta _n^{1+\frac{1}{1-\varrho \beta }}\le C u_n^{-\beta \frac{1+1-\varrho \beta }{1-\varrho \beta }}\Delta _n^{\frac{1}{1-\varrho \beta }} \rightarrow 0 \end{aligned}$$

which proves (5.26). To show (5.25) we first recall \(\mathbb {E}^n_{i-1}\left[ \eta _i^n\right] =(0,0)\) and then a simple calculation yields

$$\begin{aligned}&\mathbb {E}^n_{i-1}\left[ \left( \zeta _i^{n,j}-\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\right] +\mathbb {E}^n_{i}\left[ \zeta _{i+1}^{n,j}\right] \right) \left( \zeta _i^{n,k}-\mathbb {E}^n_{i-1} \left[ \zeta _i^{n,k}\right] +\mathbb {E}^n_{i}\left[ \zeta _{i+1}^{n,k}\right] \right) \right] \\&\quad =\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\zeta _i^{n,k}\right] -\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\right] \mathbb {E}^n_{i-1}\left[ \zeta _i^{n,k}\right] +\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\zeta _{i+1}^{n,k}\right] +\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,k}\zeta _{i+1}^{n,j}\right] \\&\qquad +\mathbb {E}\left[ \mathbb {E}^n_{i}\left[ \zeta _{i+1}^{n,j}\right] \mathbb {E}^n_{i}\left[ \zeta _{i+1}^{n,k}\right] \right] , \end{aligned}$$

using iterated expectations, \(\mathbb {E}^n_{i-1}\left[ \zeta _{i+1}^n\right] =(0,0)\) and the fact that the distribution of \(\mathbb {E}^n_{i}\left[ \zeta _{i+1}^{n,j}\right] \mathbb {E}^n_{i}\left[ \zeta _{i+1}^{n,k}\right] \) is independent of \(\mathcal {F}_{\tau _{i-1}^n}\). We then prove

$$\begin{aligned} \sum _{i=k_n+3}^{N_n(t)}\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\zeta _i^{n,k}\right]&{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t} \frac{1}{\lambda _s}\textrm{d}s\lim _{n\rightarrow \infty } \Delta _n^{-1}\mathbb {E}\left[ \zeta _m^{n,j}\zeta _m^{n,k}\right] ,\\ \sum _{i=k_n+3}^{N_n(t)}\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\right] \mathbb {E}^n_{i-1}\left[ \zeta _i^{n,k}\right]&{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t} \frac{1}{\lambda _s}\textrm{d}s\lim _{n\rightarrow \infty } \Delta _n^{-1} \mathbb {E}\left[ \mathbb {E}^n_{m-1} \left[ \zeta _m^{n,j}\right] \mathbb {E}^n_{m-1}\left[ \zeta _m^{n,k}\right] \right] ,\\ \sum _{i=k_n+3}^{N_n(t)}\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,j}\zeta _{i+1}^{n,k}\right]&{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t} \frac{1}{\lambda _s}\textrm{d}s\lim _{n\rightarrow \infty } \Delta _n^{-1} \mathbb {E}\left[ \zeta _m^{n,j}\zeta _{m+1}^{n,k}\right] ,\\ \sum _{i=k_n+3}^{N_n(t)}\mathbb {E}^n_{i-1}\left[ \zeta _i^{n,k}\zeta _{i+1}^{n,j}\right]&{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t} \frac{1}{\lambda _s}\textrm{d}s\lim _{n\rightarrow \infty } \Delta _n^{-1} \mathbb {E}\left[ \zeta _m^{n,k}\zeta _{m+1}^{n,j}\right] \end{aligned}$$

and that the limits on the right-hand side exist and coincide irrespective of the choice of \(m\). This would result in

$$\begin{aligned} \sum _{i=k_n+3}^{N_n(t)}&\left( \zeta _i^n-\mathbb {E}^n_{i-1}\left[ \zeta _i^n\right] +\mathbb {E}^n_{i}\left[ \zeta _{i+1}^n\right] \right) ^T\left( \zeta _i^n-\mathbb {E}^n_{i-1}\left[ \zeta _i^n\right] +\mathbb {E}^n_{i}\left[ \zeta _{i+1}^n\right] \right) \nonumber \\&\xrightarrow {\mathbb {P}}\int _{0}^{t} \frac{1}{\lambda _s}\textrm{d}s\lim _{n\rightarrow \infty }\Delta _n^{-1}\left( \mathbb {E}\left[ ({\zeta _m^n})^T\zeta _m^n\right] +\mathbb {E}\left[ (\zeta _m^n)^T\zeta _{m+1}^n\right] +\mathbb {E}\left[ (\zeta _{m+1}^n)^T\zeta _{m}^n\right] \right) , \end{aligned}$$
(5.28)

everything for an arbitrary \(m\).

We give the arguments for the first convergence result above in detail; the other ones can be treated in exactly the same way. We set \(X_i^{n,1}=\mathbb {E}_{i}^n\left[ ({\zeta _{i+1}^n})^T({\zeta _{i+1}^n})\right] \) and then prove

$$\begin{aligned} \sum _{i=k_n+2}^{N_n(t)+1}\mathbb {E}_{i-1}^n\left[ X_i^{n,1}\right] {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t} \frac{1}{\lambda _s} \textrm{d}s\lim _{n\rightarrow \infty } \Delta _n^{-1} \mathbb {E}\left[ X_m^{n,1}\right] \end{aligned}$$
(5.29)

as well as

$$\begin{aligned} \sum _{i=k_n+2}^{N_n(t)+1}\mathbb {E}_{i-1}^n\left[ ((X_i^{n,1})_{jk})^2\right] {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0, \quad j,k=1,2. \end{aligned}$$
(5.30)

Lemma 2.2.12 in Jacod and Protter (2012) (plus the usual asymptotic negligibility when adding finitely many summands) finally gives the claim.

Now, note first that the distribution of \(X_i^{n,1}\) is independent of \(\mathcal {F}_{\tau _{i-1}^n}\). Therefore

$$\begin{aligned} \mathbb {E}_{i-1}^n\left[ X_i^{n,1}\right] =\mathbb {E}\left[ X_i^{n,1}\right] \quad \text {and} \quad \mathbb {E}_{i-1}^n\left[ (X_i^{n,1})^2\right] =\mathbb {E}\left[ (X_i^{n,1})^2\right] . \end{aligned}$$

(5.29) is then an easy consequence of

$$\begin{aligned} \sum _{i=k_n+2}^{N_n(t)+1}\mathbb {E}_{i-1}^n\left[ X_i^{n,1}\right]&=\Delta _n(N_n(t)-k_n) \Delta _n^{-1}\mathbb {E}\left[ X_i^{n,1}\right] {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{0}^{t} \frac{1}{\lambda _s}\textrm{d}s\lim _{n\rightarrow \infty } \Delta _n^{-1}\mathbb {E}\left[ X_m^{n,1}\right] \end{aligned}$$

where we used (5.3) and Lemma 5.18 to prove that the limit on the right-hand side exists. To show (5.30) we use Jensen's inequality to obtain

$$\begin{aligned} \mathbb {E}\left[ ((X_i^{n,1})_{jk})^2\right]&\le \mathbb {E}\left[ \mathbb {E}_i^n\left[ \left( \zeta _{i+1}^{n,j}\right) ^2\right] \right] \left\| \left( \zeta _{i+1}^{n,k}\right) ^2\right\| _\infty , \end{aligned}$$

and uniform boundedness of \(\overline{z}_i(\cdot )\) as well as \(N_n(t) \le C \Delta _n^{-1}\) give

$$\begin{aligned} \sum _{i=k_n+2}^{N_n(t)+1}\mathbb {E}\left[ ((X_i^{n,1})_{jk})^2\right]&\le K\sum _{i=k_n+2}^{N_n(t)+1}\frac{\Delta _n}{u_n^{\beta }}\mathbb {E}\left[ \left( \zeta _{i+1}^{n,j}\right) ^2\right] \rightarrow 0, \end{aligned}$$

using \(\Delta _nu_n^{-\beta }\rightarrow 0\) along with Lemma 5.18 in the last step. The proof of (5.25) can then be finished by a tedious but straightforward computation, combining (5.28) with Lemma 5.18 again.

Finally, to prove (5.27) we use Theorem 4.34 in Chapter III of Jacod and Shiryaev (2003). We set for \(k_n+3\le i \le N_n(t)\) and \(u\ge \tau _{i-2}^n\):

$$\begin{aligned} \mathcal {H}:=\mathcal {F}_{\tau _{i-2}^n} \quad \text {and} \quad \mathcal {H}_u := \mathcal {H} \bigvee \sigma \left( S_r : u \ge r\ge \tau _{i-2}^n \right) , \end{aligned}$$

i.e. \((\mathcal {H}_u)_{u\ge \tau _{i-2}^n }\) is the filtration generated by \(\mathcal {H}\) and \(\sigma \left( S_r: u \ge r \ge \tau _{i-2}^n \right) \). Now \((S_u)_{u\ge \tau _{i-2}^n}\) is a process with independent increments with respect to \(\sigma \left( S_r: r\ge \tau _{i-2}^n \right) \). For all \(u\ge \tau _{i-2}^n\) we set \(K_u= \mathbb {E}\left[ \zeta _i|\mathcal {H}_u\right] \) and note that \(K_{\tau _{i}^n} = \zeta _i\) due to \(\zeta _i\) being \(\mathcal {H}_{\tau _i^n}\)-measurable. Then with the aforementioned Theorem 4.34 we have

$$\begin{aligned} \zeta _i = K_{\tau _{i}^n} = K_{\tau _{i-2}^n} +\int _{\tau _{i-2}^n}^{\tau _{i}^n}H_s \textrm{d}S_s, \end{aligned}$$

where \((H_u)_{u\ge \tau _{i-2}^n}\) is a predictable process. Then

$$\begin{aligned} \mathbb {E}_{i-1}^n\left[ \zeta _i^n(M_{\tau _{i}^n}-M_{\tau _{i-1}^n})\right]&= \left( K_{\tau _{i-2}^n}+\int _{\tau _{i-2}^n}^{\tau _{i-1}^n}H_s\textrm{d}S_s\right) \mathbb {E}_{i-1}^n \left[ M_{\tau _{i}^n}-M_{\tau _{i-1}^n}\right] \\&\quad +\mathbb {E}_{i-1}^n\left[ \int _{\tau _{i-1}^n}^{\tau _{i}^n} H_s\textrm{d}S_s(M_{\tau _{i}^n}-M_{\tau _{i-1}^n})\right] = 0 \end{aligned}$$

where we used that the martingale \((S_u)_{u\ge 0}\) is orthogonal to M in all cases. \(\square \)

Corollary 5.20

Suppose that the conditions in Lemma 5.17 hold and choose \(v_n=\rho u_n\) for some \(\rho > 0\). Then we have the \({\mathcal {F}}\)-stable convergence in law

$$\begin{aligned} \left( \frac{\sqrt{N_n(1)}}{u_n^{\beta /2}}(\widetilde{L}^n(p,u_n) -L(p,u_n,\beta )),\frac{\sqrt{N_n(1)}}{v_n^{\beta /2}}(\widetilde{L}^n (p,v_n)-L(p,v_n,\beta ))\right) ~{\mathop {\longrightarrow }\limits ^{{\mathcal {L}}-(s)}}~(X',Y') \end{aligned}$$

where \((X',Y')\) is jointly normally distributed with mean 0 and covariance matrix \(\mathcal {C'}\) consisting of

$$\begin{aligned} \mathcal {C}_{11}'=\mathcal {C}_{22}'=C_{p,\beta }\kappa _{\beta ,\beta }(4-2^\beta ), \quad \mathcal {C}_{12}'=\mathcal {C}_{21}'=C_{p,\beta }\kappa _{\beta ,\beta }\frac{2+2\rho ^\beta - (1+\rho )^\beta -|1-\rho |^\beta }{\rho ^{\beta /2}}. \end{aligned}$$

Proof

Using Lemma 5.17 we have

$$\begin{aligned} \frac{\sqrt{N_n(1)}}{u_n^{\beta /2}}\left( (\widetilde{L}^n(p,u_n)-L(p,u_n,\beta )) - \frac{1}{N_n(1)-k_n-2} {\overline{Z}}^n(u_n)\right) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0, \end{aligned}$$

and similarly for \(v_n\), and from several applications of (5.3) together with \(k_n \Delta _n \rightarrow 0\) we also get

$$\begin{aligned} \frac{\sqrt{N_n(1)}}{\sqrt{\Delta _n}(N_n(1)-k_n-2)} = \frac{1}{\sqrt{\Delta _n N_n(1)}} (1+o_\mathbb {P}(1)) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\frac{1}{\sqrt{\int _{0}^{1}\frac{1}{\lambda _s}\textrm{d}s}}. \end{aligned}$$

Then Lemma 5.19 together with the properties of stable convergence in law yields the claim. \(\square \)
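As a plausibility check on the limiting covariance matrix \(\mathcal {C}'\) (again purely illustrative), one can verify numerically that \(|\mathcal {C}_{12}'|\le \mathcal {C}_{11}'\), as any covariance matrix requires, and that \(\rho =1\), i.e. \(u_n=v_n\), yields perfect correlation:

```python
# Check the covariance constants of Corollary 5.20: with
# a = C_{p,beta} * kappa_{beta,beta} factored out,
#   C'_11 / a = 4 - 2**beta,
#   C'_12 / a = (2 + 2*r**beta - (1+r)**beta - abs(1-r)**beta) / r**(beta/2).

def off_diag_factor(beta, r):
    return (2.0 + 2.0 * r**beta - (1.0 + r)**beta - abs(1.0 - r)**beta) \
        / r**(beta / 2.0)

beta = 1.5
diag_factor = 4.0 - 2.0**beta
rhos = [0.05 * k for k in range(1, 61)]  # rho in (0, 3]
assert all(abs(off_diag_factor(beta, r)) <= diag_factor + 1e-9 for r in rhos)
assert abs(off_diag_factor(beta, 1.0) - diag_factor) < 1e-12
```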

5.4.4 Proof of Theorem 3.5

Using \(L(p,u_n,\beta ) = \mathbb {E}\left[ \exp \left( -u_n^\beta C_{p,\beta }((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })\right) \right] \), a Taylor expansion of the function

$$\begin{aligned} (x,y)\mapsto \frac{\log (-(x-1))-\log (-(y-1))}{\log (1/\rho )} \end{aligned}$$

with gradient

$$\begin{aligned} (g_1(x),g_2(y))=\left( \frac{1}{\log (1/\rho )(x-1)},\frac{1}{\log (1/\rho )(1-y)}\right) \end{aligned}$$

around \((L(p,u_n,\beta ),L(p,v_n,\beta ))\) gives

$$\begin{aligned}&u_n^{\beta /2}\sqrt{N_n(1)}(\hat{\beta }(p,u_n,v_n)-\beta ) \nonumber \\&\quad =u_n^{\beta /2}\sqrt{N_n(1)}\left( \frac{\log (-(L(p,u_n,\beta )-1))-\log (-(L(p,v_n,\beta )-1))}{\log (u_n/v_n)}-\beta \right) \end{aligned}$$
(5.31)
$$\begin{aligned}&\qquad +\frac{1}{\log (u_n/v_n)}\frac{u_n^\beta }{\mathbb {E}[\exp (-u_n^\beta C_{p,\beta }((\phi ^{(1)})^{1-\beta } +(\phi ^{(2)})^{1-\beta }))]-1}\frac{\sqrt{N_n(1)}}{u_n^{\beta /2}}(\widetilde{L}^n(p,u_n)-L(p,u_n,\beta ))\end{aligned}$$
(5.32)
$$\begin{aligned}&\qquad +\frac{1}{\log (u_n/v_n)}\frac{1}{\rho ^{\beta /2}}\frac{v_n^\beta }{1-\mathbb {E}[\exp (-v_n^\beta C_{p,\beta }((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }))]}\frac{\sqrt{N_n(1)}}{v_n^{\beta /2}} (\widetilde{L}^n(p,v_n)-L(p,v_n,\beta )) \end{aligned}$$
(5.33)
$$\begin{aligned}&\qquad +u_n^{\beta }(g_1(\eta _1^n)-g_1(L(p,u_n,\beta )))\frac{\sqrt{N_n(1)}}{u_n^{\beta /2}}(\widetilde{L}^n (p,u_n)-L(p,u_n,\beta )) \nonumber \\&\qquad +v_n^{\beta }\frac{1}{\rho ^{\beta /2}}(g_2(\eta _2^n)-g_2(L(p,v_n,\beta )))\frac{\sqrt{N_n(1)}}{v_n^{\beta /2}}(\widetilde{L}^n(p,v_n)-L(p,v_n,\beta )) , \end{aligned}$$
(5.34)

for some \(\eta _1^n\) between \(\widetilde{L}^n(p,u_n)\) and \(L(p,u_n,\beta )\) and some \(\eta _2^n\) between \(\widetilde{L}^n(p,v_n)\) and \(L(p,v_n,\beta )\).

As before, we have

$$\begin{aligned} 1-L(p,u_n,\beta )=\mathbb {E}\left[ \exp (-\varepsilon _{1}^n) C_{p,\beta }u_n^\beta \left( (\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }\right) \right] \end{aligned}$$

for some \(\varepsilon _1^n\) between 0 and \(C_{p,\beta }u_n^\beta ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })\). Obviously, \(\varepsilon _1^n \rightarrow 0\) almost surely, so by dominated convergence

$$\begin{aligned} \frac{1-L(p,u_n,\beta )}{u_n^\beta } \rightarrow \mathbb {E}\left[ C_{p,\beta } \left( (\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }\right) \right] = C_{p,\beta } \kappa _{\beta , \beta } > 0. \end{aligned}$$
(5.35)

We now prove

$$\begin{aligned} u_n^{\beta } \left| \frac{1}{\eta _1^n-1} - \frac{1}{L(p,u_n,\beta )-1} \right| = u_n^{\beta } \frac{|L(p,u_n,\beta )-\eta _1^n|}{(1-\eta _1^n)(1-L(p,u_n,\beta ))} {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0 \end{aligned}$$
(5.36)

from which, together with Corollary 5.20 and Slutsky’s lemma, the asymptotic negligibility of (5.34) follows. A similar result obviously holds for the term involving \(\eta _2^n\) and \(v_n\). Using (5.35), we get (5.36) from

$$\begin{aligned} \frac{1-\eta _1^n}{|L(p,u_n,\beta )-\eta _1^n|} {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\infty . \end{aligned}$$

To prove the latter claim we write \(1-\eta _1^n = (1-L(p,u_n,\beta )) + (L(p,u_n,\beta )-\eta _1^n)\) and note that \(1-L(p,u_n,\beta )\) is of order \(u_n^\beta \) by (5.35), while \(|L(p,u_n,\beta )-\eta _1^n| \le |L(p,u_n,\beta )-\widetilde{L}^n(p,u_n)|\) is at most of order \(\Delta _n^{1/2} u_n^{\beta /2}\) by Corollary 5.20. Since \(\Delta _nu_n^{-\beta } \rightarrow 0\), Slutsky’s lemma then yields the claim.
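The order comparison can be made explicit; as a sketch, with \(\gtrsim \) hiding constants,

$$\begin{aligned} \frac{1-\eta _1^n}{|L(p,u_n,\beta )-\eta _1^n|} \ge \frac{1-L(p,u_n,\beta )}{|L(p,u_n,\beta )-\eta _1^n|}-1 \gtrsim \frac{u_n^{\beta }}{\Delta _n^{1/2} u_n^{\beta /2}}-1 = \left( \Delta _n u_n^{-\beta }\right) ^{-1/2}-1 \longrightarrow \infty . \end{aligned}$$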

To prove that the bias term (5.31) converges to zero, we use that for \(\varepsilon _1^n\) as above there exists some \(\varepsilon _{2}^n\) between \(\mathbb {E}[\exp (-\varepsilon _{1}^n)((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })]\) and \(\kappa _{\beta ,\beta }\) such that

$$\begin{aligned}&\log (-(L(p,u_n,\beta )-1))=\log (u_n^\beta C_{p,\beta })+\log \left( \mathbb {E}\left[ \exp (-\varepsilon _{1}^n)\left( (\phi ^{(1)})^{1-\beta } +(\phi ^{(2)})^{1-\beta }\right) \right] \right) \\&\quad =\log (u_n^\beta C_{p,\beta })+\frac{1}{\varepsilon _{2}^n}\left( \mathbb {E}[\exp (-\varepsilon _{1}^n) ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })]-\kappa _{\beta ,\beta }\right) +\log (\kappa _{\beta ,\beta }) . \end{aligned}$$

Clearly \(\varepsilon _2^n \rightarrow \kappa _{\beta , \beta }\), and with \(\varepsilon _3^n\) between 0 and \(\varepsilon _1^n\) we obtain for an arbitrary \(\iota >0\)

$$\begin{aligned}&\frac{1}{u_n^{\beta -\iota }\varepsilon _{2}^n}\left( \mathbb {E}\left[ \exp (-\varepsilon _1^n) \left( (\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }\right) \right] -\kappa _{\beta ,\beta }\right) \\&\quad =\frac{1}{\varepsilon _{2}^n}\mathbb {E}\left[ \exp (-\varepsilon _3^n)\frac{(-\varepsilon _{1}^n)}{u_n^{\beta -\iota }}\left( (\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }\right) \right] \rightarrow 0, \end{aligned}$$

where we used \(\varepsilon _1^n ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta }) \le u_n^\beta C_{p,\beta } ((\phi ^{(1)})^{1-\beta }+(\phi ^{(2)})^{1-\beta })^2\) and dominated convergence via part (c) of Assumption 2.3. The same arguments hold for \(\log (-(L(p,v_n,\beta )-1))\), so

$$\begin{aligned}&\frac{1}{u_n^{\beta -\iota }}\left( \frac{\log (-(L(p,u_n,\beta )-1))- \log (-(L(p,v_n,\beta )-1))}{\log (u_n/v_n)}-\beta \right) \\&\quad =\frac{1}{u_n^{\beta -\iota }}\left( \frac{\log (u_n^\beta C_{p,\beta }) +\log (\kappa _{\beta ,\beta })-(\log (v_n^\beta C_{p,\beta })+\log (\kappa _{\beta ,\beta }))}{\log (u_n/v_n)}-\beta \right) + o_p(1) {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0. \end{aligned}$$
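Note that the leading term in the last display vanishes identically, as the \(\log (\kappa _{\beta ,\beta })\) and \(\log (C_{p,\beta })\) terms cancel and

$$\begin{aligned} \frac{\log (u_n^{\beta })-\log (v_n^{\beta })}{\log (u_n/v_n)} = \frac{\beta (\log u_n-\log v_n)}{\log (u_n/v_n)} = \beta , \end{aligned}$$

so only the \(o_p(1)\) remainder contributes.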

Now, \(\frac{1}{3\beta }<\varrho \) and \(N_n(1)\le C \Delta _n^{-1}\) yield \(u_n^{\frac{3}{2}\beta -\iota }\sqrt{N_n(1)}\rightarrow 0\) almost surely for \(\iota > 0\) small enough, and therefore (5.31) converges in probability to zero.
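To verify this rate condition, suppose (an assumption here, in line with the choice of the tuning sequences) that \(u_n \asymp \Delta _n^{\varrho }\). Then

$$\begin{aligned} u_n^{\frac{3}{2}\beta -\iota }\sqrt{N_n(1)} \le C\, \Delta _n^{\varrho (\frac{3}{2}\beta -\iota )-\frac{1}{2}} \rightarrow 0, \end{aligned}$$

because \(\varrho > \frac{1}{3\beta }\) gives \(\varrho \cdot \frac{3}{2}\beta > \frac{1}{2}\), so the exponent is positive for \(\iota \) small enough.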

The claim then follows from deriving the asymptotics of (5.32) and (5.33) for which we use (5.35) and Corollary 5.20. The form of the limiting variance is then computed easily. \(\square \)

5.4.5 Proof of Corollary 3.6

The result follows from Theorem 3.5 immediately, upon using Slutsky’s lemma and \(u_n^{\beta /2}\sqrt{N_n(1)} \rightarrow \infty \) almost surely, where the latter is a consequence of (5.3) and \(\Delta _n u_n^{-\beta } \rightarrow 0\). \(\square \)