1 Introduction

Dynamic models based on stochastic partial differential equations (SPDEs) have recently attracted great interest, in particular their statistical calibration, see, for instance, Hambly and Søjmark (2019), Fuglstad and Castruccio (2020), Altmeyer and Reiß (2021) and Altmeyer et al. (2022). Bibinger and Trabs (2020), Cialenco and Huang (2020) and Chong (2020) have, independently of one another, studied parameter estimation for parabolic SPDEs based on power variation statistics of time increments when a solution of the SPDE is observed discretely in time and space. Bibinger and Trabs (2020) pointed out the relation of their estimators to realized volatilities, which are well known as key statistics for financial high-frequency data in econometrics. We develop estimators based on these realized volatilities which significantly improve upon the M-estimation from Bibinger and Trabs (2020). Our new estimators attain smaller asymptotic variances, they are explicit functions of realized volatilities, and we can readily provide asymptotic confidence intervals. Since generalized estimation approaches for small noise asymptotics in Kaino and Uchida (2021a), rate-optimal estimation for more general observation schemes in Hildebrandt and Trabs (2021), long-span asymptotics in Kaino and Uchida (2021b), and estimation with two spatial dimensions in Tonaki et al. (2022) have all been built upon the M-estimator from Bibinger and Trabs (2020), we expect that our new method is of interest to further improve parameter estimation for SPDEs. Our theoretical framework is the same as in Bibinger and Trabs (2020). We consider for \((t,y)\in \mathbb {R}_+\times [0,1]\) a linear parabolic SPDE

$$\begin{aligned} \textrm{d}X_t(y)=\left( \theta _2\frac{\partial ^{2}X_t(y)}{\partial y^{2}}+\theta _1\frac{\partial X_t(y)}{\partial y}+\theta _0 X_t(y)\right) \,\textrm{d}t+\sigma \,\textrm{d}B_{t}(y), \end{aligned}$$
(1)

with one space dimension. The bounded spatial domain is the unit interval [0, 1]; the results can easily be generalized to an arbitrary bounded interval. Although estimation methods in the case of an unbounded spatial domain are expected to be similar, the theory is significantly different, see Bibinger and Trabs (2019). \((B_{t}(y))\) is a cylindrical Brownian motion in a Sobolev space on [0, 1]. The initial value \(X_0(y)=\xi (y)\) is assumed to be independent of \((B_{t}(y))\). We work with Dirichlet boundary conditions: \(X_{t}(0)=X_{t}(1)=0\), for all \(t\in \mathbb {R}_+\). A specific example is the SPDE

$$\begin{aligned} \textrm{d}X_t(y)=\left( \frac{\partial X_t(y)}{\partial y}+\frac{\kappa }{2}\frac{\partial ^{2}X_t(y)}{\partial y^{2}}\right) \,\textrm{d}t+\sigma \,\textrm{d}B_{t}(y), \end{aligned}$$
(2)

used for the term structure model of Cont (2005).

Existence and uniqueness of a mild solution of the SPDE (1), written \(\textrm{d}X_t(y)=A_{\theta } X_t(y)\,\textrm{d}t+\sigma \,\textrm{d}B_{t}(y)\) with the differential operator \(A_{\theta }\), which is given by

$$\begin{aligned} X_t=\exp (t\,A_{\theta })\,\xi +\sigma \int _0^t \exp {((t-s)\,A_{\theta })}\,\textrm{d}B_s,\end{aligned}$$
(3)

with a Bochner integral and where \(\exp (t\,A_{\theta })\) is the strongly continuous heat semigroup, is a classical result, see Chapter 6.5 in Da Prato and Zabczyk (1992). We focus on parameter estimation based on discrete observations of this solution \((X_t(y))\) on the unit square \((t,y)\in [0,1]\times [0,1]\). The spatial observation points \(y_j\), \(j=1,\ldots ,m\), have at least distance \(\delta >0\) from the boundaries at which the solution is zero by the Dirichlet conditions.

Assumption 1

We assume equidistant high-frequency observations in time \(t_i=i\Delta _n\), \(i=0,\dots ,n\), where \(\Delta _n=1/n\rightarrow 0\), asymptotically. We consider the same asymptotic regime as in Bibinger and Trabs (2020), where \(m=m_n\rightarrow \infty \), such that \(m_n={\mathcal {O}}(n^{\rho })\), for some \(\rho \in (0,1/2)\), and \(m\cdot \min _{j=2,\dots ,m}\left| y_j-y_{j-1}\right| \) is bounded from below and from above, uniformly in n.

This asymptotic regime with more observations in time than in space is natural for most applications. Hildebrandt and Trabs (2021) and Bibinger and Trabs (2020) showed that in this regime the realized volatilities

$$\begin{aligned}{{\,\mathrm{RV_{n}}\,}}(y_j)=\sum _{i=1}^n (X_{i\Delta _n}(y_j)-X_{(i-1)\Delta _n}(y_j))^2,~j=1,\ldots ,m,\end{aligned}$$

are sufficient to estimate the parameters with the optimal rate \((m_n n)^{-1/2}\), while Hildebrandt and Trabs (2021) establish different optimal convergence rates when the condition \(m_n/\sqrt{n}\rightarrow 0\) is violated and propose rate-optimal estimators for this setup based on double increments in space and time. Our proofs and results could be generalized to non-equidistant observations in time, as long as their distances decay at the same order, but the observation scheme would then affect the asymptotic variances and thus complicate the results. In contrast, there is no difference between equidistant and non-equidistant observations in space, since the spatial covariances are not used for estimation and are asymptotically negligible for our results under Assumption 1.
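
For illustration, the realized volatilities can be computed directly from the observation matrix. The following minimal R sketch assumes (this layout and the function name are ours, not from the paper) that the observations \(X_{t_i}(y_j)\) are stored in an \((n+1)\times m\) matrix X with rows indexed by time and columns by space.

```r
# realized volatilities RV_n(y_j), j = 1, ..., m, from an (n+1) x m observation matrix X
realized_volatility <- function(X) {
  increments <- diff(X)    # n x m matrix of time increments X_{i Delta}(y_j) - X_{(i-1) Delta}(y_j)
  colSums(increments^2)    # sum of squared time increments for each spatial point y_j
}
```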

The natural parameters, depending on \(\theta _1\in \mathbb {R}\), \(\theta _2>0\) and \(\sigma >0\) from (1), which are identifiable under high-frequency asymptotics, are

$$\begin{aligned} \sigma ^2_0:=\sigma ^2/\sqrt{\theta _2}\quad \text {and}\quad \kappa :=\theta _1/\theta _2,\end{aligned}$$
(4)

the normalized volatility parameter \(\sigma ^2_0\), and the curvature parameter \(\kappa \). The parameter \(\theta _0\in \mathbb {R}\) could be estimated consistently only from observations on [0, T], as \(T\rightarrow \infty \). This is addressed in Kaino and Uchida (2021b).

While Bibinger and Trabs (2020) focused first on estimating the volatility when the parameters \(\theta _1\) and \(\theta _2\) are known, we consider the estimation of the curvature parameter \(\kappa \) in Sect. 2. We present an estimator for known \(\sigma _0^2\) and a robustification for the case of unknown \(\sigma _0^2\). In Sect. 3 we develop a novel estimator for both parameters, \((\sigma _0^2,\kappa )\), which improves the M-estimator from Section 4 of Bibinger and Trabs (2020) significantly. It is based on a log-linear model for \({{\,\mathrm{RV_{n}}\,}}(y)\) with explanatory spatial variable y. Section 4 is on the implementation and numerical results. We draw a numerical comparison of asymptotic variances and show the new estimators’ improvement over existing methods. We demonstrate significant efficiency gains for finite-sample applications in a Monte Carlo simulation study. All proofs are given in Sect. 5.

2 Curvature estimation

Section 3 of Bibinger and Trabs (2020) addressed the estimation of \(\sigma ^2\) in (1) when \(\theta _1\) and \(\theta _2\) are known. Here, we focus on the estimation of \(\kappa \) from (4), first when \(\sigma _0^2\) is known and then for unknown volatility. The volatility estimator by Bibinger and Trabs (2020), based on observations at one spatial point \(y_j\), used the realized volatility \({{\,\mathrm{RV_{n}}\,}}(y_j)\). The central limit theorem with \(\sqrt{n}\) rate for this estimator from Theorem 3.3 in Bibinger and Trabs (2020) equivalently yields that

$$\begin{aligned} \sqrt{n}\bigg (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}-\frac{\exp (-\kappa y_j)\sigma _0^2}{\sqrt{\pi }}\bigg )~{\mathop {\longrightarrow }\limits ^{d}}~{\mathcal {N}}\big (0,\Gamma \sigma _0^4\exp (-2\kappa y_j)\big ),\end{aligned}$$
(5)

with \(\Gamma \approx 0.75\) a numerical constant analytically determined by a series of covariances. Since the marginal processes of \(X_t(y)\) in time have regularity 1/4, the scaling factor \(1/\sqrt{n}\) for \({{\,\mathrm{RV_{n}}\,}}(y_j)\) is natural. To estimate \(\kappa \) consistently we need observations in at least two distinct spatial points. A key observation in Bibinger and Trabs (2020) was that under Assumption 1, realized volatilities at different spatial observation points decorrelate asymptotically. From (5) we can hence write

$$\begin{aligned} \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n }}=\exp (-\kappa y_j)\frac{\sigma _0^2}{\sqrt{\pi }}+\exp (-\kappa y_j)\,\sigma _0^2\,\sqrt{\frac{\Gamma }{n}}\,Z_j+R_{n,j} \end{aligned}$$
(6)

with \(Z_j\) i.i.d. standard normal and remainders \(R_{n,j}\), which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. The equation

$$\begin{aligned} \log \left( \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n }}\right)&=-\kappa y_j+\log \Big (\frac{\sigma _0^2}{\sqrt{\pi }}\Big )+\log \big (1+\sqrt{\Gamma \pi \Delta _n}\,Z_j\big )\nonumber \\&\quad +\log \left( 1+ \frac{ R_{n,j}\exp (\kappa y_j)\sqrt{\pi }\sigma _0^{-2}}{1+\sqrt{\Gamma \pi \Delta _n}\,Z_j}\right) , \end{aligned}$$
(7)

and an expansion of the logarithm yield an approximation

$$\begin{aligned} \kappa \approx \frac{-\log \big (\Delta _n^{1/2}{{\,\mathrm{RV_{n}}\,}}(y_j)\big )+\log (\sigma _0^2)-\log (\sqrt{\pi })}{y_j}+\frac{\sqrt{\Gamma \pi \Delta _n}}{y_j}\,Z_j. \end{aligned}$$
(8)

In the upcoming example, we briefly discuss optimal estimation in a related simple statistical model.

Example 1

Assume independent observations \(Y_i\sim {\mathcal {N}}(\mu ,\varsigma _i^2),i=1,\ldots ,m\), where \(\mu \) is unknown and \(\varsigma _i^2>0\) are known. The maximum likelihood estimator (mle) is given by

$$\begin{aligned} {\hat{\mu }}=\frac{\sum _{i=1}^m Y_i\varsigma _i^{-2}}{\sum _{i=1}^m\varsigma _i^{-2}}.\end{aligned}$$
(9)

The expected value and variance of this mle are

$$\begin{aligned} {\mathbb {E}}[{\hat{\mu }}]= \mu ~~~~~\text {and}~~~~~ {\mathbb {V}}\text {ar}({\hat{\mu }})=\left( \sum _{i=1}^m\varsigma _i^{-2}\right) ^{-1}. \end{aligned}$$

Note that \(\varsigma _i^{-2}\) can be viewed as Fisher information of observing \(Y_i\). The efficiency of the mle in this model is implied by standard asymptotic statistics.

If we have independent observations with the same expectation and variances as in Example 1, but not necessarily normally distributed, the estimator \({{\hat{\mu }}}\) from (9) can be shown to be the linear unbiased estimator with minimal variance.
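
As a small illustration (with purely hypothetical data), the estimator (9) and its variance can be computed in a few lines of R:

```r
# precision-weighted mean (9): observations Y_i with known variances varsigma_i^2
weighted_mle <- function(Y, varsigma2) {
  est <- sum(Y / varsigma2) / sum(1 / varsigma2)
  var <- 1 / sum(1 / varsigma2)
  c(estimate = est, variance = var)
}

# hypothetical example with three observations of different precision
weighted_mle(Y = c(1.2, 0.8, 1.1), varsigma2 = c(0.5, 2.0, 1.0))
```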

If \(\sigma _0^2\) is known, this and (8) motivate the following curvature estimator:

$$\begin{aligned} {\hat{\kappa }}_{n,m} =\frac{-\sum _{j=1}^m \log \Big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\Big )y_j+\sum _{j=1}^m\log \Big (\frac{\sigma _0^2}{\sqrt{\pi }}\Big )y_j}{\sum _{j=1}^my_j^2}. \end{aligned}$$
(10)
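
For known \(\sigma _0^2\), the estimator (10) is an explicit function of the realized volatilities. A minimal R sketch, assuming vectors RV of realized volatilities and y of spatial points as well as the number n of time increments (variable names are our own):

```r
# curvature estimator (10) for known sigma0^2
kappa_hat_known <- function(RV, y, n, sigma0sq) {
  (-sum(log(RV / sqrt(n)) * y) + log(sigma0sq / sqrt(pi)) * sum(y)) / sum(y^2)
}
```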

Theorem 1

Grant Assumptions 1 and 2 with \(y_1=\delta \), \(y_m=1-\delta \), and \(\delta \in (0,1/2)\). Then, the estimator (10) satisfies, as \(n\rightarrow \infty \), the central limit theorem (clt)

$$\begin{aligned} \sqrt{nm_n}\big ({\hat{\kappa }}_{n,m}-\kappa \big )~{\mathop {\longrightarrow }\limits ^{d}}~{\mathcal {N}}\bigg (0,\frac{3\Gamma \pi }{1-\delta +\delta ^2}\bigg ). \end{aligned}$$
(11)

Typically \(\delta \) will be small and the asymptotic variance close to \(3\Gamma \pi \). Assumption 2 poses a mild restriction on the initial condition \(\xi \) and is stated at the beginning of Sect. 5. The logarithm yields a variance stabilizing transformation for (5), and the delta-method readily yields a clt for log realized volatilities with constant asymptotic variances. This implies a clt for the estimator as \(\Delta _n\rightarrow 0\), when \(1<m<\infty \) is fixed. The proof of (11) is, however, not obvious and is based on an application of a clt for weakly dependent triangular arrays by Peligrad and Utev (1997).

If \(\sigma _0^2\) is unknown, the estimator (10) is infeasible. Considering differences between different spatial points in (7) yields a natural generalization of Example 1 and of the estimator (10) to this case:

$$\begin{aligned} {\hat{\varkappa }}_{n,m}=\frac{\sum _{j\ne l}\log \Big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{{{\,\mathrm{RV_{n}}\,}}(y_l)}\Big )(y_l-y_j)}{\sum _{j\ne l}(y_j-y_l)^2}. \end{aligned}$$
(12)

This estimator also achieves the parametric rate of convergence \(\sqrt{nm_n}\); it is asymptotically unbiased and satisfies a clt. Its asymptotic variance is, however, much larger than the one in (11).
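
A short R sketch of the estimator (12), again assuming vectors RV and y as above (our own notation); since the diagonal terms \(j=l\) vanish, the sums can be taken over all pairs:

```r
# curvature estimator (12) for unknown sigma0^2, based on all pairs of spatial points
kappa_hat_unknown <- function(RV, y) {
  logRV <- log(RV)
  num <- sum(outer(logRV, logRV, "-") * (-outer(y, y, "-")))  # log(RV_j/RV_l) * (y_l - y_j)
  den <- sum(outer(y, y, "-")^2)                              # (y_j - y_l)^2
  num / den
}
```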

Theorem 2

Grant Assumptions 1 and 2 with \(y_1=\delta \), \(y_m=1-\delta \), and \(\delta \in (0,1/2)\). Then, the estimator (12) satisfies, as \(n\rightarrow \infty \), the clt

$$\begin{aligned} \sqrt{nm_n}\left( {\hat{\varkappa }}_{n,m}-\kappa \right) ~{\mathop {\longrightarrow }\limits ^{d}}~{\mathcal {N}}\bigg (0,\frac{12\Gamma \pi }{(1-2\delta )^2}\bigg ). \end{aligned}$$
(13)

In particular, consistency of the estimator holds as \(n\rightarrow \infty \), also if \(m\ge 2\) remains fixed. The clts (11) and (13) are feasible, i.e. the asymptotic variances are known constants and do not hinge on any unknown parameters. Hence, asymptotic confidence intervals can be constructed based on the theorems.

3 Asymptotic log-linear model for realized volatilities and least squares estimation

Applying the logarithm to (6) and a first-order Taylor expansion

$$\begin{aligned}\log (a+x)=\log (a)+\frac{x}{a}+{\mathcal {O}}\Big (\frac{x^2}{a^2}\Big ),~x\rightarrow 0,\end{aligned}$$

yield an asymptotic log-linear model

$$\begin{aligned} \log \left( \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n }}\right) =-\kappa y_j+\log \left( \frac{\sigma _0^2}{\sqrt{\pi }}\right) +\sqrt{\frac{\Gamma \pi }{n}}\,Z_j+{\tilde{R}}_{n,j} \end{aligned}$$
(LLM)

for the rescaled realized volatilities, with \(Z_j\) i.i.d. standard normal and remainders \({\tilde{R}}_{n,j}\), which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. When we ignore the remainders \({\tilde{R}}_{n,j}\), the estimation of \(-\kappa \) is then directly equivalent to estimating the slope parameter in a simple ordinary linear regression model with normal errors. The intercept parameter in the model (LLM) is a strictly monotone transformation

$$\begin{aligned} \alpha (\sigma _0^2)=\log \big (\frac{\sigma _0^2}{\sqrt{\pi }}\big )\end{aligned}$$
(14)

of \(\sigma _0^2\). To exploit the analogy of (LLM) to an ordinary linear regression model, it is useful to recall some standard results on least squares estimation in linear regression.

Example 2

In a simple ordinary linear regression model

$$\begin{aligned}Y_i=\alpha +\beta x_i+\epsilon _i~,~i=1,\ldots ,m,\end{aligned}$$

with white noise \(\epsilon _i\), homoscedastic with variance \({\mathbb {V}}\text {ar}(\epsilon _i)=\sigma ^2\), the least squares estimation yields

$$\begin{aligned} {{\hat{\beta }}}&=\frac{\sum _{j=1}^m(x_j-{\bar{x}})(Y_j-{\bar{Y}})}{\sum _{j=1}^m(x_j-{\bar{x}})^2}, \end{aligned}$$
(15a)
$$\begin{aligned} {{\hat{\alpha }}}&={\bar{Y}}-{{\hat{\beta }}}{\bar{x}}, \end{aligned}$$
(15b)

with the sample averages \({\bar{Y}}=m^{-1}\sum _{j=1}^m Y_j\), and \({\bar{x}}=m^{-1}\sum _{j=1}^m x_j\). The estimators (15a) and (15b) are known to be BLUE (best linear unbiased estimators) by the famous Gauß-Markov theorem, i.e. they have minimal variances among all linear and unbiased estimators. In the normal linear model, if \(\epsilon _i{\mathop {\sim }\limits ^{i.i.d.}}{\mathcal {N}}(0,\sigma ^2)\), the least squares estimator coincides with the mle and standard results imply asymptotic efficiency. The variance-covariance matrix of \(({{\hat{\alpha }}},{{\hat{\beta }}})\) is well-known and

$$\begin{aligned} {\mathbb {V}}\text {ar}({{\hat{\beta }}})&=\frac{\sigma ^2}{\sum _{j=1}^m (x_j-{\bar{x}})^2}, \end{aligned}$$
(15c)
$$\begin{aligned} {\mathbb {V}}\text {ar}({{\hat{\alpha }}})&=\frac{\sigma ^2\sum _{j=1}^m x_j^2}{m\sum _{j=1}^m (x_j-{\bar{x}})^2}, \end{aligned}$$
(15d)
$$\begin{aligned} {\mathbb {C}}\text {ov}({{\hat{\alpha }}},{{\hat{\beta }}})&=-\frac{\sigma ^2{\bar{x}}}{\sum _{j=1}^m (x_j-{\bar{x}})^2}. \end{aligned}$$
(15e)

For the derivation of (15a)–(15e) in this example see, for instance, Example 7.2-1 of Zimmerman (2020).
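
As a quick illustration (with hypothetical data), the closed forms (15a) and (15b) can be checked against R's built-in lm():

```r
set.seed(1)
x <- seq(0, 1, length.out = 20)
Y <- 2 - 3 * x + rnorm(20, sd = 0.1)   # hypothetical data with alpha = 2, beta = -3

beta_hat  <- sum((x - mean(x)) * (Y - mean(Y))) / sum((x - mean(x))^2)   # (15a)
alpha_hat <- mean(Y) - beta_hat * mean(x)                                # (15b)

coef(lm(Y ~ x))   # agrees with c(alpha_hat, beta_hat)
```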

We give this elementary example since our estimator and its asymptotic variance-covariance matrix will be in line with the translation of the example to our model (LLM).

The M-estimation of Bibinger and Trabs (2020) was based on the parametric regression model

$$\begin{aligned} \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n }}=\frac{\sigma _0^2}{\sqrt{\pi }}\exp (-\kappa y_j)+\delta _{n,j}, \end{aligned}$$
(16)

with non-standard observation errors \((\delta _{n,j})\). The proposed estimator

$$\begin{aligned} \arg \min _{s,k}\sum _{j=1}^{m}\bigg (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n }}-\frac{s^2}{\sqrt{\pi }}\exp (-k y_j)\bigg )^{2} \end{aligned}$$
(17)

was shown to be rate-optimal and asymptotically normally distributed in Theorem 4.2 of Bibinger and Trabs (2020). In view of the analogy of (LLM) to an ordinary linear regression model, however, it is clear that the estimation method by Bibinger and Trabs (2020) is inefficient, since ordinary least squares is applied to a model with heteroscedastic errors. In fact, generalized least squares could yield a more efficient estimator related to our new methods. In model (16), the variances of \(\delta _{n,j}\) depend on j via the factor \(\exp (-2\kappa y_j)\). Moreover, this implies that the asymptotic variance-covariance matrix of the estimator (17) depends on the parameter \((\sigma _0^2,\kappa )\). In line with the least squares estimator from Example 2, the asymptotic distribution of our estimator will not depend on the parameter.

Writing \({\overline{y}}=m_n^{-1}\sum _{j=1}^{m_n}y_j\), our estimator for \(\kappa \) reads

$$\begin{aligned} {\hat{\kappa }}_{n,m}^{LS}&=\frac{\sum _{j=1}^{m_n}\log \left( \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\right) y_j-{\overline{y}}\sum _{j=1}^{m_n}\log \left( \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\right) }{m_n({\overline{y}})^2-\sum _{j=1}^{m_n}y_j^2} \nonumber \\&=-\frac{\sum _{j=1}^{m_n}\left( \log \big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\big )-\Big (m_n^{-1}\sum _{u=1}^{m_n}\log \big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_u)}{\sqrt{n}}\big )\Big )\right) \big (y_j-{\overline{y}}\big )}{\sum _{j=1}^{m_n}\big (y_j-{\overline{y}}\big )^2}. \end{aligned}$$
(18a)

The estimator for the intercept is

$$\begin{aligned} {\widehat{\alpha }}^{LS}(\sigma _{0}^{2})&={\overline{y}}{\hat{\kappa }}_{n,m}^{LS}+m_n^{-1}\sum _{j=1}^{m_n} \log \Big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\Big )\nonumber \\&=\frac{{\overline{y}} \left( \sum _{j=1}^{m_n} \log \big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\big ) y_j\right) -m_n^{-1}\left( \sum _{j=1}^{m_n} \log \left( \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\right) \right) \left( \sum _{j=1}^{m_n} y_j^2\right) }{m_n({\overline{y}})^2- \sum _{j=1}^{m_n} y_j^2}. \end{aligned}$$
(18b)

We shall prove that the OLS-estimator (18a) for \(\kappa \) in our log-linear model is, in fact, identical to the estimator \({\hat{\varkappa }}_{n,m}\) from (12).
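
Since (18a) and (18b) are exactly the ordinary least squares estimators in the log-linear model (LLM), they can be computed with a plain linear regression of the log realized volatilities on the spatial coordinates. A minimal R sketch, assuming RV, y and n as before (our notation); the volatility estimate inverts the transformation (14):

```r
# least squares estimators (18a) and (18b) in the log-linear model (LLM)
ls_estimators <- function(RV, y, n) {
  logRV <- log(RV / sqrt(n))
  fit <- lm(logRV ~ y)                        # simple linear regression on the spatial coordinate
  kappa_hat    <- -unname(coef(fit)[2])       # slope estimates -kappa, cf. (18a)
  alpha_hat    <-  unname(coef(fit)[1])       # intercept estimates alpha(sigma0^2), cf. (18b)
  sigma0sq_hat <- sqrt(pi) * exp(alpha_hat)   # invert (14): sigma0^2 = sqrt(pi) * exp(alpha)
  c(kappa = kappa_hat, sigma0sq = sigma0sq_hat)
}
```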

Theorem 3

Grant Assumptions 1 and 2 with \(y_1=\delta \), \(y_m=1-\delta \), and \(\delta \in (0,1/2)\). The estimators (18a) and (18b) satisfy, as \(n\rightarrow \infty \), the bivariate clt

$$\begin{aligned} \sqrt{nm_n}\left( \begin{pmatrix} {\hat{\kappa }}_{n,m}^{LS} \\ {\widehat{\alpha }}^{LS}(\sigma _{0}^{2}) \end{pmatrix} - \begin{pmatrix} \kappa \\ \alpha (\sigma _0^2) \end{pmatrix} \right) ~{\mathop {\longrightarrow }\limits ^{d}}~{\mathcal {N}}\left( 0,\Sigma \right) ,\end{aligned}$$

with the asymptotic variance-covariance matrix

$$\begin{aligned} \Sigma = \begin{pmatrix}\frac{12\Gamma \pi }{(1-2 \delta )^2 }&{}\frac{6 \Gamma \pi }{(1-2\delta )^2}\\ \frac{6 \Gamma \pi }{(1-2\delta )^2}&{}4\Gamma \pi \frac{1-\delta +\delta ^2}{(1-2\delta )^2}\end{pmatrix}. \end{aligned}$$

In particular, consistency of the estimators holds as \(n\rightarrow \infty \), also if \(m\ge 2\) remains fixed. In contrast to the typical situation with an unknown noise variance in Example 2, the noise variance in (LLM) is a known constant. Therefore, in contrast to Theorem 4.2 of Bibinger and Trabs (2020), our central limit theorem is readily feasible and provides asymptotic confidence intervals.

An application of the multivariate delta method yields the bivariate clt for the estimation errors of the two parameter estimators.

Corollary 4

Under the assumptions of Theorem 3, it holds that

$$\begin{aligned} \sqrt{nm_n}\left( \begin{pmatrix} \hat{\kappa }_{n,m}^{LS} \\ (\hat{\sigma }_{0}^{2})^{LS} \end{pmatrix} - \begin{pmatrix} \kappa \\ \sigma _0^2 \end{pmatrix}\right) ~{\mathop {\longrightarrow }\limits ^{d}}~\mathcal {N}\left( 0,{\tilde{\Sigma }}\right) , \end{aligned}$$

where \(({\hat{\sigma }}_{0}^{2})^{LS}\) is obtained from \({\widehat{\alpha }}^{LS}(\sigma _{0}^{2})\) with the inverse of (14), with

$$\begin{aligned} {\tilde{\Sigma }}=\begin{pmatrix} \frac{12\Gamma \pi }{(1-2\delta )^2} &{}\frac{6\sigma _0^2\Gamma \pi }{(1-2\delta )^2} \\ \frac{6\sigma _0^2\Gamma \pi }{(1-2\delta )^2}&{} \frac{4\sigma _0^4\Gamma \pi (1-\delta +\delta ^2)}{(1-2\delta )^2} \end{pmatrix}. \end{aligned}$$

Here, the parameter \(\sigma _0^2\) naturally occurs in the asymptotic variance of the estimated volatility and in the asymptotic covariance. The transformation or a plug-in, however, still readily yields asymptotic confidence intervals.
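
Based on Corollary 4, marginal confidence intervals are explicit. A hedged R sketch, assuming the numerical constant \(\Gamma \approx 0.75\) from (5), the boundary distance \(\delta \), and plugging in the estimate of \(\sigma _0^2\) for its unknown value in \({\tilde{\Sigma }}\) (function and argument names are ours):

```r
# asymptotic marginal confidence intervals for kappa and sigma0^2 from Corollary 4
asymptotic_ci <- function(kappa_hat, sigma0sq_hat, n, m, delta, Gamma = 0.75, level = 0.95) {
  q    <- qnorm(1 - (1 - level) / 2)
  rate <- sqrt(n * m)
  var_kappa <- 12 * Gamma * pi / (1 - 2 * delta)^2
  var_sigma <- 4 * sigma0sq_hat^2 * Gamma * pi * (1 - delta + delta^2) / (1 - 2 * delta)^2
  list(kappa    = kappa_hat    + c(-1, 1) * q * sqrt(var_kappa) / rate,
       sigma0sq = sigma0sq_hat + c(-1, 1) * q * sqrt(var_sigma) / rate)
}
```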

4 Numerical illustration and simulations

4.1 Numerical comparison of asymptotic variances

The top panel of Fig. 1 compares the asymptotic variances for estimating the curvature \(\kappa \) of our new estimators with the minimum contrast estimator of Bibinger and Trabs (2020). We fix \(\delta =0{.}05\). While the asymptotic variance-covariance matrix of our new estimator is rather simple and explicit, the one in Bibinger and Trabs (2020) is more complicated but can be explicitly computed from their Eqs. (21)–(23).

The uniformly smallest variance is the one of \({\hat{\kappa }}_{n,m}\) from (10). It is visualized with the yellow curve, which is constant in \(\kappa \), i.e. the asymptotic variance does not hinge on the parameter. This estimator, however, requires that the volatility \(\sigma _0^2\) is known. It is thus fair to compare the asymptotic variance of the minimum contrast estimator from Bibinger and Trabs (2020) only to the least squares estimator based on the log-linear model, since both work for unknown \(\sigma _0^2\). While the asymptotic variance of the new estimator, visualized with the black curve, does not hinge on the parameter value, the variance of the estimator by Bibinger and Trabs (2020) (brown curve) is particularly large when \(\kappa \) is far from zero. None of the curves in the top panel of Fig. 1 depends on the value of \(\sigma _0^2\). Our least squares estimator in the log-linear model uniformly dominates the estimator from Bibinger and Trabs (2020). For \(\delta \rightarrow 0\), the asymptotic variances of the two least squares estimators would coincide at \(\kappa =0\). However, due to the different dependence on \(\delta \), the asymptotic variance of the estimator from Bibinger and Trabs (2020) is larger than the one of our new estimator also at \(\kappa =0\). The lower panel of Fig. 1 shows the ratios of the asymptotic variances of the two least squares estimators for both unknown parameters. On the left, we see the ratio for curvature estimation, determined by the ratio of the black and brown curves from the graphic in the top panel. On the right, we see the ratio of the asymptotic variances for estimating \(\sigma _0^2\), as a function of \(\kappa \). This ratio does not hinge on the value of \(\sigma _0^2\).

Fig. 1

Top panel: Comparison of asymptotic variances of \({\hat{\kappa }}_{n,m}\) from (10) (for known \(\sigma _0^2\)), \({\hat{\kappa }}_{n,m}^{LS}\) from (18a) and the estimator from Bibinger and Trabs (2020), for \(\delta =0{.}05\), and for different values of \(\kappa \). Lower panel: Ratio of asymptotic variances of new method using (18a) and (18b) versus Bibinger and Trabs (2020), left for estimating \(\kappa \), right for \(\sigma _0^2\)

4.2 Monte Carlo simulation study

The simulation of the SPDE is based on its spectral decomposition (21) and an exact simulation of the Ornstein-Uhlenbeck coordinate processes. In Bibinger and Trabs (2020) a truncation method was suggested to approximate the infinite series \(\sum _{k=1}^{\infty }x_k(t)e_k(y)\) in (21) by a finite sum \(\sum _{k=1}^{K}x_k(t)e_k(y)\), up to some spectral cut-off frequency K which needs to be set sufficiently large. In Kaino and Uchida (2021b) this procedure was adopted, but they observed that choosing K too small results in a strong systematic bias of simulated estimates. A sufficiently large K depends on the number of observations, but even for moderate sample sizes \(K=10^5\) was recommended by Kaino and Uchida (2021b). This leads to tediously long computation times, as reported in Kaino and Uchida (2021b). A nice improvement for observations on an equidistant grid in time and space has been presented by Hildebrandt (2020), using a replacement method instead of the truncation method. The replacement method approximates addends with large frequencies in the Fourier series by a suitable set of independent random vectors instead of simply cutting off these terms. The spectral frequency at which replacement starts can be set much smaller than the cut-off K for truncation. We thus use Algorithm 3.2 from Hildebrandt (2020) here, which makes it possible to simulate a solution of the SPDE with the same precision as the truncation method while reducing the computation time considerably. For instance, for \(n=10{,}000\) and \(m=100\), the computation time of the truncation method on a standard computer is almost 6 h, while the replacement method requires less than one minute. In Hildebrandt (2020) we also find bounds for the total variation distance between the approximated and the true distribution, allowing one to select an appropriate trade-off between precision and computation time. We implement the method with \(20\cdot m_n\) as the spectral frequency at which replacement starts.

We simulate observations on equidistant grid points in time and space. We illustrate results for a spatial resolution with \(m=11\), and a temporal resolution with \(n=1000\). This is in line with Assumption 1. We simulated results for \(m=100\) and \(n=10{,}000\), as well. Although the ratio of spatial to temporal observations is more critical in that case, the normalized estimation results were similar. If the condition \(m_n\,n^{-1/2}\rightarrow 0\) is violated, however, we see that the variances of the estimators decrease at a slower rate than \((n\cdot m_n)^{-1/2}\). While the numerical computation of estimators as in Bibinger and Trabs (2020) relies on optimization algorithms, the implementation of our estimators is simple, since they rely on explicit transformations of the data. We use the programming language R and provide a package for these simulations on GitHub.
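
For readers who want a self-contained starting point, the following R sketch simulates the SPDE by the plain truncation method via the spectral decomposition (21), (22), with exact transitions of the Ornstein-Uhlenbeck coordinates started in their stationary law. It is only an illustration under assumed parameter values; it is not the replacement method of Hildebrandt (2020) used for the reported results, and K must be chosen large to avoid the bias discussed above.

```r
# truncation-based simulation of (1) on an equidistant time-space grid (illustrative sketch)
simulate_spde <- function(n, m, K = 1e4, theta0 = 0, theta1 = 1, theta2 = 0.5,
                          sigma = 1, delta = 0.05) {
  Delta  <- 1 / n
  y_grid <- seq(delta, 1 - delta, length.out = m)                     # spatial points in [delta, 1 - delta]
  k      <- 1:K
  lambda <- -theta0 + theta1^2 / (4 * theta2) + pi^2 * k^2 * theta2   # eigenvalues (20)
  # eigenfunctions (19) evaluated on the spatial grid, K x m matrix
  e_ky <- sqrt(2) * sin(pi * outer(k, y_grid)) *
    exp(-theta1 / (2 * theta2) * matrix(y_grid, K, m, byrow = TRUE))
  # exact simulation of the OU coordinates (22), started in their stationary distribution
  x      <- matrix(0, n + 1, K)
  x[1, ] <- rnorm(K, sd = sigma / sqrt(2 * lambda))
  a      <- exp(-lambda * Delta)
  s      <- sigma * sqrt((1 - exp(-2 * lambda * Delta)) / (2 * lambda))
  for (i in 2:(n + 1)) x[i, ] <- a * x[i - 1, ] + s * rnorm(K)
  x %*% e_ky                                                          # (n+1) x m matrix of X_{t_i}(y_j)
}
```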

Fig. 2

Comparison of empirical distributions of normalized estimation errors for \(\kappa \) from simulation with \(n=1000\), \(m=11\), \(\sigma _0^2=1\), \(\kappa =1\) in the left two columns and \(\kappa =6\) in the right two columns. Grey is for \({\hat{\kappa }}_{n,m}^{LS}\), brown for the estimator by Bibinger and Trabs (2020) and yellow for \({\hat{\kappa }}_{n,m}\) (color figure online)

Figure 2 compares empirical distributions of normalized estimation errors

  1. of \({\hat{\kappa }}_{n,m}^{LS}\) (grey) versus Bibinger and Trabs (2020) (brown) and

  2. of \({\hat{\kappa }}_{n,m}\) with known \(\sigma _0^2\) (yellow) compared to \({\hat{\kappa }}_{n,m}^{LS}\) (grey),

for small curvature \(\kappa =1\) in the left two columns, and larger curvature \(\kappa =6\) in the right two columns. The plots are based on a Monte Carlo simulation with 1000 iterations, with \(n=1000\), \(m=11\), and \(\sigma _0^2=1\). We use the standard R density plots with Gaussian kernels and bandwidths selected by Silverman’s rule of thumb. The dotted lines give the corresponding densities of the asymptotic limit distributions. We can report that analogous plots for different parameter values of \(\sigma _0^2\) look (almost) identical. With increasing values of n and m, the fit of the asymptotic distributions becomes more accurate; otherwise the plots look similar as well, as long as \(m\le \sqrt{n}\).

As expected, the efficiency gains of the new method are much more relevant for larger curvature. In particular, in the third plot from the left, for \(\kappa =6\), the new estimator significantly outperforms the one from Bibinger and Trabs (2020). In the first plot from the left, for \(\kappa =1\), in contrast, the two estimators have similar empirical distributions. The fit of the asymptotic normal distributions is reasonably good for all estimators. This is illustrated more clearly in the QQ-normal plots in Fig. 4. Using the true value of \(\sigma _0^2\), the estimator \({\hat{\kappa }}_{n,m}\), as expected, outperforms the other methods. We compare it to our new least squares estimator in the second and fourth plots from the left.

Figure 3 draws a similar comparison of estimated volatilities \(\sigma _0^2\), for unknown \(\kappa \), using the estimator (18b) from the log-linear model and the estimator from Bibinger and Trabs (2020). While for \(\kappa =1\) in the left panel the performance of both methods is similar, for \(\kappa =6\) in the right panel our new estimator outperforms the previous one. Figure 5 gives the QQ-normal plots for the estimation of \(\sigma _0^2\). All plots are based on 1000 Monte Carlo iterations. The QQ-normal plots compare standardized estimation errors to the standard normal distribution. For the estimator from Bibinger and Trabs (2020) we use an estimated asymptotic variance based on plug-in, while for our new estimators the asymptotic variances are known constants.

Fig. 3

Comparison of empirical distributions of normalized estimation errors for \(\sigma _0^2\) from simulation with \(n=1000\), \(m=11\), \(\sigma _0^2=1\), \(\kappa =1\) in the left panel and \(\kappa =6\) in the right panel. Grey is for \(({\hat{\sigma }}_{0}^{2})^{LS}\), and brown for the estimator by Bibinger and Trabs (2020) (color figure online)

Fig. 4

QQ-normal plots for normalized estimation errors for \(\kappa \) from simulation with \(n=1000\), \(m=11\), \(\sigma _0^2=1\), \(\kappa =1\) in the left panel and \(\kappa =6\) in the right panel. Brown (top) is the estimator from Bibinger and Trabs (2020), dark grey is for (18a) and yellow (bottom) for (10) (color figure online)

Fig. 5

QQ-normal plots for normalized estimation errors for \(\sigma _0^2\) from simulation with \(n=1000\), \(m=11\), \(\sigma _0^2=1\), \(\kappa =1\) in the left panel and \(\kappa =6\) in the right panel. Brown (top) is the estimator from Bibinger and Trabs (2020) and dark grey is for the estimator using (18b) (color figure online)

5 Proofs of the theorems

5.1 Preliminaries

The asymptotic analysis is based on the eigendecomposition of the SPDE. The eigenfunctions \((e_k)\) and eigenvalues \((-\lambda _k)\) of the self-adjoint differential operator

$$\begin{aligned}A_\theta =\theta _0+\theta _1\frac{\partial }{\partial y}+\theta _2\frac{\partial ^{2}}{\partial y^{2}}\end{aligned}$$

are given by

$$\begin{aligned} e_k(y)&=\sqrt{2}\sin \big (\pi ky\big )\exp \Big (-\frac{\theta _1}{2\theta _2}y\Big ),\quad y\in [0,1], \end{aligned}$$
(19)
$$\begin{aligned} \lambda _k&=-\theta _0+\frac{\theta _1^2}{4\theta _2}+\pi ^2k^2\theta _2,\qquad k\in \mathbb {N}, \end{aligned}$$
(20)

where all eigenvalues are negative and \((e_k)_{k\ge 1}\) form an orthonormal basis of the Hilbert space \(H_\theta :=\{f:[0,1]\rightarrow \mathbb {R}: \Vert f\Vert _\theta <\infty \}\) with

$$\begin{aligned} \langle f,g\rangle _{\theta }&:=\int _{0}^{1}e^{y\theta _1/\theta _2}f(y)g(y)\,\textrm{d}y\qquad \text {and}\qquad \Vert f\Vert _\theta ^2:=\langle f,f\rangle _\theta . \end{aligned}$$

Let \(\xi \in H_{\theta }\) be the initial condition. We impose the same mild regularity condition on \(X_0=\xi \) as in Assumption 2.2 of Bibinger and Trabs (2020).

Assumption 2

In (1) we assume that

  (i) either \({\mathbb {E}}[\langle \xi ,e_k\rangle _\theta ]=0\) for all \(k\ge 1\) and \(\sup _k \lambda _k{\mathbb {E}}[\langle \xi ,e_k\rangle _\theta ^2]<\infty \) holds true, or \({\mathbb {E}}[\langle A_\theta \xi ,\xi \rangle _\theta ]<\infty \);

  (ii) \((\langle \xi ,e_k\rangle _\theta )_{k\ge 1}\) are independent.

This assumption is more general than the one in Hildebrandt and Trabs (2021), that \((X_t(y))\) is started in equilibrium, and is satisfied for all sufficiently regular functions \(\xi \). We refer to Section 2 of Bibinger and Trabs (2020) for more details on the probabilistic structure.

For the solution \(X_{t}(y)\) from (3), we have the spectral decomposition

$$\begin{aligned} X_{t}(y)=\sum _{k\ge 1}x_{k}(t)e_{k}(y),\,\text {with}~x_{k}(t)=\langle X_{t},e_{k}\rangle _\theta , \end{aligned}$$
(21)

where the coordinate processes \(x_k\) satisfy the Ornstein-Uhlenbeck dynamics:

$$\begin{aligned} \textrm{d}x_{k}(t)=-\lambda _{k}x_{k}(t)\textrm{d}t+\sigma \,\textrm{d}W_{t}^{k},\quad x_{k}(0)=\langle \xi ,e_{k}\rangle _{\theta }, \end{aligned}$$
(22)

with independent one-dimensional Wiener processes \(\{(W_{t}^{k}),k\in \mathbb {N}\}\).

For an integrable random variable Z, we denote its compensated version by

$$\begin{aligned} {\overline{Z}}:=Z-{\mathbb {E}}[Z].\end{aligned}$$
(23)

We use upper-case characters for (compensated) random variables, while sample averages, such as \({\overline{y}}\), are denoted with lower-case characters, except in Example 2 in Sect. 3. The notation (23) is mainly used for \({{\,\mathrm{RV_{n}}\,}}(y)\) in the sequel.

5.2 Proof of Theorem 1

A first-order Taylor expansion of the logarithm and Proposition 3.1 of Bibinger and Trabs (2020) yield that

$$\begin{aligned}&\log \bigg (\frac{{{\,\mathrm{RV_{n}}\,}}(y)}{\sqrt{n}}\bigg )=\log \bigg ({\mathbb {E}}\bigg [\frac{{{\,\mathrm{RV_{n}}\,}}(y)}{\sqrt{n}}\bigg ]+\bigg (\frac{{{\,\mathrm{RV_{n}}\,}}(y)}{\sqrt{n}}-{\mathbb {E}}\bigg [\frac{{{\,\mathrm{RV_{n}}\,}}(y)}{\sqrt{n}}\bigg ]\bigg )\bigg )\nonumber \\&\quad =\log \bigg (e^{-\kappa y}\frac{\sigma _0^2}{\sqrt{\pi }}+{\mathcal {O}}(\Delta _n)\bigg )+\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y)}}{\sqrt{n}\big (e^{-\kappa y}\frac{\sigma _0^2}{\sqrt{\pi }}+{\mathcal {O}}(\Delta _n)\big )}+{\mathcal {O}}_{{\mathbb {P}}}\bigg (\bigg (\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y)}}{\sqrt{n}}\bigg )^2\bigg )\nonumber \\&\quad =-\kappa y+\log \bigg (\frac{\sigma _0^2}{\sqrt{\pi }}\bigg )+{\mathcal {O}}(\Delta _n)+\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y)}\sqrt{\pi }e^{\kappa y}}{\sqrt{n}\sigma _0^2}\big (1+{\mathcal {O}}(\Delta _n)\big )+{\mathcal {O}}_{{\mathbb {P}}}(\Delta _n)\nonumber \\&\quad =-\kappa y+\log \bigg (\frac{\sigma _0^2}{\sqrt{\pi }}\bigg )+\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y)}}{\sqrt{n}}\frac{\sqrt{\pi }e^{\kappa y}}{\sigma _0^2}+{\mathcal {O}}_{{\mathbb {P}}}\big (\Delta _n\big ), \end{aligned}$$
(24)

for some spatial point y. The remainders called \( R_{n,j}\) in (6), and \({\tilde{R}}_{n,j}\) in (LLM), are contained in the last two addends. This yields for the estimator (10) that

$$\begin{aligned} {\hat{\kappa }}_{n,m}=\kappa -\sum _{j=1}^{m_n}\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y_j)}}{\sqrt{n}}\frac{y_je^{\kappa y_j} \sqrt{\pi }}{\sigma _0^2\sum _{j=1}^{m_n}y_j^2}+{\mathcal {O}}_{{\mathbb {P}}}(\Delta _n), \end{aligned}$$
(25)

where we conclude the order of the remainder, since under Assumption 1 it holds that

$$\begin{aligned} \frac{\sum _{j=1}^{m_n} y_j}{\sum _{j=1}^{m_n} y_j^2}\Delta _n={\mathcal {O}}(\Delta _n). \end{aligned}$$

Since under Assumption 1, \(\sqrt{nm_n}\Delta _n\rightarrow 0\), it suffices to prove a clt for the leading term from above:

$$\begin{aligned}\sum _{i=1}^n\zeta _{n,i}:=\sqrt{m_n}\sum _{j=1}^{m_n}\overline{{{\,\mathrm{RV_{n}}\,}}(y_j)}\frac{\sqrt{\pi }y_je^{\kappa y_j}}{\sigma _0^2\sum _{j=1}^{m_n}y_j^2}~{\mathop {\longrightarrow }\limits ^{d}}~{\mathcal {N}}\bigg (0,\frac{3\Gamma \pi }{1-\delta +\delta ^2}\bigg ),\end{aligned}$$

where \(\zeta _{n,i}\) collects the ith squared increments of the realized volatilities \({{\,\mathrm{RV_{n}}\,}}(y_j)\). Note that summation over time (increments) is always indexed by i, and summation over spatial points by j. Although this leading term is linear in the realized volatilities, we cannot directly adopt a clt from Bibinger and Trabs (2020) due to the different structure of the weights. Thus, we require an original proof of the clt, for which we can reuse some ingredients from Bibinger and Trabs (2020).

We begin with the asymptotic variance. We can adopt Lemma 6.4 and Proposition 6.5 from Bibinger and Trabs (2020) and obtain for any \(\eta \in (0,1)\) that

$$\begin{aligned} {\mathbb {V}}\text {ar}\bigg (\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y)\,e^{\kappa y}\bigg )&=\frac{\Gamma \sigma _0^4}{n}\big (1+{\mathcal {O}}(\Delta _n^\eta +\Delta _n^{1/2}\delta ^{-1})\big ),\end{aligned}$$
(26)
$$\begin{aligned} {\mathbb {C}}\text {ov}\bigg (\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y)\,e^{\kappa y},\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(u)\,e^{\kappa u}\bigg )&={\mathcal {O}}\bigg (\Delta _n^{3/2}\Big (\left| y-u\right| ^{-1}+\delta ^{-1}\Big )\bigg ), \end{aligned}$$
(27)

for any spatial points y and u. We obtain that

$$\begin{aligned}&\lim _{n\rightarrow \infty } {\mathbb {V}}\text {ar}\left( \sum _{i=1}^n\zeta _{n,i}\right) =\lim _{n\rightarrow \infty }\frac{m_n\pi }{\sigma _0^4\big (\sum _{j=1}^{m_n}y_j^2\big )^2}{\mathbb {V}}\text {ar}\left( \sum _{j=1}^{m_n} {{\,\mathrm{RV_{n}}\,}}(y_j)\,e^{\kappa y_j}y_j\right) \\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n\pi }{\sigma _0^4\left( \sum _{j=1}^{m_n}y_j^2\right) ^2}{\mathbb {V}}\text {ar}\left( \frac{1}{\sqrt{n}}\sum _{j=1}^{m_n}{{\,\mathrm{RV_{n}}\,}}(y_j)e^{\kappa y_j}y_j\right) \\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n\pi }{\sigma _0^4\left( \sum _{j=1}^{m_n}y_j^2\right) ^2} \left( \sum _{j=1}^{m_n}y_j^2\,{\mathbb {V}}\text {ar}\left( \frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y_j)\,e^{\kappa y_j}\right) \right. \\&\left. \quad +\sum _{j\ne l} y_jy_l{\mathbb {C}}\text {ov}\left( \frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y_j)\,e^{\kappa y_j},\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y_l)\,e^{\kappa y_l}\right) \right) \\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n\pi }{\sigma _0^4 \left( \sum _{j=1}^{m_n}y_j^2\right) ^2} \left( \frac{\Gamma \sigma _0^4\sum _{j=1}^{m_n}y_j^2}{n} \left( 1+{\mathcal {O}}(\Delta _n^\eta )\right) \right. \\&\left. \quad + {\mathcal {O}}\left( \Delta _n^{3/2}\left( \sum _{j\ne l}\frac{y_jy_l}{\left| y_j-y_l\right| }+m_n^2\delta ^{-1}\right) \right) \right) \\&\quad =\lim _{n\rightarrow \infty } \frac{(1-2\delta )\Gamma \pi }{\frac{(1-2\delta )}{m_n}\sum _{j=1}^{m_n}y_j^2}\big (1+{\mathcal {O}}(\Delta _n^{\eta })\big )+{\mathcal {O}}\bigg (\Delta _n^{1/2}\Big (\sum _{j\ne l}\frac{y_j y_l}{m_n\left| y_j-y_l\right| }+\frac{m_n}{\delta }\Big )\bigg )\\&\quad =\lim _{n\rightarrow \infty } \frac{(1-2\delta )\Gamma \pi }{\frac{(1-2\delta )}{m_n}\sum _{j=1}^{m_n}y_j^2}\big (1+{\mathcal {O}}(\Delta _n^\eta )\big )+{\mathcal {O}}\bigg (\Delta _n^{1/2}\Big (m_n\log (m_n)+\frac{m_n}{\delta }\Big )\bigg )\\&\quad =\frac{\Gamma \pi (1-2\delta )}{\int _{\delta }^{1-\delta }y^2\textrm{d}y}=\frac{3\Gamma \pi (1-2\delta )}{(1-\delta )^3-\delta ^3}=\frac{3\Gamma \pi }{1-\delta +\delta ^2}. \end{aligned}$$

The assumption that \(y_1=\delta \), \(y_m=1-\delta \), is used only for the convergence of the Riemann sum in the last step. For the covariances, we used Assumption 1 and an elementary estimate

$$\begin{aligned} \sum _{j\ne l}\frac{y_j y_l}{\left| y_j-y_l\right| }={\mathcal {O}}\bigg ( \sum _{r=1}^{m_n}\sum _{l=1}^{m_n}\frac{(l+r) \, l}{m_n\,r}\bigg )={\mathcal {O}}\big (m_n^2+m_n^2\log (m_n)\big ).\end{aligned}$$
(28)

Since \(\Delta _n^{1/2}m_n\log (m_n)\rightarrow 0\) under Assumption 1, the remainders are negligible.

Next, we establish a covariance inequality for the empirical characteristic function. There exists a constant C, such that for all \(t\in \mathbb {R}\):

$$\begin{aligned} \left| {\mathbb {C}}\text {ov}\big (\exp \big ({\textrm{i}tQ_a^b}\big ),\exp \big ({\textrm{i}t Q_{b+u}^v}\big )\big )\right| \le \frac{C\,t^2}{u^{3/4}}\sqrt{{\mathbb {V}}\text {ar}(Q_a^b){\mathbb {V}}\text {ar}(Q_{b+u}^v)}, \end{aligned}$$
(29)

where \(Q_a^b:=\sum _{i=a}^b\zeta _{n,i}\), for natural numbers \(1\le a\le b < b+u\le v\le n\).

Let \(u\ge 2\); the case \(u=1\) can be treated separately and is in fact easier. By the spectral decomposition (21),

$$\begin{aligned}X_{i\Delta _n}(y_j)-X_{(i-1)\Delta _n}(y_j)=\sum _{k\ge 1}\big (x_k({i\Delta _n})-x_k({(i-1)\Delta _n})\big )e_k(y_j).\end{aligned}$$

The increments of the Ornstein-Uhlenbeck processes \((x_k(t))\) from (22) contain terms

$$\begin{aligned}\int _0^{(i-1)\Delta _n}e^{-\lambda _k((i-1)\Delta _n-s)}(e^{-\lambda _k \Delta _n}-1)\,\sigma \textrm{d}W_s^k,\end{aligned}$$

which depend on the path of \((W^k_t,0\le t\le (i-1)\Delta _n)\). Defining

$$\begin{aligned} A_2(y_j)&=\sum _{i=b+u}^{v}\bigg (\sum _{k\ge 1}\Big (x_k({i\Delta _n})-x_k({(i-1)\Delta _n}) \nonumber \\&-\int _0^{b\Delta _n} e^{-\lambda _k((i-1)\Delta _n-s)}(e^{-\lambda _k \Delta _n}-1)\,\sigma \textrm{d}W_s^k\Big )e_k(y_j)\bigg )^2,\end{aligned}$$
(30)

we can write with the notation (23) for squared increments

$$\begin{aligned} Q_{b+u}^v&=\frac{\sqrt{m_n}\sqrt{\pi }}{\sigma _0^2\sum _{j=1}^{m_n}y_j^2}\sum _{j=1}^{m_n}\sum _{i=b+u}^{v}\overline{\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j)}y_j e^{\kappa y_j}\\&=\frac{\sqrt{m_n}\sqrt{\pi }}{\sigma _0^2\sum _{j=1}^{m_n}y_j^2}\sum _{j=1}^{m_n}\big (A_1(y_j)+A_2(y_j)\big )y_j e^{\kappa y_j}\\&=B_1+B_2,\end{aligned}$$

where \(A_1\) is defined implicitly by \(A_2\) and \(Q_{b+u}^v\), and \(B_r\), \(r=1,2\), denote the corresponding weighted sums over \(A_r\). Analogous terms \(A_r\) have been considered in Proposition 6.6 of Bibinger and Trabs (2020). This decomposition is useful, since \(B_2\) is independent of \(Q_a^b\). Analogously to the proof of Proposition 6.6 in Bibinger and Trabs (2020), we have for all j that

$$\begin{aligned} {\mathbb {V}}\text {ar}\big ({A}_1(y_j)\big )\le \frac{{\tilde{C}}\sigma ^4(v-b-u+1)\Delta _n}{(u-1)^{3/2}}, \end{aligned}$$

with some constant \({\tilde{C}}\), and from Eq. (59) of Bibinger and Trabs (2020) that

$$\begin{aligned} {\mathbb {C}}\text {ov}\big (A_1(y_j),A_1(y_l)\big )={\mathcal {O}}\bigg (\frac{\Delta _n^{3/2}(v-b-u+1)}{(u-1)^{3/2}}\frac{1}{\left| y_j-y_l\right| }\bigg ). \end{aligned}$$

Thereby, we obtain that

$$\begin{aligned}&{\mathbb {V}}\text {ar}(B_1)=\frac{\pi m_n}{\sigma _0^4\big (\sum _{j=1}^{m_n}y_j^2\big )^2}\bigg (\sum _{j=1}^{m_n}e^{2\kappa y_j}y_j^2\,{\mathbb {V}}\text {ar}\big (A_1(y_j)\big )\\&+\sum _{j\ne l}e^{\kappa (y_j+y_l)}y_jy_l\,{\mathbb {C}}\text {ov}\big ({A}_1(y_j),{A}_1(y_l)\big )\bigg )\\&\le \frac{\pi m_n}{\sigma _0^4\big (\sum _{j=1}^{m_n}y_j^2\big )^2} \frac{{\tilde{C}}\sigma ^4(v-b-u+1)\Delta _n}{(u-1)^{3/2}} e^{2\kappa }\sum _{j=1}^{m_n} y_j^2\\&+{\mathcal {O}}\bigg (\frac{1}{m_n}\, \frac{\Delta _n^{3/2}(v-b-u+1)}{(u-1)^{3/2}}m_n^2\log (m_n)\bigg ) \\&\le \frac{C'(v-b-u+1)\Delta _n}{(u-1)^{3/2}}+ {\mathcal {O}}\bigg (\frac{\Delta _n^{3/2}(v-b-u+1)}{(u-1)^{3/2}}m_n\log (m_n)\bigg ), \end{aligned}$$

with a constant \(C'\), where we use that \(m_n\big (\sum _j y_j^2\big )^{-1}\) is bounded and (28). Since \(\Delta _n^{1/2}m_n\log (m_n)\rightarrow 0\), we find a constant \(C''\), such that

$$\begin{aligned}&{\mathbb {V}}\text {ar}(B_1)\le \frac{C''(v-b-u+1)\Delta _n}{(u-1)^{3/2}}. \end{aligned}$$
(31)

With the variance-covariance structure of \((\zeta _{n,i})\), we obtain with some constants \(C_r\), \(r=1,2,3\), that

$$\begin{aligned} {\mathbb {V}}\text {ar}(Q_{b+u}^{v})&\ge C_1\frac{m_n\pi }{\sigma _0^4\big (\sum _{j=1}^{m_n}y_j^2\big )^2}\sum _{j=1}^{m_n}y_j^2\sum _{i=b+u}^v{\mathbb {V}}\text {ar}(\zeta _{n,i})e^{2\kappa y_j}\nonumber \\&=C_2\frac{\Delta _n m_n (v-b-u+1)}{\sum _{j=1}^{m_n}y_j^2}\ge C_3 (v-b-u+1)\Delta _n. \end{aligned}$$
(32)

Since Eq. (54) from Bibinger and Trabs (2020), applied to our decomposition with \(B_1\) and \(B_2\), yields that

$$\begin{aligned}\left| {\mathbb {C}}\text {ov}\big (\exp \big ({\textrm{i}tQ_a^b}\big ),\exp \big ({\textrm{i}t Q_{b+u}^v}\big )\big )\right| \le 2t^2\sqrt{{\mathbb {V}}\text {ar}(Q_a^b)\,{\mathbb {V}}\text {ar}(B_1)},\end{aligned}$$

(31) and (32) imply (29).

A Lindeberg condition for the triangular array \((\zeta _{n,i})\) is implied by the stronger Lyapunov condition. The latter is satisfied, since

$$\begin{aligned}&\sum _{i=1}^n{\mathbb {E}}\big [\left| \zeta _{n,i}\right| ^4\big ]\le \frac{m_n^2\pi ^2}{\sigma _0^8\big (\sum _{j=1}^{m_n}y_j^2\big )^4}\sum _{i=1}^n{\mathbb {E}}\Big [\Big (\sum _{j=1}^{m_n}\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j) y_j e^{\kappa y_j}\Big )^4\Big ]\\&\quad \le \frac{m_n^{-2}\pi ^2}{\sigma _0^8\big (m_n^{-1}\sum _{j=1}^{m_n}y_j^2\big )^4}e^{4\kappa }\sum _{i=1}^n\sum _{j,k,u,v=1}^{m_n}{\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j)\\&\quad \big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_k)\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_u)\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_v)\big ]\\&\quad \le \frac{m_n^{-2}\pi ^2}{\sigma _0^8\big (m_n^{-1}\sum _{j=1}^{m_n}y_j^2\big )^4}e^{4\kappa }\sum _{i=1}^n\sum _{j,k,u,v=1}^{m_n}\bigg ({\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_j)\big ]\\&\quad \times {\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_k)\big ]{\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_u)\big ]\\&\quad \times {\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_v)\big ]\bigg )^{1/4}\\&\quad ={\mathcal {O}}\big (m_n^2n\Delta _n^2\big )={\mathcal {O}}\big (m_n^2\Delta _n\big ). \end{aligned}$$

In the last step the inner sum is estimated by a factor \(m_n^4\), and we only use the temporal regularity of \((X_t(y))_{t\ge 0}\). As \(m_n^2\Delta _n\rightarrow 0\), we conclude the Lyapunov condition which, together with (29) and the asymptotic analysis of the variance, yields the clt for the centered triangular array \((\zeta _{n,i})\) by an application of Theorem B from Peligrad and Utev (1997).

5.3 Proof of Theorem 3

We establish first the asymptotic variances and covariance of the estimators before proving a bivariate clt. With (24), we obtain that

$$\begin{aligned} {\hat{\kappa }}_{n,m}^{LS}-\kappa =\frac{\sum _{j=1}^{m_n}\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y_j)}}{\sqrt{n}}\frac{\sqrt{\pi }e^{\kappa y_j}}{\sigma _0^2}\big (y_j-{\overline{y}}\big )}{m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2}+{\mathcal {O}}_{{\mathbb {P}}}(\Delta _n), \end{aligned}$$

since for the remainders it holds true that

$$\begin{aligned} \max \bigg (\frac{\sum _{j=1}^{m_n}y_j\,{\mathcal {O}}_{{\mathbb {P}}}(\Delta _n)}{m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2},\frac{{\overline{y}}\,{\mathcal {O}}_{{\mathbb {P}}}(m_n\Delta _n)}{m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2}\bigg )={\mathcal {O}}_{{\mathbb {P}}}(\Delta _n). \end{aligned}$$

We can use (26) and (27) to compute the asymptotic variance:

$$\begin{aligned}&\lim _{n\rightarrow \infty } {\mathbb {V}}\text {ar}\Big (\sqrt{nm_n}({\hat{\kappa }}_{n,m}^{LS}-\kappa )\Big )\\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n \pi }{\sigma _0^4\big (m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2\big )^2}{\mathbb {V}}\text {ar}\bigg (\sum _{j=1}^{m_n} \frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\,e^{\kappa y_j}\big (y_j-{\overline{y}}\big )\bigg )\\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n\pi }{\sigma _0^4\big (m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2\big )^2}\bigg (\sum _{j=1}^{m_n}\big (y_j-{\overline{y}}\big )^2\,{\mathbb {V}}\text {ar}\Big (\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y_j)\,e^{\kappa y_j}\Big )\\&\quad +\sum _{j\ne l} \big (y_j-{\overline{y}}\big )\big (y_l-{\overline{y}}\big ){\mathbb {C}}\text {ov}\Big (\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y_j)\,e^{\kappa y_j},\frac{1}{\sqrt{n}}{{\,\mathrm{RV_{n}}\,}}(y_l)\,e^{\kappa y_l}\Big )\bigg )\\&\quad =\lim _{n\rightarrow \infty } \bigg (\frac{n m_n \pi }{\sigma _0^4\big (\sum _{u=1}^{m_n}y_u^2-m_n({\overline{y}})^2\big )}\frac{\Gamma \sigma _0^4}{n}\big (1+{\mathcal {O}}(\Delta _n^{\eta })\big )\\&\quad +{\mathcal {O}}\bigg (\frac{m_n\Delta _n^{1/2}}{\sigma _0^4\big (\sum _{u=1}^{m_n}y_u^2-m_n({\overline{y}})^2\big )^2}\sum _{j\ne l}\big (y_j-{\overline{y}}\big ) \big (y_l-{\overline{y}}\big )\Big (\frac{1}{\left| y_j-y_l\right| }+\frac{1}{\delta }\Big )\bigg )\bigg )\\&\quad =\lim _{n\rightarrow \infty }\frac{\Gamma \pi }{m_n^{-1}\sum _{u=1}^{m_n}y_u^2-({\overline{y}})^2}\\&\quad =\frac{\Gamma \pi }{(1-2\delta )^{-1}\int _{\delta }^{1-\delta }y^2\textrm{d}y-\Big ((1-2\delta )^{-1}\int _{\delta }^{1-\delta }y\,\textrm{d}y\Big )^2}=\frac{12\Gamma \pi }{(1-2\delta )^2}. \end{aligned}$$

We used that the sum of covariances is of order

$$\begin{aligned} {\mathcal {O}}\Big (m_n^{-1}\Delta _n^{1/2}\sum _{j\ne l}\frac{\big (y_j-{\overline{y}}\big ) \big (y_l-{\overline{y}}\big )}{\left| y_j-y_l\right| }\Big )={\mathcal {O}}\Big (\Delta _n^{1/2}m_n\log (m_n)\Big )={\scriptstyle {{\mathcal {O}}}}(1). \end{aligned}$$
(33)

For the estimator (18b), we obtain that

$$\begin{aligned} {\widehat{\alpha }}^{LS}(\sigma _{0}^{2})={\overline{y}}({\hat{\kappa }}_{n,m}^{LS}-\kappa )+\alpha (\sigma _{0}^{2})+\frac{1}{m_n}\sum _{j=1}^{m_n}\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y_j)}}{\sqrt{n}}\frac{\sqrt{\pi }e^{\kappa y_j}}{\sigma _0^2}+{\mathcal {O}}_{{\mathbb {P}}}(\Delta _n), \end{aligned}$$

such that

$$\begin{aligned} {\widehat{\alpha }}^{LS}(\sigma _{0}^{2})-\alpha (\sigma _{0}^{2})=\frac{\sum _{j=1}^{m_n}\frac{\overline{{{\,\mathrm{RV_{n}}\,}}(y_j)}}{\sqrt{n}}\frac{\sqrt{\pi }e^{\kappa y_j}}{\sigma _0^2}\Big (y_j{\overline{y}}-m_n^{-1}\sum _{j=1}^{m_n}y_j^2\Big )}{m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2}+{\mathcal {O}}_{{\mathbb {P}}}(\Delta _n). \end{aligned}$$

With (26) and (27) and analogous steps as above, the asymptotic variance yields

$$\begin{aligned}&\lim _{n\rightarrow \infty } {\mathbb {V}}\text {ar}\Big (\sqrt{nm_n}\,\Big ({\widehat{\alpha }}^{LS}(\sigma _{0}^{2})-\alpha (\sigma _{0}^{2})\Big )\Big )\\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n\pi }{\sigma _0^4\big (m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2\big )^2}\sum _{j=1}^{m_n}\Big (y_j{\overline{y}}-m_n^{-1}\sum _{u=1}^{m_n}y_u^2\Big )^2\,{\mathbb {V}}\text {ar}\Big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\,e^{\kappa y_j}\Big )\\&\quad =\lim _{n\rightarrow \infty }\frac{m_n\Gamma \pi }{\Big (\sum _{u=1}^{m_n}y_u^2-m_n({\overline{y}})^2\Big )^2}\sum _{u=1}^{m_n}y_u^2\Big (m_n^{-1}\sum _{u=1}^{m_n}y_u^2-({\overline{y}})^2\Big )\\&\quad =\lim _{n\rightarrow \infty }\Gamma \pi \,\frac{\sum _{u=1}^{m_n}y_u^2}{\sum _{u=1}^{m_n}y_u^2-m_n({\overline{y}})^2}\\&\quad =\Gamma \pi \,\frac{(1-2\delta )^{-1}\int _{\delta }^{1-\delta }y^2\textrm{d}y}{(1-2\delta )^{-1}\int _{\delta }^{1-\delta }y^2\textrm{d}y-\Big ((1-2\delta )^{-1}\int _{\delta }^{1-\delta }y\,\textrm{d}y\Big )^2}. \end{aligned}$$

The covariance terms for spatial points \(y_j\ne y_u\) are asymptotically negligible by a similar estimate as in (33). The asymptotic covariance between both estimators yields

$$\begin{aligned}&\lim _{n\rightarrow \infty } nm_n{\mathbb {C}}\text {ov}\Big ({\widehat{\alpha }}^{LS}(\sigma _{0}^{2}),\,{\hat{\kappa }}_{n,m}^{LS}\Big )\\&\quad =\lim _{n\rightarrow \infty }\frac{nm_n\pi }{\sigma _0^4\big (m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2\big )^2}\sum _{j=1}^{m_n}\Big (y_j{\overline{y}}-\frac{\sum _{u=1}^{m_n}y_u^2}{m_n}\Big )\Big (y_j-{\overline{y}}\Big )\,{\mathbb {V}}\text {ar}\Big (\frac{{{\,\mathrm{RV_{n}}\,}}(y_j)}{\sqrt{n}}\,e^{\kappa y_j}\Big )\\&\quad =\lim _{n\rightarrow \infty }m_n\Gamma \pi \,\frac{{\overline{y}}\Big (\sum _{u=1}^{m_n}y_u^2-m_n({\overline{y}})^2\Big )}{\Big (\sum _{u=1}^{m_n}y_u^2-m_n({\overline{y}})^2\Big )^2}\\&\quad =\lim _{n\rightarrow \infty }\Gamma \pi \,\frac{{\overline{y}}}{m_n^{-1}\sum _{u=1}^{m_n}y_u^2-({\overline{y}})^2}\\&\quad =\Gamma \pi \,\frac{(1-2\delta )^{-1}\int _{\delta }^{1-\delta }y\,\textrm{d}y}{(1-2\delta )^{-1}\int _{\delta }^{1-\delta }y^2\textrm{d}y-\Big ((1-2\delta )^{-1}\int _{\delta }^{1-\delta }y\,\textrm{d}y\Big )^2}. \end{aligned}$$

The covariance terms for spatial points \(y_j\ne y_u\) are asymptotically negligible by a similar estimate as in (33). Computing the elementary integrals and simple transformations yield the asymptotic variance-covariance matrix \(\Sigma \) in Theorem 3.

To establish the bivariate clt, it suffices to prove the clt for the \(\mathbb {R}^2\)-valued triangular array

$$\begin{aligned}\Xi _{n,i}=\frac{\sqrt{m_n\pi }}{\sigma _0^2\big (m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2\big )}\sum _{j=1}^{m_n}\overline{\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j)}e^{\kappa y_j}\left( \begin{array}{c}y_j-{\overline{y}}\\ y_j\,{\overline{y}}-\frac{\sum _{u=1}^{m_n}y_u^2}{{m_n}}\end{array}\right) .\end{aligned}$$

Here, we use the notation (23) for the squared time increments. Summed over i, the first entry of this vector is the leading term of \(\sqrt{nm_n}({\hat{\kappa }}_{n,m}^{LS}-\kappa )\), and the second entry is the leading term of \(\sqrt{nm_n}\,\big ({\widehat{\alpha }}^{LS}(\sigma _{0}^{2})-\alpha (\sigma _{0}^{2})\big )\). We apply the Cramér-Wold device and Theorem B from Peligrad and Utev (1997). Taking the scalar product with an arbitrary \(\gamma \in \mathbb {R}^2\), we obtain by linearity that

$$\begin{aligned}\langle \gamma ,\Xi _{n,i}\rangle&=S_{mn}\sum _{j=1}^{m_n}\overline{\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j)}e^{\kappa y_j} G_j^{\gamma },\\ \text{ with }~~~~S_{mn}&=\frac{\sqrt{m_n\pi }}{\sigma _0^2\big (m_n({\overline{y}})^2-\sum _{u=1}^{m_n}y_u^2\big )},\quad \text{ and }~~~~G_j^{\gamma }=\bigg \langle \gamma , \left( \begin{array}{c}y_j-{\overline{y}}\\ y_j\,{\overline{y}}-\frac{1}{m_n}\sum _{u=1}^{m_n}y_u^2\end{array}\right) \bigg \rangle . \end{aligned}$$

Note that for any \(\gamma \in \mathbb {R}^2\), \(S_{mn}\,G_j^{\gamma }\) is uniformly in j bounded by a constant, such that the structure for proving a covariance inequality for the empirical characteristic function and a Lyapunov condition is analogous to the one-dimensional case. With \(\xi _{n,i}^{\gamma }:=\langle \gamma ,\Xi _{n,i}\rangle \),

$$\begin{aligned}\sum _{i=1}^n\xi _{n,i}^{\gamma }=\Big \langle \gamma ,\sum _{i=1}^n\Xi _{n,i}\Big \rangle =\sum _{i=1}^{n}\langle \gamma ,\Xi _{n,i}\rangle ,\end{aligned}$$

we obtain for \({\tilde{Q}}_a^b:=\sum _{i=a}^b\xi _{n,i}^{\gamma }\), that

$$\begin{aligned} {\tilde{Q}}_{b+u}^v&=S_{mn}\sum _{j=1}^{m_n}\big (A_1(y_j)+A_2(y_j)\big )e^{\kappa y_j}\,G_j^{\gamma },\end{aligned}$$

with the same terms \(A_1(y_j)\) and \(A_2(y_j)\) as in the proof of (29). Therefore, using the same bounds as in the proof of (29), we obtain that

$$\begin{aligned} \left| {\mathbb {C}}\text {ov}\big (\exp \big ({\textrm{i}t{\tilde{Q}}_a^b}\big ),\exp \big ({\textrm{i}t {\tilde{Q}}_{b+u}^v}\big )\big )\right| \le \frac{C\,t^2}{u^{3/4}}\sqrt{{\mathbb {V}}\text {ar}\big ({\tilde{Q}}_a^b\big ){\mathbb {V}}\text {ar}\big ({\tilde{Q}}_{b+u}^v\big )}, \end{aligned}$$
(34)

for all \(t\in \mathbb {R}\), for natural numbers \(1\le a\le b < b+u\le v\le n\), and for some constant C.

The Lyapunov condition for the triangular array \((\xi _{n,i}^{\gamma })\) holds, since

$$\begin{aligned}&\sum _{i=1}^n{\mathbb {E}}\big [\left| \xi _{n,i}^{\gamma }\right| ^4\big ]=\sum _{i=1}^n S_{mn}^4 {\mathbb {E}}\Big [\Big (\sum _{j=1}^{m_n}\overline{\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j)} e^{\kappa y_j}G_j^{\gamma }\Big )^4\Big ]\\&\quad \le C\,e^{4\kappa }\sum _{i=1}^n\sum _{j,k,u,v=1}^{m_n}{\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_j)\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_k)\\&\quad \big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_u)\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^2(y_v)\big ]\\&\quad \le C\,e^{4\kappa }\sum _{i=1}^n\sum _{j,k,u,v=1}^{m_n}\bigg ({\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_j)\big ]\\&\quad \times {\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_k)\big ]{\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_u)\big ]\\&\quad \times {\mathbb {E}}\big [\big (X_{i\Delta _n}-X_{(i-1)\Delta _n}\big )^8(y_v)\big ]\bigg )^{1/4}\\&\quad ={\mathcal {O}}\big (m_n^2n\Delta _n^2\big )={\mathcal {O}}\big (m_n^2\Delta _n\big ). \end{aligned}$$

for some constant C. As \(m_n^2\Delta _n\rightarrow 0\), we conclude the Lyapunov condition which together with (34) and the asymptotic variance-covariance structure yields the clt for the triangular array \((\xi _{n,i}^{\gamma })\), for any \(\gamma \in \mathbb {R}^2\), by an application of Theorem B from Peligrad and Utev (1997). We conclude with the Cramér-Wold device.

5.4 Proof of Theorem 2

Theorem 2 is established as a simple corollary of Theorem 3 by showing that the two estimators (12) and (18a) coincide. This is based on the identity that, for vectors \(y,z\in \mathbb {R}^m\),

$$\begin{aligned} \sum _{j\ne l}(z_j-z_l)(y_j-y_l)=2\,m\,\sum _{j=1}^m\big (z_j-{\overline{z}}\big )\big (y_j-{\overline{y}}\big )=2\,m\,\sum _{j=1}^m z_j\big (y_j-{\overline{y}}\big ),\end{aligned}$$
(35)

using our standard notation for the means \({\overline{y}}\) and \({\overline{z}}\) of the vectors. Identity (35) holds, since

$$\begin{aligned} \sum _{j\ne l}(z_j-z_l)(y_j-y_l)&=\sum _{j, l=1}^m(z_j-z_l)(y_j-y_l)\\&=m\sum _{j=1}^m z_j\,y_j-2\,\sum _{j, l=1}^my_j\, z_l+m\sum _{l=1}^m z_l \,y_l\\&=2m \sum _{j=1}^m z_j\,y_j-2m^2\,{\overline{y}}\,{\overline{z}}, \end{aligned}$$

and by the transformation

$$\begin{aligned} \sum _{j=1}^m\big (z_j-{\overline{z}}\big )\big (y_j-{\overline{y}}\big )=\sum _{j=1}^m z_j\,y_j - m\,{\overline{y}}\,{\overline{z}}. \end{aligned}$$

Applying (35) twice, to the numerator and to the denominator of (12), yields the estimator (18a). We hence conclude the clt in Theorem 2 as the marginal clt from the bivariate clt given in Theorem 3.
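
As a quick sanity check (with arbitrary hypothetical vectors), the identity (35) can be verified numerically in R:

```r
# numerical check of identity (35)
set.seed(2)
m <- 7
y <- runif(m); z <- rnorm(m)
lhs <- sum(outer(z, z, "-") * outer(y, y, "-"))    # sum over all pairs; diagonal terms are zero
rhs <- 2 * m * sum((z - mean(z)) * (y - mean(y)))
all.equal(lhs, rhs)   # TRUE
```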