Efficient estimation for the volatility of stochastic interest rate models

  • Regular Article
  • Published in Statistical Papers

Abstract

The joint analysis of non-stationary and high-frequency financial data poses theoretical challenges because such massive data vary with time and possess no fixed density function. This paper proposes a local linear smoothing method, based on Gamma asymmetric kernels, to estimate the unknown volatility function in scalar diffusion models for high-frequency financial big data. Under mild conditions, we obtain the asymptotic normality of the estimator at both interior and boundary design points. Besides the standard properties of the local linear estimator, such as a simple bias representation and boundary bias correction, local linear smoothing with Gamma asymmetric kernels enjoys extra advantages such as variance reduction and resistance to sparse design, which we validate through a finite-sample simulation study and an empirical analysis of the 6-month Shanghai Interbank Offered Rate (Shibor) in China.


References

  • Aït-Sahalia Y (1996) Nonparametric pricing of interest rate derivative securities. Econometrica 64(3):527–560

  • Bandi FM, Phillips PCB (2003) Fully nonparametric estimation of scalar diffusion models. Econometrica 71(1):241–283

  • Chen SX (2000) Probability density function estimation using Gamma kernels. Ann Inst Stat Math 52(3):471–480

  • Chen SX (2002) Local linear smoothers using asymmetric kernels. Ann Inst Stat Math 54(2):312–323

  • Eisenbaum N, Kaspi H (2007) On the continuity of local times of Borel right Markov processes. Ann Probab 35(3):915–934

  • Fan J (1993) Local linear regression smoothers and their minimax efficiencies. Ann Stat 21(1):196–216

  • Fan J, Fan Y, Jiang J (2007) Dynamic integration of time- and state-domain methods for volatility estimation. J Am Stat Assoc 102:618–631

  • Fan J, Gijbels I (1992) Variable bandwidth and local linear regression smoothers. Ann Stat 20(4):2008–2036

  • Fan J, Gijbels I (1996) Local polynomial modeling and its applications. Chapman and Hall, London

  • Fan J, Zhang C (2003) A reexamination of diffusion estimators with applications to financial model validation. J Am Stat Assoc 98(461):118–134

  • Jacod J (2012) Statistics and high-frequency data. In: Statistical methods for stochastic differential equations. CRC Press, Boca Raton, pp 191–310

  • Jiang H, Dong X (2015) Parameter estimation for the non-stationary Ornstein–Uhlenbeck process with linear drift. Stat Pap 56:257–268

  • Jiang G, Knight J (1997) A nonparametric approach to the estimation of diffusion processes, with an application to a short-term interest rate model. Econom Theory 13(5):615–645

  • Karatzas I, Shreve S (2000) Brownian motion and stochastic calculus. Springer, New York

  • Karlin S, Taylor H (1981) A second course in stochastic processes. Elsevier, Amsterdam

  • Meyn SP, Tweedie RL (1993) Stability of Markovian processes II: continuous-time processes and sampled chains. Adv Appl Probab 25:487–517

  • Park J, Phillips PCB (2001) Nonlinear regressions with integrated time series. Econometrica 69(1):117–161

  • Phillips PCB, Magdalinos T (2007) Limit theory for moderate deviations from a unit root. J Econom 136(1):115–130

  • Revuz D, Yor M (2005) Continuous martingales and Brownian motion. Springer, New York

  • Rosa A, Nogueira M (2016) Nonparametric estimation of a regression function using the gamma kernel method in ergodic processes. arXiv:1605.07520v2 [math.ST]

  • Shen G, Yu Q (2017) Least squares estimator for Ornstein–Uhlenbeck processes driven by fractional Lévy processes from discrete observations. Stat Pap. https://doi.org/10.1007/s00362-017-0918-4

  • Xu Z (2003) Statistical inference for diffusion processes. PhD thesis, East China Normal University


Acknowledgements

The authors would like to thank the editor, associate editor and three anonymous referees for their valuable suggestions, which greatly improved our paper. This research work is supported by National Natural Science Foundation of China (11901397), Ministry of Education, Humanities and Social Sciences Project (18YJCZH153), National Statistical Science Research Project (2018LZ05), Youth Academic Backbone Cultivation Project of Shanghai Normal University (310-AC7031-19-003021), General Research Fund of Shanghai Normal University (SK201720) and Key Subject of Quantitative Economics (310-AC7031-19-004221) and Academic Innovation Team (310-AC7031-19-004228) of Shanghai Normal University. We also thank Dr. HanChao Wang for his helpful comments for this article.

Author information

Correspondence to Yuping Song.


Electronic supplementary material


Supplementary material 1 (csv 52 KB)

Appendices

Appendix

In this section, we present some technical lemmas and the proofs for the main theorems.

A. Technical lemmas

Lemma 2

(The occupation time formula) Let \(X_{t}\) be a semimartingale with local time \((L_{X}(\cdot ,a))_{a \in \mathscr {D}}.\) Let g be a bounded Borel measurable function. Then

$$\begin{aligned} \int ^{\infty }_{-\infty } L_{X}(t, a)g(a)da = \int _{0}^{t}g(X_{s})d[X]_{s},~~a.s., \end{aligned}$$
(A.1)

where [X] is the quadratic variation of X.
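As a sanity check, the occupation time formula (A.1) can be illustrated numerically for a standard Brownian motion, for which \(d[X]_{s} = ds\); the local time is approximated by an occupation-density histogram. The discretization below is an illustrative sketch, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a standard Brownian motion path on [0, T]; for BM, d[X]_s = ds.
T, n = 1.0, 200_000
dt = T / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

g = lambda a: np.exp(-a**2)          # a bounded Borel function

# Right-hand side of (A.1): int_0^T g(X_s) d[X]_s = int_0^T g(X_s) ds.
rhs = np.sum(g(X[:-1])) * dt

# Left-hand side: approximate the local time L_X(T, a) by an occupation
# density histogram (time spent per unit of space), then integrate against g.
edges = np.linspace(X.min(), X.max(), 400)
da = edges[1] - edges[0]
occupation, _ = np.histogram(X[:-1], bins=edges)
L = occupation * dt / da             # occupation density ~ local time
mid = 0.5 * (edges[:-1] + edges[1:])
lhs = np.sum(L * g(mid)) * da

print(lhs, rhs)                      # agree up to discretization error
```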

Lemma 3

(Jacod’s stable convergence theorem) A sequence of \(\mathbb {R}-\)valued variables \(\{\zeta _{n, i}: i \ge 1\}\) defined on the filtered probability space \((\Omega , \mathscr {F}, (\mathscr {F}_{t})_{t \ge 0}, {P})\) is \(\mathscr {F}_{i \Delta _{n}}-\)measurable for all n and i. Assume there exist a continuous adapted \(\mathbb {R}-\)valued process of finite variation \(B_{t}\) and a continuous adapted increasing process \(C_{t}\) such that, for any \(t > 0,\) we have

$$\begin{aligned}&\sup _{0 \le s \le t}\big |\sum _{i = 1}^{[s/\Delta _{n}]}{E}\big [\zeta _{n, i} \mid \mathscr {F}_{(i - 1)\Delta _{n}}\big ] - B_{s}\big | {\mathop {\longrightarrow }\limits ^{P}}0, \end{aligned}$$
(A.2)
$$\begin{aligned}&\sum _{i = 1}^{[t/\Delta _{n}]}\big ({E}\big [\zeta _{n, i}^{2} \mid \mathscr {F}_{(i - 1)\Delta _{n}}\big ] - {E}^{2}\big [\zeta _{n, i} \mid \mathscr {F}_{(i - 1)\Delta _{n}}\big ]\big ) - C_{t} {\mathop {\longrightarrow }\limits ^{P}}0, \end{aligned}$$
(A.3)
$$\begin{aligned}&\sum _{i = 1}^{[t/\Delta _{n}]}{E}\big [\zeta _{n, i}^{4} \mid \mathscr {F}_{(i - 1)\Delta _{n}}\big ] {\mathop {\longrightarrow }\limits ^{P}}0. \end{aligned}$$
(A.4)

Assume also

$$\begin{aligned} \sum _{i = 1}^{[t/\Delta _{n}]}{E}\big [\zeta _{n, i} \Delta _{n}^{i}H \mid \mathscr {F}_{(i - 1)\Delta _{n}}\big ] {\mathop {\longrightarrow }\limits ^{P}}0, \end{aligned}$$
(A.5)

where either H is one of the components of Wiener process W or is any bounded martingale orthogonal (in the martingale sense) to W and \(\Delta _{n}^{i}H = H_{i\Delta _{n}} - H_{(i - 1)\Delta _{n}}.\)

Then the process

$$\begin{aligned} \sum _{i = 1}^{[t/\Delta _{n}]}\zeta _{n, i} {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}B_{t} + M_{t}, \end{aligned}$$

where \({\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}\) denotes stable convergence in law, and \(M_{t}\) is a continuous process defined on an extension \(\big (\widetilde{\Omega }, \widetilde{P}, \widetilde{\mathscr {F}}\big )\) of the filtered probability space \(\big ({\Omega }, {P}, {\mathscr {F}}\big )\) which, conditionally on the \(\sigma -\)field \(\mathscr {F}\), is a centered Gaussian \(\mathbb {R}-\)valued process with \(\widetilde{E}\big [M_{t}^{2} \mid \mathscr {F}\big ] = C_{t}.\)

Remark A.1

For Lemma 3, one can refer to Lemma 4.4 of Jacod (2012) for more details. The stable convergence implies the following crucial property, required in the detailed proof of Theorem 2.3.

If \(Z_{n} {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}Z \) and if \(Y_{n}\) and Y are variables defined on \((\Omega , \mathscr {F}, {P})\) and with values in the same Polish space F, then

$$\begin{aligned} Y_{n} {\mathop {\longrightarrow }\limits ^{P}}Y~~~~~\Rightarrow ~~~~~(Y_{n}, ~Z_{n}) {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}(Y, ~Z), \end{aligned}$$
(A.6)

which implies that \(Y_{n} \times Z_{n} {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}Y \times Z\) through the continuous function \(g(x, y) = x \times y.\)

Lemma 4

Under Assumptions 1-3, we have

$$\begin{aligned} \Delta _{n}\sum _{j=1}^{n}K_{G(x/h_{n}+1, h_{n})}(x_{j-1})(x_{j-1}-x)^{k} {\mathop {\rightarrow }\limits ^{P}} \int _{0}^{T}K_{G(x/h_{n}+1,h_{n})}(x_{s})(x_{s}-x)^{k}ds.\nonumber \\ \end{aligned}$$
(A.7)

Remark A.2

Denote \(\alpha _{k}(x):=E[(\xi -x)^{k}],\) where \(\xi \) is a random variable with density \(K_{G(x/h_{n} + 1, h_{n})}(\cdot ),\) i.e., \(\xi \sim \mathrm {Gamma}(x/h_{n}+1, h_{n}).\) For \(\alpha _{k}(x),\) we have

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\alpha _{0}(x)=1,\\ &{}\alpha _{1}(x)=h_{n},\\ &{}\alpha _{2}(x)=xh_{n}+2h_{n}^{2},\\ &{}\alpha _{l}(x)=O_{p}(h_{n}^{2}), \end{array}\right. } \end{aligned}$$
(A.8)

for \(l \ge 3,\) which implies that \(\xi = x + o_{p}(\sqrt{h_{n}}).\) For more details on the results in (A.8), one can consult Sect. 4, especially equation (4.2), of Chen (2002).
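The first two moment formulas in (A.8) can be verified by Monte Carlo, since \(\xi \) is simply a Gamma\((x/h_{n}+1,\, h_{n})\) random variable; the values of x and \(h_{n}\) below are arbitrary illustrative choices:

```python
import numpy as np

x, h = 0.5, 0.01                     # interior design point, bandwidth
rng = np.random.default_rng(1)

# Draw xi ~ Gamma(shape = x/h + 1, scale = h), the Gamma asymmetric kernel law
xi = rng.gamma(shape=x / h + 1.0, scale=h, size=2_000_000)

alpha1 = np.mean(xi - x)             # theory: alpha_1(x) = h
alpha2 = np.mean((xi - x) ** 2)      # theory: alpha_2(x) = x h + 2 h^2

print(alpha1, h)
print(alpha2, x * h + 2 * h ** 2)
```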

Based on the equations above and Theorem 1.1 on the continuity of local times in Eisenbaum and Kaspi (2007), we can obtain that

$$\begin{aligned} \int _{0}^{T} K_{G(x/h_{n}+1, h_{n})}(x_{s})(x_{s}-x)^{k}ds&= \int _{0}^{T}K_{G(x/h_{n}+1, h_{n})}(x_{s})(x_{s}-x)^{k} \frac{\sigma ^{2}(x_{s})}{\sigma ^{2}(x_{s})}ds\nonumber \\&= \int _{0}^{T}K_{G(x/h_{n}+1, h_{n})}(x_{s})(x_{s}-x)^{k} \frac{d[X]_{s}}{\sigma ^{2}(x_{s})} \nonumber \\&= \int _{0}^{\infty }K_{G(x/h_{n}+1, h_{n})}(u)(u-x)^{k}\bar{L}_{X}(T,u)du \nonumber \\&= \int _{0}^{\infty }K_{G(x/h_{n}+1, h_{n})}(u)(u-x)^{k}[\bar{L}_{X}(T,u)-\bar{L}_{X}(T,x)]du \nonumber \\&\quad + \bar{L}_{X}(T,x)\int _{0}^{\infty }K_{G(x/h_{n}+1, h_{n})}(u)(u-x)^{k}du \nonumber \\&= E\left\{ (\xi -x)^{k}[\bar{L}_{X}(T,\xi )-\bar{L}_{X}(T,x)]\right\} + \bar{L}_{X}(T,x)E[(\xi -x)^{k}] \nonumber \\&= \bar{L}_{X}(T,x) \cdot \alpha _{k}(x) \cdot (1 + o_{p}(1)). \end{aligned}$$
(A.9)

Proof

One can refer to Bandi and Phillips (2003) for a similar proof procedure. For brevity, we omit the technical details here. \(\square \)

B. Detailed proof of Theorem 2.3

Proof

For \((x_{i}-x_{i-1})^{2},\) the Itô formula gives

$$\begin{aligned}&(x_{i}-x_{i-1})^{2} \nonumber \\&\quad = \int _{(i-1)\Delta _{n}}^{i \Delta _{n}} \sigma ^{2}(x_{s})ds + 2\int _{(i-1)\Delta _{n}}^{i \Delta _{n}}(x_{s} - x_{i-1}) \mu (x_{s})ds + 2\int _{(i-1)\Delta _{n}}^{i \Delta _{n}}(x_{s} - x_{i-1}) \sigma (x_{s})dW_{s}.\nonumber \\ \end{aligned}$$
(B.1)

For simplicity, we set \(T = 1.\) Based on formula (B.1),

$$\begin{aligned} \frac{R(h_{n})}{\sqrt{\Delta _{n}}}\left( \hat{\sigma }^{2}_{n}(x) - \sigma ^{2}(x)\right)&= \sqrt{n}R(h_{n}) \left( \hat{\sigma }^{2}_{n}(x) - \sigma ^{2}(x)\right) \\&= \frac{\sqrt{n}R(h_{n})\Delta _{n}^{2}\sum \nolimits _{i=1}^{n}\omega _{i-1} \left( \frac{(x_{i}-x_{i-1})^{2}}{\Delta _{n}}-\sigma ^{2}(x)\right) }{\Delta _{n}^{2}\sum \nolimits _{i=1}^{n}\omega _{i-1}}\\&= \frac{\sqrt{n}R(h_{n})\Delta _{n}\sum \nolimits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}} (\sigma ^{2}(x_{s})-\sigma ^{2}(x))ds}{\Delta _{n}^{2}\sum \nolimits _{i=1}^{n}\omega _{i-1}}\\&\quad + \frac{2\sqrt{n}R(h_{n})\Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}} (x_{s}-x_{i-1})\mu (x_{s})ds}{\Delta _{n}^{2}\sum \limits _{i=1}^{n}\omega _{i-1}}\\&\quad + \frac{2\sqrt{n}R(h_{n})\Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}} (x_{s}-x_{i-1})\sigma (x_{s})dW_{s}}{\Delta _{n}^{2}\sum \limits _{i=1}^{n}\omega _{i-1}}\\&= A_{1}+A_{2}+A_{3}, \end{aligned}$$

where the regularization coefficient \(R(h_{n}) = \Big \{\begin{array}{ll} \sqrt{h_{n}^{1/2}},\quad interior~x, \\ \sqrt{h_{n}},\quad boundary~x. \end{array}\) According to Lemma 4 and the expression for \(\omega _{i-1},\) one can get that

$$\begin{aligned} \Delta _{n}^{2}\sum \limits _{i=1}^{n}\omega _{i-1} {\mathop {\rightarrow }\limits ^{P}} \bar{L}^{2}_{X}(T,x) \cdot [\alpha _{0}(x) \cdot \alpha _{2}(x) - \alpha ^{2}_{1}(x)] \cdot (1 + o_{p}(1)) = \bar{L}^{2}_{X}(T,x) \cdot [x h_{n} + h^{2}_{n}].\nonumber \\ \end{aligned}$$
(B.2)

In what follows, we only need to calculate the numerators of \(A_{1},\) \(A_{2}\) and \(A_{3}.\)

We first introduce the uniform boundedness of the increments of the diffusion process (see Karatzas and Shreve (2000) for more details), that is,

$$\begin{aligned} \limsup _{\Delta _{n} \rightarrow 0} \frac{\max _{i \le n} \sup _{(i-1) \Delta _{n} \le s \le i \Delta _{n}} \mid x_{s} - x_{i-1} \mid }{(\Delta _{n} \log (1/\Delta _{n}))^{1/2}} \le C~~a.s., \end{aligned}$$

where C is a suitable constant.
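This bound reflects the Lévy modulus of continuity of the driving Brownian motion. A quick simulation, for a standard Brownian motion as an illustrative special case, shows the normalized maximal increment staying bounded as \(\Delta _{n} \rightarrow 0\):

```python
import numpy as np

rng = np.random.default_rng(2)

# One fine Brownian path on [0, 1]; coarser sampling grids Delta_n = 1/n.
N = 2 ** 20
dt = 1.0 / N
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])

ratios = []
for n in (2 ** 8, 2 ** 10, 2 ** 12, 2 ** 14):
    Delta = 1.0 / n
    step = N // n
    blocks = X[:-1].reshape(n, step)        # fine points within each [(i-1)D, iD)
    left = blocks[:, 0]                     # x_{i-1}
    sup_inc = np.max(np.abs(blocks - left[:, None]))  # max_i sup_s |x_s - x_{i-1}|
    ratios.append(sup_inc / np.sqrt(Delta * np.log(1.0 / Delta)))

print(ratios)   # stays bounded (around sqrt(2) for Brownian motion)
```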

For the numerator of \(A_{1},\) we obtain

$$\begin{aligned} \sqrt{n}R(h_{n})\Delta _{n}&\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(\sigma ^{2}(x_{s})-\sigma ^{2}(x))ds\\&= \sqrt{n}R(h_{n})\Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(\sigma ^{2}(x_{s}) - \sigma ^{2}(x_{i-1}) + \sigma ^{2}(x_{i-1}) - \sigma ^{2}(x))ds\\&= \sqrt{n}R(h_{n})\Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(\sigma ^{2}(x_{s}) - \sigma ^{2}(x_{i-1}))ds\\&\quad + \sqrt{n}R(h_{n})\Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(\sigma ^{2}(x_{i-1}) - \sigma ^{2}(x))ds\\&= A_{11} + A_{12}. \end{aligned}$$

According to the uniform boundedness of the increments of the diffusion process (hereafter the UBI property) and Assumption 1(ii), \(A_{11}=o(A_{12})\); similarly, \(A_{2}=o(A_{12})\) based on Assumption 1(i). So we only need to consider \(A_{12}.\)

As for \(A_{12},\) a Taylor expansion gives

$$\begin{aligned} \sigma ^{2}(x_{i-1})-\sigma ^{2}(x)&=(\sigma ^{2})^{'}(x)(x_{i-1}-x) + \frac{1}{2}(\sigma ^{2})^{''}(x+\theta (x_{i-1}-x))(x_{i-1}-x)^{2}, \end{aligned}$$

where \(\theta \in [0,1].\)

It follows directly from the construction of the weights \(\omega _{i-1}\) that

$$\begin{aligned} \sum ^{n}_{i=1}\omega _{i-1}(x_{i-1}-x) \equiv 0. \end{aligned}$$
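This identity can be checked numerically. The sketch below assumes the standard local linear weight form \(\omega _{i-1} = K_{i-1}\,[S_{2} - (x_{i-1}-x)S_{1}]\) with \(S_{k} = \sum _{j}K_{j-1}(x_{j-1}-x)^{k}\); the exact expression for \(\omega _{i-1}\) is given in the main text, and the design points here are hypothetical:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)

def gamma_kernel(u, x, h):
    # Gamma(x/h + 1, h) density (the Gamma asymmetric kernel) at points u > 0
    k = x / h
    return np.exp(k * np.log(u) - u / h - lgamma(k + 1) - (k + 1) * np.log(h))

x, h = 0.5, 0.05                                       # illustrative choices
xs = rng.gamma(shape=x / h + 1.0, scale=h, size=500)   # positive design points

K = gamma_kernel(xs, x, h)
S1 = np.sum(K * (xs - x))
S2 = np.sum(K * (xs - x) ** 2)
w = K * (S2 - (xs - x) * S1)        # assumed local linear weight form

print(np.sum(w * (xs - x)))         # = S1*S2 - S2*S1 = 0 up to round-off
```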

Then, according to Lemma 4 and the expression for \(\omega _{i-1},\)

$$\begin{aligned} \frac{A_{12}}{\sqrt{n} R(h_{n})}&= \Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}} \left[ \frac{1}{2}(\sigma ^{2})^{''}(x+\theta (x_{i-1}-x))(x_{i-1}-x)^{2}\right] ds\\&\quad \rightarrow \frac{1}{2} (\sigma ^{2})''(x)\left[ \alpha _{2}^{2}(x) - \alpha _{1}(x) \cdot \alpha _{3}(x)\right] \cdot \bar{L}_{X}^{2}(T, x)\\&= \frac{1}{2} (\sigma ^{2})''(x) \cdot \bar{L}_{X}^{2}(T, x) \left[ (x h_{n} + 2h_{n}^{2})^{2} - O_{p}(h_{n}^{3})\right] \\&= \frac{x^{2}}{2} (\sigma ^{2})''(x) \cdot \bar{L}_{X}^{2}(T, x) \cdot h_{n}^{2} \cdot (1 + o_{p}(1)). \end{aligned}$$

Thus, the bias for \(\sqrt{n}R(h_{n}) \left( \hat{\sigma }^{2}_{n}(x) - \sigma ^{2}(x)\right) \) is

$$\begin{aligned} \frac{A_{12}}{\sqrt{n}R(h_{n}) \Delta _{n}^{2}\sum \limits _{i=1}^{n}\omega _{i-1}}&\rightarrow \frac{\frac{x^{2}}{2}(\sigma ^{2})''(x)\bar{L}_{X}^{2}(T,x)h_{n}^{2}}{\bar{L}_{X}^{2}(T,x)(xh_{n}+h_{n}^{2})} \\&\quad =\frac{x}{2}(\sigma ^{2})''(x)h_{n}. \end{aligned}$$

In what follows, we calculate the asymptotic variance for

$$\begin{aligned} \sqrt{n}R(h_{n}) \left( \hat{\sigma }^{2}_{n}(x) - \sigma ^{2}(x) - \frac{x}{2}(\sigma ^{2})''(x)h_{n}\right) . \end{aligned}$$

By the Burkholder–Davis–Gundy (BDG) inequality, the Hölder inequality and the UBI property, we can conclude that \(A_{2}=o_{a.s}(A_{3}).\) So here we only need to deal with \(A_{3}.\) However, the \(\omega _{i-1}\) in \(A_{3}\) is not \(\mathscr {F}_{i\Delta _{n}}\)-measurable, so we cannot directly use Jacod’s stable convergence theorem in Lemma 3. To solve this problem, we first introduce the limit in probability of \(\Delta _{n}\omega _{i}\) divided by \(\bar{L}_{X}(T,x) \cdot x h_{n},\) that is,

$$\begin{aligned} \frac{\Delta _{n} \cdot \omega _{i}}{\bar{L}_{X}(T,x) \cdot x h_{n}} {\mathop {\rightarrow }\limits ^{P}}K_{G(x/h_{n}+1,h_{n})}(x_{i})\left[ 1-(x_{i}-x)\frac{1}{x}\right] =: \omega ^{+}_{i}. \end{aligned}$$

So

$$\begin{aligned} \omega ^{+}_{i-1}=K_{G(x/h_{n}+1,h_{n})}(x_{i-1})\left[ 1-(x_{i-1}-x)\frac{1}{x}\right] \end{aligned}$$

is obviously \(\mathscr {F}_{(i-1)\Delta _{n}}\)-measurable, and we can utilize Jacod’s stable convergence theorem for

$$\begin{aligned} 2\sqrt{n}R(h_{n})\sum \limits _{i=1}^{n}\omega _{i-1}^{+}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}:=\sum ^{n}_{i=1}q_{i} \end{aligned}$$

where

$$\begin{aligned} q_{i}=2\sqrt{n}R(h_{n})\omega _{i-1}^{+}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}. \end{aligned}$$

Jacod’s stable convergence theorem in Lemma 3 tells us that the following statements,

$$\begin{aligned}&S_{1}=\sup _{1\le i\le n}|\sum ^{n}_{i=1}E_{i-1}[q_{i}]-0|{\mathop {\rightarrow }\limits ^{P}}0, \\&S_{2}=\sum ^{n}_{i=1}(E_{i-1}[q_{i}^{2}]-E_{i-1}^{2}[q_{i}]){\mathop {\rightarrow }\limits ^{P}} \left\{ \begin{aligned}&\frac{\sigma ^{4}(x)}{\sqrt{\pi }\sqrt{x}}\bar{L}_{X}(T,x),\qquad interior~x,\\&\frac{\sigma ^{4}(x)\Gamma (2k+1)}{2^{2k}\Gamma ^{2}(k+1)}\bar{L}_{X}(T,x),\qquad boundary~x,\\ \end{aligned} \right. \\&S_{3}=\sum ^{n}_{i=1}E_{i-1}[q_{i}^{4}]{\mathop {\rightarrow }\limits ^{P}}0, \\&S_{4}=\sum ^{n}_{i=1}E_{i-1}[q_{i}\Delta _{i}^{n}H]{\mathop {\rightarrow }\limits ^{P}}0 \end{aligned}$$

implies \(\sum ^{n}_{i=1}q_{i} {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}M_{t},\) where \(E_{i-1}[\cdot ]\) := \(E[\cdot |\mathscr {F}_{(i-1)\Delta _{n}}].\)

For \(S_{1}\), by the martingale property of the stochastic integral, we have

$$\begin{aligned} E_{i-1}[q_{i}] \equiv 0. \end{aligned}$$

Hence,

$$\begin{aligned} S_{1}=\sup _{1\le i\le n}|\sum ^{n}_{i=1}E_{i-1}[q_{i}]|{\mathop {\rightarrow }\limits ^{P}}0 \end{aligned}$$

For \(S_{2}\),

$$\begin{aligned} S_{2}=&\sum ^{n}_{i=1}E_{i-1}[q_{i}^{2}]\\ =&\sum ^{n}_{i=1}E_{i-1}\left[ 4nR^{2}(h_{n})\left( K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})-2K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})(x_{i-1}-x)\frac{1}{x}\right. \right. \\&\left. \left. +K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})(x_{i-1}-x)^{2}\frac{1}{x^{2}}\right) \left( \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\right) ^{2}\right] \\ =&\sum ^{n}_{i=1}E_{i-1}[q_{i1}^{2}+q_{i2}^{2}+q_{i3}^{2}] \end{aligned}$$

One can show that the \(q_{i1}^{2}\) term dominates the other two according to Lemma 4 and equation (A.8), so we only deal with the dominant term \(q_{i1}^{2}:\)

$$\begin{aligned}&\sum ^{n}_{i=1}E_{i-1}[q_{i1}^{2}]\nonumber \\&\quad =4nR^{2}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})E_{i-1} \left[ \left( \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\right) ^{2}\right] \\&\quad =4nR^{2}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})E_{i-1} \left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})^{2}\sigma ^{2}(x_{s})ds\right] . \end{aligned}$$

Using the Itô formula on \((x_{s}-x_{i-1})^{2}\), we have

$$\begin{aligned}&\sum ^{n}_{i=1}E_{i-1}[q_{i1}^{2}]\\&=4nR^{2}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})\bigg \{E_{i-1} \left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}\int ^{s}_{(i-1)\Delta _{n}}\sigma ^{2}(x_{u})du\,\sigma ^{2}(x_{s})ds\right] \\&\quad +E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}\int ^{s}_{(i-1)\Delta _{n}}(x_{u}-x_{i-1})\mu (x_{u})du\,\sigma ^{2}(x_{s})ds\right] \\&\quad +E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}\int ^{s}_{(i-1)\Delta _{n}}(x_{u}-x_{i-1})\sigma (x_{u})dW_{u}\,\sigma ^{2}(x_{s})ds\right] \bigg \}\\&=B_{1}+B_{2}+B_{3}. \end{aligned}$$

We can deduce that \(B_{3}\equiv 0\) by the Fubini theorem, and \(B_{2}=o_{a.s.}(B_{1})\) by the UBI property and Assumption 1. Applying the mean value theorem and the UBI property to \(B_{1},\) we obtain

$$\begin{aligned} B_{1}=&4nR^{2}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}} \int ^{s}_{(i-1)\Delta _{n}}\sigma ^{2}(x_{u})du\,\sigma ^{2}(x_{s})ds \right] \\ \rightarrow&4nR^{2}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})\frac{\sigma ^{4}(x_{i-1})}{2}\Delta _{n}^{2}\\ =&2R^{2}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{2}(x_{i-1})\sigma ^{4}(x_{i-1})\Delta _{n}\\ \rightarrow&2R^{2}(h_{n})B_{n}(x)\sum ^{n}_{i=1}K_{G(2x/h_{n}+1,h_{n}/2)}(x_{i-1})\sigma ^{4}(x_{i-1})\Delta _{n}, \end{aligned}$$

where

$$\begin{aligned} B_{n}(x):=\left\{ \begin{aligned}&\frac{1}{2\sqrt{\pi }}h_{n}^{-\frac{1}{2}}x^{-\frac{1}{2}},\qquad interior~x,\\&\frac{\Gamma (2k+1)}{2^{2k+1}\Gamma ^{2}(k+1)}h_{n}^{-1}, \qquad boundary~x. \end{aligned} \right. \end{aligned}$$

Hence,

$$\begin{aligned} B_{1}{\mathop {\rightarrow }\limits ^{P}} \left\{ \begin{aligned}&\frac{\sigma ^{4}(x)}{\sqrt{\pi }\sqrt{x}}\bar{L}_{X}(T,x), \qquad interior~x, \\&\frac{\sigma ^{4}(x)\Gamma (2k+1)}{2^{2k}\Gamma ^{2}(k+1)}\bar{L}_{X}(T,x), \qquad boundary~x. \end{aligned} \right. \end{aligned}$$
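The interior-case constant rests on the identity \(K^{2}_{G(x/h_{n}+1,h_{n})}(u) = B_{n}(x)\,K_{G(2x/h_{n}+1,h_{n}/2)}(u)\), where the exact proportionality factor is a ratio of Gamma functions. The snippet below checks that this exact factor matches the stated interior asymptotic form \(\frac{1}{2\sqrt{\pi }}h_{n}^{-1/2}x^{-1/2}\) as \(h_{n} \rightarrow 0\):

```python
import math

def Bn_exact(x, h):
    # K_{G(x/h+1,h)}(u)^2 = Bn(x) * K_{G(2x/h+1,h/2)}(u) exactly, with
    # Bn(x) = Gamma(2k+1) (h/2)^{2k+1} / (Gamma(k+1)^2 h^{2k+2}), k = x/h
    k = x / h
    log_bn = (math.lgamma(2 * k + 1) - 2 * math.lgamma(k + 1)
              + (2 * k + 1) * math.log(h / 2) - (2 * k + 2) * math.log(h))
    return math.exp(log_bn)

x, h = 0.5, 0.001                    # interior point: x/h large
approx = 1.0 / (2.0 * math.sqrt(math.pi)) * h ** -0.5 * x ** -0.5
print(Bn_exact(x, h), approx)        # the two agree as h -> 0 (Stirling)
```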

For \(S_{3},\) by using BDG and Hölder inequalities, we have

$$\begin{aligned} S_{3}=&16n^{2}R^{4}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})E_{i-1} \left[ \left( \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\right) ^{4}\right] \\ \le&16n^{2}R^{4}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})E_{i-1} \left[ \sup _{s\in [(i-1)\Delta _{n},i\Delta _{n}]}\left( \int ^{s}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\right) ^{4}\right] \\ \le&16n^{2}R^{4}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})E_{i-1} \left[ \left( \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})^{2}\sigma ^{2}(x_{s})ds \right) ^{2}\right] \\ \le&16n^{2}R^{4}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})E_{i-1} \left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})^{4}\sigma ^{4}(x_{s})ds\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}1^{2}ds\right] \\ =&C nR^{4}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})^{4}\sigma ^{4}(x_{s})ds \right] \\ \le&CnR^{4}(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})\left( \Delta _{n} \log \frac{1}{\Delta _{n}} \right) ^{2} \Delta _{n}\\ =&C R^{4}(h_{n})\Delta _{n} \left( \log \frac{1}{\Delta _{n}} \right) ^{2} \sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}^{4}(x_{i-1})\Delta _{n}\\ =&C R^{4}(h_{n})\Delta _{n} \left( \log \frac{1}{\Delta _{n}} \right) ^{2} B(4,h,x) \sum ^{n}_{i=1}K_{G(4x/h_{n}+1,h_{n}/4)}(x_{i-1})\Delta _{n}\\ =&C\frac{R^{4}(h_{n})}{n}B(4,h,x)\bar{L}_{X}(T,x) \left( \log \frac{1}{\Delta _{n}} \right) ^{2}, \end{aligned}$$

where

$$\begin{aligned} B(4,h,x)= \left\{ \begin{aligned}&h_{n}^{-\frac{3}{2}},\qquad interior~x,\\&h_{n}^{-3}, \qquad boundary~x,\\ \end{aligned} \right. \end{aligned}$$

for which one can refer to Rosa and Nogueira (2016) for more details.

Hence

$$\begin{aligned} S_{3} \le C \bar{L}_{X}(T,x) \left( \log \frac{1}{\Delta _{n}} \right) ^{2} \left\{ \begin{aligned}&\frac{1}{nh_{n}^{\frac{1}{2}}}{\mathop {\rightarrow }\limits ^{P}} 0, \qquad interior~x,\\&\frac{1}{nh_{n}}{\mathop {\rightarrow }\limits ^{P}} 0, \qquad boundary~x.\\ \end{aligned} \right. \end{aligned}$$

For \(S_{4},\) if \(H=W,\) then

$$\begin{aligned} S_{4}&=2\sqrt{n}R(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}(x_{i-1})\left[ 1-(x_{i-1}-x)\frac{1}{x}\right] \\&\quad \times E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\Delta ^{n}_{i}H\right] \\&\le 2\sqrt{n}R(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}(x_{i-1})\left[ 1-(x_{i-1}-x)\frac{1}{x}\right] \\&\quad \times \sqrt{E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})^{2}\sigma ^{2}(x_{s})ds\right] }\sqrt{E_{i-1}\left[ \int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}ds\right] }\\&=2\sqrt{n}R(h_{n})\sum ^{n}_{i=1}K_{G(x/h_{n}+1,h_{n})}(x_{i-1})\left[ 1-(x_{i-1}-x)\frac{1}{x}\right] \Delta _{n}^{\frac{3}{2}}\log ^{\frac{1}{2}}\frac{1}{\Delta _{n}}\\&\quad \rightarrow 2R(h_{n}) \bar{L}_{X}(T,x)\left[ 1-\frac{\alpha _{1}(x)}{x}\right] \log ^{\frac{1}{2}}\frac{1}{\Delta _{n}}\rightarrow 0, \end{aligned}$$

since \(R(h_{n}) \log ^{\frac{1}{2}}(1/\Delta _{n}) \rightarrow 0.\)

If H is orthogonal to W, then \(\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\) is orthogonal to H, and we get

$$\begin{aligned} S_{4}=\sum ^{n}_{i=1}E_{i-1}[q_{i}\Delta _{i}^{n}H] \equiv 0. \end{aligned}$$

Furthermore, we can observe

$$\begin{aligned} \bar{L}_{X}(T, x)&\cdot \frac{2\sqrt{n}R(h_{n})\Delta _{n}\sum \limits _{i=1}^{n}\omega _{i-1}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}}{\Delta _{n}^{2} \sum _{i=1}^{n}\omega _{i-1}}\\&\quad - 2\sqrt{n}R(h_{n})\sum \limits _{i=1}^{n}\omega _{i-1}^{+}\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\\&= \left[ \bar{L}_{X}(T, x) \cdot \frac{\Delta _{n}\sum _{j=1}^{n}K_{G(x/h_{n}+1, h_{n})}(x_{j-1})(x_{j-1}-x)^{2}}{\Delta _{n}^{2} \sum _{i=1}^{n}\omega _{i-1}} - 1\right] \cdot \sum \limits _{i=1}^{n} q_{2, i}\\&\quad - \left[ \bar{L}_{X}(T, x) \cdot \frac{\Delta _{n}\sum _{j=1}^{n}K_{G(x/h_{n}+1, h_{n})}(x_{j-1})(x_{j-1}-x)}{\Delta _{n}^{2} \sum _{i=1}^{n}\omega _{i-1}} - \frac{1}{x}\right] \cdot \sum \limits _{i=1}^{n} q_{3, i}, \end{aligned}$$

where \(q_{2, i} := 2\sqrt{n}R(h_{n})K_{G(x/h_{n}+1, h_{n})}(x_{i-1})\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}\) and \(q_{3, i} := 2\sqrt{n}R(h_{n})K_{G(x/h_{n}+1, h_{n})}(x_{i-1})(x_{i-1} - x)\int ^{i\Delta _{n}}_{(i-1)\Delta _{n}}(x_{s}-x_{i-1})\sigma (x_{s})dW_{s}.\)

By the same proof procedure as for \(\sum \limits _{i=1}^{n} q_{i},\) we can obtain \(\sum ^{n}_{i=1}q_{2, i} {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}M_{2, t}\) and \(\sum ^{n}_{i=1}q_{3, i} {\mathop {\longrightarrow }\limits ^{\mathcal {S} - \mathcal {L}}}M_{3, t},\) which implies that \(\sum ^{n}_{i=1}q_{2, i} = O_{p}(1)\) and \(\sum ^{n}_{i=1}q_{3, i} = O_{p}(1).\) Moreover, based on Lemma 4 and its remark, we have \(\bar{L}_{X}(T, x) \cdot \frac{\Delta _{n}\sum _{j=1}^{n}K_{G(x/h_{n}+1, h_{n})}(x_{j-1})(x_{j-1}-x)^{2}}{\Delta _{n}^{2} \sum _{i=1}^{n}\omega _{i-1}} - 1 {\mathop {\rightarrow }\limits ^{P}} 0\) and \(\bar{L}_{X}(T, x) \cdot \frac{\Delta _{n}\sum _{j=1}^{n}K_{G(x/h_{n}+1, h_{n})}(x_{j-1})(x_{j-1}-x)}{\Delta _{n}^{2} \sum _{i=1}^{n}\omega _{i-1}} - \frac{1}{x} {\mathop {\rightarrow }\limits ^{P}} 0.\) Hence, we can get the main results in Theorem 2.3 by Remark A.1. \(\square \)
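The argument can also be illustrated end to end in simulation. The sketch below is under assumptions: a CIR-type model with hypothetical parameters (so that \(\sigma ^{2}(x) = \sigma ^{2}x\)), and the standard local linear weight form, since the exact \(\omega _{i-1}\) is defined in the main text. It estimates \(\sigma ^{2}(x)\) from high-frequency observations with the Gamma asymmetric kernel:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(4)

# Simulate a CIR short-rate path (hypothetical parameters):
# dx_t = kappa (theta - x_t) dt + sigma sqrt(x_t) dW_t, so sigma^2(x) = sigma^2 x
kappa, theta, sigma = 2.0, 0.5, 0.3
n, Delta = 100_000, 1e-3                  # high-frequency sampling on [0, 100]
dW = rng.normal(0.0, np.sqrt(Delta), n)
xpath = np.empty(n + 1); xpath[0] = theta
for i in range(n):
    xi = xpath[i]
    xpath[i + 1] = abs(xi + kappa * (theta - xi) * Delta
                       + sigma * np.sqrt(xi) * dW[i])

def gamma_kernel(u, x, h):
    # Gamma(x/h + 1, h) density at the design points u
    k = x / h
    return np.exp(k * np.log(u) - u / h - lgamma(k + 1) - (k + 1) * np.log(h))

def sigma2_hat(x, h):
    # local linear smoother with Gamma asymmetric kernel (weight form assumed)
    u = xpath[:-1]
    K = gamma_kernel(u, x, h)
    S1 = np.sum(K * (u - x)); S2 = np.sum(K * (u - x) ** 2)
    w = K * (S2 - (u - x) * S1)
    y = np.diff(xpath) ** 2 / Delta       # realized squared increments
    return np.sum(w * y) / np.sum(w)

x, h = 0.5, 0.05
est = sigma2_hat(x, h)
print(est, sigma * sigma * x)             # estimate vs true sigma^2 x
```

Since \(\sigma ^{2}(x)\) is linear in x for this model, the bias term \(\frac{x}{2}(\sigma ^{2})''(x)h_{n}\) vanishes, so the estimate should be close to \(\sigma ^{2}x\) even for a moderate bandwidth.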


About this article


Cite this article

Song, Y., Li, H. & Fang, Y. Efficient estimation for the volatility of stochastic interest rate models. Stat Papers 62, 1939–1964 (2021). https://doi.org/10.1007/s00362-020-01166-4

