
Estimating integrated co-volatility with partially miss-ordered high frequency data

Published in Statistical Inference for Stochastic Processes.

Abstract

The covariation of short-period returns between securities plays an important role in many areas of finance. With the wide availability of high frequency financial data, realized covariation, as an ex-post measure of covariation, can accurately estimate the quadratic covariation. However, realized covariation fails to work when multiple records share a time stamp. In this paper, we propose an estimator of the integrated covariation that is robust to high frequency data containing multiple records. Consistency of the estimator and a central limit theorem are established. Moreover, several extensions that make the estimator applicable to different types of high frequency data are considered. A simulation study confirms the performance of the estimator.
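As a quick illustration of the ex-post idea described above (a hypothetical sketch, not the paper's code): for two correlated Brownian log-price paths with constant volatilities, the realized covariation of synchronized increments converges to the integrated covariation \(\rho \sigma \eta t\).

```python
import numpy as np

def simulate_paths(n, rho, sigma=1.0, eta=1.0, t=1.0, seed=0):
    """Simulate increments of two correlated Brownian log-price paths on [0, t]."""
    rng = np.random.default_rng(seed)
    dt = t / n
    dw = rng.standard_normal((2, n)) * np.sqrt(dt)
    dx = sigma * dw[0]
    dy = eta * (rho * dw[0] + np.sqrt(1 - rho**2) * dw[1])
    return dx, dy

def realized_covariation(dx, dy):
    """Ex-post realized covariation: sum of products of matched increments."""
    return float(np.sum(dx * dy))

dx, dy = simulate_paths(n=100_000, rho=0.5)
rc = realized_covariation(dx, dy)  # close to rho * sigma * eta * t = 0.5
```

With \(\rho =0.5\) and \(\sigma =\eta =1\), `rc` lands close to 0.5; the paper's point is that this simple estimator breaks down once records within a time stamp are not correctly ordered.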

Figures 1–5 appear in the published version of the article.

Notes

  1. By effective seconds, we mean seconds during which at least one transaction is recorded.

  2. The average-of-multiple-observations and pre-averaging approaches.

  3. Without microstructure noise.

  4. The equally spaced sampling scheme is not necessary for constructing the estimator, but it simplifies the presentation; the extension to irregular sampling is discussed in Sect. 4.

  5. This condition requires that the unobserved subintervals be equally spaced. It is a restriction on the feasibility of the estimator, but it cannot be avoided for the consistency of the proposed estimator under the current framework. However, one could consider the case where the subinterval lengths are random with some distribution, exponential for example; under the double asymptotics, i.e., \(n\rightarrow \infty \) and \(L_i\rightarrow \infty \), this unnatural condition can be removed.

  6. Of course, one can arbitrarily take one observation from \(E_i\); then realized covariation can be applied. This procedure discards a lot of data and hence lacks efficiency. A comparison between that estimator and the proposed estimator is discussed later in this paper.

  7. Since the order of observations is crucial in calculating the realized variation, under the multiple observation situation one must impose some “unrealistic” assumptions if the entire data-set is to be used. Besides the double asymptotics considered in the remark, we can also assume that both \(\sum _{k=1}^{L_i}\left( \frac{k-1}{L_i}\right) ^2(t_{L_i+j}-t_{L_i+j-1})\) and \(\sum _{j=1}^{L_i}(1-\frac{j-1}{L_i})^2(t_{L_i+j}-t_{L_i+j-1})\) do not depend on \(i\).

  8. We make some necessary adjustments so that the estimator is based on the multiple observations. \(k_n\) is an integer playing the role of a window size; it is usually taken as \(k_n=\theta \sqrt{n}\) for some positive constant \(\theta \), due to the trade-off between bias and rates. \(g\) is a positive function on [0, 1] satisfying some smoothness conditions.

  9. Relative bias \(=\frac{RCV_1^{weight}-\int _0^1\rho \sigma \eta ds}{\int _0^1\rho \sigma \eta ds}\).

  10. We take \(\rho \) from \(-1\) to 1 in steps of 0.05, and fix \(\sigma =\eta =1\).

  11. The variance is a curve in terms of \(\rho \).
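Notes 9–11 describe the simulation design: relative bias measured over a grid of \(\rho \) values with \(\sigma =\eta =1\). A minimal sketch of that loop (hypothetical code, using plain realized covariation as a stand-in for the paper's weighted estimator; \(\rho =0\) is skipped since the relative bias in note 9 is undefined there):

```python
import numpy as np

def relative_bias(estimate, truth):
    """Relative bias as in note 9: (estimate - truth) / truth."""
    return (estimate - truth) / truth

def rc_for_rho(rho, n=200_000, t=1.0, seed=1):
    # sigma = eta = 1 as in note 10, so the true integrated co-volatility is rho * t
    rng = np.random.default_rng(seed)
    dt = t / n
    dw = rng.standard_normal((2, n)) * np.sqrt(dt)
    dx = dw[0]
    dy = rho * dw[0] + np.sqrt(1 - rho**2) * dw[1]
    return float(np.sum(dx * dy))

# grid as in note 10, skipping rho = 0 where the relative bias is undefined
grid = [r for r in np.arange(-1.0, 1.0001, 0.05) if abs(r) > 1e-9]
biases = {round(r, 2): relative_bias(rc_for_rho(r), r) for r in grid}
```

Plotting `biases` against \(\rho \) gives curves of the kind summarized in notes 9 and 11.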

References

  • Aït-Sahalia Y, Fan J, Xiu D (2010) High frequency covariance estimates with noisy and asynchronous data. J Am Stat Assoc 105:1504–1517

  • Aldous D, Eagleson G (1978) On mixing and stability of limit theorems. Ann Prob 6(2):325–331

  • Bachelier L (1900) Théorie de la spéculation. Gauthier-Villars, Paris

  • Bandi FM, Russell JR (2005) Realized covariation, realized beta and microstructure noise. Working paper, Graduate School of Business, University of Chicago

  • Barndorff-Nielsen O, Graversen S, Jacod J, Podolskij M, Shephard N (2006) A central limit theorem for realised power and bipower variations of continuous semimartingales. In: Kabanov Y, Lipster R (eds) From stochastic analysis to mathematical finance, Festschrift for Albert Shiryaev. Springer, Berlin

  • Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N (2008) Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76(6):1481–1536

  • Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N (2011) Multivariate realised kernels: consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading. J Econom 162(2):149–169

  • Barndorff-Nielsen OE, Shephard N (2004) Power and bipower variation with stochastic volatility and jumps. J Financ Econom 2(1):1–37

  • Black F, Scholes M (1973) The pricing of options and corporate liabilities. J Polit Econ 81(3):637–654

  • Christensen K, Kinnebrock S, Podolskij M (2010) Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data. J Econom 159:116–133

  • Cox J (1975) Notes on option pricing I: constant elasticity of variance diffusions. Unpublished draft, Stanford University

  • Emanuel DC, MacBeth JD (1982) Further results on the constant elasticity of variance call option pricing model. J Financ Quant Anal 17(4):533–553

  • Epps T (1979) Comovements in stock prices in the very short run. J Am Stat Assoc 74:291–298

  • Fama EF, French KR (1992) The cross-section of expected stock returns. J Financ 47:427–465

  • Fleming J, Kirby C, Ostdiek B (2001) The economic value of volatility timing. J Financ 56(1):329–352

  • Fleming J, Kirby C, Ostdiek B (2002) The economic value of volatility timing using “realized” volatility. J Financ Econ 67(3):473–509

  • Hayashi T, Yoshida N (2005) On covariance estimation of non-synchronously observed diffusion processes. Bernoulli 11:359–379

  • Hayashi T, Yoshida N (2008) Asymptotic normality of a covariance estimator for nonsynchronously observed diffusion processes. Ann Inst Stat Math 60:367–406

  • Hayashi T, Yoshida N (2011) Nonsynchronous covariation process and limit theorems. Stoch Process Appl 121:2416–2454

  • Heston SL (1993) A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev Financ Stud 6(2):327–343

  • Jacod J (2012) Statistics and high frequency data. In: Kessler M, Lindner A, Sørensen M (eds) Proceedings of the 7th Séminaire Européen de Statistique, La Manga, 2007: statistical methods for stochastic differential equations

  • Jacod J, Li Y, Mykland PA, Podolskij M, Vetter M (2009) Microstructure noise in the continuous case: the pre-averaging approach. Stoch Process Appl 119(7):2249–2276

  • Jacod J, Podolskij M, Vetter M (2010) Limit theorems for moving averages of discretized processes plus noise. Ann Stat 38(3):1478–1545

  • Jacod J, Shiryayev AV (2003) Limit theorems for stochastic processes. Springer, New York

  • Jing BY, Li CX, Liu Z (2013) On estimating the integrated co-volatility using noisy high frequency data with jumps. Commun Stat Theory Methods 42(21):3889–3901

  • Jing BY, Liu Z, Kong XB (2014) Estimating the volatility functionals with multiple transactions. Working paper

  • Koike Y (2014) Limit theorems for the pre-averaged Hayashi–Yoshida estimator with random sampling. Stoch Process Appl 124:2699–2753

  • Liu Z (2013) Multiple power estimation of volatility with multiple observations. Working paper

  • Rényi A (1963) On stable sequences of events. Sankhya Indian J Stat 25:293–302

  • Tsay RS (2005) Analysis of financial time series. Wiley, Hoboken

  • Vetter M (2010) Limit theorems for bipower variation of semimartingales. Stoch Process Appl 120:22–38

  • Voev V, Lunde A (2007) Integrated covariance estimation using high-frequency data in the presence of noise. J Financ Econom 5(1):68–104

  • Wang K, Liu J, Liu Z (2013) Disentangling the effect of jumps on systematic risk using a new estimator of integrated co-volatility. J Bank Financ 37(5):1777–1786

  • Zhang L (2006) Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach. Bernoulli 12(6):1019–1043

  • Zhang L (2011) Estimating covariation: Epps effect, microstructure noise. J Econom 160(1):33–47

  • Zhang L, Mykland P, Aït-Sahalia Y (2005) A tale of two time scales: determining integrated volatility with noisy high-frequency data. J Am Stat Assoc 100(472):1394–1411


Acknowledgments

The authors would like to thank the Editor Prof. Yury Kutoyants, the associate editor and two referees for their very extensive and constructive suggestions which helped to improve this paper considerably. The work is partially supported by The Science and Technology Development Fund of Macau (Nos. 078/2012/A3 and 078/2013/A3) and NSFC (Nos. 11401607 and 71471173).

Author information

Corresponding author

Correspondence to Zhi Liu.

Appendix


We need some notation to simplify the presentation of the proofs. For the process X, we set

$$\begin{aligned}&\displaystyle \alpha _i=\frac{1}{L_i}\sum _{k=1}^{L_i} \left( \frac{k-1}{L_i}\right) ^2,\quad \alpha '_i=\frac{1}{L_{i}}\sum _{k=1}^{L_{i}} \left( 1-\frac{k-1}{L_{i}}\right) ^2, \quad (\alpha _0=\alpha _0'=0)&\end{aligned}$$
(20)
$$\begin{aligned}&\displaystyle C_t=\int _0^t\rho \sigma ^X_s\sigma ^Y_sds, \quad \Delta _{i,j}C=C_{t_{N_{i-1}+j}}-C_{t_{N_{i-1}+j-1}}&\end{aligned}$$
(21)
$$\begin{aligned}&\displaystyle c_i = \sum _{j=1}^{L_i} \left( \frac{j-1}{L_i}\right) ^2\Delta _{i,j}C, \quad c_i'=\sum _{j=1}^{L_i}\left( 1-\frac{j-1}{L_i}\right) ^2\Delta _{i,j}C&\end{aligned}$$
(22)
$$\begin{aligned}&\displaystyle \xi _i(X) = \sum _{j=1}^{L_i} \left( \frac{j-1}{L_i}\right) \Delta _{i,j}X, \quad (\xi _0=0)&\end{aligned}$$
(23)
$$\begin{aligned}&\displaystyle \xi '_i(X)=\sum _{j=1}^{L_i} \left( 1-\frac{j-1}{L_i}\right) \Delta _{i,j}X,\quad (\xi '_0=0)&\end{aligned}$$
(24)
$$\begin{aligned}&\displaystyle \kappa _{i}(X)=\sigma ^X_{i\Delta _n}\sum _{j=1}^{L_{i}} \left( \frac{j-1}{L_{i}}\right) \Delta _{i,j}W^X,\quad (\kappa _{0}=0)&\end{aligned}$$
(25)
$$\begin{aligned}&\displaystyle \kappa '_{i}(X)=\sigma ^X_{(i-1)\Delta _n}\sum _{j=1}^{L_{i}} \left( 1-\frac{j-1}{L_{i}}\right) \Delta _{i,j}W^X, \quad (\kappa '_{0}=0)&\end{aligned}$$
(26)
$$\begin{aligned}&\displaystyle \mu _i(X) := \xi _{i}(X)+\xi '_{i+1}(X), \quad \theta _{i}(X):=\kappa _{i}(X)+\kappa '_{i+1}(X), \quad {\mathcal {F}}_i:=\mathcal {F}_{i\Delta _n\vee 0}.&\end{aligned}$$
(27)
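The constants \(\alpha _i\) and \(\alpha '_i\) in (20) are simple finite sums; a sketch in exact arithmetic (hypothetical code, not from the paper) makes their behavior concrete — for example \(\alpha =1/8\) and \(\alpha '=5/8\) when \(L=2\), and both tend to \(1/3\) as \(L\rightarrow \infty \):

```python
from fractions import Fraction

def alpha(L):
    """alpha_i in (20): (1/L) * sum_{k=1}^{L} ((k-1)/L)^2."""
    return sum(Fraction(k - 1, L) ** 2 for k in range(1, L + 1)) / L

def alpha_prime(L):
    """alpha'_i in (20): (1/L) * sum_{k=1}^{L} (1 - (k-1)/L)^2."""
    return sum((1 - Fraction(k - 1, L)) ** 2 for k in range(1, L + 1)) / L

# e.g. alpha(2) == 1/8 and alpha_prime(2) == 5/8; both approach the
# Riemann integrals of x^2 and (1-x)^2 on [0, 1], i.e. 1/3, for large L
```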

Furthermore, by a standard localization procedure as presented in Barndorff-Nielsen et al. (2006) or Jacod (2012), we may without loss of generality impose the following assumption in addition to the assumptions of the theorems.

Assumption 4

\(\sigma ^X\) and \(\sigma ^Y\) are bounded.

Thus, to prove Theorem 1, it remains to prove that

$$\begin{aligned} \sum _{i=0}^{n-1}\frac{\mu _i(X)\mu _i(Y)}{\varpi _i}\xrightarrow {\mathbb {P}}\int _0^t\rho \sigma ^X_s\sigma ^Y_sds. \end{aligned}$$
(28)

We start from an auxiliary lemma.

Lemma 1

Under Assumptions 1, 2 and 4, we have

$$\begin{aligned} \sum _{i=0}^{n-1}\frac{\big |\mu _i(X)\mu _i(Y)-\theta _{i}(X)\theta _{i}(Y)\big |}{\varpi _i} \xrightarrow {\mathbb {P}} 0,\end{aligned}$$
(29)
$$\begin{aligned} \sqrt{n}\sum _{i=0}^{n-1}\frac{\big |\mu _i(X)\mu _i(Y)-\theta _{i}(X)\theta _{i}(Y)\big |}{\varpi _i} \xrightarrow {\mathbb {P}} 0. \end{aligned}$$
(30)

Proof of Lemma 1

By the Cauchy–Schwarz inequality, the Burkholder inequality, Itô's isometry and the elementary inequality

$$\begin{aligned} |x-y|\ge \big ||x|-|y|\big |, \end{aligned}$$

simple computations show that

$$\begin{aligned}&E\left[ \sum _{i=0}^{n-1}\frac{|\mu _i(X)\mu _i(Y)-\theta _{i}(X)\theta _{i}(Y)|}{\varpi _i}\right] \\&\qquad \le E\left[ \sum _{i=0}^{n-1}\frac{|\mu _i(X)-\theta _{i}(X)||\mu _{i}(Y)|+|\mu _{i}(Y)-\theta _i(Y)||\theta _i(X)|}{\varpi _i}\right] \\&\qquad =K\sum _{i=0}^{n-1}\left( \left( \int _{i\Delta _n}^{(i+1)\Delta _n}E[|\sigma ^X_s-\sigma ^X_{i\Delta _n}|^2]ds\Delta _n\right) ^{\frac{1}{2}}+ \left( \int _{i\Delta _n}^{(i+1)\Delta _n}E[|\sigma ^Y_s-\sigma ^Y_{i\Delta _n}|^2]ds\Delta _n\right) ^{\frac{1}{2}}\right) \\&\qquad \rightarrow 0, \end{aligned}$$

because \(\{\sigma _s, s>0\}\) is càdlàg. \(\square \)

Proof of Theorem 1

In view of Lemma 1, it is enough to prove

$$\begin{aligned} \sum _{i=0}^{n-1}\frac{\theta _i(X)\theta _i(Y)}{\varpi _i}\xrightarrow {\mathbb {P}} \int _0^t\rho \sigma ^X_s\sigma ^Y_sds. \end{aligned}$$
(31)

We further let \(\tilde{\theta }_i=\frac{\theta _i(X)\theta _i(Y)}{\varpi _i}\) and \(\tilde{\theta }'_i=E[\tilde{\theta }_i|\mathcal {{F}}_{i}].\) We first have

$$\begin{aligned} E\left[ (\tilde{\theta }_i)^2|\mathcal {F}_i \right] \le K\Delta _n^2, \end{aligned}$$
(32)

and in particular \(\tilde{\theta }'_i\le K\Delta _n\). Since \(E\left[ (\tilde{\theta }_i-\tilde{\theta }'_i)(\tilde{\theta }_j-\tilde{\theta }'_j)\right] =0\) when \(|i-j|\ge 2\), we have

$$\begin{aligned} E\left[ \left| \sum _{i=0}^{n-1}\tilde{\theta }_i-\sum _{i=0}^{n-1}\tilde{\theta }'_i\right| ^2\right]= & {} E\left[ \sum _{i=0}^{n-1}\sum _{j=0}^{n-1}(\tilde{\theta }_i-\tilde{\theta }'_i)(\tilde{\theta }_j-\tilde{\theta }'_j)\right] \end{aligned}$$
(33)
$$\begin{aligned}\le & {} K\sum _{i=0}^{n-1}E[(\tilde{\theta }_i-\tilde{\theta }'_i)^2]\le K\Delta _n\rightarrow 0, \end{aligned}$$
(34)

by the Burkholder–Davis–Gundy inequality. Thus, to prove the theorem, it suffices to show

$$\begin{aligned} \sum _{i=0}^{n-1}\tilde{\theta }'_i\xrightarrow {\mathbb {P}} \int _0^t\rho \sigma _s^X\sigma ^Y_sds. \end{aligned}$$
(35)

Observe that

$$\begin{aligned}&\left| \sum _{i=0}^{n-1}\tilde{\theta }'_i-\int _0^t\rho \sigma _s^X\sigma ^Y_sds\right| \end{aligned}$$
(36)
$$\begin{aligned}&\qquad =\left| \sum _{i=0}^{n-1}\rho \sigma ^X_{(i-1)\Delta _n}\sigma ^Y_{(i-1)\Delta _n}\Delta _n-\int _0^t\rho \sigma ^X_s\sigma _s^Yds\right| . \end{aligned}$$
(37)

The required result follows from the Riemann integrability. \(\square \)

Proof of Theorem 2

To derive the above central limit theorem, note that the summands are one-step correlated. One methodology for dealing with correlated variables is to construct martingale differences and then use a martingale central limit theorem; this is similar to the central limit theorem for bipower variation, as shown in Barndorff-Nielsen et al. (2006). Here, we apply the technique of “big blocks” and “small blocks”, a classical method in probability theory for proving central limit theorems for sums of correlated random variables. Its extension to stochastic processes has been developed in Jacod et al. (2009) and Vetter (2010). The “big blocks” (containing p terms each) are used to construct conditionally independent terms in the summation and eventually dominate the asymptotic behavior, whereas the “small blocks” (containing only one term each), which are asymptotically negligible, will be removed from the summation. The details follow. Given a positive integer p, we define

$$\begin{aligned}&\displaystyle a_i(p)=i(p+1)+1, \quad b_i(p)=(i+1)(p+1),&\\&\displaystyle A_i(p)=\{k\in N^+: a_i(p)\le k<b_i(p)\}, \quad B_i(p)=\{k\in N^+: b_i(p)\le k<a_{i+1}(p)\},&\\&\displaystyle i_n(p)=\max \{i: b_i(p)\le n-1\}=\lfloor \frac{n-1}{p+1}\rfloor -1,&\\&\displaystyle j_n(p)=b_{i_n(p)}(p).&\end{aligned}$$
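The index bookkeeping above can be sketched as follows (hypothetical code; for example, with \(p=2\) and \(n=20\) one gets \(i_n(p)=5\) and \(j_n(p)=18\)):

```python
def block_partition(n, p):
    """Big/small block index sets a_i, b_i, A_i, B_i from the proof of Theorem 2."""
    a = lambda i: i * (p + 1) + 1          # a_i(p) = i(p+1) + 1
    b = lambda i: (i + 1) * (p + 1)        # b_i(p) = (i+1)(p+1)
    i_n = (n - 1) // (p + 1) - 1           # largest i with b_i(p) <= n - 1
    j_n = b(i_n)
    big = [list(range(a(i), b(i))) for i in range(i_n + 1)]        # A_i: p indices
    small = [list(range(b(i), a(i + 1))) for i in range(i_n + 1)]  # B_i: 1 index
    return big, small, i_n, j_n
```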

That is, the \(i\)th “big block” consists of \(\{\mu _k: k\in A_i(p)\}\), whereas the \(i\)th “small block” contains \(\{\mu _k: k\in B_i(p)\}\). Because \((\mu _i)\) is a 1-dependent sequence, after removing the small blocks we obtain a (conditionally) independent sequence. Denote:

$$\begin{aligned} \hat{\mu }_k= & {} \frac{1}{\varpi _i}\left\{ \sum _{j=1}^{L_k}\left( 1-\frac{j-1}{L_k}\right) ^2\Delta _{k,j}X\Delta _{k,j}Y\right. \\&\left. +\sum _{j=1}^{L_{k+1}}\left( 1-\frac{j-1}{L_{k+1}}\right) ^2\Delta _{k+1,j}X\Delta _{k+1,j}Y-\left( c_k+c'_{k+1}\right) \right\} . \end{aligned}$$

We first collect all the terms in \(A_i(p)\), defined as

$$\begin{aligned} \varsigma _i(p,1)=\sum _{j=a_i(p)}^{b_i(p)-1}\hat{\mu }_j, \quad \varsigma '_i(p,1)=E\left[ \varsigma _i(p,1)|\mathcal {{F}}_{a_i(p)-1}\right] . \end{aligned}$$
(38)

and all the terms in \(B_i(p)\), defined as

$$\begin{aligned} \varsigma _i(p,2)=\sum _{j=b_i(p)}^{a_{i+1}(p)-1}\hat{\mu }_k=\hat{\mu }_{b_i(p)}, \quad \varsigma '_i(p,2)=E\left[ \varsigma _i(p,2)|\mathcal {{F}}_{b_i(p)-1}\right] . \end{aligned}$$
(39)

Notice that \(\varsigma _i(p,1)\) contains p summands (“big”), whereas \(\varsigma _i(p,2)\) contains only one summand (“small”); since we will eventually let \(p\rightarrow \infty \), the small blocks are asymptotically negligible. To realize this, we set

$$\begin{aligned}&\displaystyle M(p)=\sum _{j=0}^{i_n(p)}\left[ \varsigma _j(p,1)-\varsigma '_j(p,1)\right] , \quad M'(p)=\sum _{j=0}^{i_n(p)}\varsigma '_j(p,1);&\end{aligned}$$
(40)
$$\begin{aligned}&\displaystyle N(p)=\sum _{j=0}^{i_n(p)}\left[ \varsigma _j(p,2)-\varsigma '_j(p,2)\right] , \quad N'(p)=\sum _{j=0}^{i_n(p)}\varsigma '_j(p,2);&\end{aligned}$$
(41)
$$\begin{aligned}&\displaystyle C(p)=\sum _{j=j_n(p)}^{n-1}\hat{\mu }_{j}.&\end{aligned}$$
(42)

We then have the following decomposition:

$$\begin{aligned}&\sqrt{n}\left( RCV_t^{weight}-\int _0^t\rho \sigma _s^X\sigma _s^Yds\right) \nonumber \\&\quad =\sqrt{n}\left( M(p)+M'(p)+N(p)+N'(p)+C(p)+R(p)\right) , \end{aligned}$$
(43)

where \(R(p)\) consists of some “residuals” from the above expansions. Following steps similar to those in Jacod et al. (2009) (identity (5.14), Lemma 5.5 and Lemma 5.6), we obtain that the last five terms are asymptotically negligible as \(p\rightarrow \infty \) and \(n\rightarrow \infty \), i.e.,

$$\begin{aligned} \lim _{p\rightarrow \infty }\limsup _{n\rightarrow \infty }P\left( \sqrt{n}(|M'(p)|+|N(p)|+|N'(p)|+|C(p)|+|R(p)|)>\delta \right) =0,\quad \end{aligned}$$
(44)

for any \(\delta >0\). We now establish the central limit theorem for M(p) by an auxiliary lemma. \(\square \)

Lemma 2

Under the same assumptions and notation as in Theorem 2, together with Assumption 4, we have for any fixed p,

$$\begin{aligned} \sqrt{n}M(p)\xrightarrow {{\mathcal {L}}-(s)}\int _0^t\gamma (p)_sdB_s, \end{aligned}$$
(45)

where,

$$\begin{aligned} (\gamma (p)_s)^2=\frac{1}{p+1}\left( p(\rho ^2+1)+\frac{2(p-1)}{(\alpha +\alpha ')^2}\varrho \right) (\sigma ^X_s\sigma _s^Y)^2, \end{aligned}$$
(46)

\(\varrho \) is defined in (60) later.

Proof of Lemma 2

Since \(L_i\equiv L\), we have

$$\begin{aligned} \alpha _i=\alpha:= & {} \frac{1}{L}\sum _{k=1}^{L} \Big (\frac{k-1}{L}\Big )^2;\end{aligned}$$
(47)
$$\begin{aligned} \alpha '_i=\alpha ':= & {} \frac{1}{L}\sum _{k=1}^{L}\Big (1-\frac{k-1}{L}\Big )^2. \end{aligned}$$
(48)

By a martingale central limit theorem argument as presented in Theorem IX.7.28 of Jacod and Shiryayev (2003), we need to show the following conditions:

$$\begin{aligned}&n\sum _{i=0}^{i_n(p)}E\left[ \varsigma ^2_i(p,1)|\mathcal {F}_{a_i(p)-1}\right] \xrightarrow {\mathbb {P}}\int _0^t(\gamma (p))^2ds,\end{aligned}$$
(49)
$$\begin{aligned}&n^2\sum _{i=0}^{i_n(p)}E\left[ \varsigma ^4_i(p,1)|\mathcal {F}_{a_i(p)-1}\right] \xrightarrow {\mathbb {P}} 0,\end{aligned}$$
(50)
$$\begin{aligned}&\sqrt{n}\sum _{i=0}^{i_n(p)}E\left[ \varsigma _i(p,1)\Delta W^X(p)_i|\mathcal {F}_{a_i(p)-1}\right] \xrightarrow {\mathbb {P}}0,\end{aligned}$$
(51)
$$\begin{aligned}&\sqrt{n}\sum _{i=0}^{i_n(p)}E\left[ \varsigma _i(p,1)\Delta W^Y(p)_i|\mathcal {F}_{a_i(p)-1}\right] \xrightarrow {\mathbb {P}}0,\end{aligned}$$
(52)
$$\begin{aligned}&\sqrt{n}\sum _{i=0}^{i_n(p)}E\left[ \varsigma _i(p,1)\Delta N(p)_i|\mathcal {F}_{a_i(p)-1}\right] \xrightarrow {\mathbb {P}}0, \end{aligned}$$
(53)

where \(\Delta V(p)_i=V_{s_{b_i(p)}}-V_{s_{a_i(p)-1}}\) for any process \(V\), and Eq. (53) holds for any bounded martingale \(N\) which is orthogonal to both \(W^X\) and \(W^Y\), as well as for \(N=W^X\) and \(N=W^Y\).

To show these convergences, we use the following approximations:

$$\begin{aligned} \kappa _{i,l}= & {} \sigma ^X_{l\Delta _n}\sigma ^Y_{l\Delta _n}\sum _{j=1}^{L_{i}}\left( \frac{j-1}{L_{i}}\right) ^2\Delta _{i,j}W^X\Delta _{i,j}W^Y, \quad (\kappa _{0}=0)\end{aligned}$$
(54)
$$\begin{aligned} \kappa '_{i,l}= & {} \sigma ^X_{l\Delta _n}\sigma ^Y_{l\Delta _n}\sum _{j=1}^{L_{i}}\left( 1-\frac{j-1}{L_{i}}\right) ^2\Delta _{i,j}W^X\Delta _{i,j}W^Y, \quad (\kappa '_{0}=0), \end{aligned}$$
(55)

for \(0\le i\le n-1\) and integer l. Hence, we have the following new notations:

$$\begin{aligned} \theta _{i,l}:=\frac{1}{\varpi _i}\left\{ \kappa _{i,l}+\kappa '_{i+1,l}-(c_i+c'_{i+1})\right\} ; \quad \tilde{\theta }_{i,l}:=\theta _{i,l}-E\left[ \theta _{i,l}|\mathcal {{F}}_{l}\right] . \end{aligned}$$

and the following approximations are used for different kinds of “blocks”:

$$\begin{aligned} \bar{\mu }_k= \left\{ \begin{array}{c@{\quad }c} \tilde{\theta }_{k,a_i(p)-1},&{} k\in A_i(p)\\ \tilde{\theta }_{k,b_i(p)-1},&{} k\in B_i(p)\\ \tilde{\theta }_{k,j_n(p)-1},&{} k\ge j_n(p) \end{array} \right. . \end{aligned}$$
(56)

Using the approximations of Jacod et al. (2009) (Lemma 5.2 and Lemma 5.3), we only need to show (49)–(53) by redefining

$$\begin{aligned} \varsigma _i(p,1)=\sum _{j=a_i(p)}^{b_i(p)-1}\bar{\mu }_j, \quad \varsigma '_i(p,1)=E\left[ \varsigma _i(p,1)|\mathcal {{F}}_{a_i(p)-1}\right] . \end{aligned}$$
(57)

A direct calculation shows (50), and it is easy to see that

$$\begin{aligned} E\left[ \varsigma _i(p,1)\Delta W^X(p)_i|\mathcal {F}_{s_{a_i(p)}}\right] =0, \quad E\left[ \varsigma _i(p,1)\Delta W^Y(p)_i|\mathcal {F}_{s_{a_i(p)}}\right] =0. \end{aligned}$$
(58)

The proof of (53) is the same as in Jacod et al. (2009) (Lemma 5.7) or Barndorff-Nielsen et al. (2006). It hence remains to prove Eq. (49). By (47) and (48), we have

$$\begin{aligned} \alpha _{i}+\alpha '_{i}\equiv \alpha +\alpha '. \end{aligned}$$
(59)

Hence, recalling (56), when \(k,r\in A_i(p)\), we have

$$\begin{aligned}&E\left[ \bar{\mu }_{k}^2|\mathcal {F}_{a_i(p)-1}\right] =\left( \sigma ^X_{s_{a_i(p)-1}}\sigma ^Y_{s_{a_i(p)-1}}\right) ^2\left( \rho ^2+1\right) \Delta ^2, \\&E\left[ \bar{\mu }_k\bar{\mu }_{r}|\mathcal {F}_{a_i(p)-1}\right] = \left\{ \begin{array}{l@{\quad }l} 0, &{} |k-r|>1 \\ \frac{(\sigma ^X_{s_{a_i(p)-1}}\sigma ^Y_{s_{a_i(p)-1}})^2}{(\alpha +\alpha ')^2}\big (\rho ^2(A+B)+A-\rho ^2\alpha \alpha '\big )\Delta ^2, &{} |k-r|=1 \end{array} \right. \end{aligned}$$

where,

$$\begin{aligned} \left\{ \begin{array}{l} A=\frac{1}{L^6}\Big (\sum _{k=1}^L\big (k-1\big )\big (L-k+1\big )\Big )^2,\\ B=\frac{1}{L^6}\sum _{k=1}^L\big (k-1\big )^2\cdot \sum _{j=1}^L\big (L-j+1\big )^2=\alpha \alpha ',\\ \end{array} \right. \end{aligned}$$

Let

$$\begin{aligned} \varrho =\rho ^2(A+B)+A-\rho ^2\alpha \alpha '=(\rho ^2+1)A. \end{aligned}$$
(60)
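The identity \(\varrho =(\rho ^2+1)A\) follows because \(B=\alpha \alpha '\), so the \(\rho ^2 B\) and \(-\rho ^2\alpha \alpha '\) terms cancel. A sketch in exact arithmetic (hypothetical code, not from the paper) confirms this for small \(L\):

```python
from fractions import Fraction

def A_const(L):
    """A in the proof: (1/L^6) * (sum_{k=1}^L (k-1)(L-k+1))^2."""
    s = sum((k - 1) * (L - k + 1) for k in range(1, L + 1))
    return Fraction(s * s, L ** 6)

def B_const(L):
    """B: (1/L^6) * sum_k (k-1)^2 * sum_j (L-j+1)^2, which equals alpha * alpha'."""
    s1 = sum((k - 1) ** 2 for k in range(1, L + 1))
    s2 = sum((L - j + 1) ** 2 for j in range(1, L + 1))
    return Fraction(s1 * s2, L ** 6)

def varrho(rho, L):
    """varrho in (60): (rho^2 + 1) * A."""
    return (rho ** 2 + 1) * A_const(L)

# e.g. L = 2 gives A = 1/64 and B = 5/64, and the long form
# rho^2 (A + B) + A - rho^2 * alpha * alpha' collapses to (rho^2 + 1) A
```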

Therefore, we obtain

$$\begin{aligned}&E\left[ \varsigma ^2_i(p,1)|\mathcal {F}_{s_{a_i(p)-1}}\right] =E\left[ \left( \sum _{k=a_i(p)}^{b_i(p)-1}\bar{\mu }_{k}\right) ^2\big |\mathcal {F}_{s_{a_i(p)-1}}\right] \\&\quad =E\left[ \sum _{k,r=a_i(p)}^{b_i(p)-1}\bar{\mu }_{k}\bar{\mu }_{r}\big |\mathcal {F}_{s_{a_i(p)-1}}\right] =\sum _{k,r=a_i(p)}^{b_i(p)-1}E\left[ \bar{\mu }_{k}\bar{\mu }_{r}\big |\mathcal {F}_{s_{a_i(p)-1}}\right] \\&\quad =\sum _{k=a_i(p)}^{b_i(p)-1}E\left[ \bar{\mu }_{k}^2|\mathcal {F}_{s_{a_i(p)-1}}\right] +\sum _{k\ne r, k,r=a_i(p)}^{b_i(p)-1}E\left[ \bar{\mu }_{k}\bar{\mu }_{r}\big |\mathcal {F}_{s_{a_i(p)-1}}\right] \\&\quad =p\left( \sigma ^X_{s_{a_i(p)-1}}\sigma ^Y_{s_{a_i(p)-1}}\right) ^2\left( \rho ^2+1\right) \Delta ^2 +2(p-1)\frac{(\sigma ^X_{s_{a_i(p)-1}}\sigma ^Y_{s_{a_i(p)-1}})^2}{(\alpha +\alpha ')^2}\varrho \Delta ^2\\&\quad =\left( p(\rho ^2+1)+\frac{2(p-1)}{(\alpha +\alpha ')^2}\varrho \right) (\sigma ^X_{s_{a_i(p)-1}}\sigma ^Y_{s_{a_i(p)-1}})^2\Delta ^2. \end{aligned}$$

Hence by Riemann integrability, we obtain

$$\begin{aligned} n\sum _{i=0}^{i_n(p)}E\left[ \varsigma ^2_i(p,1)|\mathcal {F}_{s_{a_i(p)}}\right] \rightarrow ^P\int _0^t(\gamma (p)_s)^2ds, \end{aligned}$$
(61)

where,

$$\begin{aligned} (\gamma (p)_s)^2:=\frac{1}{p+1}\left( p(\rho ^2+1)+\frac{2(p-1)}{(\alpha +\alpha ')^2}\varrho \right) (\sigma ^X_s\sigma _s^Y)^2. \end{aligned}$$
(62)

\(\square \)

We now return to the proof of Theorem 2. In view of

$$\begin{aligned} (\gamma (p)_s)^2\rightarrow \gamma _s^2, \quad \text {when} \quad p\rightarrow \infty , \end{aligned}$$

we obtain the required result of Theorem 2.\(\square \)

Proof of Theorem 4 (sketch)

For any process Z, let

$$\begin{aligned} \overline{\xi }_{k}(Z)=\sum _{j=1}^{k_n-1}g\left( \frac{j}{k_n}\right) \xi _{k+j-1}(Z), \quad \overline{\xi }'_k(Z)=\sum _{j=1}^{k_n-1}g\left( \frac{j}{k_n}\right) \xi '_{k+j-1}(Z). \end{aligned}$$
(63)

Then we have \(\Delta _{i,k_n}X=\overline{\xi }_{i}(X)+\overline{\xi }'_{i+1}(X)\). Further let \(\overline{\overline{\xi }}_{i}(Z)=\sqrt{k_n}\left( \overline{\xi }_{i}(Z)+\overline{\xi }'_{i+1}(Z)\right) \). Thus

$$\begin{aligned}&\frac{1}{k_n}\sum _{i=0}^{n-k_n}\Delta _{i,k_n}X(\epsilon )\Delta _{i,k_n}Y(\delta )=\frac{1}{n}\sum _{i=0}^{n-k_n}\left[ \overline{\overline{\xi }}_{i}(X)+\overline{\overline{\xi }}_{i}(\epsilon )\right] \left[ \overline{\overline{\xi }}_{i}(Y)+\overline{\overline{\xi }}_{i}(\delta )\right] \nonumber \\&\quad =\frac{1}{n}\sum _{i=0}^{n-k_n}\overline{\overline{\xi }}_{i}(X)\overline{\overline{\xi }}_{i}(Y)+\frac{1}{n}\sum _{i=0}^{n-k_n}\overline{\overline{\xi }}_{i}(X)\overline{\overline{\xi }}_{i}(\delta ) +\frac{1}{n}\sum _{i=0}^{n-k_n}\overline{\overline{\xi }}_{i}(\epsilon )\overline{\overline{\xi }}_{i}(Y)+\frac{1}{n}\sum _{i=0}^{n-k_n}\overline{\overline{\xi }}_{i}(\epsilon )\overline{\overline{\xi }}_{i}(\delta )\nonumber \\&\quad =:I_{1n}+I_{2n}+I_{3n}+I_{4n}. \end{aligned}$$
(64)

Routine techniques can show that

$$\begin{aligned} I_{2n}\xrightarrow {P}0, \qquad I_{3n}\xrightarrow {P}0. \end{aligned}$$
(65)

First, using approximations similar to those in the previous theorem, some computation yields

$$\begin{aligned} \left\{ \begin{array}{l} E\left[ \overline{\xi }_{i}(X)\overline{\xi }_{i}(Y)|{\mathcal {F}}_{s_i}\right] =k_n\bar{g}(2)\left( \frac{\Delta _n}{L}\right) \rho \sigma ^X_{s_i}\sigma ^Y_{s_i}\sum _{k=1}^L\left( \frac{k-1}{L}\right) ^2+e_{1n};\\ E\left[ \overline{\xi }_{i}(X)\overline{\xi }'_{i+1}(Y)|{\mathcal {F}}_{s_i}\right] =k_n\bar{g}(2)\left( \frac{\Delta _n}{L}\right) \rho \sigma ^X_{s_i}\sigma ^Y_{s_i}\sum _{k=1}^L\left( \frac{k-1}{L}\right) \left( 1-\frac{k-1}{L}\right) +e_{2n};\\ E\left[ \overline{\xi }'_{i+1}(X)\overline{\xi }_{i}(Y)|{\mathcal {F}}_{s_i}\right] =k_n\bar{g}(2)\left( \frac{\Delta _n}{L}\right) \rho \sigma ^X_{s_i}\sigma ^Y_{s_i}\sum _{k=1}^L\left( \frac{k-1}{L}\right) \left( 1-\frac{k-1}{L}\right) +e_{3n};\\ E\left[ \overline{\xi }_{i+1}'(X)\overline{\xi }_{i+1}(Y)|{\mathcal {F}}_{s_i}\right] =k_n\bar{g}(2)\left( \frac{\Delta _n}{L}\right) \rho \sigma ^X_{s_i}\sigma ^Y_{s_i}\sum _{k=1}^L\left( 1-\frac{k-1}{L}\right) ^2+e_{4n}. \end{array} \right. \end{aligned}$$

Notice that \(\frac{1}{L}\sum _{k=1}^L\left[ \left( \frac{k-1}{L}\right) +\left( 1-\frac{k-1}{L}\right) \right] ^2=1\), thus

$$\begin{aligned} E\left[ \overline{\overline{\xi }}_{i}(X)\overline{\overline{\xi }}_{i}(Y)|{\mathcal {F}}_{s_i}\right] =\bar{g}(2)\rho \sigma ^X_{s_i}\sigma ^Y_{s_i}+e_n. \end{aligned}$$
(66)

Following the proof of Lemma 5.4 in Jacod et al. (2009), one can show that \(\frac{1}{n}\sum _{i=0}^{n-k_n}e_n\xrightarrow {P} 0\). Hence

$$\begin{aligned} I_{1n}\xrightarrow {P}\bar{g}(2)\int _0^t\rho \sigma ^X_s\sigma ^Y_sds. \end{aligned}$$
(67)
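The constant \(\bar{g}(2)\) appearing in the limit of \(I_{1n}\) is the Riemann-sum analogue of \(\int _0^1 g^2(x)dx\). For the common choice \(g(x)=\min (x,1-x)\) — an assumption for illustration, since the paper only requires a suitable smooth positive \(g\) on [0, 1] — this limit is \(1/12\):

```python
def g(x):
    """A common pre-averaging weight with g(0) = g(1) = 0 (an assumed choice)."""
    return min(x, 1 - x)

def g_bar_2(k_n):
    """Riemann-sum version of g-bar(2) = (1/k_n) * sum_j g(j/k_n)^2 -> int_0^1 g^2."""
    return sum(g(j / k_n) ** 2 for j in range(1, k_n)) / k_n

# for g(x) = min(x, 1 - x), the integral of g^2 over [0, 1] equals 1/12
```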

Second, note that

$$\begin{aligned} \frac{1}{n}\sum _{i=0}^{n-k_n}\overline{\overline{\xi }}_{i}(\epsilon )\overline{\overline{\xi }}_{i}(\delta )= & {} \frac{ k_n}{n} \sum _{i=0}^{n-k_n}\Bigg \{\Big (\frac{1}{L}\sum _{j=0}^{k_n-1}\Big (g\Big (\frac{j}{k_n}\Big )-g\Big (\frac{j+1}{k_n}\Big )\Big )\sum _{k=1}^L\epsilon _{N_{i+j-1}+k}\Big )\end{aligned}$$
(68)
$$\begin{aligned}&\times \Big (\frac{1}{L}\sum _{j=0}^{k_n-1}\Big (g\Big (\frac{j}{k_n}\Big )-g\Big (\frac{j+1}{k_n}\Big )\Big )\sum _{k=1}^L\delta _{N_{i+j-1}+k}\Big )\Bigg \}. \end{aligned}$$
(69)

Recalling \(k_n=\frac{1}{\sqrt{\Delta _n}}\) and \(n=\lfloor \frac{t}{\Delta _n}\rfloor \), by the law of large numbers,

$$\begin{aligned} I_{4n}\xrightarrow {P}\frac{\bar{g'}(2)\varphi t}{L}. \end{aligned}$$
(70)

Finally, observe that

$$\begin{aligned}&\frac{1}{2n}\sum _{i=0}^{n-1}\left( \overline{X(\epsilon )}_{s_{i+1}}-\overline{X(\epsilon )}_{s_i})(\overline{Y(\delta )}_{s_{i+1}}-\overline{Y(\delta )}_{s_i}\right) \nonumber \\&\quad =\frac{1}{2n}\sum _{i=0}^{n-1}\left( \bar{X}_{s_{i+1}}-\bar{X}_{s_{i}}\right) \left( \bar{Y}_{s_{i+1}}-\bar{Y}_{s_{i}}\right) +\frac{1}{2n}\sum _{i=0}^{n-1}\left( \bar{X}_{s_{i+1}}-\bar{X}_{s_{i}}\right) \left( \bar{\delta }_{s_{i+1}}-\bar{\delta }_{s_{i}}\right) \nonumber \\&\qquad +\frac{1}{2n}\sum _{i=0}^{n-1}\left( \bar{\epsilon }_{s_{i+1}}-\bar{\epsilon }_{s_{i}}\right) \left( \bar{Y}_{s_{i+1}}-\bar{Y}_{s_{i}}\right) +\frac{1}{2n}\sum _{i=0}^{n-1}\left( \bar{\epsilon }_{s_{i+1}}-\bar{\epsilon }_{s_{i}}\right) \left( \bar{\delta }_{s_{i+1}}-\bar{\delta }_{s_{i}}\right) \nonumber \\&\quad = J_{1n}+J_{2n}+J_{3n}+J_{4n}. \end{aligned}$$
(71)

It is easy to show that \(J_{kn}\xrightarrow {P}0\) for \(k=1,2,3\) under Assumption 3. Moreover,

$$\begin{aligned} J_{4n}= & {} \frac{1}{2n}\sum _{i=0}^{n-1}\left( \bar{\epsilon }_{s_{i+1}}-\bar{\epsilon }_{s_{i}}\right) \left( \bar{\delta }_{s_{i+1}}-\bar{\delta }_{s_{i}}\right) \\= & {} \frac{1}{2n}\sum _{i=1}^{n-1}\bar{\epsilon }_{s_{i+1}}\bar{\delta }_{s_{i+1}}+\frac{1}{2n}\sum _{i=1}^{n-1}\bar{\epsilon }_{s_{i+1}}\bar{\delta }_{s_{i}} +\frac{1}{2n}\sum _{i=1}^{n-1}\bar{\epsilon }_{s_{i}}\bar{\delta }_{s_{i+1}}+\frac{1}{2n}\sum _{i=1}^{n-1}\bar{\epsilon }_{s_{i}}\bar{\delta }_{s_{i}}, \end{aligned}$$

where \(\bar{\epsilon }_{s_i}=\frac{1}{L}\sum _{k=1}^L\epsilon _{t^N_{N_{i-1}+k}}\) with \(\bar{\epsilon }_0=0\). Since \(E[\bar{\epsilon }_{s_i}\bar{\delta }_{s_i}]=\frac{\varphi }{L}\) and \(E[\bar{\epsilon }_{s_i}\bar{\delta }_{s_{i-1}}]=0\) by Assumption 3, we get

$$\begin{aligned} J_{4n}\xrightarrow {P}\frac{\varphi }{L}, \end{aligned}$$
(72)

by the law of large numbers. Combining (64)–(67) and (70)–(72), we obtain the result of Theorem 4. \(\square \)
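The limit \(J_{4n}\xrightarrow {P}\varphi /L\) reflects the variance reduction from averaging the \(L\) noise terms within each interval. A quick Monte Carlo sketch (hypothetical code; it takes \(\delta =\epsilon \) as a simplification so that \(E[\epsilon \delta ]=\varphi \)):

```python
import numpy as np

def averaged_noise_moment(phi=1.0, L=10, m=200_000, seed=2):
    """Empirical E[eps_bar * delta_bar] when E[eps * delta] = phi; should be near phi / L."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((m, L)) * np.sqrt(phi)  # m groups of L noise terms
    delta = eps  # simplification: delta = eps, so E[eps * delta] = phi
    return float(np.mean(eps.mean(axis=1) * delta.mean(axis=1)))

moment = averaged_noise_moment()  # near phi / L = 0.1
```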


Cite this article

Liu, Z. Estimating integrated co-volatility with partially miss-ordered high frequency data. Stat Inference Stoch Process 19, 175–197 (2016). https://doi.org/10.1007/s11203-015-9124-y
