1 Introduction

In financial econometrics, several statistical methods have been proposed to estimate the integrated volatility and co-volatility from high-frequency data. The integrated volatility is a type of Lévy or Brownian functional, and the realized volatility (RV) estimator has often been used when there is no market microstructure noise and the underlying continuous-time process is directly observed. It is known, however, that the RV estimator is quite sensitive to the presence of market microstructure noise in high-frequency financial data, and several statistical methods have therefore been proposed to estimate the integrated volatility and co-volatility in its presence. See Aït-Sahalia and Jacod (2014) for details of recent developments in financial econometrics. In particular, Malliavin and Mancino (2002, 2009) developed the Fourier series method, which is related to the SIML (separating information maximum likelihood) estimation by Kunitomo et al. (2018) used in this paper. See Gloter and Jacod (2001), Bibinger et al. (2014), Christensen et al. (2010), and Mancino et al. (2017) for related topics.

In this paper, we develop a new statistical way of detecting latent factors of the quadratic variation of Itô semimartingales from a set of discrete observations when market microstructure noise is present. We use the high-frequency asymptotic scheme, now standard in financial econometrics, in which the length of the observation intervals shrinks as the number of observations grows. In finance, it is important to find a small number of latent factors among many financial prices such as stocks, bonds and other financial products. A common practice is to compute returns from price data and then apply statistical tools such as principal component analysis, factor analysis and other multivariate techniques. However, standard statistical analysis was developed for independent (or stationary) observations, and most financial prices are neither independent nor stationary. In addition, market microstructure noise, that is, measurement error in prices, creates a further statistical problem when high-frequency financial data are used. Although multivariate methods such as principal components and factor models have been applied to financial data, they do not necessarily give the right answers in the presence of market microstructure noise in high-frequency data, and the standard statistical procedures can be misleading for high-frequency financial data.

To develop a way to determine the number of latent factors of the quadratic covariation, or the integrated volatility, of asset prices, we utilize the SIML (separating information maximum likelihood) method, which was originally developed by Kunitomo and Sato (2013, 2021) and Kunitomo et al. (2018). In high-frequency financial data, it is important to account for the possible jumps and the market microstructure noise present in financial markets. We explore the estimation problem of the variance-covariance matrix of the underlying Itô semimartingales, that is, the quadratic variation (QV). We show that it is possible to derive the asymptotic properties of the characteristic roots and vectors of the estimated QV and then develop test statistics for the rank condition. Our estimators of the characteristic roots and vectors are consistent and have desirable asymptotic properties, and the resulting test statistics based on them can be used to detect the number of factors of the QV. We also give a real data analysis of the Tokyo stock market as an illustration.

There exist some methods for the analysis of volatility structure under high-frequency observations [e.g., Aït-Sahalia and Xiu (2019), Fissler and Podolskij (2017) and Jacod and Podolskij (2013)]. However, these papers aim at detecting volatility structures that differ from the formulation investigated in the present study. Aït-Sahalia and Xiu (2019) investigated the estimation of the number of factors in high-dimensional factor models, which is related to our setting, but their estimation method is different from ours. Jacod and Podolskij (2013) developed a method to detect the “maximal” rank of the volatility process within the fixed-time-interval framework, and Fissler and Podolskij (2017) extended their method to noisy high-frequency data. Since our goal is to detect the rank of the “integrated” volatility (or quadratic variation) directly, the main purpose of these studies is different from ours. Our method is simple, and it is not difficult to implement even when the dimension is not small and there are jump terms as well as microstructure noise.

In Sect. 2, we define the Itô semimartingale and quadratic variation, which is an extension of the integrated volatility with jump parts. Then, we define the SIML estimation and state its asymptotic properties for Itô semimartingales. In Sect. 3, we consider the characteristic equation of the estimated (conditional) variance-covariance matrix and give theoretical results on the asymptotic properties of the associated characteristic roots and vectors when the true (or efficient price) process is an Itô semimartingale and there are market microstructure noises. Then, we propose test statistics for the rank condition of the quadratic variation, which can be applied to detect the number of latent factors of integrated volatilities for continuous diffusion processes as special cases. In Sect. 4, we give some results on the Monte Carlo simulations of our procedure, and in Sect. 5, we illustrate an empirical data analysis of the Tokyo stock market. Then, in Sect. 6, we give concluding remarks. Some mathematical details are given in the Appendix.

2 Estimation of quadratic variation under market microstructure noise

We consider a continuous-time financial market with a fixed terminal time T, and we set \(T=1\) without loss of generality. The underlying log-price is a p-dimensional Itô semimartingale, but we observe the log-price process only at high-frequency discrete times, and the observations are contaminated by market microstructure noise. We define the filtered probability space on which the prices follow the Itô semimartingale in the presence of market microstructure noise.

Let the first filtered probability space be \((\Omega ^{(0)}, {\mathcal {F}}^{(0)}, ({\mathcal {F}}_t^{(0)})_{t\ge 0}, P^{(0)})\) on which the p-dimensional Itô semimartingale \({\mathbf{X}} = ({\mathbf{X}}(t))_{0\le t\le 1}\) is defined. We adopt the construction of the whole filtered probability space \((\Omega , {\mathcal {F}}, ({\mathcal {F}}_{t})_{t \in [0,1]},P)\), where both the process \({\mathbf{X}}\) and the noise are defined.

We set \((\Omega ^{(1)}, {\mathcal {F}}^{(1)}, ({\mathcal {F}}_{t}^{(1)})_{t \in [0,1]},P^{(1)})\) as the second filtered probability space, where \(\Omega ^{(1)}\) is the set of total events of micromarket noise, \({\mathcal {F}}^{(1)}\) is the Borel \(\sigma \)-field on \(\Omega ^{(1)}\), and \(P^{(1)}\) is the probability measure. The market microstructure noise process \({\mathbf{v}} = ({\mathbf{v}}(t))_{t \in [0,1]}\) is defined as the process on \((\Omega ^{(1)},{\mathcal {F}}^{(1)},({\mathcal {F}}_{t}^{(1)})_{t \in [0,1]},P^{(1)})\) with the filtration \({\mathcal {F}}^{(1)}_{t} = \sigma ({\mathbf{v}}(s): s \le t)\) for \(0 \le t \le 1\). We use the filtered space \((\Omega , {\mathcal {F}}, ({\mathcal {F}}_{t})_{t \in [0,1]},P)\), where \(\Omega = \Omega ^{(0)}\times \Omega ^{(1)}\), \({\mathcal {F}} = {\mathcal {F}}^{(0)} \times {\mathcal {F}}^{(1)}\), \({\mathcal {F}}_{t} = {\mathcal {F}}_{t}^{(0)} \otimes {\mathcal {F}}^{(1)}_{t}\) and \(P = P^{(0)} \times P^{(1)}\).

Among continuous-time stochastic processes, the class of Itô semimartingales is a fundamental one, and it includes diffusion processes and jump processes as special cases. In applications to high-frequency financial data, the role of market microstructure noise is known to be important. However, it is not straightforward to estimate the volatilities and co-volatilities, or the quadratic variation, in the general case in the presence of market microstructure noise.

2.1 Itô semimartingale and quadratic variation

In this section, we describe the statistical model of the present paper. Let \(\mathbf{Y}(t_i^n)=(Y_{g}(t_i^n))\) (\(g=1,\ldots ,p\)) be the (p-dimensional) observed (log-)prices at \(t_i^n\in [0,1] \) for \(i=1,\ldots ,n ,\) which satisfy

$$\begin{aligned} \mathbf{Y}(t_i^n)={\mathbf{X}}(t_i^n)+{\mathbf{v}}(t_i^n)\;\;(i=1,\ldots ,n)\;, \end{aligned}$$

where \({\mathbf{X}}(t_i^n)=(X_{g}(t_i^n))\) is the \(p\times 1\) hidden stochastic vector process and \({\mathbf{v}}(t_i^n)\;(=(v_{g}(t_i^n)))\) is a sequence of (mutually) independently and identically distributed market microstructure noises with \(\mathcal{E}[{\mathbf{v}}(t_i^n)]=\mathbf{0} \) and \(\mathcal{E}[ {\mathbf{v}}(t_i^n){\mathbf{v}}(t_i^n)^{'}] ={{\varvec{\Sigma }}_v}\;(>0\), a positive definite matrix). We set the fixed initial condition \({\mathbf{X}}(t_0^n)\) with \(\mathbf{Y}(t_0^n)={\mathbf{X}}(t_0^n)\), take \(t_i^n-t_{i-1}^n=1/n\) (equidistant intervals), and consider the situation \(n\rightarrow \infty \).

We assume that these market microstructure noises are independent of the p-dimensional continuous-time stochastic process \({\mathbf{X}}(t) ,\) which follows

$$\begin{aligned} {\mathbf{X}}(t)= & {} {\mathbf{X}}(0) +\int _0^t{{\varvec{b}}}(s) \mathrm{{d}}s+\int _0^t {{\varvec{\sigma }}}(s) \mathrm{{d}}{} \mathbf{W}(s) + \int _{0}^{t}\int _{\Vert \mathbf{u}\Vert <1} {{\varvec{\Delta }}} (s,\mathbf{u}) ({{\varvec{\mu }}} - {{\varvec{\nu }}})(\mathrm{{d}}s,\mathrm{{d}}{} \mathbf{u}) \nonumber \\&+ \int _{0}^{t}\int _{\Vert \mathbf{u}\Vert \ge 1} {{\varvec{\Delta }}} (s,\mathbf{u}) {{\varvec{\mu }}} (\mathrm{{d}}s,\mathrm{{d}}{} \mathbf{u})\;, \end{aligned}$$

where \({{\varvec{b}}}(s)\) and \({{\varvec{\sigma }}}(s)\) are the p-dimensional adapted drift process and the \(p\times q_1\;(1\le q_1\le p)\) instantaneous predictable volatility process, \(\mathbf{W}(s)=(W_{g}(s))\) is a \(q_1\times 1\) vector of standard Brownian motions, \({{\varvec{\Delta }}} (\omega , s, \mathbf{u})\) is a \(\mathbf{R}^{p}\)-valued predictable function on \(\Omega \times [0,\infty ) \times \mathbf{R}^{q_{2}}\) (\(q_{2} \le p\)), \({{\varvec{\mu }}} (\cdot )\) is a Poisson random measure on \([0,\infty ) \times \mathbf{R}^{q_{2}}\), which is independent of \(\mathbf{W}(s)\), and \({{\varvec{\nu }}} (\mathrm{{d}}s,\mathrm{{d}}\mathbf{u}) = \mathrm{{d}}s\otimes \lambda (\mathrm{{d}}\mathbf{u})\) is the predictable compensator or intensity measure of \({\varvec{\mu }}\) with a \(\sigma \)-finite measure \(\lambda \) on \((\mathbf{R}^{q_{2}}, \mathcal {B}^{q_{2}})\). We partially follow the notation used in Jacod and Protter (2012). The jump terms are denoted by \(\Delta {\mathbf{X}}(s)=(\Delta X_{g}(s))\) (\(\Delta X_{g}(s)=X_{g}(s)-X_{g}(s-)\), \(X_{g}(s-)=\lim _{u\uparrow s}X_{g}(u)\) at any \(s\in [0,1]\)), and \(\Vert \cdot \Vert \) is the Euclidean norm on \(\mathbf{R}^p\). We use the notation \({{\varvec{c}}}(s) = {{\varvec{\sigma }}}(s){{\varvec{\sigma }}}^{'}(s) = ({{\varvec{c}}}_{gh}(s))\) (a \(p\times p\) matrix) and write the \(p\times 1\) vectors \(\mathbf{y}_i=\mathbf{Y}(t_i^n),\) \({\mathbf{x}}_i={\mathbf{X}}(t_i^n),\) and \({\mathbf{v}}_i={\mathbf{v}}(t_i^n)\;(i=1,\ldots ,n)\).
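As a concrete illustration of the observation scheme (1)–(2), the following sketch simulates a p-dimensional jump diffusion observed with additive i.i.d. noise on the grid \(t_i^n=i/n\). The constant volatility matrix, Gaussian compound-Poisson jumps, and Gaussian noise are our illustrative assumptions (special cases of the model above), not part of the paper's specification.

```python
import numpy as np

def simulate_noisy_prices(n=20000, p=3, noise_sd=0.005,
                          jump_rate=5.0, jump_sd=0.01, seed=0):
    """Simulate y_i = X(t_i^n) + v(t_i^n) on the grid t_i^n = i/n.

    X is a p-dimensional jump diffusion with a constant volatility
    matrix (a special case of the Ito semimartingale above) plus
    compound-Poisson jumps; v is i.i.d. Gaussian noise with
    Sigma_v = noise_sd^2 * I_p.  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n
    sigma = 0.3 * np.eye(p) + 0.05 * np.ones((p, p))   # constant sigma(s)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, p))
    X = np.cumsum(dW @ sigma.T, axis=0)                # Brownian part
    for _ in range(rng.poisson(jump_rate)):            # jump part
        X[rng.integers(0, n):] += rng.normal(0.0, jump_sd, size=p)
    Y = X + rng.normal(0.0, noise_sd, size=(n, p))     # add the noise v
    return Y, X

Y, X = simulate_noisy_prices()
print(Y.shape)   # (20000, 3)
```

The latent path `X` is returned alongside `Y` only so that estimators can later be compared against the (simulated) truth.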

In this paper, we restrict our formulation to (1) and (2), although some extensions are possible. We assume that the volatility function \({{\varvec{\sigma }}}(s)\;(=({\sigma }_{gh}(s))\), \(g=1,\ldots ,p;h=1,\ldots ,q_1\)) is deterministic or that its elements follow Itô's continuous Brownian semimartingale given by

$$\begin{aligned} \sigma _{gh}^{(x)}(t)= \sigma _{gh}^{(x)}(0) +\int _0^t \mu _{gh}^{\sigma }(s) \mathrm{{d}}s +\int _0^t {{\varvec{\omega }}}_{gh}^{\sigma }(s)\mathrm{{d}}\mathbf{W}^{\sigma }(s)\;, \end{aligned}$$

where \(\mathbf{W}^{\sigma }(s)\) is a \(q_3\times 1\;(q_3\ge 0)\) vector of Brownian motions, which can be correlated with \(\mathbf{W}(s)\), and \({\mu }_{gh}^{\sigma }(s) \) and \( {{\varvec{\omega }}}_{gh}^{\sigma }(s) \) (\(1\times q_3\)) are the drift and diffusion terms of the instantaneous volatilities, respectively. They are predictable and progressively measurable with respect to \((\Omega ^{(0)}, \mathcal{F}^{(0)}, (\mathcal{F}_t^{(0)})_{t\ge 0}, P^{(0)})\), and we consider the case when they are bounded and Hölder-continuous, so that the volatility and co-volatility processes are smooth.

For simplicity, we set several assumptions on (1)–(3), which can certainly be relaxed to some extent.

Assumption 1

  1. (a)

    The drift function \({{\varvec{b}}}(\omega , t)=\mathbf{0}\) for \(0\le t\le 1\).

  2. (b)

    The elements of the volatility matrix are bounded and the process \({{\varvec{\sigma }}}\) is Hölder-continuous, and we have \(\int _{t}^{t+u}\Vert {{\varvec{\sigma }}}(s)\Vert \mathrm{{d}}s > 0\) a.s. for all \(t,u>0\).

  3. (c)

    The jump coefficients \(\mathbf{\Delta }(\omega , t, \mathbf{u})\) are bounded and deterministic.

  4. (d)

    The noise terms \({\mathbf{v}}(t_i^n)\;(=(v_g(t_i^n)))\) (\(i=1,\ldots ,n; g=1,\ldots ,p\)) are a sequence of i.i.d. random vectors with \(\mathcal{E}[{\mathbf{v}}(t_i^n)]=0\), \(\mathcal{E}[{\mathbf{v}}(t_i^n){\mathbf{v}}^{'}(t_i^n)]={{\varvec{\Sigma }}}_v\) (a positive definite matrix) and \(\mathcal{E}[v_g^{4}(t_i^n) ]< +\infty \;(g=1,\ldots ,p)\). Furthermore, the stochastic processes \({\mathbf{v}}\) and \({\mathbf{X}}\) are independent.

  5. (e)

    We have equidistance observations, that is, \(t_0^n=0\) and \(t_i^n=i/n\;(i=1,\ldots ,n)\) in [0, 1].

Conditions (a)–(c) are stronger than those in some of the literature, but they are sufficient for the existence of solutions of the stochastic differential equations (SDEs). (See Chapter IV of Ikeda and Watanabe 1989, for instance.) We consider only the cases under conditions (d) and (e) in the following analysis. However, it is possible to extend the analysis to more general cases; for instance, it may be straightforward, at the cost of some complication of our analysis, to consider the case when the volatility functions have jump terms. Since this paper is a first attempt to develop a new way to handle the problem, we confine ourselves to the simple cases.

The fundamental quantity for the continuous-time Itô semimartingale with \(p\ge 1\) is the quadratic variation (QV) matrix, which is given by

$$\begin{aligned} {{\varvec{\Sigma }}}_x =\int _0^1{{\varvec{c}}}(s)\mathrm{{d}}s +\sum _{0\le s\le 1}(\Delta {\mathbf{X}}(s))(\Delta {\mathbf{X}}(s))^{'} =({\Sigma }_{gh}^{(x)})\;. \end{aligned}$$

When the stochastic process is of diffusion type, \({{\varvec{\Sigma }}}_x\) becomes the integrated volatility \( \int _0^1{{\varvec{c}}}(s)\mathrm{{d}}s\). The class of Itô semimartingales and the quadratic variation, which are standard in stochastic analysis, are fully explained in the standard references of Ikeda and Watanabe (1989) and Jacod and Protter (2012).
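When the latent path itself is available, as in a simulation, the QV matrix in (4) can be approximated by the sum of outer products of the increments of the path; a minimal sketch (the function name is our illustrative choice):

```python
import numpy as np

def realized_qv(X, x0=None):
    """Sum of outer products of the increments of a latent path X
    ((n, p) array, with initial value x0); as n grows, this converges
    to the QV matrix Sigma_x, i.e. the integrated volatility plus the
    sum of jump outer products."""
    if x0 is None:
        x0 = np.zeros(X.shape[1])
    dX = np.diff(np.vstack([x0, X]), axis=0)
    return dX.T @ dX
```

For a pure-jump path with a single jump of size \(\Delta\) in one coordinate, the result is exactly the outer product \(\Delta \Delta^{'}\), matching the jump part of (4).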

2.2 On the SIML estimation

Kunitomo and Sato (2013) developed the separating information maximum likelihood (SIML) estimation for general \(p\ge 1\) in the absence of jump terms. The SIML estimator \({\hat{{\varvec{\Sigma }}}}_x\) of the integrated volatility is defined by

$$\begin{aligned} {\hat{{\varvec{\Sigma }}}}_x = \mathbf{G}_m =\frac{1}{m_n}\sum _{k=1}^{m_n} \mathbf{z}_k\mathbf{z}_k^{'} = (\hat{\Sigma }_{gh}^{(x)})\;, \end{aligned}$$

where \(\mathbf{z}_k=(z_{gk})\;(g=1,\ldots ,p;k=1,\ldots ,m_n)\) are constructed by the transformation from \(\mathbf{Y}_n=(\mathbf{y}_i^{'})\) (\(n\times p\)) to \(\mathbf{Z}_n\;(=(\mathbf{z}_k^{'}))\) given by

$$\begin{aligned} \mathbf{Z}_n= \mathbf{K}_n \left( \mathbf{Y}_n-{\bar{\mathbf{Y}}}_0 \right) \end{aligned}$$

where \( \mathbf{K}_n= h_n^{-1/2}\mathbf{P}_n \mathbf{C}_n^{-1} ,\) \(h_n=1/n ,\)

$$\begin{aligned} \mathbf{C}_n^{-1}= & {} \left( \begin{array}{ccccc} 1 &{} 0 &{} \cdots &{} 0 &{} 0 \\ -1 &{} 1 &{} 0 &{} \cdots &{} 0 \\ 0 &{}-1 &{} 1 &{} 0 &{} \cdots \\ 0 &{} 0 &{}-1 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 1 \\ \end{array} \right) _{n\times n}\;,\\ \mathbf{P}_n= & {} (p_{jk}^{(n)})\;,\; p_{jk}^{(n)} =\sqrt{ \frac{2}{n+\frac{1}{2}} } \cos \left[ \frac{2\pi }{2n+1} \left( k-\frac{1}{2}\right) \left( j-\frac{1}{2}\right) \right] \; \end{aligned}$$

and \( {\bar{\mathbf{Y}}}_0 = \mathbf{1}_n \cdot \mathbf{y}_0^{'} \;\) .

We use the spectral decomposition \( \mathbf{C}_n^{-1}{} \mathbf{C}_n^{' -1} =\mathbf{P}_n \mathbf{D}_n \mathbf{P}_n^{'} ,\) where \(\mathbf{D}_n\) is a diagonal matrix whose kth element is \(d_k= 2 [ 1-\cos (\pi (\frac{2k-1}{2n+1})) ] \;(k=1,\ldots ,n)\), and we set \( a_{k n}\;(=n\times d_k)\; =4 n\sin ^2 \left[ \frac{\pi }{2}\left( \frac{2k-1}{2n+1} \right) \right] \).
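The transformation (5)–(6) can be implemented directly. Since \(\mathbf{C}_n^{-1}(\mathbf{Y}_n-{\bar{\mathbf{Y}}}_0)\) is simply the matrix of first differences of the observations, only the first \(m_n\) rows of \(\mathbf{Z}_n\) need to be formed. A minimal NumPy sketch, where the function name and the default \(\alpha =0.4\) are our illustrative choices:

```python
import numpy as np

def siml_estimate(Y, y0, alpha=0.4):
    """SIML estimator G_m = (1/m_n) sum_{k<=m_n} z_k z_k' of the
    quadratic variation from noisy observations Y ((n, p) array of
    y_1, ..., y_n) and the initial value y0 ((p,) array).

    C_n^{-1}(Y_n - Ybar_0) is just the matrix of increments, so only
    the first m_n = [n^alpha] rows of Z_n are formed."""
    n, p = Y.shape
    m = int(n ** alpha)
    dY = np.diff(np.vstack([y0, Y]), axis=0)            # C_n^{-1}(Y_n - Ybar_0)
    k = np.arange(1, m + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    P = np.sqrt(2.0 / (n + 0.5)) * np.cos(
        (2.0 * np.pi / (2 * n + 1)) * (k - 0.5) * (j - 0.5))
    Z = np.sqrt(n) * (P @ dY)                           # z_1, ..., z_m
    return Z.T @ Z / m                                  # G_m
```

On a simulated one-dimensional Brownian path with constant volatility 0.3 and small additive noise, the estimate should be close to the integrated volatility \(0.3^2=0.09\).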

To assure desirable asymptotic properties of the SIML estimator, the number of terms \(m_n\) must depend on n, with the order requirement \(m_n=O(n^{\alpha })\;(0<\alpha <0.5)\) for consistency and \(m_n=O(n^{\alpha })\;(0<\alpha <0.4)\) for asymptotic normality.

The variance-covariance matrix \({{\varvec{\Sigma }}}_v\) can be consistently estimated by

$$\begin{aligned} {\hat{{\varvec{\Sigma }}}}_v =\frac{1}{l_n}\sum _{k=n+1-l_n}^n a_{kn}^{-1}{} \mathbf{z}_k\mathbf{z}_k^{'}\;, \end{aligned}$$

where \(l_n=[n^{\beta }]\;(0<\alpha< \beta <1)\). We can take \(\beta \) slightly less than 1, so that \({\hat{{\varvec{\Sigma }}}}_v={{\varvec{\Sigma }}}_v+O_p(\frac{1}{\sqrt{l_n}})\) and the effects of estimating \({{\varvec{\Sigma }}}_v\) are negligible. (See Chapter 5 of Kunitomo et al. 2018.)
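The estimator (7) uses the highest-frequency rows of \(\mathbf{Z}_n\), where the noise dominates the signal. A direct implementation in the same style as above (the default \(\beta =0.8\) is an illustrative choice):

```python
import numpy as np

def siml_noise_estimate(Y, y0, beta=0.8):
    """Estimator (7) of Sigma_v, based on the l_n = [n^beta]
    highest-frequency rows z_{n+1-l_n}, ..., z_n of Z_n, each
    weighted by a_kn^{-1}."""
    n, p = Y.shape
    l = int(n ** beta)
    dY = np.diff(np.vstack([y0, Y]), axis=0)
    k = np.arange(n + 1 - l, n + 1)[:, None]
    j = np.arange(1, n + 1)[None, :]
    P = np.sqrt(2.0 / (n + 0.5)) * np.cos(
        (2.0 * np.pi / (2 * n + 1)) * (k - 0.5) * (j - 0.5))
    Z = np.sqrt(n) * (P @ dY)                           # z_{n+1-l}, ..., z_n
    a_kn = 4.0 * n * np.sin(
        0.5 * np.pi * (2 * k[:, 0] - 1) / (2 * n + 1)) ** 2
    return (Z.T / a_kn) @ Z / l
```

On noisy simulated data with noise standard deviation 0.005, the estimate should be close to \(\Sigma _v = 2.5\times 10^{-5}\), up to a small contamination from the signal at finite n.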

When X is an Itô semimartingale with possible jumps, the asymptotic properties of the SIML estimator were stated in Chapter 9 of Kunitomo et al. (2018) (Proposition 9.1 and Corollary 9.1) without detailed exposition. Because they are the starting point of further developments, we state an extended version of their result and give some supplementary derivations in the Appendix for the sake of convenience.

In the following results, we use the stable convergence arguments and \({\mathcal {F}}^{(0)}\)-conditionally Gaussianity, which have been developed and explained by Jacod (2008) and Jacod and Protter (2012), and use the notation \({\mathop {\longrightarrow }\limits ^{\mathcal {L}-s}}\) as stable convergence in law. For the general reference on stable convergence, we refer to Häusler and Luschgy (2015). We use the notation \({\mathop {\rightarrow }\limits ^{d}}\) and \({\mathop {\rightarrow }\limits ^{p}}\) as convergence in distribution and in probability, respectively.

Theorem 2.1

Suppose Assumption 1 is satisfied and \(\mathcal{E}[v_g^4(t_i^n)]<+\infty \) (\(g=1,\ldots ,p\)) in (1), (2) and (3).

  1. (i)

    For \(m_n= [n^{\alpha }]\) (\([\cdot ]\) is the floor function) and \(0<\alpha < 0.5 ,\) as \(n \rightarrow \infty \)

    $$\begin{aligned} {\hat{{\varvec{\Sigma }}}}_x - {{\varvec{\Sigma }}}_x {\mathop {\longrightarrow }\limits ^{p}} \mathbf{O}. \end{aligned}$$
  2. (ii)


    $$\begin{aligned} {\hat{\Sigma }}_{gh}^{(*)} =\sqrt{m_n} \left[ {\hat{\Sigma }}_{gh}^{(x)}-{\Sigma }_{gh}^{(x)} \right] , \end{aligned}$$

    where \(\hat{\Sigma }_{gh}^{(x)}\) is the (gh)th component of \(\hat{{\varvec{\Sigma }}}_{x}\) and \(\Sigma _{gh}^{(x)}\) is the (gh)th component of \(\mathbf{\Sigma }_x\). Then, as \(n \rightarrow \infty \), for \(m_n=[n^{\alpha }]\) and \(0<\alpha < 0.4,\) we have that

    $$\begin{aligned} \left[ \begin{array}{c} {\hat{\Sigma }}_{gh}^{(*)}\\ {\hat{\Sigma }}_{kl}^{(*)} \end{array}\right] {\mathop {\longrightarrow }\limits ^{\mathcal {L}-s}} N\left[ \mathbf{0}, \left( \begin{array}{cc}V_{gh}&{} V_{gh,kl}\\ V_{gh,kl}&{} V_{kl} \end{array}\right) \right] \; \end{aligned}$$

    where \(\mathbf{c}(s)=(c_{gh}(s))\,(0\le s\le 1)\),

    $$\begin{aligned} V_{gh}= & {} \int _0^1 \left[ {{\varvec{c}}}_{gg}(s) {{\varvec{c}}}_{hh}(s) + {{\varvec{c}}}_{gh}^{2}(s) \right] \mathrm{{d}}s \\&+ \sum _{0<s\le 1} \left[ {{\varvec{c}}}_{gg}(s)(\Delta X_h(s))^2 +{{\varvec{c}}}_{hh}(s)(\Delta X_g(s))^2 +2{{\varvec{c}}}_{gh}(s)(\Delta X_g(s)\Delta X_h(s)) \right] \; \end{aligned}$$


    $$\begin{aligned} V_{gh,kl}= & {} \int _0^1 \left[ {{\varvec{c}}}_{gk}(s) {{\varvec{c}}}_{hl}(s) + {{\varvec{c}}}_{gl}(s){{\varvec{c}}}_{hk}(s) \right] \mathrm{{d}}s\\&+ \sum _{0<s\le 1} \left[ {{\varvec{c}}}_{gk}(s)\Delta X_h(s)\Delta X_l(s) +{{\varvec{c}}}_{gl}(s)\Delta X_h(s)\Delta X_k(s) \right. \\&\left. +{{\varvec{c}}}_{hk}(s)\Delta X_g(s)\Delta X_l(s) +{{\varvec{c}}}_{hl}(s)\Delta X_g(s)\Delta X_k(s) \right] \;. \end{aligned}$$

Corollary 2.1

When \(p=1 \) in Theorem 2.1, the asymptotic variance \(V_{gg}\) is given by

$$\begin{aligned} V_{gg}= 2\left[ \int _0^1 {{\varvec{c}}}_{gg}^2(s)\mathrm{{d}}s +2\sum _{0<s\le 1} {{\varvec{c}}}_{gg}(s)(\Delta X_g(s))^2\right] \;. \end{aligned}$$

The notable point is that the asymptotic distribution and limiting variance-covariances of the SIML estimator have the same form as those of the realized volatility and co-volatilities in the absence of noise, with n replaced by \(m_n,\) which depends on n. This is the key fact for obtaining the asymptotic properties of the characteristic roots and vectors of the estimated QV in the next section. However, in order to deal with the micro-market noise, we need to use \(m_n=[n^{\alpha }]\;(0<\alpha <0.5\;\mathrm{or}\;0.4)\), which is of smaller order than n.

3 Use of characteristic roots and vectors

3.1 Reduced rank condition for quadratic covariation

One of the important empirical observations on asset price movements is that although many financial assets are traded in markets, many of them move in similar ways in their trends, volatilities and jumps. The question then arises of how to cope with many asset prices when the number of latent factors of the volatilities, or of the quadratic variation, of asset prices is less than p, the dimension of the observed prices. In this section, we consider the case when the underlying continuous-time stochastic process is a p-dimensional Itô semimartingale and the number of latent factors of the quadratic variation, \(q_x\), is less than p.

We set the rank condition for the volatilities and co-volatilities:

Assumption 2

Let \(r_x=p - q_x\;(>0)\) and assume that there exists a \(p\times r_x\) (\(1\le r_x<p\)) matrix \(\mathbf{B}\) with rank \(r_x\) such that

$$\begin{aligned} \mathbf{B}^{'} \left[ {\mathbf{X}}(t)- {\mathbf{X}}(0)-\int _0^t{{\varvec{b}}}(s)\mathrm{{d}}s\right] =\mathbf{O} \;\;(0\le t\le 1)\; \end{aligned}$$

with probability one. (\(\mathbf{B}\) can be random.)

This condition corresponds to the case when the number of latent factors is less than the observed dimension and the latent random components lie in a \(q_x\)-dimensional subspace \((q_x<p)\). A related condition is

$$\begin{aligned} \mathrm{rank}(\mathbf{\Sigma }_x) =\mathrm{rank}\left[ \int _0^1{{\varvec{c}}}(s)\mathrm{{d}}s +\sum _{0\le s\le 1}(\Delta {\mathbf{X}}(s))(\Delta {\mathbf{X}}(s))^{'}\right] =q_x<p\; \end{aligned}$$

with probability 1. Equation (13) implies that there exists a \(p\times r_x\) matrix \(\mathbf{B}\) consisting of \(r_x\) (nonzero) vectors such that

$$\begin{aligned} {{\varvec{\Sigma }}}_x\mathbf{B}=\mathbf{O}\;, \end{aligned}$$

where \( {{\varvec{\Sigma }}}_x=(\Sigma _{gh}^{(x)})\).
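For intuition, a reduced-rank QV matrix and the associated matrix \(\mathbf{B}\) in (14) can be constructed numerically: \(\mathbf{B}\) spans the null space of \({{\varvec{\Sigma }}}_x\), i.e., the eigenvectors with zero characteristic roots. The loading matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Illustrative reduced-rank QV matrix: p = 4 observed prices driven by
# q_x = 2 latent factors, so Sigma_x = L L' has rank 2 and there is a
# p x r_x matrix B (r_x = p - q_x = 2) with Sigma_x B = O.
rng = np.random.default_rng(1)
p, q_x = 4, 2
L = rng.normal(size=(p, q_x))            # arbitrary factor loadings
Sigma_x = L @ L.T                        # rank q_x by construction
eigval, eigvec = np.linalg.eigh(Sigma_x) # eigenvalues in ascending order
B = eigvec[:, :p - q_x]                  # eigenvectors with zero roots
print(np.allclose(Sigma_x @ B, 0.0, atol=1e-10))  # True
```

Any column-wise rescaling of `B` also satisfies (14), which is why the normalization \(\hat{\mathbf{B}} = [\mathbf{I}_{r_x}, -\hat{\mathbf{B}}_2^{'}]^{'}\) is imposed later in the estimation.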

In statistical multivariate analysis and multivariate errors-in-variables models, the conditions in (12) and (13) are similar in spirit to the reduced rank regression problem or the test of dimensionality. (See Anderson (1984, 2003) and Fujikoshi et al. (2010), for instance.) The new feature of our formulation is that we are dealing with a continuous-time stochastic process as the latent process, while we have discrete observations with measurement errors.

Since Assumption 2 is restrictive, there are several ways to relax this condition. For instance, a more general formulation has been considered by Jacod and Podolskij (2013) and Fissler and Podolskij (2017), who develop tests of the maximal rank. To avoid substantial complication, however, we use Assumption 2 in our development. The statistical procedure under this condition is considerably simpler than in the general case, and it may be enough for applications, as we illustrate in Sect. 5.

In the present situation, if we take \(m_n=[ n^{\alpha } ]\) and \(0<\alpha <0.5 \), then from Theorem 2.1 as \(n \rightarrow \infty \)

$$\begin{aligned} {\hat{{\varvec{\Sigma }}}}_x -{{\varvec{\Sigma }}}_m {\mathop {\longrightarrow }\limits ^{ p }} \mathbf{O}\;, \end{aligned}$$

where \( {{\varvec{\Sigma }}}_m=(\Sigma _{gh.m}) ,\)

$$\begin{aligned} {{\varvec{\Sigma }}}_m ={{\varvec{\Sigma }}}_x+ a_m{{\varvec{\Sigma }}}_v =(\Sigma _{gh}^{(x)}+a_m\Sigma _{gh}^{(v)}) \end{aligned}$$


$$\begin{aligned} a_m=\frac{1}{m_n}\sum _{k=1}^m a_{kn}\; \end{aligned}$$

and \( a_{kn}=4n \sin ^2[\frac{\pi }{2}(\frac{2k-1}{2n+1})]\; (k=1,\ldots ,n)\).

We also use

$$\begin{aligned} a_m(2) = \frac{1}{m_n}\sum _{k=1}^m a_{kn}^2 \end{aligned}$$

By using the facts that \(a_m \rightarrow 0\) as \(m_n=[n^{\alpha }] \rightarrow \infty \) for \(0<\alpha <0.5\) and that \(\sin x = x-(1/6)x^3+o(x^3)\) as \(x \rightarrow 0\), it is straightforward to obtain the next result. (We often write m instead of \(m_n\) in the following analysis.)

Lemma 3.1

We set \(a_m=(1/m)\sum _{k=1}^ma_{kn}\) and \(a_m(2)=(1/m)\sum _{k=1}^m a_{kn}^2\). Then, as \(n \rightarrow \infty \) and \(m \rightarrow \infty \),

$$\begin{aligned} \frac{n}{m^2} a_m= & {} \left( \frac{n}{m^2}\right) \frac{1}{m}\sum _{k=1}^m a_{kn} \sim \pi ^2\int _0^1 s^2\mathrm{{d}}s \end{aligned}$$


$$\begin{aligned} \frac{n^2}{m^4} a_m(2)= & {} \frac{1}{m}\sum _{k=1}^m \left( \frac{n^2}{m^4}\right) [a_{kn}]^2 \sim \pi ^4 \int _0^1 s^4\mathrm{{d}}s. \end{aligned}$$
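Lemma 3.1 can be checked numerically: since \(\pi ^2\int _0^1 s^2\mathrm{{d}}s=\pi ^2/3\) and \(\pi ^4\int _0^1 s^4\mathrm{{d}}s=\pi ^4/5\), the normalized sums should be close to these limits for large n. The sketch below uses the illustrative choices \(n=10^6\) and \(\alpha =0.4\).

```python
import numpy as np

n = 10**6
m = int(n ** 0.4)                        # m_n = [n^alpha], alpha = 0.4
k = np.arange(1, m + 1)
# a_kn = 4 n sin^2((pi/2)(2k-1)/(2n+1)), k = 1, ..., m
a_kn = 4.0 * n * np.sin(0.5 * np.pi * (2 * k - 1) / (2 * n + 1)) ** 2

lhs1 = (n / m**2) * a_kn.mean()          # should approach pi^2 / 3
lhs2 = (n**2 / m**4) * (a_kn**2).mean()  # should approach pi^4 / 5
print(lhs1 / (np.pi**2 / 3), lhs2 / (np.pi**4 / 5))  # both close to 1
```

Both ratios agree with the lemma to within a fraction of a percent, with the error shrinking as m and n grow.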

In the following derivations, we proceed as if \({{\varvec{\Sigma }}}_v\) were known and \(\vert {{\varvec{\Sigma }}}_v \vert \ne 0\). The results with unknown \({{\varvec{\Sigma }}}_v\) are unaffected if we use a consistent estimator of \({{\varvec{\Sigma }}}_v\): it can be consistently estimated by \( {\hat{{\varvec{\Sigma }}}}_v\) in (7) when we take \(l_n=[n^{\beta }]\;(0<\alpha< \beta <1)\) with \(\beta \) slightly less than 1, so that \({\hat{{\varvec{\Sigma }}}}_v={{\varvec{\Sigma }}}_v+O_p(\frac{1}{\sqrt{l_n}})\) and the effects of estimating \({{\varvec{\Sigma }}}_v\) are negligible.

3.2 Asymptotic properties of characteristic vectors

Let the characteristic equation be

$$\begin{aligned} \left| \mathbf{G}_m -\lambda {\hat{{\varvec{\Sigma }}}}_v \right| =0\;, \end{aligned}$$

and \(\hat{\mathbf{B}}\), the estimator of \(\mathbf{B}\) in (12) and (14), is given by

$$\begin{aligned} \left[ \mathbf{G}_m \hat{\mathbf{B}}-{\hat{{\varvec{\Sigma }}}}_v \hat{\mathbf{B}}{{\varvec{\Lambda }}} \right] =\mathbf{O}\;, \end{aligned}$$

where \(\mathbf{G}_m\;(={\hat{{\varvec{\Sigma }}}}_x)\) is given in (5), \({\hat{{\varvec{\Sigma }}}}_v\) in (7), \(\lambda _i\;(i=1,\ldots ,p)\) are the characteristic roots with \(0\le \lambda _1\le \cdots \le \lambda _p\), and \({{\varvec{\Lambda }}}=\mathrm{diag} (\lambda _1,\ldots ,\lambda _{r_x} )\). For later convenience, we take the \(p\times r_x\;(p=r_x+q_x)\) matrix

$$\begin{aligned} \hat{\mathbf{B}} =\left[ \begin{array}{c} \mathbf{I}_{r_x}\\ -\hat{\mathbf{B}}_2\end{array} \right] \; \end{aligned}$$

for a normalization of characteristic vectors.
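Numerically, the characteristic equation above is a symmetric-definite generalized eigenvalue problem, which `scipy.linalg.eigh` solves with roots returned in ascending order. The sketch below assumes the upper \(r_x\times r_x\) block of the selected eigenvectors is invertible so that the normalization above can be imposed; the function name is our illustrative choice.

```python
import numpy as np
from scipy.linalg import eigh

def characteristic_roots_vectors(G_m, Sigma_v_hat, r_x):
    """Solve |G_m - lambda Sigma_v_hat| = 0 and return all roots
    (ascending) together with B_hat for the r_x smallest roots,
    normalized as B_hat = [I_{r_x}; -B2_hat]."""
    lam, vecs = eigh(G_m, Sigma_v_hat)      # generalized symmetric problem
    B = vecs[:, :r_x]                        # vectors of the r_x smallest roots
    B_hat = B @ np.linalg.inv(B[:r_x, :])    # impose the normalization
    return lam, B_hat
```

For example, with \(\mathbf{G}_m=\mathrm{diag}(0.1, 2, 3)\) and \({\hat{{\varvec{\Sigma }}}}_v=\mathbf{I}_3\), the smallest root is 0.1 with normalized vector \((1,0,0)^{'}\).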

By the argument discussed above, in the following derivation we use \({{\varvec{\Sigma }}}_v\) instead of \({\hat{{\varvec{\Sigma }}}}_v\) and consider \( \left| \mathbf{G}_m -\lambda {{\varvec{\Sigma }}}_v \right| =0\).

Instead of \({{\varvec{\Sigma }}}_v\) as a metric, we can take any (known) positive definite matrix \(\mathbf{H}\), such as the identity matrix \(\mathbf{I}_p\). Then, the following derivation and the resulting expressions become slightly more complicated than in the case with \({{\varvec{\Sigma }}}_v\); we shall see the consequence at the end of Sect. 3.4.

We take the probability limit of the determinantal equation

$$\begin{aligned} \left| \mathrm{plim}_{m \rightarrow \infty } (\mathbf{G}_m-a_m {{\varvec{\Sigma }}}_v) -( \mathrm{plim}_{m\rightarrow \infty }\lambda -a_m){{\varvec{\Sigma }}}_v \right| =0\;. \end{aligned}$$

The rank of \({{\varvec{\Sigma }}}_x\) is \(q_x ,\) which is less than p, and \(a_m=O(m^2/n)\). Although \( \mathbf{G}_m-{{\varvec{\Sigma }}}_x=O_p(1/\sqrt{m}) \) from (61), we find that the dominant term for the determinantal equation (21) under Assumption 2 should be \( \mathbf{G}_m-({{\varvec{\Sigma }}}_x +a_m {{\varvec{\Sigma }}}_v) =O_p(\frac{a_m}{\sqrt{m}})=O_p(\frac{\sqrt{m^3}}{n})\) and we have

$$\begin{aligned} \lambda _i-a_m {\mathop {\longrightarrow }\limits ^{p}} 0\;(i=1,\ldots , r_x)\; \end{aligned}$$

if \(\frac{\sqrt{m^3}}{n}\longrightarrow 0\) as \(n\rightarrow +\infty \). Then, taking probability limits in the equation defining \(\hat{\mathbf{B}}\), we have


$$\begin{aligned}{}[ \mathrm{plim}_{n\rightarrow \infty }(\mathbf{G}_m -a_m{{\varvec{\Sigma }}}_v) ] [ \mathrm{plim}_{n\rightarrow \infty }\hat{\mathbf{B}} -\mathbf{B}]=\mathbf{O}\;\;. \end{aligned}$$

By multiplying \({{\varvec{\Pi }}}_{*}^{'}\) (\(q_x\times p\)) from the left, we obtain

$$\begin{aligned} {{\varvec{\Pi }}}_{*}^{'}[\mathrm{plim}_{n\rightarrow \infty } (\mathbf{G}_m -a_m{{\varvec{\Sigma }}}_v )] \mathrm{plim}_{n\rightarrow \infty } \left[ \hat{\mathbf{B}}-\mathbf{B} \right] =\mathbf{O}\;, \end{aligned}$$

where \({{\varvec{\Pi }}}_{*}\) is chosen such that \({{\varvec{\Pi }}}_{*}^{'}\mathrm{plim}_{n\rightarrow \infty }{} \mathbf{G}_m [\begin{array}{c} \mathbf{O}\\ \mathbf{I}_{q_x}\end{array} ]\) is non-singular. By using the fact that the rank is \(q_x \) and the normalization of \(\mathbf{B}\), we find

$$\begin{aligned} \mathrm{plim}_{n\rightarrow \infty } \hat{\mathbf{B}} =\mathbf{B}\;. \end{aligned}$$

To proceed to the next step of evaluating the limiting random variables, we use the \(\mathbf{K}_n\)-transformation in (6) and decompose the resulting random variables as

$$\begin{aligned} \mathbf{z}_k= {\mathbf{x}}_k^{*}+{\mathbf{v}}_{k}^{*} ={\mathbf{x}}_k^{*}+\sqrt{a_{kn}}{} \mathbf{u}_{k}^{*}\;(k=1,\ldots ,m) \end{aligned}$$

with \(\mathcal{E}(\mathbf{u}_k^{*}{} \mathbf{u}_k^{*'})={{\varvec{\Sigma }}}_v\) and

$$\begin{aligned} \mathbf{G}_m=\frac{1}{m}\sum _{k=1}^m ( {\mathbf{x}}_k^{*} +\sqrt{a_{kn}}{} \mathbf{u}_{k}^{*}) ( {\mathbf{x}}_k^{*} +\sqrt{a_{kn}}{} \mathbf{u}_{k}^{*})^{'}\;, \end{aligned}$$

where the \(p\times 1\) random vectors \({\mathbf{x}}_k^{*}\) and \({\mathbf{v}}_k^{*}\) are defined by \(( {\mathbf{x}}_k^{*'})=\mathbf{K}_n( {\mathbf{x}}_k^{'})\) and \(({\mathbf{v}}_k^{*'})=\mathbf{K}_n({\mathbf{v}}_k^{'})\), which are \(n\times p\) matrices, and \(\mathbf{u}_k^{*'}=a_{kn}^{-1/2}{} {\mathbf{v}}_k^{*'}\).

Under the null-hypothesis \(\mathrm{H}_0: {{\varvec{\Sigma }}}_x\mathbf{B}=\mathbf{O},\) we have \( \mathbf{B}^{'}{{\varvec{\Sigma }}}_x\mathbf{B}=\mathbf{O}\) (\(r_x\times r_x\) zero matrix). We consider the representation that

$$\begin{aligned} \mathbf{B}^{'}{} \mathbf{G}_m =\frac{1}{m}\sum _{k=1}^m \mathbf{B}^{'}({\mathbf{x}}_k^{*}+ \sqrt{a_{kn}}{} \mathbf{u}_{k}^{*}) ({\mathbf{x}}_k^{*} +\sqrt{a_{kn}}{} \mathbf{u}_{k}^{*})^{'}\;. \end{aligned}$$

Let \({{\varvec{\beta }}}_j\;(=(\beta _{hj}))\) be the jth column vector of \(\mathbf{B}\) (\(j=1,\ldots ,r_x\)) and

$$\begin{aligned} \sqrt{m}[ \mathbf{G}_m-{{\varvec{\Sigma }}}_m ]{{\varvec{\beta }}}_j =\sqrt{m}\left[ \sum _{h=1}^p({\hat{\Sigma }}_{gh}^{(x)} -\Sigma _{gh.m} )\beta _{hj} \right] _g\;, \end{aligned}$$

where \({{\varvec{\Sigma }}}_m={{\varvec{\Sigma }}}_x +a_m{{\varvec{\Sigma }}}_v\;(=(\Sigma _{gh.m}))\) and \(a_m=(1/m)\sum _{k=1}^m a_{kn}\).

Since we have the relations \( \sum _{h=1}^p \Sigma _{gh}^{(x)}\beta _{hj}=0\) for \(g=1,\ldots ,p\) and \(j=1,\ldots ,r_x\) under the rank condition, we decompose

$$\begin{aligned}{}[ \mathbf{G}_m-{{\varvec{\Sigma }}}_m ]{{\varvec{\beta }}}_j= & {} \frac{1}{m}\sum _{k=1}^m ({\mathbf{x}}_k^{*}{} {\mathbf{x}}_{k}^{*'}-{{\varvec{\Sigma }}}_x){{\varvec{\beta }}}_j +\frac{1}{m}\sum _{k=1}^m \sqrt{a_{kn}} \mathbf{u}_k^{*}({\mathbf{x}}_{k}^{*'}{{\varvec{\beta }}}_j)\nonumber \\&+ \frac{1}{m}\sum _{k=1}^m \sqrt{a_{kn}} {\mathbf{x}}_k^{*}(\mathbf{u}_{k}^{*'}{{\varvec{\beta }}}_j) +\frac{1}{m}\sum _{k=1}^m a_{kn}(\mathbf{u}_k^{*}{} \mathbf{u}_{k}^{*'}-{{\varvec{\Sigma }}}_v){{\varvec{\beta }}}_j\;. \end{aligned}$$

Then, we need to evaluate the orders of the four terms in the above decomposition. Under Assumption 2, it suffices to evaluate the last two terms. The third term of (29) is \(O_p(\sqrt{a_{m}/m})\;(=O_p(\sqrt{m/n}))\), and the order of the last term is \(O_p(\sqrt{a_m(2)/m})\;(=O_p(\sqrt{m^3/n^2}))\) by Lemma 3.1. Hence, we find that the dominant term is the third term

$$\begin{aligned} \mathbf{g}_m=\frac{1}{m}\sum _{k=1}^m \sqrt{a_{kn}} {\mathbf{x}}_k^{*}(\mathbf{u}_{k}^{*'}{{\varvec{\beta }}}_j) \end{aligned}$$

as \(n\rightarrow \infty \). We summarize the result of order evaluation, and a brief derivation is in Appendix.

Lemma 3.2

We take \(m=[n^{\alpha }]\) with \(0<\alpha <1\). Under Assumptions 1 and 2, as \(m,n \rightarrow \infty \) while \(m/n \rightarrow 0\), we have

$$\begin{aligned} \sqrt{\frac{m}{a_m}} \left[ \mathbf{g}_m \right] =O_p(1)\;. \end{aligned}$$

Now we shall consider the asymptotic distribution of the characteristic vectors in (22). Let an \(r_x\times r_x\) diagonal matrix be \({{\varvec{\Lambda }}}=(\mathrm{diag}\; (\lambda _i))\) and

$$\begin{aligned} \mathbf{G}_m\hat{\mathbf{B}} - {{\varvec{\Sigma }}}_v\hat{\mathbf{B}} [ a_m\mathbf{I}_{r_x}+({{\varvec{\Lambda }}}-a_m\mathbf{I}_{r_x})] =\mathbf{O}\;, \end{aligned}$$

which can be written as:

$$\begin{aligned}{}[ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] \hat{\mathbf{B}} ={{\varvec{\Sigma }}}_v\hat{\mathbf{B}} [{{\varvec{\Lambda }}}-a_m\mathbf{I}_{r_x}]\;. \end{aligned}$$

By multiplying (33) from the left by \(\hat{\mathbf{B}}^{'}\), we find the \(r_x\times r_x\) matrix equation

$$\begin{aligned} \hat{\mathbf{B}}^{'}{{\varvec{\Sigma }}}_v\hat{\mathbf{B}} [{{\varvec{\Lambda }}}-a_m\mathbf{I}_{r_x}] =\hat{\mathbf{B}}^{'} [ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] \hat{\mathbf{B}}\;. \end{aligned}$$

Also, by multiplying (33) from the left by the \(q_x\times p\) matrix \( [\mathbf{O},\mathbf{I}_{q_x}]\), the resulting equation can be rewritten as:

$$\begin{aligned}{}[\mathbf{O},\mathbf{I}_{q_x}] [ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] [\mathbf{B}+(\hat{\mathbf{B}}-\mathbf{B})] =[\mathbf{O}, \mathbf{I}_{q_x}]{{\varvec{\Sigma }}}_v [\mathbf{B}+(\hat{\mathbf{B}}-\mathbf{B})] [{{\varvec{\Lambda }}}-a_m\mathbf{I}_{r_x}]\;. \end{aligned}$$

Then, we evaluate the order of each term of the above equation, given that \(\hat{\mathbf{B}}-\mathbf{B}{\mathop {\rightarrow }\limits ^{p}} \mathbf{O}\) as \(n\rightarrow \infty \). Under the condition \(m/n \rightarrow 0\) as \(n \rightarrow \infty \), it is asymptotically equivalent to

$$\begin{aligned}{}[\mathbf{O},\mathbf{I}_{q_x}] \sqrt{\frac{n}{m}} [ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] \mathbf{B} =[\mathbf{O},\mathbf{I}_{q_x}]\mathbf{\Sigma }_x \left[ \begin{array}{c}{} \mathbf{O}\\ \mathbf{I}_{q_x}\end{array}\right] \sqrt{\frac{n}{m}}(\hat{\mathbf{B}}_2-\mathbf{B}_2)+o_p(1) \;. \end{aligned}$$

[The above method is standard in statistical multivariate analysis. See Anderson (2003) or Fujikoshi et al. (2010).]

Then, we find the normalization factor \(c_m\) as

$$\begin{aligned} c_m= \sqrt{ \frac{m}{a_m}} =O\left( \sqrt{\frac{n}{m}}\right) \;, \end{aligned}$$

which goes to infinity as \(n \rightarrow \infty \) when \(m/n\rightarrow 0\) as \(n \rightarrow \infty \).

By noting that (24) is true when \(0<\alpha <2/3\), we summarize the asymptotic order of \(c_m(\hat{\mathbf{B}}_2-\mathbf{B}_2)\) in the next theorem.

Theorem 3.1

Suppose Assumptions 1 and 2 are satisfied. We take \(m=m_n=[n^{\alpha }]\) and \(l_n=[n^{\beta }]\) with \(0<\alpha<\beta<1\), \(0< \alpha <2/3\), and use \({\hat{{\varvec{\Sigma }}}}_v\) for \({{\varvec{\Sigma }}}_v\). Let \(c_m \rightarrow \infty \) as \(n \rightarrow \infty \). Then, as \(n \rightarrow \infty ,\) we have

$$\begin{aligned} c_m \left[ \hat{\mathbf{B}}_2-\mathbf{B}_2 \right] =O_p(1)\;. \end{aligned}$$

3.3 Asymptotic properties of characteristic roots

Next, we investigate the limiting distribution of the characteristic roots of \({{\varvec{\Lambda }}}\) and the related statistics.

From (29), let

$$\begin{aligned} \sqrt{m} \mathbf{B}^{'}(\mathbf{G}_m-{{\varvec{\Sigma }}}_m)\mathbf{B}= & {} \sqrt{m}\Big [ \frac{1}{m} \sum _{k=1}^m \mathbf{B}^{'}({\mathbf{x}}_k^{*}{\mathbf{x}}_{k}^{*'}-{{\varvec{\Sigma }}}_x)\mathbf{B} +\frac{1}{m}\sum _{k=1}^m \sqrt{a_{kn}} \mathbf{B}^{'}\mathbf{u}_k^{*}({\mathbf{x}}_{k}^{*'}{} \mathbf{B})\\&+ \frac{1}{ m } \sum _{k=1}^m \sqrt{a_{kn}} \mathbf{B}^{'}{} {\mathbf{x}}_k^{*}{} \mathbf{u}_{k}^{*'}{} \mathbf{B} +\frac{1}{ m } \sum _{k=1}^m a_{kn}(\mathbf{B}^{'}{} \mathbf{u}_k^{*}{} \mathbf{u}_k^{*'}{} \mathbf{B}-{{\varvec{\Omega }}}_v)\Big ]\; \end{aligned}$$

and \({{\varvec{\Omega }}}_v=\mathbf{B}^{'}{{\varvec{\Sigma }}}_v\mathbf{B}\;(=(\omega _{gh}))\).

Under Assumption 2, we only need to evaluate the last term, which can be written as

$$\begin{aligned} \mathbf{E}_m=\frac{1}{\sqrt{m}}\sum _{k=1}^m a_{kn} (\mathbf{B}^{'}{} \mathbf{u}_k^{*}{} \mathbf{u}_k^{*'}{} \mathbf{B}-{{\varvec{\Omega }}}_v)\;. \end{aligned}$$

By applying a CLT to (38), under the assumption that the fourth-order moments of the noise terms exist, we obtain asymptotic normality. This is summarized in the next lemma; the derivation of the CLT with the asymptotic covariances is given in Appendix.

Lemma 3.3

Let \( {{\varvec{\Omega }}}_v=\mathbf{B}^{'}{{\varvec{\Sigma }}}_v\mathbf{B}=(\omega _{gh})\). Under Assumptions 1 and 2, each element of \(\; \sqrt{\frac{1}{a_m(2)}} \mathbf{E}_m =(e_{gh}^{*}) \;\) is asymptotically normal as \(m,n \rightarrow \infty \) while \(m/n \rightarrow 0\).

The asymptotic covariance of \(e_{gh}^{*}\) and \(e_{kl}^{*}\) (\(g,h,k,l=1,\ldots , r_x\)) is given by \( \omega _{gk}\omega _{hl}+\omega _{gl}\omega _{hk}\).

Because of Lemmas 3.1 and A.3 in Appendix, we have

$$\begin{aligned} \mathbf{B}^{'} (\mathbf{G}_m- {{\varvec{\Sigma }}}_m)\mathbf{B} =O_p\left( \sqrt{\frac{a_m(2)}{m} }\right) =O_p\left( \frac{a_m}{\sqrt{m}}\right) \end{aligned}$$

and

$$\begin{aligned} \mathbf{B}^{'} \mathbf{G}_m \left( \begin{array}{c}{} \mathbf{O}\\ \mathbf{I}_{q_x}\end{array}\right) (\hat{\mathbf{B}}_2-\mathbf{B}_2) =O_p\left( \sqrt{\frac{a_m}{m}}\right) \times O_p\left( \sqrt{\frac{a_m}{m}}\right) =O_p\left( \frac{a_m}{m}\right) \;. \end{aligned}$$

By using the decomposition \(\hat{\mathbf{B}}=\mathbf{B}+ [\hat{\mathbf{B}}-\mathbf{B}]\), we can evaluate

$$\begin{aligned}&\hat{\mathbf{B}}^{'} [ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] \hat{\mathbf{B}}\nonumber \\&\quad = \mathbf{B}^{'} [ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] \mathbf{B} -\mathbf{B}^{'} [ \mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] [\begin{array}{c}\mathbf{O}\\ \mathbf{I}_{q_x}\end{array}] {{\varvec{\Sigma }}}_{*}^{-1}[\mathbf{O},\mathbf{I}_{q_x}] [\mathbf{G}_m-a_m{{\varvec{\Sigma }}}_v] \mathbf{B}\nonumber \\&\quad = O_p\left( \frac{a_m}{\sqrt{m}}\right) +O_p\left( \frac{m}{n}\right) \;, \end{aligned}$$


where

$$\begin{aligned} {{\varvec{\Sigma }}}_{*} =[\mathbf{O},\mathbf{I}_{q_x}] {{\varvec{\Sigma }}}_{x} \left[ \begin{array}{c}{} \mathbf{O}\\ \mathbf{I}_{q_x}\end{array}\right] \;. \end{aligned}$$

Since the first term of (40) is \(O_p(\frac{a_m}{\sqrt{m}})=O_p(\frac{m\sqrt{m}}{n})\) and the second term is \(O_p(\frac{a_m}{m})=O_p(\frac{m}{n})\), the second term of the right-hand side is asymptotically negligible. Hence, the limiting distribution of

$$\begin{aligned} \sqrt{m}(\mathbf{B}^{'}{{\varvec{\Sigma }}}_v\mathbf{B}) \left[ \left( \frac{1}{a_m}\right) {{\varvec{\Lambda }}}-\mathbf{I}_{r_x} \right] \sim \frac{\sqrt{m}}{a_m}{} \mathbf{B}^{'}(\mathbf{G}_m-{{\varvec{\Sigma }}}_m )\mathbf{B} \end{aligned}$$

is asymptotically normal, which follows by applying the CLT to (38).

We write \( \mathbf{E}_m=(e_{ij})= \frac{1}{\sqrt{m}}\sum _{k=1}^m a_{kn} (\mathbf{B}^{'}{} \mathbf{u}_k^{*}{} \mathbf{u}_k^{*'}{} \mathbf{B}-{{\varvec{\Omega }}}_v)\; \) and \(\mathbf{D}={{\varvec{\Omega }}}^{-1}_v\mathbf{E}_m\;(=(d_{ij}))\). Then, by using Lemmas 3.3 and A.3 in Appendix,

$$\begin{aligned} d_{ij}= & {} \mathrm{Cov} \left[ \sum _{k=1}^r \omega ^{ik}e_{ki}, \sum _{k^{'}=1}^r \omega ^{jk^{'}}e_{k^{'}j} \vert \mathcal{F}^{(0)} \right] \nonumber \\= & {} \frac{a_m(2)}{a_m^2} \sum _{k,k^{'}=1}^r\omega ^{ik}\omega ^{jk^{'}} [ \omega _{kk^{'}}\omega _{ij} + \omega _{kj}\omega _{k^{'}i} ]+o_p(1) \nonumber \\= & {} \frac{a_m(2)}{a_m^2} [ \delta (i,j)+ \omega _{ij}\omega ^{ij} ]+o_p(1)\;, \end{aligned}$$

for \(e_{kj}=(\sqrt{m}/a_m)(\mathbf{B}^{'}(\mathbf{G}_m-{{\varvec{\Sigma }}}_m)\mathbf{B})_{kj}\) and \({{\varvec{\Omega }}}_v^{-1}=(\omega ^{ij})\). (We shall ignore the \(o_p(1)\) term of (42) in the following expressions for simplicity.)

When we use a consistent estimator of \({{\varvec{\Sigma }}}_v\), the resulting expression of the limiting distribution becomes simple, which may be useful in practice. By using the fact that \(\sqrt{a_m(2)}/a_m\sim 3/\sqrt{5}\) and Lemma 3.3, we obtain the next result on the asymptotic distributions of the smaller characteristic roots \(\lambda _i\;(i=1,\ldots ,r_x)\).
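A brief check of the constant \(3/\sqrt{5}\): since \(a_{kn}=4n\sin ^{2}\left[ \frac{\pi }{2}\left( {2k-1 \over 2n+1}\right) \right] \approx \pi ^2(2k-1)^2/(4n)\) for \(k\le m\) when \(m/n\rightarrow 0\), we have

$$\begin{aligned} a_m=\frac{1}{m}\sum _{k=1}^m a_{kn}\approx \frac{\pi ^2m^2}{3n}\;,\quad a_m(2)=\frac{1}{m}\sum _{k=1}^m a_{kn}^2\approx \frac{\pi ^4m^4}{5n^2}\;, \end{aligned}$$

so that \(a_m(2)/a_m^2\rightarrow 9/5\) and hence \(\sqrt{a_m(2)}/a_m\rightarrow 3/\sqrt{5}\).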

Theorem 3.2

Assume the conditions on the Itô semimartingale in (1) and (2) as in Theorem 3.1. We take \(m=m_n=[n^{\alpha }]\;(0<\alpha < 2/3)\) and \(l_n=[n^{\beta }]\) with \(0<\alpha<\beta <1\), and use \({\hat{{\varvec{\Sigma }}}}_v\) for \({{\varvec{\Sigma }}}_v\). As \(n \rightarrow \infty ,m \rightarrow \infty ,\)

$$\begin{aligned} \frac{\sqrt{m}}{a_m} \left[ \lambda _i-a_m \right] {\mathop {\longrightarrow }\limits ^{d}} N\left( 0, \frac{9}{5} d_{ii}\right) \end{aligned}$$

for \(i=1,\ldots ,r_x\;(=p-q_x)\), and the \([\sqrt{m}/a_m ] \left[ \lambda _i-a_m \right] \) are jointly asymptotically normal. The covariances of \( [\sqrt{m}/a_m][\lambda _i-a_m]\) and \( [\sqrt{m}/a_m][\lambda _j-a_m]\;\;(i,j=1,\ldots ,r_x)\) are given by

$$\begin{aligned} d_{ij} = \frac{9}{5} [\delta (i,j) +\omega _{ij}\omega ^{ij} ]\;. \end{aligned}$$


We set

$$\begin{aligned} \lambda _i^{*}=\frac{\sqrt{m}}{a_m}[\lambda _i-a_m]\; \;(i=1,\ldots ,r_x). \end{aligned}$$

Since the effect of estimating \(\mathbf{\Sigma }_v\) is asymptotically negligible, we find the asymptotic variance as

$$\begin{aligned} \mathrm{AVar}\Big [\sum _{i=1}^{r_x}\lambda _i^{*}\Big ]= & {} \mathcal{E}\left[ \sum _{i,i^{'}=1}^{r_x} \lambda _i^{*}\lambda ^{*}_{i^{'}} \right] \nonumber \\= & {} \left[ \frac{a_m(2)}{a_m^2}\sum _{i,i^{'}=1}^{r_x} \left[ \delta (i,i^{'}) +\omega _{ii^{'}}\omega ^{ii^{'}}\right] \right] =\frac{9}{5}\cdot 2 r_x\;. \end{aligned}$$

Then, we summarize our main result on the trace-statistic \(\sum _{i=1}^{r_x}\lambda _i\), which will be used as the key statistic for the application.

Theorem 3.3

Assume the conditions in Theorem 3.2. We take \(m=m_n=[n^{\alpha }]\;(0<\alpha < 2/3)\) and \(l_n=[n^{\beta }]\) with \(0<\alpha<\beta <1\), and use \({\hat{{\varvec{\Sigma }}}}_v\) for \({{\varvec{\Sigma }}}_v\). As \(n \rightarrow \infty \) and \(m \rightarrow \infty \),

$$\begin{aligned} \sqrt{ \frac{ m}{a_m(2)} } \left[ \sum _{i=1}^{r_x} (\lambda _i-a_m ) \right] {\mathop {\longrightarrow }\limits ^{d}} N(0, 2 r_x)\;. \end{aligned}$$


and hence

$$\begin{aligned} \frac{ m }{a_m(2)}\frac{1}{2 r_x} \left[ \sum _{i=1}^{r_x} (\lambda _i-a_m) \right] ^{2} {\mathop {\longrightarrow }\limits ^{d}} \chi ^2(1)\;. \end{aligned}$$

When the true rank \(r_x^{*}\) in (12) is less than \(r_x\), the number of roots used in constructing the statistic in Theorem 3.3, the probability limit of the \(r_x\)th sample characteristic root is nonzero. Hence, we immediately obtain the following result.

Corollary 3.1

Assume the conditions in Theorem 3.2 except that the true rank satisfies \(q_x^{*}>q_x=p-r_x\) in (12). Then,

$$\begin{aligned} \frac{ m }{a_m(2)}\frac{1}{2 r_x} \left[ \sum _{i=1}^{r_x} (\lambda _i-a_m) \right] ^{2} {\mathop {\longrightarrow }\limits ^{p}} \infty \;. \end{aligned}$$

3.4 On eigenvalues in the metric \(\mathbf{H}\)

If we take a non-singular (known) matrix \(\mathbf{H}\) in (21) and (22) instead of \({\hat{{\varvec{\Sigma }}}}_v\), then we need a more restrictive condition, and the derivation in Sect. 3.3 should be modified slightly. This is because we cannot use (23); the resulting derivation, however, is similar to the one in Sect. 3.3. When we use any \(\mathbf{H},\) let

$$\begin{aligned} {{\varvec{\Lambda }}}^{**}_0 =(\mathbf{B}^{'}{} \mathbf{H}\mathbf{B})^{-1}{{\varvec{\Omega }}}_v \;(=(\lambda _{ij}^{**}))\;, \end{aligned}$$

which corresponds to the probability limit of \((1/a_m){{\varvec{\Lambda }}}\).

Then, we can replace \(\mathbf{B}\) and \({{\varvec{\Omega }}}_v\;(=\mathbf{B}^{'}{{\varvec{\Sigma }}}_v\mathbf{B})\) by their consistent estimators. In this case, however, the requirement on m differs from that in Theorems 3.2 and 3.3. We present the following result but omit the details of the derivation because the basic argument is similar to the one in Sect. 3.3.

Proposition 3.1

Assume the conditions in Theorem 3.2. We take \(m=m_n=[n^{\alpha }]\;(0<\alpha < 1/2)\) and a non-singular (constant) matrix \(\mathbf{H}\) instead of \({\hat{{\varvec{\Sigma }}}}_v\). Then, as \(n \rightarrow \infty \) and \(m\rightarrow \infty \),

$$\begin{aligned} \frac{ m }{a_m(2)}\frac{1}{r_{*}} \left[ \;\sum _{i=1}^{r_x} (\lambda _i-a_m \lambda _{ii}^{**})\; \right] ^{2}{\mathop {\longrightarrow }\limits ^{d}} \chi ^2(1)\;, \end{aligned}$$

where \(r_{*}=\sum _{i,i^{'}=1}^{r_x}\mathbf{Cov}(U_{ii}^{*},U_{i^{'}i^{'}}^{*})\) and the covariances are given in Lemma A.3 in Appendix with \(U_{ii}^{*}\).

It may be convenient to take \(\mathbf{H}=\mathbf{I}_{p}\). However, the expression of the asymptotic distribution then becomes complicated, and we need to estimate additional unknown parameters from the observations.

3.5 A procedure of detecting the number of factors of quadratic variation

It is straightforward to develop a testing procedure for the hypothesis \(H_0\; :\; r_x=r_0\) (\(p>r_0\ge 1\) is a specified number) against \(H_A\;:\; r_x=r_0+1\). It may be reasonable to use the \(r_0\) smallest characteristic roots, and the rejection region can be constructed from the limiting normal or \(\chi ^2\) distribution under \(H_0\). (\(H_0\) corresponds to the case \(q_x=p-r_0\), while \(H_A\) corresponds to \(q_x=p-(r_0+1)\).) Hence, it may be natural to use the sum of the smallest characteristic roots

$$\begin{aligned} R_0= \sum _{i=1}^{r_0} (\lambda _i - a_m)\; \end{aligned}$$

where \(0\le \lambda _1\le \cdots \le \lambda _p\). From Corollary 3.1, we can use

$$\begin{aligned} T_{n}(r_{0}) = \frac{ m }{a_m(2)}\frac{1}{2 r_0} \left[ \sum _{i=1}^{r_0} (\lambda _i-a_m) \right] ^{2} \end{aligned}$$

as the test statistic for estimating the number of latent factors of the underlying Itô semimartingale. The rejection region at significance level \(\alpha _{n}\) (\(\alpha _{n} = \alpha / \log m_{n}\) for some constant \(\alpha \in (0,1)\)) is

$$\begin{aligned} T_{n}(r_{0}) \ge \chi _{1-\alpha _{n}}^2(1), \end{aligned}$$

where \(\chi _{1-\alpha _{n}}^2(1)\) is the \((1-\alpha _{n})\)-quantile of \(\chi ^{2}(1)\). Under \(H_A ,\) the \((r_0+1)\)th characteristic root satisfies \(\lambda _{r_0+1} {\mathop {\longrightarrow }\limits ^{p}} \infty \) and the test is consistent; that is, we can consistently estimate the true number of latent factors \(q_{x}^{*} = p - r_{x}^{*}\).

Formally, we employ the following stopping rule for the proposed sequential test.

  1. Compute the test statistic \(T_{n}(p)\). If \(T_{n}(p) < \chi ^{2}_{1-\alpha _{n}}(1)\), we conclude \(r_{x} = p\) (i.e., \(\hat{q}_{x}^{*} = 0\)). If \(T_{n}(p) \ge \chi ^{2}_{1-\alpha _{n}}(1)\), we proceed to the next step and test \(H_0\; :\; r_x= p-1\) against \(H_A\; :\; r_x=p\) by using \(T_{n}(p-1)\).

  2. We iterate the test of \(H_0\; :\; r_x= r_0-1\) against \(H_A\; :\; r_x=r_0\) (\(r_0 = p, p-1,\ldots , 2\)) sequentially until the null hypothesis is accepted.

  3. We stop the sequential test and conclude \(r_{x} = r_0-1\) (i.e., \(\hat{q}_{x}^{*} = p-(r_{0}-1)\)) the first time \(H_0\; :\; r_x= r_0-1\) is accepted.

  4. When the null hypothesis \(H_0\; :\; r_x= 1\) is rejected, we conclude \(r_{x} = 0\) (i.e., \(\hat{q}_{x}^{*} = p\)).
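The stopping rule above can be sketched in code. The following is a minimal illustration (not the authors' implementation; the function and variable names are ours), assuming the ordered eigenvalues \(\lambda _1\le \cdots \le \lambda _p\), \(m\), \(a_m\) and \(a_m(2)\) have already been computed. The \(\chi ^2(1)\) quantile is obtained from the standard normal quantile, since \(Z^2 \sim \chi ^2(1)\) for \(Z\sim N(0,1)\).

```python
import math
from statistics import NormalDist

def chi2_1_quantile(q):
    # q-quantile of chi^2(1): if Z ~ N(0,1), then Z^2 ~ chi^2(1).
    return NormalDist().inv_cdf((1.0 + q) / 2.0) ** 2

def trace_stat(lam, r0, m, a_m, a_m2):
    # T_n(r0) = (m / a_m(2)) * (1 / (2 r0)) * [sum_{i<=r0} (lam_i - a_m)]^2
    s = sum(lam[:r0]) - r0 * a_m
    return (m / a_m2) * s * s / (2.0 * r0)

def estimate_num_factors(lam, m, a_m, a_m2, alpha=0.05):
    """Sequential test with shrinking level alpha_n = alpha / log(m).
    lam must be sorted in ascending order; returns q_hat = p - r_hat."""
    p = len(lam)
    crit = chi2_1_quantile(1.0 - alpha / math.log(m))
    for r0 in range(p, 0, -1):          # r0 = p, p-1, ..., 1
        if trace_stat(lam, r0, m, a_m, a_m2) < crit:
            return p - r0               # first accepted null: r_x = r0
    return p                            # all nulls rejected: r_x = 0
```

For example, with \(p=5\) and the three smallest roots close to \(a_m\) while the other two are much larger, the routine accepts \(H_0: r_x=3\) and returns \(\hat{q}_x^{*}=2\).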

We can show the validity of the proposed method, which is summarized in the next theorem; the proof is given in Appendix.

Theorem 3.4

Under the same conditions in Theorem 3.2, we have that \(\hat{q}_{x}^{*} {\mathop {\longrightarrow }\limits ^{p}} q_{x}^{*}\) as \(n \rightarrow \infty \).

Our method is an extension of the standard method for detecting the number of factors in statistical multivariate analysis to the case when the latent variables form a continuous stochastic process. See Anderson (1984), Anderson (2003), Robin and Smith (2000), and Fujikoshi et al. (2010).

In the framework of traditional hypothesis testing, there is a multiple testing problem when we use a sequence of test statistics with a given significance level. In time series analysis, there is a literature on model selection for determining the number of factors by Akaike’s information criterion (AIC); see Kitagawa (2010), for instance. These problems are important for practical purposes and will be explored on another occasion.

4 Simulations

In this section, we give some simulation results on the characteristic roots and test statistics when \(p=10\) (the observed dimension) and discuss the finite-sample properties of the characteristic roots and test statistics developed in the previous sections.

4.1 Simulated models

First, we simulate the latent process with a three-dimensional Itô semimartingale. Let \(\tilde{\mathbf{X}} = (\tilde{\mathbf{X}}(t))_{t \ge 0}\) with \(\tilde{\mathbf{X}}(t)=(\tilde{X}_{1}(t), \tilde{X}_{2}(t), \tilde{X}_{3}(t))^{'}\) be the vector Itô semimartingale satisfying

$$\begin{aligned} \mathrm{{d}}\tilde{X}_{g}(t)= & {} \sigma _{g}(t)\mathrm{{d}}W_{g}(t),\ g=1,2, \end{aligned}$$
$$\begin{aligned} \mathrm{{d}}\tilde{X}_{3}(t)= & {} Z(t)\mathrm{{d}}N(t), \end{aligned}$$

where \(\mathbf{W} = (W_{1}, W_{2})^{'}\) is a two-dimensional (standard) Brownian motion and N is a Poisson process with intensity 10. We assume that N is independent of \(\mathbf{W}\) and that \(Z = (Z(t))_{t \ge 0}\) gives the jump sizes with \(Z(t) \sim {N}(0,5^{-2})\). (See Cont and Tankov (2004) for the generation of jump processes.)

For the volatility process \(\sigma \) of the diffusion part, we set

$$\begin{aligned} d(\sigma _{g}(t))^{2} = a_{g}(\mu _{g} - (\sigma _{g}(t))^{2})\mathrm{{d}}t + \kappa _{g}\sigma _{g}(t)\mathrm{{d}}W^{\sigma _{g}}(t),\quad g=1,2, \end{aligned}$$

where \(\sigma _{1}(t)\) and \(\sigma _{2}(t)\) are independent and \(\mathbf{W}^{\sigma } = (W^{\sigma _{1}}, W^{\sigma _{2}})^{'}\) is a two-dimensional (standard) Brownian motion. We set \(a_{1}=2\), \(a_{2} = 3\), \(\mu _{1} = 0.8\), \(\mu _{2} = 0.7\), \(\kappa _{1} = \kappa _{2} = 0.5\), \(\mathcal{E}[\mathrm{{d}}W^{\sigma _{j}}(t)\mathrm{{d}}W_{j}(t)]=\rho _{j}\mathrm{{d}}t\), and \(\rho _{1} = \rho _{2} = -0.5\). In our simulation, we consider the following two models:

$$\begin{aligned} \mathrm{Model\;1}\;:\;\; \mathbf{Y}(t) = {{\varvec{\Gamma }}}_{1}(\tilde{X}_{1}(t), \tilde{X}_{2}(t))^{'} + {\mathbf{v}}(t) \end{aligned}$$


$$\begin{aligned} \mathrm{Model\;2}\;:\;\; \mathbf{Y}(t) = {{\varvec{\Gamma }}}_{1}(\tilde{X}_{1}(t), \tilde{X}_{2}(t))^{'} + {{\varvec{\Gamma }}}_{2}{\tilde{X}_{3}}(t) + {\mathbf{v}}(t). \end{aligned}$$

Here, we denote the coefficient matrices (\(p\times 2\) and \(p\times 1\), respectively) as \({{\varvec{\Gamma }}}_{1} = ({{\varvec{\gamma }}}_{1}^{(1)},\dots , {{\varvec{\gamma }}}_{1}^{(p)})^{'}\) (\({{\varvec{\gamma }}}_{1}^{(j)'} = ({{\varvec{\gamma }}}_{1,1}^{(j)}, {{\varvec{\gamma }}}_{1,2}^{(j)})\)) and \({{\varvec{\Gamma }}}_{2} = ({{\varvec{\gamma }}}_{2}^{(1)},\dots , {{\varvec{\gamma }}}_{2}^{(p)})^{'}\), which are sampled as \({{\varvec{\gamma }}}_{1,1}^{(j)} \sim U([0.25, 1.75])\), \({{\varvec{\gamma }}}_{1,2}^{(j)} \sim U([0.1, 0.25])\) and \({{\varvec{\gamma }}}_{2}^{(j)} \sim U([0.25,1.75])\), \(j=1,\dots ,p\). The observation vectors are

$$\begin{aligned} \mathbf{Y}(t_i^n) = (Y_{1}(t_i^n),\dots , Y_{p}(t_i^n))^{'},\ i = 1,\dots , n \end{aligned}$$

and we set \(t_i^n=\frac{i}{n}\;(i=1,\ldots ,n)\) and \(\Delta = \Delta _{n} = 1/n\). As the market microstructure noise vectors, we set \({\mathbf{v}}(t_i^n) = (v_{1}(t_i^n),\dots ,v_{p}(t_i^n))^{'}\), and use independent Gaussian noises for each component, that is,

$$\begin{aligned} (v_{1}(t_i^n),\dots ,v_{p}(t_i^n))^{'} \sim i.i.d. N_{p}(\mathbf{0},c_v\mathbf{I}_{p})\;\;(i=1,\dots , n)\;, \end{aligned}$$

with a pre-specified value \(c_v\). In all simulations, we set \(p=10\) and \(n = 20{,}000\). (Models 1 and 2 can be seen as special cases of those investigated in Li et al. (2017a, b).)
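As an illustration of the data-generating process, the following sketch simulates Model 1 with a simple Euler scheme (this is our own minimal version, not the authors' code; the variable names are ours, and the reflection via the absolute value, which keeps the discretized variance positive, is one common convention rather than part of the model).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, c_v = 20_000, 10, 1e-8        # sample size, dimension, noise variance
dt = 1.0 / n

# CIR-type stochastic volatility with leverage rho = -0.5 for g = 1, 2
a = np.array([2.0, 3.0])
mu = np.array([0.8, 0.7])
kappa, rho = 0.5, -0.5
sig2 = mu.copy()                    # start each variance at its mean level
X = np.zeros((n + 1, 2))            # latent diffusion factors X_1, X_2
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), 2)                      # price shocks
    dWs = rho * dW + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt), 2)
    X[i + 1] = X[i] + np.sqrt(sig2) * dW                      # dX_g = sigma_g dW_g
    sig2 = np.abs(sig2 + a * (mu - sig2) * dt + kappa * np.sqrt(sig2) * dWs)

# Model 1: Y(t_i) = Gamma_1 (X_1, X_2)' + v(t_i) with i.i.d. Gaussian noise
G1 = np.column_stack([rng.uniform(0.25, 1.75, p), rng.uniform(0.10, 0.25, p)])
Y = X[1:] @ G1.T + rng.normal(0.0, np.sqrt(c_v), (n, p))
```

Model 2 would add the jump factor \(\tilde{X}_3\): compound-Poisson increments \(Z\,\mathrm{d}N\) with intensity 10 and \(Z \sim N(0,5^{-2})\), loaded through \({{\varvec{\Gamma }}}_2\).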

In our simulations, we set \(m_n=2\times [n^{0.646}]\) because of the conditions in Theorems 3.2 and 3.3. We have run a large number of simulations, but we report only some representative results.

4.2 Simulation results

Let N be the number of Monte Carlo iterations. We plot the mean values of the eigenvalues of the SIML estimator of the quadratic variation in Figs. 1 and 2. To compute the SIML estimators, we set \(m_n=2\times [n^{0.646}]\;(=2\times 600)\) and \(l_n=1.5m_n\), and use

$$\begin{aligned} \hat{{\varvec{\Sigma }}}_{x} = \frac{1}{m_{n}} \sum _{j=1}^{m_{n}}\mathbf{z}_{j}{} \mathbf{z}_{j}^{'}\;,\;\; \hat{{\varvec{\Sigma }}}_{v} = \frac{1}{l_{n}}\sum _{j=n-l_{n}+1}^{n}a_{jn}^{-1}{} \mathbf{z}_{j}\mathbf{z}_{j}^{'}, \end{aligned}$$

where \(a_{kn}=4n\sin ^{2}\left[ \frac{\pi }{2}\left( {2k-1 \over 2n+1}\right) \right] \).

In the following figures, \(0\le {\lambda }_{1} \le \dots \le {\lambda }_{p}\) are the eigenvalues of \(\hat{{\varvec{\Sigma }}}_{v}^{-1}\hat{{\varvec{\Sigma }}}_{x}\).
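Given the observed price matrix, this eigenvalue computation can be sketched as follows. This is our minimal reading of (21) and (22), not the authors' code: the cosine form of the \(\mathbf{K}_n\)-transformation follows the SIML literature, while the weights \(a_{kn}\) and the two estimators match the formulas above.

```python
import numpy as np

def siml_eigenvalues(Y, m, l):
    """Ordered eigenvalues of hat Sigma_v^{-1} hat Sigma_x.
    Y: (n+1) x p array of observed log prices sampled at t_i = i/n."""
    n, p = Y.shape[0] - 1, Y.shape[1]
    r = np.diff(Y, axis=0)                              # returns, n x p
    k = np.arange(1, n + 1, dtype=float)
    j = np.arange(1, n + 1, dtype=float)
    # K_n transform (cosine form used in the SIML literature)
    P = np.sqrt(2.0 / (n + 0.5)) * np.cos(
        (2.0 * np.pi / (2.0 * n + 1.0)) * np.outer(k - 0.5, j - 0.5))
    Z = np.sqrt(n) * P @ r                              # z_k, k = 1, ..., n
    a = 4.0 * n * np.sin((np.pi / 2.0) * (2.0 * k - 1.0) / (2.0 * n + 1.0)) ** 2
    Sx = Z[:m].T @ Z[:m] / m                            # hat Sigma_x
    Sv = Z[n - l:].T @ (Z[n - l:] / a[n - l:, None]) / l  # hat Sigma_v
    lam = np.linalg.eigvals(np.linalg.solve(Sv, Sx))    # roots of |Sx - lam Sv| = 0
    return np.sort(lam.real)
```

Since the transform matrix is \(n\times n\), for \(n=20{,}000\) one would in practice form only the first \(m_n\) and last \(l_n\) rows of the matrix \(P\).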

In Model 1 and Model 2, we have 10-dimensional observation vectors (\(p=10\)). Model 1 has two factors of diffusion type (\(q_x=2,r_x=8\)), while Model 2 has two diffusion-type factors and one jump factor (\(q_x=3,r_x=7\)). Figures 1 and 2 show that the estimated characteristic roots reflect the true rank of the hidden stochastic process. Figures 3 and 4 show the distributions of the test statistic developed in this paper. In the first case (Fig. 3), there is a clear indication of two nonzero characteristic roots, which are far from zero by any meaningful criterion, while in the second case (Fig. 4), there is a clear indication of three nonzero characteristic roots.

To illustrate the asymptotic normality of the characteristic roots obtained in Theorem 3.2, we give a qq-plot in Fig. 5 for Model 2 (\(q_x=3,r_x=7\)). The simulation confirms the result of Theorem 3.2.

As an illustration of the power of the test statistic developed, we report the relative frequency of rejections of the hypothesis at the \(10\%\) and \(5\%\) levels using the \(\chi ^2\)-statistic. (The number of replications was 300.) In Model 1 (the null is \(r_x=8\)), it was 0.840 (\(90\%\)) and 0.913 (\(95\%\)). In Model 2 (the null is \(r_x=7\)), it was 0.853 (\(90\%\)) and 0.917 (\(95\%\)).

Our simulations suggest that our method of evaluating the rank condition of latent volatility factors, based on the characteristic roots and SIML estimation, properly detects the number of latent factors.

Fig. 1
figure 1

Mean of estimated log characteristic roots (log eigenvalues) of Model 1 when \(c_v = 10^{-6}\) (left) and \(c_v = 10^{-8}\) (right). We set \(\Delta = 1/20{,}000\), \(m_n=2\times [n^{0.646}]\;(=2\times 600)\) and \(l_n=1.5m_n\). The number of Monte Carlo iterations is 300

Fig. 2
figure 2

Mean of estimated log characteristic roots (log eigenvalues) of Model 2 when \(c_v = 10^{-6}\) (left) and \(c_v = 10^{-8}\) (right). We set \(\Delta = 1/20{,}000\), \(m_n=2\times [n^{0.646}]\;(=2\times 600)\) and \(l_n=1.5m_n\). The number of Monte Carlo iterations is 300

Fig. 3
figure 3

Empirical distributions of the test statistic \(T_{n}(r_{0})\) of Model 1 for \(r_{0} = 7\) (left), \(r_{0} = 8\) (center) and \(r_{0} = 9\) (right) when \(r_{x} = 8\). We set \(\Delta = 1/20{,}000\), \(c_v = 10^{-8}\), \(m_n=2\times [n^{0.646}]\;(=2\times 600)\) and \(l_n=1.5m_n\). The number of Monte Carlo iterations is 300. The red line is the density of the chi-square distribution with 1 degree of freedom

Fig. 4
figure 4

Empirical distributions of the test statistic \(T_{n}(r_{0})\) of Model 2 for \(r_{0} = 6\) (left), \(r_{0} = 7\) (center) and \(r_{0} = 8\) (right) when \(r_{x} = 7\). We set \(\Delta = 1/20{,}000\), \(c_v = 10^{-8}\), \(m_n=2\times [n^{0.646}]\;(=2\times 600)\) and \(l_n=1.5m_n\). The number of Monte Carlo iterations is 300. The red line is the density of the chi-square distribution with 1 degree of freedom

Fig. 5
figure 5

A qq-plot of eigenvalues (Model-2 when \(r_x=7\))

5 An empirical example

In this section, we report one empirical data analysis using the method developed in the previous sections; it is no more than an illustration of our proposed method. We used the intra-day observations of the top five financial stocks (Mitsubishi UFJ Financial Group, Inc., Mizuho Financial Group, Inc., Nomura Holdings, Inc., Resona Holdings, Inc., and Sumitomo Mitsui Financial Group, Inc.) traded on the Tokyo Stock Exchange (TSE) on January 25, 2016, which may be regarded as a typical trading day. We picked these five major financial stocks listed on the TSE because they are actively traded every day with high liquidity; hence, we do not have serious disturbing effects due to the non-synchronous trading process in the TSE market. We sub-sampled the returns of each asset every second (\(\Delta = n^{-1} = 1/18{,}000\)), taking the nearest (past) trading price at every unit of time.

For the SIML estimation, we set \(m_n=2 \times [n^{0.51}]\;(=294)\) and \(l_n=1.5m_n\). Since all five companies belong to the same market division (first section) of the TSE, it would be reasonable to expect that the number of factors of these assets is smaller than 5 (i.e., \(q_{x}<5\)). Figure 6 shows the estimated eigenvalues of the quadratic variation of these stocks using (21) and (22). In this example, two eigenvalues are dominant while the other three are much smaller. We have the statistics \( T_n(5) = 91.37832\), \(T_n(4) = 40.41634\), \(T_n(3) = 5.479696\) and \(T_n(2) = 10.60642\). Therefore, at significance level \(\alpha _{n} = 0.05/\log m_{n}\) (\(\chi ^{2}_{1-\alpha _{n}}(1) \approx 6.8635\)), the null hypotheses \(H_0\; :\; r_x= 5\) and \(H_0\; :\; r_x= 4\) are rejected, but the null hypothesis \(H_0\; :\; r_x= 3\) is not rejected. Although the largest root dominates and the second largest root is much smaller, the second root cannot be ignored because the remaining roots are much smaller than these two. This implies that the quadratic variation has two factors within the day.
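The sequential decision for this example can be reproduced from the reported statistics; the following is a small check of the arithmetic (our own sketch), using the fact that \(Z^2\sim \chi ^2(1)\) for \(Z\sim N(0,1)\) to compute the critical value.

```python
import math
from statistics import NormalDist

# Reported trace statistics T_n(r0) for the five TSE stocks (p = 5)
T = {5: 91.37832, 4: 40.41634, 3: 5.479696, 2: 10.60642}
m_n, p = 294, 5
alpha_n = 0.05 / math.log(m_n)
crit = NormalDist().inv_cdf(1.0 - alpha_n / 2.0) ** 2   # chi^2_{1-alpha_n}(1)

r_hat = None
for r0 in sorted(T, reverse=True):       # r0 = 5, 4, 3, 2
    if T[r0] < crit:                     # first accepted null: r_x = r0
        r_hat = r0
        break
q_hat = p - r_hat                        # estimated number of factors
```

The computed critical value is approximately 6.86, the first accepted null is \(r_x=3\), and \(\hat{q}_x=2\), in agreement with the text.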

Fig. 6
figure 6

Estimated eigenvalues. In this case, \(\Delta = 1/18{,}000\), \(m_n=2\times [n^{0.51}]\;(=294)\) and \(l_n=1.5m_n\)

Figure 7 shows the intra-day movements of the five stock prices in the TSE afternoon session of January 25, 2016. (There is a lunch break in the Tokyo stock market.) We set the same starting prices because we want to focus on the volatility structure (or quadratic variation) of the five asset prices. There is strong evidence of two types of intra-day movements of stock prices, which is consistent with our reported data analysis.

Fig. 7
figure 7

Intra-day movements of 5 stock prices at Tokyo (January 25, 2016)

6 Conclusions

In financial markets, many assets are traded, and it is important to find a small number of latent factors behind the observed random fluctuations of the underlying assets. It is straightforward to detect the number of latent factors by using the SIML method when the true latent stochastic process belongs to the class of Itô semimartingales and market microstructure noise may be present. Our procedure is essentially the same as the standard method in statistical multivariate analysis, except that we have Itô semimartingales as the latent state variables. We have derived the asymptotic properties of the characteristic roots and vectors, which are new results. It is then possible to develop test statistics for the rank condition. Our limited simulations and an empirical application suggest that our approach works well in practical situations.

There are possible extensions of the approach developed in this paper. Since Assumptions 1 and 2 are restrictive, it would be interesting to explore the mathematical details and find less restrictive conditions under which useful results can be obtained. For instance, there are discussions of the presence of common jumps, such as Jacod and Todorov (2009), Bibinger and Winkelmann (2015), Aït-Sahalia and Jacod (2014), and Kurisu (2018). Some comparison with the existing literature would also be useful, although it requires further analysis. Several unsolved problems remain. Although we need to choose \(m_n\) and \(l_n\) in SIML testing in practice, this problem differs from that of choosing \(m_n\) and \(l_n\) in SIML estimation. The power of the test procedure is another unsolved problem, although the trace statistic we used may be a natural choice.

More importantly, a notable feature of our approach under Assumption 2 is that the method is simple and useful. We have obtained promising experimental results even when the dimension of the observations is not small, say 100. The number of asset prices in actual financial markets is large in practical financial risk management, and there would be a number of empirical applications. These problems are currently under investigation.