Abstract
We establish the asymptotic normality of a quadratic form \(Q_n\) in martingale difference random variables \(\eta _t\) when the weight matrix \(A\) of the quadratic form has an asymptotically vanishing diagonal. Such a result has numerous potential applications in time series analysis. While for i.i.d. random variables \(\eta _t\), asymptotic normality holds under the condition \(||A||_{sp}=o(||A||) \), where \(||A||_{sp}\) and \(||A||\) are the spectral and Euclidean norms of the matrix \(A\), respectively, finding corresponding sufficient conditions in the case of martingale differences \(\eta _t\) has been an important open problem. We provide such sufficient conditions in this paper.
1 Main results
We study here quadratic forms
$$\begin{aligned} Q_n=\sum _{t,s=1}^n a_{n;ts}\,\eta _t\eta _s, \end{aligned}$$
(1.1)
where \(\{\eta _k\}\) is a stationary ergodic martingale difference (m.d.) sequence with respect to some natural filtration \({{\mathcal {F}}}_t\), with moments
The real-valued coefficients \(a_{n;tk}\) in (1.1) are entries of a symmetric matrix \(A_n=(a_{n;tk})_{t,k=1,\ldots , n}\). We denote by
$$\begin{aligned} ||A_n||=\Big (\sum _{t,k=1}^n a_{n;tk}^2\Big )^{1/2} \end{aligned}$$
the Euclidean norm and by
$$\begin{aligned} ||A_n||_{sp}=\sup _{||x||\le 1}||A_nx|| \end{aligned}$$
the spectral norm of the matrix \(A_n\). For convenience, we set \(a_{n;tk}=0\) for \(t \le 0\), \(t > n \) or \( k \le 0\), \(k > n\).
The asymptotic normality property of the quadratic form \(Q_n\) has been well investigated when the random variables \(\eta _j\) are i.i.d. If \(A_n\) has a vanishing diagonal, \(a_{n;tt}=0\) for all t, then asymptotic normality is implied by the condition
see Rotar (1973), De Jong (1987), Guttorp and Lockhart (1988), Mikosch (1991) and Bhansali et al. (2007a).
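As a numerical aside (not from the paper), the norm condition \(||A||_{sp}=o(||A||)\) can be observed directly for Toeplitz weight matrices with geometrically decaying entries; the choice \(b(k)=0.5^k\) below is purely illustrative:

```python
import numpy as np

def toeplitz_weights(n, b):
    """Symmetric Toeplitz weight matrix a_{n;ts} = b(|t - s|) with zero diagonal."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    A = b(k)
    np.fill_diagonal(A, 0.0)
    return A

b = lambda k: 0.5 ** k      # illustrative summable weights
ratios = []
for n in (50, 200, 800):
    A = toeplitz_weights(n, b)
    # ||A_n||_sp stays bounded while ||A_n|| grows like sqrt(n),
    # so the ratio shrinks roughly like n^{-1/2}.
    ratios.append(np.linalg.norm(A, 2) / np.linalg.norm(A, "fro"))

assert ratios[0] > ratios[1] > ratios[2]
```

Here `np.linalg.norm(A, 2)` is the spectral norm (largest singular value) and `"fro"` the Euclidean (Frobenius) norm.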
The aim of this paper is to extend these results to the m.d. noise \(\eta _j\). We will need the following additional assumptions on the m.d. noise \(\eta _t\):
The assumption (1.3) bounds the conditional variance of \(\eta _j\) away from zero. We also assume that \(A_n\) has an asymptotically “vanishing” diagonal in the following sense:
Relation (1.4) implies
The following theorem shows that in the case of m.d. noise \(\{\eta _k\}\), the condition
above needs to be strengthened by including the assumptions (1.8) and (1.9) on the weights \(a_{n;ts}\). Its proof is based on the martingale central limit theorem.
Theorem 1.1
Let \(Q_n\) be as in (1.1), where \(\{\eta _j\}\) is a stationary ergodic m.d. noise such that \(E\eta _j^4<\infty \) and (1.3) hold. Suppose that the \( a_{n;ts}\)’s are such that, as \(n \rightarrow \infty \),
Then there exist \(c_1, c_2>0\) such that
If in addition,
and
then the following normal convergence holds:
As usual, “\(\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1)\)” denotes convergence in distribution to a normal random variable with mean zero and variance one.
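The normal convergence in Theorem 1.1 can be illustrated by simulation (this experiment is not from the paper; the Toeplitz weights \(b(k)=0.5^k\) and the ARCH(1) parameters below are illustrative assumptions for a stationary ergodic m.d. noise with bounded-below conditional variance and finite fourth moment):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 2000

# Toeplitz weights a_{ts} = b(|t-s|) with b(0) = 0 (vanishing diagonal).
k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
A = 0.5 ** k
np.fill_diagonal(A, 0.0)

# ARCH(1)-type m.d. noise: eta_t = sigma_t * eps_t with
# sigma_t^2 = 0.7 + 0.3 * eta_{t-1}^2, so the conditional variance is
# bounded below by 0.7 (cf. (1.3)) and E eta_t^4 < infinity (3 * 0.3^2 < 1).
eps = rng.standard_normal((reps, n))
eta = np.zeros((reps, n))
prev = np.zeros(reps)
for t in range(n):
    sigma2 = 0.7 + 0.3 * prev ** 2
    eta[:, t] = np.sqrt(sigma2) * eps[:, t]
    prev = eta[:, t]

# One realization of Q_n = sum_{t,s} a_{ts} eta_t eta_s per replication.
Q = np.einsum("ij,ij->i", eta @ A, eta)

# Empirically standardized values should look N(0, 1): roughly 95% of
# them should fall inside +-1.96.
Z = (Q - Q.mean()) / Q.std()
coverage = np.mean(np.abs(Z) < 1.96)
assert 0.90 < coverage < 0.99
```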
Theorem 1.1 plays an important instrumental role in establishing asymptotic properties of various estimation and testing procedures in parametric and non-parametric time series analysis where the object of interest can be written as a quadratic form
of a linear (moving-average) process
of uncorrelated noise \(\eta _t\) and the weights \(e_n(s)\) may depend on n. In the case of i.i.d. noise \(\eta _t\), the asymptotic normality for \(Q_{n,X}\) is established by approximating it by a simpler quadratic form
with some different weights \(b_n(t)\) and then deriving the asymptotic normality for \(Q_{n,\eta }\), as in Bhansali et al. (2007b). For example, one sets
where f(x) is the spectral density of the sequence \(X_t\), and where \(u_n(x)\) is some convenient function related to \(e_n(t)\), typically such that
In general, obtaining simple asymptotic normality conditions for \(Q_{n,X}\) is a hard theoretical problem but of great practical importance, which for an i.i.d. noise \(\eta _t\) was solved in Bhansali et al. (2007b). In addition, Sect. 6.2 in Giraitis et al. (2012) considers discrete frequencies and shows that a sum
of weighted periodograms
of the sequence \(X_t\) at Fourier frequencies \(u_j\) can also be effectively approximated by a quadratic form \(Q_{n,\eta }\). This allows one, via a result like Theorem 1.1, to establish the asymptotic normality of such sums \(S_n\). However, the assumption of i.i.d. noise is restrictive and may not be satisfied in practical applications or in some theoretical settings, e.g. ARCH models. In a subsequent paper we will derive corresponding normal approximation results for \(Q_{n,X}\) and \(S_n\) when \(\eta _t\) is a martingale difference process.
The following Corollary 1.1 displays situations where the conditions of Theorem 1.1 are easily satisfied. For a Toeplitz matrix \(A_n\), that is with entries
the assumption (1.9) is clearly satisfied, since
The following lemma provides a useful bound that can be used to prove (1.6).
Lemma 1.1
Suppose that
where \( g_n(x)\), \(|x|\le \pi \) is an even real function. If there exists
and a sequence of constants \(k_n>0\) such that
then
For the proof see Theorem 2.2(i) in Bhansali et al. (2007a).
Suppose now, in addition, that \(g_n(x) \equiv g(x)\), \(n \ge 1\) and \(|g(x)|\le C|x|^{-\alpha }\), \(|x|\le \pi \). Then
and, in addition, \(k_n=1\) in (1.11). Hence
and
which implies (1.6). Moreover,
Since \(\sum _{|k|\ge L}b^2(|k|)\rightarrow 0\) as \(L\rightarrow \infty \), we obtain (1.8). This together with Theorem 1.1 implies the following corollary.
Corollary 1.1
Let
where \(b(t)=b(-t)\), \(b(0)=0\) are real weights and \(\{\eta _j\}\) is a stationary ergodic m.d. noise such that \(E\eta _j^4<\infty \) and (1.3) hold.
(i)

If \(\sum _{t=0}^\infty |b(t)|<\infty \), then
$$\begin{aligned} \exists c_1, c_2>0:\ c_1n\le \mathrm{Var}(Q_{n})\le c_2n, \quad n\ge 1, \end{aligned}$$
(1.12)
$$\begin{aligned} (\mathrm{Var}(Q_{n}))^{-1/2}(Q_{n}-EQ_{n})\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1). \end{aligned}$$
(1.13)

(ii)

If \(b(t)= \int _{-\pi }^\pi e^{itx}g(x)dx\), \(t=0,1,\ldots ,\) where g(x), \(|x|\le \pi \) is an even real function such that for some \(0\le \alpha <1/2\) and \(C>0\),
$$\begin{aligned} |g(x)|\le C|x|^{-\alpha }, \qquad |x|\le \pi , \end{aligned}$$
(1.14)
then (1.12) and (1.13) hold.
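A deterministic check of the growth rate in (1.12) is possible in the simpler i.i.d. case: for i.i.d. unit-variance noise and a zero diagonal, \(\mathrm{Var}(Q_n)=2||A_n||^2\) exactly, so the bounds \(c_1 n\le \mathrm{Var}(Q_n)\le c_2 n\) can be inspected without simulation (the weights \(b(k)=0.6^k\) are illustrative; this is a sketch, not the m.d. case of the corollary):

```python
import numpy as np

b = lambda k: np.where(k == 0, 0.0, 0.6 ** k)
ratios = []
for n in (100, 200, 400):
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    A = b(k)
    # For i.i.d. eta with E eta = 0, E eta^2 = 1 and a zero diagonal,
    # Var(Q_n) = 2 * sum_{t,s} a_{ts}^2 = 2 ||A_n||^2.
    ratios.append(2 * np.sum(A ** 2) / n)

# Var(Q_n)/n settles between two positive constants, as in (1.12).
assert max(ratios) / min(ratios) < 1.1
assert min(ratios) > 0
```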
Next we consider two quadratic forms
with corresponding matrices \( A_n^{(1)} \), \( A_n^{(2)} \) and a m.d. sequence \(\eta _t\) which satisfy the assumptions of Theorem 1.1, so that
The next corollary provides an additional sufficient condition that implies asymptotic normality of their sum.
Corollary 1.2
Suppose that the quadratic forms \(Q_n^{(1)}\), \(Q_n^{(2)}\) in (1.15) satisfy the assumptions of Theorem 1.1. Set
If in addition
then the quadratic form \(Q_n:=Q_n^{(1)}+Q_n^{(2)}\) satisfies
and
Proof
We have \(Q_n = \sum _{t,s=1}^n a_{n;ts} \eta _t \eta _s\) where \(a_{n;ts}=a^{(1)}_{n;ts}+a^{(2)}_{n;ts}\). Thus, to prove the corollary, it suffices to show that \(A_n\) satisfies the assumptions of Theorem 1.1. This easily follows from the fact that both \(A_n^{(1)}\) and \(A_n^{(2)}\) satisfy the assumptions of Theorem 1.1, and the property
The latter follows from
because the matrices \(A^{(1)}_n\) and \(A^{(2)}_n\) are symmetric so the cross term
Hence
where
Since \(||A^{(1)}_n||^2+ ||A^{(2)}_n||^2\ge 2 ||A^{(1)}_n||\, ||A^{(2)}_n||\) we get \(r_n=o(1)\) by (1.16). \(\square \)
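The algebra behind the proof above can be sanity-checked numerically: for symmetric matrices, \(||A^{(1)}+A^{(2)}||^2=||A^{(1)}||^2+||A^{(2)}||^2+2\,\mathrm{tr}(A^{(1)}A^{(2)})\), and the cross term obeys a Cauchy–Schwarz bound (the random matrices below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2
A2 = rng.standard_normal((n, n)); A2 = (A2 + A2.T) / 2

fro = lambda M: np.linalg.norm(M, "fro")

# ||A1 + A2||^2 = ||A1||^2 + ||A2||^2 + 2 tr(A1 A2) for symmetric matrices,
# and |tr(A1 A2)| <= ||A1|| ||A2||, the bound used to get r_n = o(1).
cross = np.trace(A1 @ A2)
lhs = fro(A1 + A2) ** 2
rhs = fro(A1) ** 2 + fro(A2) ** 2 + 2 * cross
assert abs(lhs - rhs) < 1e-8
assert abs(cross) <= fro(A1) * fro(A2) + 1e-12
```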
Corollary 1.2 indicates that we need the additional condition (1.16) in order to obtain the asymptotic normality of \(Q_n\). It does not imply that in this case the components \(Q_n^{(1)}\) and \(Q_n^{(2)}\) are asymptotically uncorrelated and hence asymptotically independent. We conjecture that \(Q_n^{(1)}\) and \(Q_n^{(2)}\) will be asymptotically independent in the case when \(\eta _t\) is an i.i.d. noise.
2 Proof of Theorem 1.1
In the proof of Theorem 1.1 we shall use the following result.
Lemma 2.1
(Dalla et al. (2014), Lemma 10).
(i)

Let
$$\begin{aligned} T_n=\sum _{j\in Z}c_{nj}V_j, \end{aligned}$$
where \(\{V_j\},\ j\in Z=\{\ldots ,-1,0,1,\ldots \} \) is a stationary ergodic sequence, \(E|V_1|<\infty \), and \(c_{nj}\) are real numbers such that for some \(0<\alpha _n<\infty \), \(n\ge 1\),
$$\begin{aligned} \sum \nolimits _{j\in Z}|c_{nj}|=O(\alpha _n),\quad \sum \nolimits _{j\in Z}|c_{nj}-c_{n,j-1}|=o(\alpha _n). \end{aligned}$$
(2.1)
Then
$$\begin{aligned} E|T_n-ET_n|=o(\alpha _n). \end{aligned}$$
In particular, if \(\alpha _n=1\), then
$$\begin{aligned} T_n=ET_n+o_p(1). \end{aligned}$$

(ii)

If the m.d. sequence \(\eta _t\) satisfies \(\max _tE|\eta _t|^p<\infty \) for some \(p\ge 2\), then
$$\begin{aligned} E\Big |\sum \nolimits _{j\in Z}d_j\eta _j\Big |^p\le C\Big (\sum \nolimits _{j\in Z}d_j^2\Big )^{p/2}, \end{aligned}$$
(2.2)
for any \(d_j\)’s such that \( \sum _{j\in Z}d_j^2<\infty \), where \(C<\infty \) does not depend on the \(d_j\)’s.
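Inequality (2.2) is a Rosenthal/Burkholder-type moment bound. A quick Monte Carlo check in the simplest setting (i.i.d. standard normal noise, which is an m.d. sequence; \(p=4\); the weights \(d_j\) below are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1.0 / (1.0 + np.arange(30.0))       # square-summable illustrative weights
s2 = np.sum(d ** 2)

# i.i.d. N(0,1) noise is a m.d. sequence with E|eta|^4 < infinity.
eta = rng.standard_normal((50000, d.size))
S = eta @ d                              # sum_j d_j eta_j, many replications

ratio = np.mean(S ** 4) / s2 ** 2        # E|sum_j d_j eta_j|^4 / (sum_j d_j^2)^2
assert 2.0 < ratio < 5.0                 # near 3 for Gaussian noise; (2.2) with C = 5
```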
For the convenience of the reader we provide the proof of the following lemma.
Lemma 2.2
One has
Proof
We drop the index n and let \(A=(a_{ts})\). The second inequality \(|a_{ts}|\le ||A||_{sp}\) follows from the first since
Set \(y=(0,\ldots ,0,1,0,\ldots ,0)'\) where 1 is at the \(t_0\) position. Note that \(||y||=1\). Then
since A is symmetric. Hence
\(\square \)
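The display of (2.3) is not reproduced above; judging from the proof, it bounds \(\max _t\sum _s a_{ts}^2\) by \(||A||_{sp}^2\) and \(\max _{t,s}|a_{ts}|\) by \(||A||_{sp}\). Both readings can be verified numerically on a random symmetric matrix (an illustrative sketch, not the paper's statement):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8))
A = (A + A.T) / 2                         # the lemma concerns symmetric matrices

spec = np.linalg.norm(A, 2)               # ||A||_sp
row_max = np.max(np.sum(A ** 2, axis=1))  # max_t sum_s a_{ts}^2 = max_t ||A e_t||^2
max_entry = np.max(np.abs(A))

assert row_max <= spec ** 2 + 1e-12       # since ||A e_t|| <= ||A||_sp ||e_t||
assert max_entry ** 2 <= row_max + 1e-12  # a_{ts}^2 <= sum_s a_{ts}^2
```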
Proof of Theorem 1.1
Using (1.6), the second claim of (2.3) implies
Relation (2.4) implies that no single \(a_{n;ku}\) dominates.
\(\bullet \) Proof of (1.7) Below we write \(a_{ts}\) instead of \(a_{n;ts}\). Let
Then
Observe that \(E\eta _t\eta _s=0\) for \(t>s\) and hence \(ES_n=0\) since \(\eta _s\) is a m.d. sequence. In addition,
Using \(E\eta _t^4\le C\) and (1.4),
Now we show that
The lower bound follows by using (1.3) and (1.5) and the fact that \(c>0\):
for large n.
To prove the upper bound, notice that
by (2.2) and assumption \(E\eta _t^4=E\eta _1^4<\infty \). To obtain (1.7), note that
by (2.8) and (2.10). In addition, (2.6)–(2.10) imply
Indeed, by (2.6),
so that \(\mathrm{Var}(Q_n)=\mathrm{Var}(S_n)+o(||A||^2)\) and by (2.9) we have \(ES_n^2\ge ||A||^2\), which leads to (2.11).
\(\bullet \) Proof of (1.10) We now prove the asymptotic normality of \(Q_n\). Let \(B^2_n=\mathrm{Var}(Q_n)\), \(X_{nt}=B_n^{-1}z_{nt}\) and \( X^\prime _{t}=B_n^{-1}z^\prime _{nt}\). Then, by (2.6)
Observe that by (1.7) and (2.8), \(E|\sum _{t=1}^{n}X^\prime _{t}|=B_n^{-1}E|\sum _{t=1}^n z'_{nt}|\le C ||A_n||^{-1}\sum _{t=1}^{n}|a_{tt}|=o(1) \). Therefore, to prove (1.10) it remains to show that
Since \(X_{nt}\) is a m.d. sequence, then by Theorem 3.2 of Hall and Heyde (1980), it suffices to show
\(\bullet \bullet \) To verify (a) and (b), it suffices to show that for any \(\varepsilon >0\),
which clearly implies (a), while (b) follows from (2.15) noting that
To prove (2.15), let \(K>0\) be large. We consider two cases: \(\eta ^2_t\le K\) and \(\eta ^2_t> K\). Then,
by (2.2) and (2.3). Recall that by (1.7), \(B_n^{-2}\le C||A||^{-2}\). Thus, for any \(\varepsilon >0\) and \(K>0\),
by (1.6) as \(n \rightarrow \infty \) for any finite K.
We now turn to the case \(\eta ^2_t > K\). Since \(E\eta ^4_t <\infty \) and, by stationarity of \(\eta _t\), \(\delta _K:=E\eta _1^4I(\eta ^2_1> K)\rightarrow 0\) as \(K\rightarrow \infty \), this implies
by (2.2). Hence,
Since (2.16) holds for any fixed K as \(n\rightarrow \infty \), and since (2.17) holds as \(K\rightarrow \infty \) uniformly in n, we get (2.15).
\(\bullet \bullet \) The verification of (c) in (2.14) is particularly delicate. We want to show that \(s_n \rightarrow _p1\). Recall that \(X_{nt}=B_n^{-1}z_{nt}\) where \(z_{nt}\) is defined in (2.5). We shall decompose \(s_n=\sum _{s=1}^n X_{ns}^2\) into two parts involving \(L>1\). Write
where
Then,
We show that as \(n \rightarrow \infty \),
which proves (2.14)(c) since \(E|s_{n}|\rightarrow 0\) implies \(s_{n}\rightarrow _P 0\) as \(n\rightarrow \infty \) and \(L\rightarrow \infty \).
\(\bullet \bullet \bullet \) The claim (2.19)(i) follows from (2.11),
noting that \(B_n^{-2}ES_n^2=Es_n\), which holds by definition of \(s_n\) and (2.7).
\(\bullet \bullet \bullet \) To show (2.19)(ii), open up the squares, set \(s=t-k\) and \(s'=t-u\), to get
It suffices to verify that for any fixed k and u, \(g_{n,ku}=o_p(1)\). Setting
and
write
Since the noise \(\{\eta _t\}\) is stationary ergodic and such that \(E\eta _1^4<\infty \), by Theorem 3.5.8 in Stout (1974), the process \(\{V_j\}\) is stationary and ergodic, and \(E|V_1|<\infty \). Because of the centering, \(Eg_{n,ku}=0\). Thus, by Lemma 2.1(i), to prove \(g_{n,ku}=o_p(1)\), it remains to show that \(c_{nt}\)’s satisfy (2.1) with \(\alpha _n=1\). Observe that
by (1.7). On the other hand,
by (1.9), (2.3) and (1.7). Hence (2.1) holds. By Lemma 2.1(i) we conclude that \(g_{n,ku}=o_p(1)\) and, thus, \(s_{n,1}-Es_{n,1}=o_p(1)\). Hence (2.19)(ii) holds.
\(\bullet \bullet \bullet \) To verify \(E|s_{n,2}|\rightarrow 0\) in (2.19)(iii), write
We use the identity \(a^2-b^2=(a-b)^2+ 2(a-b)b\) to obtain
where
Hence, \(E|s_{n,2}|\le 4Eq_{n,1}+4(Eq_{n,1}E{s_{n,1}})^{1/2}.\) To bound \(E q_{n,1}\), we argue partly as in (2.10):
by (1.8). We also have
Hence \(E|s_{n,2}|\rightarrow 0\) as \(n\rightarrow \infty \) and \(L\rightarrow \infty \). This completes the proof of (2.19)(iii) and the theorem. \(\square \)
References
Bhansali R, Giraitis L, Kokoszka P (2007a) Convergence of quadratic forms with nonvanishing diagonal. Statistics & Probability Letters 77:726–734
Bhansali R, Giraitis L, Kokoszka P (2007b) Approximations and limit theory for quadratic forms of linear processes. Stochastic Processes and their Applications 117:71–95
Dalla V, Giraitis L, Koul HL (2014) Studentizing weighted sums of linear processes. Journal of Time Series Analysis 35:151–172
De Jong P (1987) A central limit theorem for generalized quadratic forms. Probability Theory and Related Fields 75:261–277
Giraitis L, Koul HL, Surgailis D (2012) Large Sample Inference for Long Memory Processes. Imperial College Press, London
Guttorp P, Lockhart RA (1988) On the asymptotic distribution of quadratic forms in uniform order statistics. Annals of Statistics 16:433–449
Hall P, Heyde CC (1980) Martingale Limit Theory and Applications. Academic Press, New York
Mikosch T (1991) Functional limit theorems for random quadratic forms. Stochastic Processes and their Applications 37:81–98
Rotar VI (1973) Certain limit theorems for polynomials of degree two. Teoria Verojatnosti i Primenenia 18:527–534 (in Russian)
Stout W (1974) Almost Sure Convergence. Academic Press, New York
Acknowledgments
Liudas Giraitis and Murad S. Taqqu would like to thank Masanobu Taniguchi for his hospitality in Japan and support by the JSPS grant 15H02061. Murad S. Taqqu was partially supported by the NSF grant DMS-1309009 at Boston University.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Giraitis, L., Taniguchi, M. & Taqqu, M.S. Asymptotic normality of quadratic forms of martingale differences. Stat Inference Stoch Process 20, 315–327 (2017). https://doi.org/10.1007/s11203-016-9143-3