Statistical Inference for Stochastic Processes

Volume 20, Issue 3, pp 315–327

# Asymptotic normality of quadratic forms of martingale differences

• Liudas Giraitis
• Masanobu Taniguchi

## Abstract

We establish the asymptotic normality of a quadratic form $$Q_n$$ in martingale difference random variables $$\eta _t$$ when the weight matrix A of the quadratic form has an asymptotically vanishing diagonal. Such a result has numerous potential applications in time series analysis. While for i.i.d. random variables $$\eta _t$$, asymptotic normality holds under condition $$||A||_{sp}=o(||A||)$$, where $$||A||_{sp}$$ and ||A|| are the spectral and Euclidean norms of the matrix A, respectively, finding corresponding sufficient conditions in the case of martingale differences $$\eta _t$$ has been an important open problem. We provide such sufficient conditions in this paper.

## Keywords

Asymptotic normality · Quadratic form · Martingale differences

Mathematics Subject Classification 62E20 · 60F05

## 1 Main results

We consider the quadratic form
\begin{aligned} Q_n=\sum _{t,k=1}^{n}a_{n;tk}\eta _{t}\eta _k \end{aligned}
(1.1)
where $$\{\eta _k\}$$ is a stationary ergodic martingale difference (m.d.) sequence with respect to some natural filtration $${{\mathcal {F}}}_t$$, with moments
\begin{aligned} E\eta _k=0,\quad E\eta _k^2=1\quad \text {and}\quad E\eta _k^4<\infty . \end{aligned}
The real-valued coefficients $$a_{n;tk}$$ in (1.1) are entries of a symmetric matrix $$A_n=(a_{n;tk})_{t,k=1,\ldots , n}$$. We denote by
\begin{aligned} ||A_n||=\left( \sum _{t,k=1}^n a^2_{n;tk}\right) ^{1/2} \end{aligned}
the Euclidean norm and by
\begin{aligned} ||A_n||_{sp}=\max _{||x||=1}||A_n x|| \end{aligned}
the spectral norm of the matrix $$A_n$$. For convenience, we set $$a_{n;tk}=0$$ for $$t \le 0, t > n$$ or $$k \le 0, k > n$$.
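For concreteness, the two norms just defined can be computed in a few lines. The sketch below is not part of the paper; it uses NumPy and a hypothetical symmetric kernel $$a_{n;tk}=1/(1+|t-k|)$$ with zero diagonal.

```python
import numpy as np

n = 200
idx = np.arange(n)

# Hypothetical symmetric weight matrix with zero diagonal
A = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]))
np.fill_diagonal(A, 0.0)

frobenius = np.linalg.norm(A, 'fro')   # ||A_n||    = (sum_{t,k} a_{n;tk}^2)^{1/2}
spectral = np.linalg.norm(A, 2)        # ||A_n||_sp = max_{||x||=1} ||A_n x||

# ||A_n||_sp <= ||A_n|| always; condition (1.2) asks that the ratio vanish as n grows
print(frobenius, spectral, spectral / frobenius)
```

For this kernel $$||A_n||_{sp}=O(\log n)$$ while $$||A_n||\asymp n^{1/2}$$, so the ratio tends to 0 and (1.2) holds.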
The asymptotic normality property of the quadratic form $$Q_n$$ has been well investigated when the random variables $$\eta _j$$ are i.i.d. If $$A_n$$ has a vanishing diagonal, i.e. $$a_{n;tt}=0$$ for all t, then asymptotic normality is implied by the condition
\begin{aligned} ||A_n||_{sp}=o(||A_n||), \end{aligned}
(1.2)
see Rotar (1973), De Jong (1987), Guttorp and Lockhart (1988), Mikosch (1991) and Bhansali et al. (2007a).
The aim of this paper is to extend these results to the m.d. noise $$\eta _j$$. We will need the following additional assumptions on the m.d. noise $$\eta _t$$:
\begin{aligned} E\left( \eta ^2_j|{{\mathcal {F}}}_{j-1}\right) \ge c>0, \qquad (\exists c>0). \end{aligned}
(1.3)
The assumption (1.3) bounds the conditional variance of $$\eta _j$$ away from zero. We also assume that $$A_n$$ has an asymptotically “vanishing” diagonal in the sense:
\begin{aligned} \sum _{t=1}^n |a_{n;tt}|=o(||A_n||), \quad n\rightarrow \infty . \end{aligned}
(1.4)
Relation (1.4) implies
\begin{aligned} \sum _{t=1}^n a_{n;tt}^2=o\left( ||A_n||^2\right) , \quad n\rightarrow \infty . \end{aligned}
(1.5)
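Indeed, since every entry of $$A_n$$ is bounded in absolute value by the Euclidean norm $$||A_n||$$, relation (1.4) gives (1.5) in one line:

```latex
\sum_{t=1}^n a_{n;tt}^2
  \le \Big(\max_{1\le t\le n}|a_{n;tt}|\Big)\sum_{t=1}^n |a_{n;tt}|
  \le \|A_n\| \cdot o(\|A_n\|)
  = o\big(\|A_n\|^2\big).
```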
The following theorem shows that in the case of m.d. noise $$\{\eta _k\}$$, the condition
\begin{aligned} ||A_n||_{sp}/||A_n||\rightarrow 0 \end{aligned}
above needs to be strengthened by including the assumptions (1.8) and (1.9) on the weights $$a_{n;ts}$$. Its proof is based on the martingale central limit theorem.

### Theorem 1.1

Let $$Q_n$$ be as in (1.1), where $$\{\eta _j\}$$ is a stationary ergodic m.d. noise such that $$E\eta _j^4<\infty$$ and (1.3) hold. Suppose that the $$a_{n;ts}$$’s are such that, as $$n \rightarrow \infty$$,
\begin{aligned}&||A_n||_{sp}/||A_n||\rightarrow 0. \end{aligned}
(1.6)
Then there exist $$c_1, c_2>0$$ such that
\begin{aligned} c_1||A_n||^2\le \mathrm{Var}(Q_{n})\le c_2||A_n||^2, \quad n\ge 1. \end{aligned}
(1.7)
If, in addition,
\begin{aligned} \sum \nolimits _{t,s=1: |t-s|\ge L}^n a^2_{n;ts}=o\left( ||A_n||^2\right) , \quad n\rightarrow \infty , \quad L\rightarrow \infty , \end{aligned}
(1.8)
and
\begin{aligned} \sum \nolimits _{t=k+2}^n(a_{n;t,t-k}-a_{n;t-1,t-1-k})^2= o\left( ||A_n||^2\right) , \qquad \forall k\ge 1 \end{aligned}
(1.9)
then the following normal convergence holds:
\begin{aligned} (\mathrm{Var}(Q_{n}))^{-1/2}(Q_{n}-EQ_{n})\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1). \end{aligned}
(1.10)

As usual, “$$\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1)$$” denotes convergence in distribution to a normal random variable with mean zero and variance one.

Theorem 1.1 plays an important instrumental role in establishing asymptotic properties of various estimation and testing procedures in parametric and non-parametric time series analysis where the object of interest can be written as a quadratic form
\begin{aligned} Q_{n,X}=\sum _{t,s=1}^n e_n(t-s)X_tX_s \end{aligned}
of a linear (moving-average) process
\begin{aligned} X_t=\sum _{j=0}^\infty a_j \eta _{t-j} \end{aligned}
of uncorrelated noise $$\eta _t$$ and the weights $$e_n(s)$$ may depend on n. In the case of i.i.d. noise $$\eta _t$$, the asymptotic normality for $$Q_{n,X}$$ is established by approximating it by a simpler quadratic form
\begin{aligned} Q_{n,\eta }=\sum _{t,s=1}^n b_n(t-s)\eta _t\eta _s \end{aligned}
with some different weights $$b_n(t)$$ and then deriving the asymptotic normality for $$Q_{n,\eta }$$, as in Bhansali et al. (2007b). For example, one sets
\begin{aligned} b_n(t)= \int _{-\pi }^\pi u_n(x)f(x) e^{itx} dx \end{aligned}
where f(x) is the spectral density of the sequence $$X_t$$, and where $$u_n(x)$$ is some convenient function related to $$e_n(t)$$, typically such that
\begin{aligned} e_n(t)= \int _{-\pi }^\pi u_n(x) e^{itx} dx. \end{aligned}
In general, obtaining simple asymptotic normality conditions for $$Q_{n,X}$$ is a hard theoretical problem of great practical importance, which for i.i.d. noise $$\eta _t$$ was solved in Bhansali et al. (2007b). In addition, Sect. 6.2 in Giraitis et al. (2012) considers discrete frequencies and shows that a sum
\begin{aligned} S_n=\sum _{j=1}^{n/2}b_{nj}I(u_j) \end{aligned}
of weighted periodograms
\begin{aligned} I(u_j)=(2\pi n)^{-1}\left| \sum _{k=1}^ne^{iku_j}X_k\right| ^2 \end{aligned}
of the sequence $$X_t$$ at Fourier frequencies $$u_j$$ can also be effectively approximated by a quadratic form $$Q_{n,\eta }$$. This allows, by a theorem like Theorem 1.1, establishing the asymptotic normality of such sums $$S_n$$. However, the assumption of i.i.d. noise is restrictive and may not be satisfied in practical applications or in some theoretical settings, e.g. ARCH models. In a subsequent paper we will derive corresponding normal approximation results for $$Q_{n,X}$$ and $$S_n$$ when $$\eta _t$$ is a martingale difference process.
The following Corollary 1.1 displays situations where the conditions of Theorem 1.1 are easily satisfied. For a Toeplitz matrix $$A_n$$, that is, one with entries
\begin{aligned} a_{n;ts}=b_n(t-s), \end{aligned}
the assumption (1.9) is clearly satisfied, since
\begin{aligned} a_{n;t,t-k}-a_{n;t-1,t-1-k}=b_n(k)-b_n(k)=0. \end{aligned}
The following lemma provides a useful bound that can be used to prove (1.6).

### Lemma 1.1

Suppose that
\begin{aligned} b_n(t)= \int _{-\pi }^\pi e^{itx}g_n(x)dx,\quad t=0,1,\ldots , \end{aligned}
where $$g_n(x)$$, $$|x|\le \pi$$ is an even real function. If there exists
\begin{aligned} 0\le \alpha <1/2 \end{aligned}
and a sequence of constants $$k_n>0$$ such that
\begin{aligned} |g_n(x)|\le k_n|x|^{-\alpha }, \quad |x|\le \pi , \end{aligned}
then
\begin{aligned} ||A_n||_{sp}\le Ck_n n^\alpha , \quad n\ge 1. \end{aligned}
(1.11)

For the proof see Theorem 2.2(i) in Bhansali et al. (2007a).

Suppose now, in addition, that $$g_n(x) \equiv g(x)$$, $$n \ge 1$$ and $$|g(x)|\le C|x|^{-\alpha }$$, $$|x|\le \pi$$. Then
\begin{aligned} \int _{-\pi }^\pi g^2(x)dx<\infty ,\quad b_n(t)=b(t) \quad \text {and} \quad \sum _{t=-\infty }^\infty b^2(t)<\infty \end{aligned}
and, in addition, $$k_n=1$$ in (1.11). Hence
\begin{aligned} ||A||^2=\sum \nolimits _{t,s=1}^nb^2(t-s)= \sum \nolimits _{k=-n}^{n}b^2(k)(n-|k|) \sim n \sum \nolimits _{t=-\infty }^\infty b^2(t) \quad \text {as}\quad n\rightarrow \infty \end{aligned}
and
\begin{aligned} ||A_n||_{sp}\le Cn^\alpha =o(n^{1/2})=o(||A||) \end{aligned}
which implies (1.6). Moreover,
\begin{aligned} \sum _{t,s=1: |t-s|\ge L}^n a^2_{n;ts}=\sum _{t,s=1:\, |t-s|\ge L}^n b^2(t-s)\le n \sum _{|k|\ge L}b^2(|k|). \end{aligned}
Since $$\sum _{|k|\ge L}b^2(|k|)\rightarrow 0$$ as $$L\rightarrow \infty$$, we obtain (1.8). This together with Theorem 1.1 implies the following corollary.
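These Toeplitz computations are easy to confirm numerically. The sketch below is not from the paper; it uses NumPy with the hypothetical summable choice $$b(k)=1/(1+|k|)^2$$, $$b(0)=0$$, and checks both the growth $$||A||^2\sim n\sum _t b^2(t)$$ and the off-diagonal tail bound (1.8).

```python
import numpy as np

n, L = 1600, 100
idx = np.arange(n)

# Toeplitz matrix a_{n;ts} = b(t - s) with b(k) = 1/(1+|k|)^2 and b(0) = 0
B = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :])) ** 2
np.fill_diagonal(B, 0.0)

frob2 = np.sum(B ** 2)                          # ||A||^2
ks = np.arange(1, 10**6)
sum_b2 = 2.0 * np.sum(1.0 / (1.0 + ks) ** 4)    # sum over t != 0 of b^2(t)

mask = np.abs(idx[:, None] - idx[None, :]) >= L
tail = np.sum(B[mask] ** 2)                     # left-hand side of (1.8)

print(frob2 / n, sum_b2, tail / frob2)
```

Already at this n, $$||A||^2/n$$ is close to $$\sum _t b^2(t)$$ and the tail sum beyond lag $$L=100$$ is negligible relative to $$||A||^2$$.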

### Corollary 1.1

Let
\begin{aligned} Q_n=\sum _{t,k=1}^{n}b(t-k)\eta _{t}\eta _k, \end{aligned}
where $$b(t)=b(-t)$$, $$b(0)=0$$ are real weights and $$\{\eta _j\}$$ is a stationary ergodic m.d. noise such that $$E\eta _j^4<\infty$$ and (1.3) hold.
1. (i)
If $$\sum _{t=0}^\infty |b(t)|<\infty$$, then
\begin{aligned}&\exists c_1, c_2>0:\ c_1n\le \mathrm{Var}(Q_{n})\le c_2n, \quad n\ge 1, \qquad \end{aligned}
(1.12)
\begin{aligned}&(\mathrm{Var}(Q_{n}))^{-1/2}(Q_{n}-EQ_{n})\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1). \end{aligned}
(1.13)

2. (ii)
If $$b(t)= \int _{-\pi }^\pi e^{itx}g(x)dx$$, $$t=0,1,\ldots ,$$ where g(x), $$|x|\le \pi$$ is an even real function such that for some $$0\le \alpha <1/2$$ and $$C>0$$,
\begin{aligned} |g(x)|\le C|x|^{-\alpha }, \qquad |x|\le \pi \end{aligned}
(1.14)
then (1.12) and (1.13) hold.
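Case (i) lends itself to a quick Monte Carlo check. The sketch below is not from the paper; it uses NumPy, the hypothetical summable weights $$b(t)=1/(1+|t|)^2$$ with $$b(0)=0$$, and ARCH(1) noise as one example of a stationary ergodic m.d. sequence. With $$\omega =0.7$$, $$\alpha =0.3$$ the conditional variance is bounded below by $$\omega >0$$ as in (1.3), the stationary variance is $$\omega /(1-\alpha )=1$$, and $$3\alpha ^2<1$$ so $$E\eta _1^4<\infty$$ for Gaussian innovations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 300, 2000
omega, alpha = 0.7, 0.3                      # ARCH(1): E eta_t^2 = omega/(1-alpha) = 1

# Toeplitz weights b(t-k) = 1/(1+|t-k|)^2 with b(0) = 0, so sum_t |b(t)| < infinity
idx = np.arange(n)
A = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :])) ** 2
np.fill_diagonal(A, 0.0)

def arch_noise(n, rng):
    """eta_t = sigma_t eps_t with sigma_t^2 = omega + alpha eta_{t-1}^2 >= omega > 0, cf. (1.3)."""
    eps = rng.standard_normal(n)
    eta = np.empty(n)
    prev_sq = 1.0                            # start eta_0^2 at its stationary mean
    for t in range(n):
        eta[t] = np.sqrt(omega + alpha * prev_sq) * eps[t]
        prev_sq = eta[t] ** 2
    return eta

Q = np.empty(reps)
for r in range(reps):
    e = arch_noise(n, rng)
    Q[r] = e @ A @ e                         # Q_n = sum_{t,k} b(t-k) eta_t eta_k

Z = (Q - Q.mean()) / Q.std()                 # studentized version of (1.13)
frac_95 = np.mean(np.abs(Z) < 1.96)          # close to 0.95 if Z is approximately N(0, 1)
print(frac_95)
```

The fraction of studentized replicates inside $$\pm 1.96$$ should be close to 0.95, consistent with the normal convergence (1.13).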

Next we consider two quadratic forms
\begin{aligned} Q_n^{(1)}= & {} \sum _{t,s=1}^n a^{(1)}_{n;ts} \eta _t \eta _s, \quad \mathrm{and}\nonumber \\ Q_n^{(2)}= & {} \sum _{t,s=1}^n a^{(2)}_{n;ts} \eta _t \eta _s, \end{aligned}
(1.15)
with corresponding matrices $$A_n^{(1)}$$, $$A_n^{(2)}$$ and a m.d. sequence $$\eta _t$$ which satisfy the assumptions of Theorem 1.1, so that
\begin{aligned} \left( \mathrm{Var}(Q_{n}^{(i)})\right) ^{-1/2} \left( Q^{(i)}_{n}-EQ^{(i)}_{n}\right) \mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1), \quad i=1,2. \end{aligned}
The next corollary provides an additional sufficient condition that implies asymptotic normality of their sum.

### Corollary 1.2

Suppose that the quadratic forms $$Q_n^{(1)}$$, $$Q_n^{(2)}$$ in (1.15) satisfy the assumptions of Theorem 1.1. Set
\begin{aligned} A_n=A_n^{(1)} +A_n^{(2)}. \end{aligned}
If
\begin{aligned} \lim _{n\rightarrow \infty } \left| \left| A_n^{(1)} \right| \right| ^{-1} \left| \left| A_n^{(2)} \right| \right| ^{-1} \mathrm{tr} \left( A_n^{(1)} A_n^{(2)}\right) = 0 \end{aligned}
(1.16)
then the quadratic form $$Q_n:=Q_n^{(1)}+Q_n^{(2)}$$ satisfies
\begin{aligned} \exists c_1, c_2>0:\quad c_1\left( \left| \left| A_n^{(1)}\right| \right| ^2+\left| \left| A_n^{(2)}\right| \right| ^2\right) \le \mathrm{Var}(Q_{n})\le c_2\left( \left| \left| A_n^{(1)} \right| \right| ^2+ \left| \left| A_n^{(2)}\right| \right| ^2\right) , \quad n\ge 1, \end{aligned}
and
\begin{aligned} (\mathrm{Var}(Q_{n}))^{-1/2}(Q_{n}-EQ_{n})\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1). \end{aligned}

### Proof

We have $$Q_n = \sum _{t,s=1}^n a_{n;ts} \eta _t \eta _s$$ where $$a_{n;ts}=a^{(1)}_{n;ts}+a^{(2)}_{n;ts}$$. Thus, to prove the corollary, it suffices to show that $$A_n$$ satisfies assumptions of Theorem 1.1. This easily follows from the fact that both $$A_n^{(1)}$$ and $$A_n^{(2)}$$ satisfy assumptions of Theorem 1.1, and the property
\begin{aligned} ||A_n||^2=\left( \left| \left| A_n^{(1)}\right| \right| ^2+ \left| \left| A_n^{(2)}\right| \right| ^2\right) (1+o(1)). \end{aligned}
The latter follows from
\begin{aligned} ||A_n||^2=||A^{(1)}_n||^2+||A^{(2)}_n||^2+ 2\mathrm{tr} \left( A_n^{(1)}A_n^{(2)}\right) \end{aligned}
because the matrices $$A^{(1)}_n$$ and $$A^{(2)}_n$$ are symmetric so the cross term
\begin{aligned} 2\sum _{t,s}a^{(1)}_{n;ts}a^{(2)}_{n;ts}=2\sum _{t,s}a^{(1)}_{n;ts} a^{(2)}_{n;st}=2\mathrm{tr} \left( A_n^{(1)}A_n^{(2)}\right) . \end{aligned}
Hence
\begin{aligned} ||A_n||^2=\left( \left| \left| A^{(1)}_n\right| \right| ^2 +\left| \left| A^{(2)}_n\right| \right| ^2\right) (1+r_n) \end{aligned}
where
\begin{aligned} r_n=2\mathrm{tr} \left( A_n^{(1)}A_n^{(2)}\right) \Big / \left( \left| \left| A^{(1)}_n\right| \right| ^2 +\left| \left| A^{(2)}_n\right| \right| ^2\right) . \end{aligned}
Since $$||A^{(1)}_n||^2+ ||A^{(2)}_n||^2\ge 2 ||A^{(1)}_n||\, ||A^{(2)}_n||$$ we get $$r_n=o(1)$$ by (1.16). $$\square$$
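The Frobenius-norm expansion used in this proof can be spot-checked numerically. The following sketch (not from the paper; NumPy, with random symmetric matrices standing in for $$A_n^{(1)}$$ and $$A_n^{(2)}$$) verifies $$||A_1+A_2||^2=||A_1||^2+||A_2||^2+2\,\mathrm{tr}(A_1A_2)$$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Two random symmetric matrices standing in for A_n^{(1)} and A_n^{(2)}
M1, M2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A1, A2 = (M1 + M1.T) / 2.0, (M2 + M2.T) / 2.0

lhs = np.linalg.norm(A1 + A2, 'fro') ** 2
rhs = (np.linalg.norm(A1, 'fro') ** 2 + np.linalg.norm(A2, 'fro') ** 2
       + 2.0 * np.trace(A1 @ A2))            # cross term 2 tr(A1 A2), as in the proof
print(lhs, rhs)                              # equal up to floating-point error
```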

Corollary 1.2 indicates that we need the additional condition (1.16) in order to obtain the asymptotic normality of $$Q_n$$. It does not imply that in this case the components $$Q_n^{(1)}$$ and $$Q_n^{(2)}$$ are asymptotically uncorrelated and hence asymptotically independent. We conjecture that $$Q_n^{(1)}$$ and $$Q_n^{(2)}$$ will be asymptotically independent in the case when $$\eta _t$$ is an i.i.d. noise.

## 2 Proof of Theorem 1.1

In the proof of Theorem 1.1 we shall use the following result.

### Lemma 2.1

(Dalla et al. (2014), Lemma 10).
1. (i)
Let
\begin{aligned} T_n=\sum _{j\in Z}c_{nj}V_j, \end{aligned}
where $$\{V_j\},\ j\in Z=\{\cdots ,-1,0,1,\cdots \}$$ is a stationary ergodic sequence, $$E|V_1|<\infty$$, and $$c_{nj}$$ are real numbers such that for some $$0<\alpha _n<\infty$$, $$n\ge 1$$,
\begin{aligned} \sum \nolimits _{j\in Z}|c_{nj}|=O(\alpha _n),\quad \sum \nolimits _{j\in Z}|c_{nj}-c_{n,j-1}|=o(\alpha _n). \end{aligned}
(2.1)
Then
\begin{aligned} E|T_n-ET_n|=o(\alpha _n). \end{aligned}
In particular, if $$\alpha _n=1$$, then
\begin{aligned} T_n=ET_n+o_p(1). \end{aligned}

2. (ii)
If the m.d. sequence $$\eta _t$$ satisfies $$\max _tE|\eta _t|^p<\infty$$, for some $$p\ge 2$$, then
\begin{aligned} E\Big |\sum \nolimits _{j\in Z}d_j\eta _j\Big |^p\le C\Big (\sum \nolimits _{j\in Z}d_j^2\Big )^{p/2}, \end{aligned}
(2.2)
for any $$d_j$$’s such that $$\sum _{j\in Z}d_j^2<\infty$$, where $$C<\infty$$ does not depend on $$d_j$$’s.

For the convenience of the reader we provide the proof of the following lemma.

### Lemma 2.2

One has
\begin{aligned} \max _{t=1, \ldots , n} \sum _{s=1}^n a_{n;ts}^2\le \left| \left| A_n\right| \right| _{sp}^2,\qquad \max _{t,s=1,\ldots , n}|a_{n;ts}|\le ||A_n||_{sp}. \end{aligned}
(2.3)

### Proof

We drop the index n and let $$A=(a_{ts})$$. The second inequality $$|a_{ts}|\le ||A_n||_{sp}$$ follows from the first since
\begin{aligned} \max _{t,s}a_{ts}^2\le \max _{t}\sum _{s=1}^na_{ts}^2\le \left| \left| A_n\right| \right| _{sp}^2. \end{aligned}
Turning to the first inequality, we have $$\left| \left| A_n\right| \right| _{sp}^2= \sup _{\left| \left| x\right| \right| =1}\left| \left| Ax\right| \right| ^2$$ where $$x=(x_1,\ldots , x_n)'$$ and
\begin{aligned} ||Ax||^2= \left| \left| \sum _{s=1}^na_{1s}x_s, \ldots , \sum _{s=1}^na_{ns}x_s \right| \right| ^2=\left( \sum _{s=1}^na_{1s}x_s\right) ^2+ \cdots +\left( \sum _{s=1}^na_{ns}x_s\right) ^2. \end{aligned}
Set $$y=(0,\ldots ,0,1,0,\ldots ,0)'$$ where 1 is at the $$t_0$$ position. Note that $$||y||=1$$. Then
\begin{aligned} \left| \left| A_n\right| \right| _{sp}^2\ge ||Ay||^2=a_{1t_0}^2+\cdots +a_{nt_0}^2= \sum _{s=1}^na_{st_0}^2=\sum _{s=1}^na_{t_0s}^2 \end{aligned}
since A is symmetric. Hence
\begin{aligned} \left| \left| A_n\right| \right| _{sp}^2\ge \max _{t_0=1,\ldots , n}\sum _{s=1}^na_{t_0s}^2. \end{aligned}
$$\square$$
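Both inequalities of (2.3) are easy to confirm numerically. The sketch below is not from the paper; it uses NumPy and a random symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
M = rng.standard_normal((n, n))
A = (M + M.T) / 2.0                          # symmetric, as in Lemma 2.2

row_bound = np.max(np.sum(A ** 2, axis=1))   # max_t sum_s a_{ts}^2
entry_bound = np.max(np.abs(A)) ** 2         # max_{t,s} a_{ts}^2
sp2 = np.linalg.norm(A, 2) ** 2              # ||A||_sp^2

# entry_bound <= row_bound <= sp2, the two inequalities of (2.3)
print(entry_bound, row_bound, sp2)
```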

### Proof of Theorem 1.1

Using (1.6), the second claim of (2.3) implies
\begin{aligned} \max _{1\le k,u\le L}|a_{n;ku}|=o(||A||), \qquad \forall L\ge 1 \hbox { fixed}. \end{aligned}
(2.4)
Relation (2.4) implies that no single $$a_{n;ku}$$ dominates.
$$\bullet$$ Proof of (1.7) Below we write $$a_{ts}$$ instead of $$a_{n;ts}$$. Let
\begin{aligned} z_{nt}=2\eta _{t}\sum _{s=1}^{t-1}a_{ts}\eta _{s} \quad \hbox {and} \quad z^\prime _{nt}=a_{tt}\left( \eta _t^2-E\eta _t^2\right) . \end{aligned}
(2.5)
Then
\begin{aligned} Q_{n}-EQ_n =\sum _{t=2}^{n}z_{nt}+\sum _{t=1}^{n}z^\prime _{nt}=S_n+S_n'. \end{aligned}
(2.6)
Observe that $$E\eta _t\eta _s=0$$ for $$t>s$$ and hence $$ES_n=0$$ since $$\eta _s$$ is a m.d. sequence. In addition,
\begin{aligned} ES^2_{n}=4\sum _{t=2}^nE\Bigg [\eta _t^2 \Big (\sum _{s=1}^{t-1}a_{ts}\eta _{s}\Big )^2\Bigg ]. \end{aligned}
(2.7)
Using $$E\eta _t^4\le C$$ and (1.4),
\begin{aligned} E |S_n'|\le C\sum _{t=1}^{n}|a_{tt}|=o(||A_n||), \quad E S_n'^2\le C\left( \sum _{t=1}^{n}|a_{tt}|\right) ^2=o\left( ||A_n||^2\right) . \end{aligned}
(2.8)
Now we show that
\begin{aligned} c_1||A_n||^2\le ES_{n}^2\le c_2||A_n||^2. \end{aligned}
The lower bound follows from (1.3) and (1.5), using that $$c>0$$:
\begin{aligned} ES^2_{n}= & {} 4\sum _{t=2}^{n}E\Big [\eta _{t}^2 \Big (\sum _{s=1}^{t-1}a_{ts}\eta _{s}\Big )^2\Big ] =4\sum _{t=2}^{n}E\Big [ E[\eta _{t}^2|{{\mathcal {F}}}_{t-1}] \Big (\sum _{s=1}^{t-1}a_{ts}\eta _{s}\Big )^2\Big ]\nonumber \\\ge & {} 4c \sum _{t=2}^{n}E \left( \sum _{s=1}^{t-1}a_{ts}\eta _{s}\right) ^2=4c \sum _{t=2}^{n}\sum _{s=1}^{t-1}a^2_{ts}\nonumber \\= & {} 2c\sum _{t,s=1}^{n}a^2_{ts} -2c\sum _{t=1}^{n}a^2_{tt}= 2||A||^2-o\left( ||A||^2\right) \ge ||A||^2, \end{aligned}
(2.9)
for large n.
To prove the upper bound, notice that
\begin{aligned} ES^2_{n}= & {} 4\sum _{t=2}^nE\Big [\eta _t^2\Big (\sum _{s=1}^{t-1}a_{ts} \eta _{s}\Big )^2\Big ]\nonumber \\\le & {} 4\sum _{t=2}^n\left( E\eta _t^4\right) ^{1/2} \Big (E\Big (\sum _{s=1}^{t-1}a_{ts} \eta _{s}\Big )^4\Big )^{1/2}\le C\sum _{t,s=1}^na^2_{ts}=C||A||^2 \end{aligned}
(2.10)
by (2.2) and assumption $$E\eta _t^4=E\eta _1^4<\infty$$. To obtain (1.7), note that
\begin{aligned} \mathrm{Var}(Q_n)\le 2 ES_n^2+2{ES_n'}^2\le C||A||^2+o(||A||^2)\le 2C||A||^2 \end{aligned}
by (2.8) and (2.10). In addition, (2.6)–(2.10) imply
\begin{aligned} \mathrm{Var}(Q_n)= \left( ES_n^2\right) (1+o(1)), \quad n\rightarrow \infty . \end{aligned}
(2.11)
Indeed, by (2.6),
\begin{aligned} |\mathrm{Var}(Q_n)-\mathrm{Var}(S_n)|= & {} |\mathrm{Var}(S_n')+2\mathrm{Cov}(S_n,S_n')|\le \mathrm{Var}(S_n')+2\left( \mathrm{Var}(S_n)\mathrm{Var}(S_n')\right) ^{1/2}\\= & {} o(||A||^2)+\left( O(||A||^2)o(||A||^2)\right) ^{1/2}=o(||A||^2) \end{aligned}
so that $$\mathrm{Var}(Q_n)=\mathrm{Var}(S_n)+o(||A||^2)$$ and by (2.9) we have $$ES_n^2\ge ||A||^2$$, which leads to (2.11).
$$\bullet$$ Proof of (1.10) We now prove the asymptotic normality of $$Q_n$$. Let $$B^2_n=\mathrm{Var}(Q_n)$$, $$X_{nt}=B_n^{-1}z_{nt}$$ and $$X^\prime _{nt}=B_n^{-1}z^\prime _{nt}$$. Then, by (2.6),
\begin{aligned} B_n^{-1}(Q_{n}-EQ_n) =\sum _{t=2}^{n}X_{nt}+\sum _{t=1}^{n}X^\prime _{nt}. \end{aligned}
(2.12)
Observe that by (1.7) and (2.8), $$E|\sum _{t=1}^{n}X^\prime _{nt}|=B_n^{-1}E|\sum _{t=1}^n z'_{nt}|\le C ||A_n||^{-1}\sum _{t=1}^{n}|a_{tt}|=o(1)$$. Therefore, to prove (1.10) it remains to show that
\begin{aligned} \sum _{t=2}^{n}X_{nt}\mathop {\rightarrow }\limits ^{\scriptstyle d}N(0, 1). \end{aligned}
(2.13)
Since $$X_{nt}$$ is a m.d. sequence, then by Theorem 3.2 of Hall and Heyde (1980), it suffices to show
\begin{aligned}&\mathrm{(a)} E\max _{1\le j\le n}X^2_{nj}\rightarrow 0, \quad \mathrm{(b)} \max _{1\le j\le n}|X_{nj}|\rightarrow _p0, \quad \mathrm{(c)} \sum \nolimits _{j=1}^n X^2_{nj}\rightarrow _p1. \end{aligned}
(2.14)
$$\bullet \bullet$$ To verify (a) and (b), it suffices to show that for any $$\varepsilon >0$$,
\begin{aligned} \sum _{j=1}^n EX^2_{nj}I(|X_{nj}|\ge \varepsilon ) \rightarrow 0, \end{aligned}
(2.15)
which clearly implies (a), while (b) follows from (2.15) noting that
\begin{aligned} P\Big (\max _{1\le j\le n}|X_{nj}|\ge \varepsilon \Big ) \le \varepsilon ^{-2} \sum _{j=1}^n EX^2_{nj}I(|X_{nj}|\ge \varepsilon ) \rightarrow 0. \end{aligned}
To prove (2.15), let $$K>0$$ be large. We consider two cases: $$\eta ^2_t\le K$$ and $$\eta ^2_t> K$$. Then,
\begin{aligned} EX_{nt}^2I(X^2_{nt}\ge \varepsilon )I\left( \eta ^2_t\le K\right)\le & {} \varepsilon ^{-1}EX_{nt}^4I\left( \eta ^2_t\le K\right) \le \varepsilon ^{-1}2^4K^2B_n^{-4}E\Big (\sum _{s=1}^{t-1}a_{ts}\eta _{s}\Big )^4\\\le & {} C\varepsilon ^{-1}K^2B_n^{-4} \left( \sum _{s=1}^{t-1}a^2_{ts}\right) ^2\le C\varepsilon ^{-1}K^2B_n^{-4}||A||_{sp}^2\sum _{s=1}^{t-1}a^2_{ts} \end{aligned}
by (2.2) and (2.3). Recall that by (1.7), $$B_n^{-2}\le C||A||^{-2}$$. Thus, for any $$\varepsilon >0$$ and $$K>0$$,
\begin{aligned} \sum _{t=2}^nEX_{nt}^2I(X^2_{nt}\ge \varepsilon )I\left( \eta ^2_t\le K\right)\le & {} C\varepsilon ^{-1}K^2B_n^{-4}||A||_{sp}^2 \sum _{t=2}^n\sum _{s=1}^{t-1}a^2_{ts}\nonumber \\\le & {} C\varepsilon ^{-1}K^2\left( ||A||_{sp}/||A||\right) ^2 \rightarrow 0 \end{aligned}
(2.16)
by (1.6) as $$n \rightarrow \infty$$ for any finite K.
We now focus on the case $$\eta ^2_t> K$$. Since $$E\eta ^4_t <\infty$$ and, by stationarity of $$\eta _t$$, $$\delta _K:=E\eta _1^4I(\eta ^2_1> K)\rightarrow 0$$ as $$K\rightarrow \infty$$, this implies
\begin{aligned} EX_{nt}^2I\left( X^2_{nt}\ge \varepsilon \right) I\left( \eta ^2_t> K\right)\le & {} EX_{nt}^2I\left( \eta ^2_t> K\right) \le B_n^{-2}2^2E\left[ \eta _t^2I\left( \eta ^2_t> K\right) \left( \sum _{s=1}^{t-1}a_{ts}\eta _{s}\right) ^2\right] \\\le & {} C||A||^{-2} \delta _K^{1/2}\left( E\left( \sum _{s=1}^{t-1}a_{ts}\eta _{s}\right) ^4\right) ^{1/2} \le C||A||^{-2}\delta _K^{1/2}\sum _{s=1}^{t-1}a^2_{ts} \end{aligned}
by (2.2). Hence,
\begin{aligned}&\sum _{t=2}^nEX_{nt}^2I\left( X^2_{nt}\ge \varepsilon \right) I\left( \eta ^2_t> K\right) \le C\delta _K^{1/2} ||A||^{-2}\sum _{t=2}^n\sum _{s=1}^{t-1}a^2_{ts}\nonumber \\&\le C\delta _K^{1/2} \rightarrow 0, \qquad K\rightarrow \infty . \end{aligned}
(2.17)
Since (2.16) holds for any fixed K as $$n\rightarrow \infty$$, and since (2.17) holds as $$K\rightarrow \infty$$ uniformly in n, we get (2.15).
$$\bullet \bullet$$ The verification of (c) in (2.14) is particularly delicate. We want to show that $$s_n:=\sum _{t=1}^n X_{nt}^2\rightarrow _p1$$. Recall that $$X_{nt}=B_n^{-1}z_{nt}$$ where $$z_{nt}$$ is defined in (2.5). We shall decompose $$s_n$$ into two parts involving $$L>1$$. Write
\begin{aligned} s_n =4B_{n}^{-2}\sum _{t=1}^n\eta _t^2\left( \sum _{s=1}^{t-1} a_{ts}\eta _{s}\right) ^2=s_{n,1}+s_{n,2}, \end{aligned}
(2.18)
where
\begin{aligned} s_{n,1}:=4B_{n}^{-2} \sum _{t=1}^n\eta _t^2\left( \sum _{s=t-L}^{t-1}a_{ts}\eta _{s}\right) ^2,\quad s_{n,2}:=s_n-s_{n,1}. \end{aligned}
Then,
\begin{aligned} s_n=Es_n+(s_{n,1}-Es_{n,1})+(s_{n,2}-Es_{n,2}). \end{aligned}
We show that as $$n \rightarrow \infty$$,
\begin{aligned}&(i)\quad Es_{n}\rightarrow 1;\qquad (ii)\quad s_{n,1}-Es_{n,1}\rightarrow _p 0, \quad \forall L\ge 1;\nonumber \\&(iii) \quad E|s_{n,2}|\rightarrow 0, \quad n\rightarrow \infty , \quad L\rightarrow \infty \end{aligned}
(2.19)
which proves (2.14)(c), since $$E|s_{n,2}|\rightarrow 0$$ implies $$s_{n,2}\rightarrow _P 0$$ as $$n\rightarrow \infty$$ and $$L\rightarrow \infty$$.
$$\bullet \bullet \bullet$$ The claim (2.19)(i) follows from (2.11),
\begin{aligned} \left( ES_n^2\right) /\mathrm{Var}(Q_n)=B_n^{-2}ES_n^2\rightarrow 1, \end{aligned}
noting that $$B_n^{-2}ES_n^2=Es_n$$, which holds by definition of $$s_n$$ and (2.7).
$$\bullet \bullet \bullet$$ To show (2.19)(ii), open up the squares, set $$s=t-k$$ and $$s'=t-u$$, to get
\begin{aligned} s_{n,1}-Es_{n,1}=4\sum _{k,u=1}^L\left\{ B_n^{-2} \sum _{t=1}^na_{t,t-k}a_{t, t-u}\left[ \eta _t^2\eta _{t-k} \eta _{t-u}-E\eta _t^2\eta _{t-k}\eta _{t-u}\right] \right\} = 4\sum _{k,u=1}^Lg_{n,ku}. \end{aligned}
It suffices to verify that for any fixed k and u, $$g_{n,ku}=o_p(1)$$. Setting
\begin{aligned} c_{nt}:=B_n^{-2}a_{t,t-k}a_{t, t-u} \end{aligned}
and
\begin{aligned} V_t:=\eta _t^2\eta _{t-k}\eta _{t-u}-E\eta _t^2\eta _{t-k}\eta _{t-u}, \end{aligned}
write
\begin{aligned} g_{n,ku}=\sum _{t=1}^nc_{nt}V_t. \end{aligned}
Since the noise $$\{\eta _t\}$$ is stationary ergodic and such that $$E\eta _1^4<\infty$$, by Theorem 3.5.8 in Stout (1974), the process $$\{V_j\}$$ is stationary and ergodic, and $$E|V_1|<\infty$$. Because of the centering, $$Eg_{n,ku}=0$$. Thus, by Lemma 2.1(i), to prove $$g_{n,ku}=o_p(1)$$, it remains to show that $$c_{nt}$$’s satisfy (2.1) with $$\alpha _n=1$$. Observe that
\begin{aligned} \sum _{t\in Z}|c_{nt}|=B_n^{-2}\sum _{t=1}^n|a_{t,t-k}a_{t, t-u}|\le 2B_n^{-2}\sum _{t,s = 1}^na^2_{t,s}=2B_n^{-2}||A||^2\le C, \quad n\rightarrow \infty \end{aligned}
by (1.7). On the other hand,
\begin{aligned}&\sum _{t\in Z}|c_{nt}-c_{n,t-1}| = B_n^{-2}\sum _{t=1}^{n+1}|a_{t,t-k}a_{t, t-u}-a_{t-1,t-1-k}a_{t-1, t-1-u}|\\&\quad \le B_n^{-2}\sum _{t=1}^{n+1}\{|a_{t,t-k}-a_{t-1,t-1-k}||a_{t, t-u}|+|a_{t-1,t-1-k}||a_{t, t-u}-a_{t-1, t-1-u}|\}\\&\quad \le B_n^{-2}\left\{ \left( \sum _{t=1}^{n+1}\left( a_{t,t-k}-a_{t-1,t-1-k}\right) ^2\right) ^{1/2}+ \left( \sum _{t=1}^{n+1}\left( a_{t,t-u}-a_{t-1,t-1-u}\right) ^2\right) ^{1/2}\right\} \\&\qquad \times \left( \sum _{t,s=1}^na^2_{t, s}\right) ^{1/2}\\&\quad =o\left( B_n^{-2}||A||^2\right) =o(1), \end{aligned}
by (1.9), (2.3) and (1.7). Hence (2.1) holds. By Lemma 2.1(i) we conclude that $$g_{n,ku}=o_p(1)$$ and, thus, $$s_{n,1}-Es_{n,1}=o_p(1)$$. Hence (2.19)(ii) holds.
$$\bullet \bullet \bullet$$ To verify $$E|s_{n,2}|\rightarrow 0$$ in (2.19)(iii), write
\begin{aligned} s_{n,2}=s_{n}-s_{n,1}= 4B_{n}^{-2} \sum _{t=1}^n\eta _t^2 \left[ \left( \sum _{s=1}^{t-1}a_{ts}\eta _{s}\right) ^2 -\left( \sum _{s=t-L}^{t-1}a_{ts}\eta _{s}\right) ^2\right] . \end{aligned}
We use the identity $$a^2-b^2=(a-b)^2+ 2(a-b)b$$, to obtain
\begin{aligned} |s_{n,2}|= & {} 4B_{n}^{-2} \left| \sum _{t=1}^n\eta _t^2 \left\{ \left( \sum _{s=1}^{t-1}a_{ts}\eta _{s}\right) ^2 -\left( \sum _{s=t-L}^{t-1}a_{ts}\eta _{s}\right) ^2 \right\} \right| \\= & {} 4B_{n}^{-2} \left| \sum _{t=1}^n\eta _t^2 \left\{ \left( \sum _{s=1}^{t-L-1} a_{ts}\eta _{s}\right) ^2 +2\left( \sum _{s=1}^{t-L-1}a_{ts}\eta _{s}\right) \left( \sum _{s=t-L}^{t-1} a_{ts}\eta _{s}\right) \right\} \right| \\\le & {} 4q_{n,1} +4\left( B_{n}^{-2} \sum _{t=1}^n\eta _t^2 \left( \sum _{s=1}^{t-L-1}a_{ts} \eta _{s}\right) ^2 \right) ^{1/2}\\&\times \, \left( 4B_{n}^{-2} \sum _{t=1}^n\eta _t^2 \left( \sum _{s=t-L}^{t-1}a_{ts} \eta _{s}\right) ^2\right) ^{1/2}\le 4\left( q_{n,1}+q_{n,1}^{1/2}s_{n,1}^{1/2}\right) , \end{aligned}
where
\begin{aligned} q_{n,1}:=B_{n}^{-2} \sum _{t=1}^n\eta _t^2\left( \sum _{s=1}^{t-L-1}a_{ts}\eta _{s}\right) ^2. \end{aligned}
Hence, $$E|s_{n,2}|\le 4Eq_{n,1}+4(Eq_{n,1}E{s_{n,1}})^{1/2}.$$ To bound $$E q_{n,1}$$, we argue partly as in (2.10):
\begin{aligned} Eq_{n,1}\le & {} C||A_n||^{-2}\sum _{t=1}^n \sum _{s=1}^{t-L-1}a^2_{ts} \rightarrow 0,\quad n \rightarrow \infty , \quad L\rightarrow \infty \end{aligned}
by (1.8). We also have
\begin{aligned} Es_{n,1}\le C||A_n||^{-2}\sum _{t=1}^n \sum _{s=t-L}^{t-1}a^2_{ts} \le C. \end{aligned}
Hence $$E|s_{n,2}|\rightarrow 0$$ as $$n\rightarrow \infty$$ and $$L\rightarrow \infty$$. This completes the proof of (2.19)(iii) and the theorem. $$\square$$

## Notes

### Acknowledgments

Liudas Giraitis and Murad S. Taqqu would like to thank Masanobu Taniguchi for his hospitality in Japan and support by the JSPS grant 15H02061. Murad S. Taqqu was partially supported by the NSF grant DMS-1309009 at Boston University.

## References

1. Bhansali R, Giraitis L, Kokoszka P (2007a) Convergence of quadratic forms with nonvanishing diagonal. Statistics & Probability Letters 77:726–734
2. Bhansali R, Giraitis L, Kokoszka P (2007b) Approximations and limit theory for quadratic forms of linear processes. Stochastic Processes and their Applications 117:71–95
3. Dalla V, Giraitis L, Koul HL (2014) Studentizing weighted sums of linear processes. Journal of Time Series Analysis 35:151–172
4. De Jong P (1987) A central limit theorem for generalized quadratic forms. Probability Theory and Related Fields 75:261–277
5. Giraitis L, Koul HL, Surgailis D (2012) Large Sample Inference for Long Memory Processes. Imperial College Press, London
6. Guttorp P, Lockhart RA (1988) On the asymptotic distribution of quadratic forms in uniform order statistics. Annals of Statistics 16:433–449
7. Hall P, Heyde CC (1980) Martingale Limit Theory and Applications. Academic Press, New York
8. Mikosch T (1991) Functional limit theorems for random quadratic forms. Stochastic Processes and their Applications 37:81–98
9. Rotar VI (1973) Certain limit theorems for polynomials of degree two. Teoria Verojatnosti i Primenenia 18:527–534 (in Russian)
10. Stout W (1974) Almost Sure Convergence. Academic Press, New York
