1 Corrections and Clarifications of Mason (2005)

This note has two purposes. The first is to correct some statements in the Introduction and Statements of Results of [4]; the second is to provide the result given in Proposition [A] below, which clarifies a claim at the end of the proof of Theorem 2.

Our corrections are needed since it is not clear that (1.9) of [4] always implies (1.2) of [4]. They are the following:

(i) On page 855, line 10, change “further shows” to “further shows that under the setup of Proposition [A] in this note”, and on line 13 change “(1.4)” to “(1.4) nondegenerate”.

(ii) On page 855, line 14, replace “or equivalently (1.9) holds” with “\(P\left( V>0\right) =1\)”.

(iii) On page 856, line 3, replace “Actually” with “Actually under the setup of Proposition [A] in this note”.

We remark in passing that the statements in [4] about triangular arrays of the form \(X_{1,n},\dots , X_{n,n}\), \(n\ge 1\), are equally valid for triangular arrays of the form \(X_{1,n_{k}},\dots , X_{n_{k},n_{k}}\), \(k\ge 1\), where \(\left\{ n_{k}\right\} _{k\ge 1}\) is an infinite subsequence of the positive integers. We also point out that everywhere “triangular array of infinitesimal independent random variables” should be changed to “infinitesimal triangular array of independent random variables”, as stated in Proposition [A]. The word “infinitesimal” was everywhere put in the wrong place.

The following Proposition [A] and Remark 1 justify the claim towards the end of the proof of Theorem 2 on page 868 that says, “This means that every subsequential distributional limit random variable \(T\) must be of the form (1.10).” They should have been included in an appendix in the original paper.

Proposition [A]

Let \(\left\{ n_{k}\right\} _{k\ge 1}\) be an infinite subsequence of the positive integers and \(X_{1,n_{k}},\dots , X_{n_{k},n_{k}}\), \(k\ge 1\), be an infinitesimal triangular array of independent random variables such that for each \(k\ge 1\), \( X_{1,n_{k}},\dots ,X_{n_{k},n_{k}}\) are i.i.d. \(X_{1,n_{k}}\). Assume that for a necessarily infinitely divisible random variable \(U\),

$$\begin{aligned} \sum _{i=1}^{n_{k}}X_{i,n_{k}}\rightarrow _{d}U, \ {\text{ as}} \ k\rightarrow \infty . \end{aligned}$$
(1.1)

Then

$$\begin{aligned} \left( \sum _{i=1}^{n_{k}}X_{i,n_{k}},\sum _{i=1}^{n_{k}}X_{i,n_{k}}^{2}\right) \rightarrow _{d}\left( U,V\right) , \ {\text{ as}} \ k\rightarrow \infty , \end{aligned}$$
(1.2)

where the two-dimensional infinitely divisible random vector \(\left( U,V\right) \) in (1.2) has the representation:

$$\begin{aligned} \left( U,V\right) =_{d}\left( b+W+\tau Z,S+\tau ^{2}\right) , \end{aligned}$$
(1.3)

with \(b\) and \(\tau \ge 0\) being suitable constants,

$$\begin{aligned} W&= \int _{0}^{1}\varphi _{1}\left( s\right) dN_{1}(s)+\int _{1}^{\infty }\varphi _{1}\left( s\right) d\left\{ N_{1}(s)-s\right\} \nonumber \\&\quad -\int _{0}^{1}\varphi _{2}\left( s\right) dN_{2}(s)-\int _{1}^{\infty }\varphi _{2}\left( s\right) d\left\{ N_{2}(s)-s\right\} \end{aligned}$$
(1.4)

and

$$\begin{aligned} S=\int _{0}^{\infty }\varphi _{1}^{2}\left( s\right) dN_{1}(s)+\int _{0}^{\infty }\varphi _{2}^{2}\left( s\right) dN_{2}(s), \end{aligned}$$
(1.5)

with \(N_{1}\) and \(N_{2}\) being independent right continuous Poisson processes on \([0,\infty )\) with rate 1, \(Z\) being a standard normal random variable independent of \(N_{1}\) and \(N_{2}\), and \(\varphi _{1}\) and \( \varphi _{2}\) being two left continuous, nonincreasing, nonnegative functions defined on \(\left( 0,\infty \right) \) satisfying for all \(\delta >0,\)

$$\begin{aligned} \int _{\delta }^{\infty }\varphi _{i}^{2}\left( s\right) ds<\infty \quad \mathrm{for} \ i=1, 2. \end{aligned}$$
(1.6)
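For orientation, here is a simple example of the representation (1.3)–(1.5). Fix \(0<\lambda \le 1\) and let \(X_{1,n},\dots ,X_{n,n}\), \(n\ge 1\), be i.i.d. Bernoulli\(\left( \lambda /n\right) \) random variables. Then \(\sum _{i=1}^{n}X_{i,n}\rightarrow _{d}U\), where \(U\) is Poisson\(\left( \lambda \right) \), and since \(X_{i,n}^{2}=X_{i,n}\),

$$\begin{aligned} \left( \sum _{i=1}^{n}X_{i,n},\sum _{i=1}^{n}X_{i,n}^{2}\right) \rightarrow _{d}\left( U,U\right) , \end{aligned}$$

which is of the form (1.3)–(1.5) with \(b=0\), \(\tau =0\), \(\varphi _{2}\equiv 0\) and \(\varphi _{1}\left( s\right) =1\left\{ 0<s\le \lambda \right\} \), so that \(W=S=N_{1}\left( \lambda \right) \).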

Proof

The proof that (1.1) implies (1.2) follows along lines very similar to those of Lemma 4 in [2]. To relieve the notational burden, in what follows we shall write \(n=n_{k}\). By parts (ii) and (iii) of Theorem 4.7 on page 61 of [1], the distributional convergence (1.1) implies that there exists a Lévy measure \(\mu \) such that for every \(\delta >0\) with \( \mu \left\{ -\delta ,\delta \right\} =0\),

$$\begin{aligned}&w-\lim _{n\rightarrow \infty }\sum _{i=1}^{n}\mathcal L \left( X_{i,n}\right) |\left( |x|>\delta \right) \nonumber \\&\quad =w-\lim _{n\rightarrow \infty }n\mathcal L \left( X_{1,n}\right) |\left( |x|>\delta \right) =\mu |\left( |x|>\delta \right) ; \end{aligned}$$
(1.7)

and for some \(a_{\delta }\)

$$\begin{aligned} \lim _{n\rightarrow \infty }ES_{n,\delta }=\lim _{n\rightarrow \infty }\left( nEX_{1,n,\delta }\right) =a_{\delta }, \end{aligned}$$
(1.8)

where

$$\begin{aligned} S_{n,\delta }:=\sum _{i=1}^{n}X_{i,n}1\left\{ \left| X_{i,n}\right| \le \delta \right\} =:\sum _{i=1}^{n}X_{i,n,\delta }. \end{aligned}$$
(1.9)

(Note that we use here the notation of [1].) Now by part (i) of the same theorem, (1.1) also implies that for some \( 0\le \sigma ^{2}<\infty \),

$$\begin{aligned} \lim _{\delta \searrow 0}\left\{ \begin{array}{c} \limsup \nolimits _{n\rightarrow \infty } \\ \liminf \nolimits _{n\rightarrow \infty } \end{array} \right\} \sum _{i=1}^{n}E\left( X_{i,n,\delta }-EX_{i,n,\delta }\right) ^{2}=\sigma ^{2}. \end{aligned}$$
(1.10)

Notice that

$$\begin{aligned} \sum _{i=1}^{n}E\left( X_{i,n,\delta }-EX_{i,n,\delta }\right) ^{2}=nEX_{1,n,\delta }^{2}-n^{-1}\left( nEX_{1,n,\delta }\right) ^{2}. \end{aligned}$$
(1.11)

Further, by (1.8), for every \(\delta >0\) such that \(\mu \left\{ -\delta ,\delta \right\} =0\) we have \(n^{-1}\left( nEX_{1,n,\delta }\right) ^{2}\rightarrow 0\), which, by (1.10) and (1.11), implies

$$\begin{aligned} \lim _{\delta \searrow 0}\left\{ \begin{array}{c} \limsup \nolimits _{n\rightarrow \infty } \\ \liminf \nolimits _{n\rightarrow \infty } \end{array} \right\} EV_{n,\delta }=\sigma ^{2}, \end{aligned}$$
(1.12)

where

$$\begin{aligned} V_{n,\delta }:=\sum _{i=1}^{n}X_{i,n}^{2}1\left\{ \left| X_{i,n}\right| \le \delta \right\} =\sum _{i=1}^{n}X_{i,n,\delta }^{2}, \end{aligned}$$
(1.13)

with

$$\begin{aligned} EV_{n,\delta }=nEX_{1,n,\delta }^{2}=\int _{|x|\le \delta }nx^{2}d\mathcal L \left( X_{1,n}\right) . \end{aligned}$$
(1.14)

Now let \(\delta _{m}, m\ge 1\), be a sequence of constants converging to zero such that \(0<\delta _{m+1}<\delta _{m}<\delta _{0}=\delta \), and \( \mu \left\{ -\delta _{m},\delta _{m}\right\} =0\), \(m\ge 0\). Then for each \( m\ge 1\), by (1.7) and \(\mu \left\{ -\delta _{m},\delta _{m}\right\} =\mu \left\{ -\delta ,\delta \right\} =0\),

$$\begin{aligned}&\liminf _{n\rightarrow \infty }\int _{|x|\le \delta _{m}} nx^{2}d\mathcal L \left( X_{1,n}\right) +\int _{\delta _{m}<|x|\le \delta }x^{2}d\mu \left( x\right) \\&\quad =\liminf _{n\rightarrow \infty }\int _{|x|\le \delta _{m}}nx^{2}d\mathcal L \left( X_{1,n}\right) +\lim _{n\rightarrow \infty } \int _{\delta _{m}<|x|\le \delta }nx^{2}d\mathcal L \left( X_{1,n}\right) \\&\quad =\liminf _{n\rightarrow \infty }\int _{|x|\le \delta }nx^{2}d\mathcal L \left( X_{1,n}\right) \le \limsup _{n\rightarrow \infty }\int _{|x|\le \delta }nx^{2}d \mathcal L \left( X_{1,n}\right) \\&\quad =\limsup _{n\rightarrow \infty }\int _{|x|\le \delta _{m}}nx^{2}d\mathcal L \left( X_{1,n}\right) +\lim _{n\rightarrow \infty }\int _{\delta _{m}<|x|\le \delta }nx^{2}d\mathcal L \left( X_{1,n}\right) \\&\quad =\limsup _{n\rightarrow \infty }\int _{|x|\le \delta _{m}}nx^{2}d\mathcal L \left( X_{1,n}\right) +\int _{\delta _{m}<|x|\le \delta }x^{2}d\mu \left( x\right) . \end{aligned}$$

Now, letting \(m\rightarrow \infty \), we see by (1.12), (1.14) and monotone convergence that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{|x|\le \delta }nx^{2}d\mathcal L \left( X_{1,n}\right) =\sigma ^{2}+\int _{0<|x|\le \delta }x^{2}d\mu \left( x\right) =:b_{\delta }. \end{aligned}$$
(1.15)

Moreover, we get from (1.15) that for every \(p>2\),

$$\begin{aligned} \lim _{\delta \searrow 0}\limsup _{n\rightarrow \infty }\int _{|x|\le \delta }n\left| x\right| ^{p}d\mathcal L \left( X_{1,n}\right) =0. \end{aligned}$$
(1.16)
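To see this, note that for \(p>2\) and \(\delta >0\) such that \(\mu \left\{ -\delta ,\delta \right\} =0\), by (1.15),

$$\begin{aligned} \limsup _{n\rightarrow \infty }\int _{|x|\le \delta }n\left| x\right| ^{p}d\mathcal L \left( X_{1,n}\right) \le \delta ^{p-2}\lim _{n\rightarrow \infty }\int _{|x|\le \delta }nx^{2}d\mathcal L \left( X_{1,n}\right) =\delta ^{p-2}b_{\delta }, \end{aligned}$$

and \(\delta ^{p-2}b_{\delta }\rightarrow 0\) as \(\delta \searrow 0\), since \(b_{\delta }\rightarrow \sigma ^{2}\).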

We now proceed as in the proof of Lemma 4 in [2]. We see using (1.16) that for any \(\alpha \), \(\beta \in \mathbb R \),

$$\begin{aligned} \lim _{\delta \searrow 0}\left\{ \begin{array}{c} \limsup \nolimits _{n\rightarrow \infty } \\ \liminf \nolimits _{n\rightarrow \infty } \end{array} \right\} E\left( \alpha \left( S_{n,\delta }-ES_{n,\delta }\right) +\beta \left( V_{n,\delta }-EV_{n,\delta }\right) \right) ^{2}=\alpha ^{2}\sigma ^{2}.\qquad \end{aligned}$$
(1.17)
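One way to see (1.17) is to expand the square,

$$\begin{aligned} E\left( \alpha \left( S_{n,\delta }-ES_{n,\delta }\right) +\beta \left( V_{n,\delta }-EV_{n,\delta }\right) \right) ^{2}=\alpha ^{2}\mathrm{Var}\left( S_{n,\delta }\right) +2\alpha \beta \,\mathrm{Cov}\left( S_{n,\delta },V_{n,\delta }\right) +\beta ^{2}\mathrm{Var}\left( V_{n,\delta }\right) , \end{aligned}$$

and to note that, by independence, \(\mathrm{Var}\left( V_{n,\delta }\right) =n\mathrm{Var}\left( X_{1,n,\delta }^{2}\right) \le \int _{|x|\le \delta }nx^{4}d\mathcal L \left( X_{1,n}\right) \), while \(\left| \mathrm{Cov}\left( S_{n,\delta },V_{n,\delta }\right) \right| \le \sqrt{\mathrm{Var}\left( S_{n,\delta }\right) \mathrm{Var}\left( V_{n,\delta }\right) }\). Hence, by (1.16) with \(p=4\) and (1.10), only the term \(\alpha ^{2}\mathrm{Var}\left( S_{n,\delta }\right) \) contributes in the limit in (1.17).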

Write \(\rho =\mu \circ T^{-1}\), where \(T\left( x\right) =\left( x,x^{2}\right) \). Clearly by (1.7) for every \(\delta >0\) such that \(\mu \left\{ -\delta ,\delta \right\} =0\)

$$\begin{aligned} w-\lim _{n\rightarrow \infty }n\mathcal{L }\left( X_{1,n},X_{1,n}^{2}\right) |\left( \left\| x\right\| >\delta \right) =\rho |\left( \left\| {x}\right\| >\delta \right) . \end{aligned}$$
(1.18)

Furthermore, by (1.8) and (1.15) we have for every \(\delta >0\) such that \(\mu \left\{ -\delta ,\delta \right\} =0\)

$$\begin{aligned} \left( ES_{n,\delta },EV_{n,\delta }\right) \rightarrow \left( a_{\delta },b_{\delta }\right) . \end{aligned}$$

Thus by the central limit theorem in \(\mathbb R ^{2}\) on pp. 67–68 of [1] and arguing just as in [2] we get that (1.2) holds with \((U,V)\) having characteristic function \( E\exp \left( i\left( sU+tV\right) \right) =\)

$$\begin{aligned} \exp \left\{ -\frac{\sigma ^{2}s^{2}}{2}+i\big ( a_{\delta }s+\sigma ^{2}t\big ) +\int \big ( \exp \big ( i\big ( su+tu^{2}\big ) \big ) -1-isu1\left\{ \left| u\right| \le \delta \right\} \big ) d\mu \left( u\right) \right\} ,\nonumber \\ \end{aligned}$$
(1.19)

for any \(\delta >0\) such that \(\mu \left\{ -\delta ,\delta \right\} =0\). It can be shown using Proposition 5.7 in [3] that a pair of random variables \((U,V)\) with this characteristic function has the distributional representation (1.3) where \(\tau ^{2}=\sigma ^{2}\) and \(b\) is a suitable constant. It is shown there how \(\varphi _{1}\) and \(\varphi _{2}\) are defined via the Lévy measure \(\mu \). \(\square \)
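We also remark that, as it must, the right side of (1.19) does not depend on the choice of \(\delta \). Indeed, if \(0<\delta ^{\prime }<\delta \) satisfy \(\mu \left\{ -\delta ^{\prime },\delta ^{\prime }\right\} =\mu \left\{ -\delta ,\delta \right\} =0\), then by (1.7) and (1.8),

$$\begin{aligned} a_{\delta }-a_{\delta ^{\prime }}=\lim _{n\rightarrow \infty }nEX_{1,n}1\left\{ \delta ^{\prime }<\left| X_{1,n}\right| \le \delta \right\} =\int _{\delta ^{\prime }<|u|\le \delta }u\,d\mu \left( u\right) , \end{aligned}$$

which exactly offsets the change in the indicator \(1\left\{ \left| u\right| \le \delta \right\} \) in the integrand of (1.19).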

Remark 1

We note that if \(X\) is in the centered Feller class with \(a_{n}\) an appropriate sequence of norming constants and \(X_{1},X_{2},\dots ,\) are i.i.d. \(X\), then for every subsequence of \(\left\{ n\right\} \) there exists a further subsequence \(\left\{ n_{k}\right\} \) such that the triangular array \( X_{i,n_{k}}=X_{i}/a_{n_{k}}, 1\le i\le n_{k}\), \(k\ge 1\), satisfies (1.1), with \(U\) nondegenerate, and thus (1.2) and (1.19) hold, as was pointed out in [2]. Also we mention that it can be inferred using the Theorem in [5] that necessarily “\(P\left( V>0\right) =1\)”.

Remark 2

A special case of Proposition [A] implies that if a triangular array \(X_{1,n_{k}},\dots , X_{n_{k},n_{k}}\), \(k\ge 1\), satisfies its assumptions and

$$\begin{aligned} \sum _{i=1}^{n_{k}}X_{i,n_{k}}\rightarrow _{d}N(0,\sigma ^{2}),\quad \text{ as } k\rightarrow \infty , \end{aligned}$$
(1.20)

then

$$\begin{aligned} \sum _{i=1}^{n_{k}}X_{i,n_{k}}^{2}\rightarrow _{P}\sigma ^{2},\quad \text{ as } k\rightarrow \infty . \end{aligned}$$
(1.21)
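Indeed, under (1.20) the limit \(U\) is normal, so the Lévy measure \(\mu \) in the proof of Proposition [A] is zero and \(\varphi _{1}=\varphi _{2}\equiv 0\). Hence, by (1.3)–(1.5),

$$\begin{aligned} V=S+\tau ^{2}=\tau ^{2}=\sigma ^{2} \quad {\text{a.s.}}, \end{aligned}$$

and convergence in distribution to a constant implies convergence in probability.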