1 Introduction

For an array of mean 0 random variables \(\{X_{n,j}, 1 \leq j \leq k _{n}, n \geq 1 \}\) and an array of constants \(\{a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\), Chandra, Li, and Rosalsky [2, Theorem 3.1] recently provided conditions under which the weighted averages \(\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j}\) obey the degenerate mean convergence law

$$ \sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \stackrel{\mathscr{L}_{1}}{\longrightarrow } 0. $$

The random variables comprising the array are assumed to be (i) rowwise pairwise negative quadrant dependent and (ii) stochastically dominated by a random variable. (Technical definitions such as these will be reviewed in Sect. 2.) In this note, Theorem 3.1 of Chandra, Li, and Rosalsky [2] is extended to \(\mathscr{L}_{r}\) convergence where \(1 \leq r < 2\) and is shown to hold under weaker conditions. This is accomplished by applying a result of Sung [3] and an inequality of Adler, Rosalsky, and Taylor [1]. This note owes much to the work of Sung [3].

2 Preliminaries

In this section, some definitions will be reviewed and the needed results of Sung [3] and Adler, Rosalsky, and Taylor [1] will be stated.

Definition 2.1

The random variables comprising an array \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) are said to be rowwise pairwise negative quadrant dependent (PNQD) if for all \(n \geq 1\) and all \(i, j \in \{1,\ldots, k_{n}\}\) (\(i \neq j\)),

$$ \mathbb{P} (X_{n,i} \leq x, X_{n,j} \leq y ) \leq \mathbb{P} (X_{n,i} \leq x ) \mathbb{P} (X_{n,j} \leq y ) \quad \text{for all } x, y \in \mathbb{R}. $$
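For instance, independent random variables satisfy the defining inequality with equality, and a counter-monotone pair such as \((Z, -Z)\) is NQD: with \(A = \{Z \leq x \}\) and \(B = \{-Z \leq y \} = \{Z \geq -y \}\), either \(A \cap B = \emptyset \) (when \(x < -y\)), or \(A \cup B = \Omega \) (when \(x \geq -y\)) and

$$ \mathbb{P} (A \cap B ) = \mathbb{P} (A ) + \mathbb{P} (B ) - 1 \leq \mathbb{P} (A ) \mathbb{P} (B ) $$

since \(\mathbb{P}(A) + \mathbb{P}(B) - 1 - \mathbb{P}(A)\mathbb{P}(B) = -(1 - \mathbb{P}(A) ) (1 - \mathbb{P}(B) ) \leq 0\).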

Definition 2.2

The random variables comprising an array \(\{Y_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) are said to be stochastically dominated by a random variable Y if there exists a constant D such that

$$ \mathbb{P} \bigl( \vert Y_{n,j} \vert > y \bigr) \leq D \mathbb{P} \bigl( \vert DY \vert > y \bigr), \quad y \geq 0, 1 \leq j \leq k_{n}, n \geq 1. $$
(2.1)
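For instance, an array of identically distributed random variables is stochastically dominated by \(Y = Y_{1,1}\) with \(D = 1\), since then

$$ \mathbb{P} \bigl( \vert Y_{n,j} \vert > y \bigr) = \mathbb{P} \bigl( \vert Y \vert > y \bigr), \quad y \geq 0, 1 \leq j \leq k_{n}, n \geq 1. $$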

Lemma 2.1

(Adler, Rosalsky, and Taylor [1, Lemma 2.3])

If the random variables in the array \(\{Y_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) are stochastically dominated by a random variable Y, then for all \(n \geq 1\) and \(j \in \{1,\ldots, k_{n} \}\),

$$ \mathbb{E} \bigl( \vert Y_{n,j} \vert I \bigl( \vert Y_{n,j} \vert > y \bigr) \bigr) \leq D^{2} \mathbb{E}\bigl( \vert Y \vert I\bigl( \vert DY \vert > y\bigr)\bigr) \quad \textit{for all } y \geq 0, $$

where D is as in (2.1).

Proposition 2.1

(Sung [3, Theorem 2.1])

Let \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of rowwise PNQD random variables and let \(r \in [1, 2)\). Let \(\{a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of constants. Suppose that

$$ \sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \mathbb{E} \vert X_{n,j} \vert ^{r} < \infty $$
(2.2)

and

$$ \lim_{n \rightarrow \infty } \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \mathbb{E} \bigl( \vert X_{n,j} \vert ^{r} I \bigl( \vert a_{n,j} \vert ^{r} \vert X_{n,j} \vert ^{r} > \varepsilon \bigr) \bigr) = 0 \quad \textit{for all } \varepsilon > 0. $$
(2.3)

Then

$$ \sum_{j=1}^{k_{n}} a_{n,j} (X_{n,j} - \mathbb{E}X_{n,j} ) \stackrel{\mathscr{L}_{r}}{\longrightarrow } 0 $$

and, a fortiori,

$$ \sum_{j=1}^{k_{n}} a_{n,j} (X_{n,j} - \mathbb{E}X_{n,j} ) \stackrel{\mathbb{P}}{ \longrightarrow } 0. $$
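Condition (2.3) may be viewed as a Lindeberg-type condition on the weighted array. As a simple illustration, it holds trivially whenever the \(X_{n,j}\) are uniformly bounded, say \(\vert X_{n,j}\vert \leq C\) almost surely, and \(\lim_{n \rightarrow \infty } \sup_{1 \leq j \leq k_{n}} \vert a_{n,j} \vert = 0\): for every \(\varepsilon > 0\) and all sufficiently large \(n\),

$$ \vert a_{n,j} \vert ^{r} \vert X_{n,j} \vert ^{r} \leq C^{r} \Bigl(\sup_{1 \leq j \leq k_{n}} \vert a_{n,j} \vert \Bigr)^{r} < \varepsilon \quad \text{almost surely}, \quad 1 \leq j \leq k_{n}, $$

so every indicator in (2.3) eventually vanishes.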

3 Improved version of the Chandra, Li, and Rosalsky [2] result

We will now use Lemma 2.1 and Proposition 2.1 to present the following improved version of Theorem 3.1 of Chandra, Li, and Rosalsky [2].

Theorem 3.1

Let \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of rowwise PNQD mean 0 random variables which are stochastically dominated by a random variable X with \(\mathbb{E}\vert X\vert ^{r} < \infty \) for some \(r \in [1, 2)\). Let \(\{a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) be an array of constants such that

$$ \sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} < \infty $$
(3.1)

and

$$ \lim_{n \rightarrow \infty } \sup_{1 \leq j \leq k_{n}} \vert a_{n,j} \vert = 0. $$
(3.2)

Then

$$ \sum_{j=1}^{k_{n}} a_{n,j} X_{n,j} \stackrel{\mathscr{L}_{r}}{\longrightarrow } 0 $$
(3.3)

and, a fortiori,

$$ \sum_{j=1}^{k_{n}} a_{n,j} X_{n,j} \stackrel{\mathbb{P}}{\longrightarrow } 0. $$
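As a numerical illustration of (3.3) with \(r = 1\) (a minimal simulation sketch, with all names and parameters chosen purely for the example), take rows built from antithetic normal pairs \((Z_{i}, -Z_{i})\): these are pairwise NQD (see the example following Definition 2.1), identically distributed (so (2.1) holds with \(D = 1\)), and mean 0, and the weights below satisfy (3.1) and (3.2).

```python
# Minimal Monte Carlo sketch of Theorem 3.1 with r = 1 (illustration only).
# Row n holds k_n = 2n variables: the antithetic pairs (Z_i, -Z_i) are
# pairwise NQD, mean 0, and identically distributed, so (2.1) holds with
# D = 1 and E|X| < infinity.
import numpy as np

rng = np.random.default_rng(0)

def weighted_row_sum(n, reps):
    """Draw `reps` independent copies of sum_j a_{n,j} X_{n,j} for row n."""
    k = 2 * n
    z = rng.standard_normal((reps, n))
    x = np.concatenate([z, -z], axis=1)  # the n-th row of the array
    # sum_j |a_{n,j}| = 1, so (3.1) holds with r = 1;
    # max_j |a_{n,j}| = 1.5/k_n -> 0, so (3.2) holds.
    a = np.concatenate([np.full(n, 1.5 / k), np.full(n, 0.5 / k)])
    return x @ a

for n in (10, 100, 1000, 10000):
    s = weighted_row_sum(n, reps=5000)
    print(f"n = {n:5d}   estimated E|sum_j a_nj X_nj| = {np.abs(s).mean():.5f}")
```

Here \(\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} = \frac{1}{2n}\sum_{i=1}^{n} Z_{i}\), so the reported \(\mathscr{L}_{1}\) norms decay at rate \(n^{-1/2}\), consistent with (3.3).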

Remark 3.1

Before proving Theorem 3.1, we point out that Theorem 3.1 of Chandra, Li, and Rosalsky [2]

(i) only treated the case \(r = 1\),

(ii) had the additional condition

$$ \text{for each } n \geq 1, \text{either } \min_{1 \leq j \leq k_{n}} a_{n,j} \geq 0 \text{ or } \max_{1 \leq j \leq k_{n}} a_{n,j} \leq 0, $$

(iii) had the condition

$$ \sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert < \infty \quad \text{and} \quad \lim_{n \rightarrow \infty } \sum_{j=1}^{k_{n}} a_{n,j}^{2} = 0, $$

the second half of which is clearly stronger than (3.2).
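To verify this last assertion, note that for each \(n \geq 1\),

$$ \Bigl(\sup_{1 \leq j \leq k_{n}} \vert a_{n,j} \vert \Bigr)^{2} \leq \sum_{j=1}^{k_{n}} a_{n,j}^{2}, $$

so \(\lim_{n \rightarrow \infty } \sum_{j=1}^{k_{n}} a_{n,j}^{2} = 0\) indeed entails (3.2).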

Proof of Theorem 3.1

Letting D be as in (2.1) with \(Y_{n,j}\) replaced by \(X_{n,j}\), \(1 \leq j \leq k_{n}\), \(n \geq 1\) and Y replaced by X, it follows that

$$ \mathbb{E} \vert X_{n,j} \vert ^{r} \leq D^{r+1} \mathbb{E} \vert X \vert ^{r}, \quad 1 \leq j \leq k_{n}, n \geq 1. $$
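This moment bound follows by integrating the tail bound (2.1): for all \(1 \leq j \leq k_{n}\), \(n \geq 1\),

$$ \mathbb{E} \vert X_{n,j} \vert ^{r} = \int_{0}^{\infty } \mathbb{P} \bigl( \vert X_{n,j} \vert ^{r} > t \bigr) \,dt \leq D \int_{0}^{\infty } \mathbb{P} \bigl( \vert DX \vert ^{r} > t \bigr) \,dt = D \mathbb{E} \vert DX \vert ^{r} = D^{r+1} \mathbb{E} \vert X \vert ^{r}. $$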

Thus

$$ \sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \mathbb{E} \vert X_{n,j} \vert ^{r} \leq D^{r+1} \Biggl(\sup_{n \geq 1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \Biggr) \mathbb{E} \vert X \vert ^{r} < \infty $$

by (3.1) and \(\mathbb{E}\vert X\vert ^{r} < \infty \), thereby verifying (2.2).

Next, we show that (2.3) holds. Let

$$ \lambda _{n} = D \sup_{1 \leq j \leq k_{n}} \vert a_{n,j} \vert , \quad n \geq 1. $$

Then \(\lim_{n \rightarrow \infty } \lambda _{n} = 0\) by (3.2). Now the stochastic domination hypothesis ensures that

$$ \mathbb{P} \bigl( \vert X_{n,j} \vert ^{r} > x \bigr) \leq D \mathbb{P} \bigl( \vert DX \vert ^{r} > x \bigr) = D \mathbb{P} \bigl(D \bigl(D ^{r-1} \vert X \vert ^{r} \bigr) > x \bigr), \quad x \geq 0, 1 \leq j \leq k_{n}, n \geq 1 $$

and so by Lemma 2.1 with \(Y_{n,j}\) replaced by \(\vert X_{n,j}\vert ^{r}\), \(1 \leq j \leq k_{n}\), \(n \geq 1\) and Y replaced by \(D^{r-1} \vert X\vert ^{r}\),

$$ \begin{aligned}[b] & \mathbb{E} \bigl( \vert X_{n,j} \vert ^{r} I \bigl( \vert X_{n,j} \vert ^{r} > x \bigr) \bigr) \\ &\quad \leq D^{2} \mathbb{E} \bigl(D^{r-1} \vert X \vert ^{r} I \bigl(D^{r} \vert X \vert ^{r} > x \bigr) \bigr) \\ &\quad = D^{r+1} \mathbb{E} \bigl( \vert X \vert ^{r} I \bigl(D^{r} \vert X \vert ^{r} > x \bigr) \bigr), \quad x \geq 0, 1 \leq j \leq k_{n}, n \geq 1. \end{aligned} $$
(3.4)

Then for arbitrary \(\varepsilon > 0\),

$$\begin{aligned} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \mathbb{E} \bigl( \vert X_{n,j} \vert ^{r} I \bigl( \vert a_{n,j} \vert ^{r} \vert X_{n,j} \vert ^{r} > \varepsilon \bigr) \bigr) \leq & D^{r+1} \sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \mathbb{E} \biggl( \vert X \vert ^{r} I \biggl(D^{r} \vert X \vert ^{r} > \frac{\varepsilon }{ \vert a_{n,j} \vert ^{r}} \biggr) \biggr) \\ \leq & D^{r+1} \Biggl(\sum_{j=1}^{k_{n}} \vert a_{n,j} \vert ^{r} \Biggr) \mathbb{E} \biggl( \vert X \vert ^{r} I \biggl( \vert X \vert ^{r} > \frac{\varepsilon }{\lambda _{n} ^{r}} \biggr) \biggr) \\ \leq & D^{r+1} \Biggl(\sup_{m \geq 1} \sum _{j=1}^{k_{m}} \vert a_{m,j} \vert ^{r} \Biggr) \mathbb{E} \biggl( \vert X \vert ^{r} I \biggl( \vert X \vert ^{r} > \frac{\varepsilon }{ \lambda _{n}^{r}} \biggr) \biggr) \\ \rightarrow & 0 \quad \text{as } n \rightarrow \infty \end{aligned}$$

by (3.1), \(\lambda _{n} \rightarrow 0\), and \(\mathbb{E}\vert X\vert ^{r} < \infty \). Thus (2.3) holds, and conclusion (3.3) follows from Proposition 2.1. □

Remark 3.2

See Chandra, Li, and Rosalsky [2] for examples

(i) showing that Theorem 3.1 can fail if the PNQD hypothesis is dispensed with,

(ii) showing that \(\sum_{j=1}^{k_{n}} a_{n,j}X_{n,j} \rightarrow 0\) almost surely does not necessarily hold under the hypotheses of Theorem 3.1.

4 Conclusions

For an array of rowwise PNQD random variables \(\{X_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\), conditions are provided under which the following degenerate mean convergence law holds:

$$ \sum_{j=1}^{k_{n}}a_{n,j}X_{n,j} \stackrel{\mathscr{L}_{r}}{\longrightarrow } 0, $$

where \(1 \leq r < 2\), \(\mathbb{E}X_{n,j} = 0\), \(1 \leq j \leq k_{n}\), \(n \geq 1\), and \(\{a_{n,j}, 1 \leq j \leq k_{n}, n \geq 1 \}\) is an array of constants. This result is an improved version of Theorem 3.1 of Chandra, Li, and Rosalsky [2] in that \(\mathscr{L}_{1}\) convergence is extended to \(\mathscr{L}_{r}\) convergence and the hypotheses are weakened. The result is obtained by applying a result of Sung [3] and an inequality of Adler, Rosalsky, and Taylor [1].