1 Introduction and main results

Over the past few decades, the almost sure central limit theorem (ASCLT) has been studied in increasing depth for both independent and dependent random variables. Cheng et al. [1], Fahrner and Stadtmüller [2], and Berkes and Csáki [3] considered the ASCLT for the maximum of i.i.d. random variables; for further related work on the ASCLT, see [4-12]. An influential contribution is that of Csáki and Gonchigdanzan [13], who proved the following almost sure limit theorem for the maximum of a stationary weakly dependent sequence.

Theorem A

Let \(\{X_{n}:n\geq1\}\) be a standardized stationary Gaussian sequence with \(r_{n}=\operatorname{Cov}(X_{1}, X_{n+1})\) satisfying \(r_{n}\ln n(\ln\ln n)^{1+\varepsilon}=O(1)\) as \(n\rightarrow\infty\) for some constant \(\varepsilon>0\). Let \(M_{k}=\max_{i\leq k}X_{i}\). If

$$ a_{n}=(2\ln n)^{1/2}, \qquad b_{n}=(2 \ln n)^{1/2}-\frac{1}{2}(2\ln n)^{-1/2}\bigl(\ln\ln n+\ln(4 \pi)\bigr), $$
(1.1)

then

$$ \lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum ^{n}_{k=1}\frac{1}{k}I \bigl(a_{k}(M_{k}-b_{k}) \leq x \bigr)=\exp\bigl(-\mathrm{e}^{-x}\bigr) \quad \textit{a.s.}, $$
(1.2)

where I denotes the indicator function. Chen and Lin [14] extended this result to non-stationary Gaussian sequences, and Chen et al. [15] extended it to the multivariate stationary case.
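As a quick numerical illustration of (1.2), outside the formal development, the following Python sketch simulates an i.i.d. standard Gaussian sequence (the case \(r_{n}=0\)) and compares the logarithmic average of the indicators with the Gumbel limit \(\exp(-\mathrm{e}^{-x})\). The sample size, random seed and evaluation point are illustrative choices; logarithmic averages converge very slowly, so only rough agreement should be expected.

```python
# Monte Carlo sketch of (1.2) for i.i.d. N(0,1) variables (r_n = 0).
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed
N = 20000                        # illustrative path length
x = 0.5                          # illustrative evaluation point

Z = rng.standard_normal(N)
M = np.maximum.accumulate(Z)     # running maxima M_k

k = np.arange(2, N + 1)          # skip k = 1, where a_1 = 0 degenerates
ln_k = np.log(k)
a_k = np.sqrt(2.0 * ln_k)                                        # (1.1)
b_k = a_k - (np.log(ln_k) + np.log(4.0 * np.pi)) / (2.0 * a_k)   # (1.1)

indic = a_k * (M[1:] - b_k) <= x
log_avg = np.sum(indic / k) / np.log(N)   # (1/ln n) sum_{k<=n} I(...)/k

print(log_avg, np.exp(-np.exp(-x)))       # compare with exp(-e^{-x}) ~ 0.545
```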

Lin [16] proved the following theorem, which is an ASCLT version of the classical result of Leadbetter et al. [17].

Theorem B

Let \(\{X_{n}:n\geq1\}\) be a sequence of stationary standard Gaussian random variables with covariances \(r_{n}=\operatorname{Cov}(X_{1}, X_{n+1})\) satisfying \(|r_{n}-\frac{r}{\ln n}|\ln n(\ln\ln n)^{1+\varepsilon}=O(1)\) for some constants \(r\geq0\) and \(\varepsilon>0\). Let \(M_{n}=\max_{i\leq n}X_{i}\). Then

$$ \lim_{n\rightarrow\infty}\frac{1}{\ln n}\sum ^{n}_{k=1}\frac{1}{k}I \bigl(a_{k}(M_{k}-b_{k}) \leq x \bigr)=\int^{\infty}_{-\infty}\exp\bigl(- \mathrm{e}^{-x-r+\sqrt{2r}z}\bigr)\phi (z)\,\mathrm{d}z \quad\textit{a.s.}, $$
(1.3)

where \(a_{n}\), \(b_{n}\) are defined by (1.1) and ϕ is the standard normal density function.

If \(r=0\), then (1.3) reduces to (1.2), so Theorem A is a special case of Theorem B. The purpose of this paper is to improve Theorem B substantially in two directions: the class of admissible weight sequences and the class of sequences of random variables to which it applies.
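The reduction at \(r=0\) can also be checked numerically. The following Python sketch evaluates the limit integral in (1.3) by Gauss-Hermite quadrature; the values of r, the point x and the quadrature order are illustrative choices.

```python
# Evaluate int exp(-e^{-x-r+sqrt(2r) z}) phi(z) dz by Gauss-Hermite
# quadrature: substituting z = sqrt(2) t, the integral equals
# pi^{-1/2} * sum_i w_i f(sqrt(2) t_i).
import numpy as np

def limit_cdf(x, r, m=100):
    t, w = np.polynomial.hermite.hermgauss(m)
    z = np.sqrt(2.0) * t
    vals = np.exp(-np.exp(-x - r + np.sqrt(2.0 * r) * z))
    return np.sum(w * vals) / np.sqrt(np.pi)

x = 0.0
for r in [1.0, 0.1, 0.01, 0.0]:
    # as r -> 0 the value approaches the Gumbel limit exp(-e^{-x})
    print(r, limit_cdf(x, r), np.exp(-np.exp(-x)))
```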

Throughout the paper, let \(\{\boldsymbol {Z}_{i}=(Z_{i}(1),Z_{i}(2), \ldots,Z_{i}(d)):i\geq1\}\) be a standardized stationary Gaussian vector sequence with

$$\begin{aligned}& \mathbb{E}\boldsymbol {Z}_{n}=\bigl(\mathbb{E}Z_{n}(1), \mathbb{E}Z_{n}(2), \ldots, \mathbb{E}Z_{n}(d)\bigr)=(0, 0, \ldots, 0), \\& \operatorname{Var}\boldsymbol {Z}_{n}=\bigl(\operatorname{Var}Z_{n}(1), \operatorname{Var}Z_{n}(2), \ldots, \operatorname{Var}Z_{n}(d) \bigr)=(1, 1, \ldots, 1), \\& r_{ij}(p)=\operatorname{Cov}\bigl(Z_{i}(p),Z_{j}(p) \bigr)=r_{|i-j|}(p), \\& r_{ij}(p,q)=\operatorname{Cov}\bigl(Z_{i}(p),Z_{j}(q) \bigr)=r_{|i-j|}(p,q). \end{aligned}$$

We write \(\boldsymbol {M}_{n}=(M_{n}(1), M_{n}(2), \ldots, M_{n}(d))\) with \(M_{n}(p)=\max_{1\leq i\leq n}Z_{i}(p)\), and we always take \(1\leq p\neq q\leq d\). Here \(\boldsymbol {u}_{n}=(u_{n}(1), u_{n}(2), \ldots, u_{n}(d))\) denotes a real vector, and \(\boldsymbol {u}_{n}>\boldsymbol {u}_{k}\) means \(u_{n}(p)>u_{k}(p)\) for \(p=1, 2, \ldots, d\). Suppose that

$$ r_{ij}(p)\ln|j-i|\rightarrow r\geq0,\qquad r_{ij}(p,q)\ln |j-i|\rightarrow r\geq0, \quad\mbox{as }|j-i|\rightarrow \infty. $$
(1.4)

The sequence \(\{\boldsymbol {Z}_{n}:n\geq1\}\) is called weakly dependent if \(r=0\) and strongly dependent if \(r>0\). With \(n=|j-i|\), set

$$ \rho_{n}=\frac{r}{\ln n}, $$
(1.5)

where r is given by (1.4). Throughout the paper we impose the following natural and mild assumption: for some \(\varepsilon>0\),

$$ \bigl|r_{n}(p)-\rho_{n}\bigr|\ln n(\ln D_{n})^{1+\varepsilon}=O(1),\qquad \bigl|r_{n}(p,q)- \rho_{n}\bigr|\ln n(\ln D_{n})^{1+\varepsilon}=O(1), $$
(1.6)

where

$$ d_{k}=\frac{\exp(\ln^{\alpha}k)}{k},\qquad D_{n}=\sum _{k=1} ^{n}d_{k} \quad \mbox{for } 0\leq\alpha< \frac{1}{2}. $$
(1.7)
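For concreteness, the following minimal Python sketch, an illustration rather than part of the argument, computes the weights \(d_{k}\) and partial sums \(D_{n}\) of (1.7) together with the norming constants \(a_{n}\), \(b_{n}\) of (1.1); the value \(\alpha=0.3\) is an illustrative choice in \([0,1/2)\).

```python
# Weights d_k and partial sums D_n from (1.7); norming constants (1.1).
import numpy as np

alpha = 0.3          # illustrative, any value in [0, 1/2)
n = 10**5            # illustrative horizon
k = np.arange(1, n + 1)

d = np.exp(np.log(k) ** alpha) / k    # d_k = exp(ln^alpha k)/k
D = np.cumsum(d)                      # D_n = sum_{k<=n} d_k

ln_n = np.log(n)
a_n = np.sqrt(2.0 * ln_n)
b_n = a_n - (np.log(ln_n) + np.log(4.0 * np.pi)) / (2.0 * a_n)

print(D[-1], a_n, b_n)
```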

We mainly consider the ASCLT for the maximum of a stationary Gaussian vector sequence satisfying (1.4) under the mild condition (1.6); this condition is also crucial for other versions of the ASCLT, such as those for the maximum of a non-stationary strongly dependent sequence and for functions of the maximum. In the sequel, \(a\ll b\) stands for \(a=O(b)\). We also define the norming vectors \(\boldsymbol {a}_{n}= (a_{n}, a_{n},\ldots, a_{n})\) and \(\boldsymbol {b}_{n}= (b_{n}, b_{n},\ldots, b_{n})\), where \(a_{n}\) and \(b_{n}\) are defined by (1.1). The main results are as follows.

Theorem 1

Let \(\{\boldsymbol {Z}_{n}:n\geq1\}\) be a standardized stationary Gaussian vector sequence with covariances satisfying (1.6) and \(\max_{p\neq q} (\sup_{n\geq0}|r_{n}(p,q)| )<1\). Suppose \(0\leq \alpha<1/2\). Then

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum _{k=1} ^{n}d_{k} I \bigl( \boldsymbol {a}_{k}(\boldsymbol {M}_{k}-\boldsymbol {b}_{k})\leq \boldsymbol {x} \bigr)=\prod^{d}_{p=1}\int _{\mathbb{R}} \exp\bigl(-\mathrm{e}^{-x(p)-r+\sqrt{2r}z}\bigr)\,\mathrm{d} \Phi(z) \quad \textit{a.s.} $$
(1.8)

for \(\boldsymbol {x}=(x(1), x(2), \ldots, x(d))\in\mathbb{R}^{d}\), where \(\Phi(z)\) denotes the distribution function of a standard normal random variable.
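As a sanity check of (1.8), again outside the formal development, the following Python sketch runs a Monte Carlo experiment for \(d=2\) with independent standard Gaussian components (so \(r=0\)) and the weights of (1.7); in this case the right-hand side of (1.8) factorizes into a product of Gumbel distribution functions. All numerical choices are illustrative, and the almost sure convergence is slow, so only rough agreement should be expected.

```python
# Monte Carlo sketch of (1.8): d = 2, independent components, r = 0.
import numpy as np

rng = np.random.default_rng(1)            # illustrative seed
alpha, N = 0.3, 20000                     # illustrative choices
x = np.array([0.5, 1.0])                  # x = (x(1), x(2))

Z = rng.standard_normal((N, 2))
M = np.maximum.accumulate(Z, axis=0)      # componentwise running maxima

k = np.arange(2, N + 1)                   # the degenerate k = 1 term is omitted
ln_k = np.log(k)
a_k = np.sqrt(2.0 * ln_k)
b_k = a_k - (np.log(ln_k) + np.log(4.0 * np.pi)) / (2.0 * a_k)
d_k = np.exp(ln_k ** alpha) / k
D_N = 1.0 + np.sum(d_k)                   # include d_1 = 1

indic = np.all(a_k[:, None] * (M[1:] - b_k[:, None]) <= x, axis=1)
weighted = np.sum(d_k * indic) / D_N      # left-hand side of (1.8)

target = np.prod(np.exp(-np.exp(-x)))     # product of Gumbel CDFs (r = 0)
print(weighted, target)
```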

In the terminology of summation procedures, we have the following corollary.

Corollary 1

Equation (1.8) remains valid if we replace the weight sequence \(\{d_{k}:k\geq1\}\) by \(\{d_{k}^{\ast}:k\geq1\}\) such that \(0\leq d_{k}^{\ast}\leq d_{k}\), \(\sum_{k=1}^{\infty}d_{k}^{\ast}=\infty\).

Remark 1

Our results substantially enlarge the class of admissible weight sequences in Theorem B.

Remark 2

We extend Theorem B to stationary Gaussian vector sequences under mild regularity conditions.

Remark 3

If \(d=1\), so that \(\{\boldsymbol {Z}_{n}:n\geq1\}\) is a standardized stationary Gaussian sequence, and \(\alpha=0\), then (1.8) reduces to (1.3). Thus Theorem B is a special case of Theorem 1.

Remark 4

The problem of whether Theorem 1 also holds for some \(1/2\leq\alpha<1\) remains open.

2 Auxiliary lemmas

In this section, we present and prove some lemmas which are useful in our proof of the main result.

Lemma 1

Let \(\{\boldsymbol {Z}_{n}:n\geq1\}\) and \(\{\boldsymbol {Z}^{\prime}_{n}:n\geq1\}\) be two independent d-dimensional standardized stationary Gaussian vector sequences with

$$r_{ij}^{0}(p)=\operatorname{Cov}\bigl(Z_{i}(p),Z_{j}(p) \bigr),\qquad r_{ij}^{0}(p,q)=\operatorname{Cov} \bigl(Z_{i}(p),Z_{j}(q)\bigr), $$

and

$$r_{ij}^{\prime}(p)=\operatorname{Cov}\bigl(Z^{\prime}_{i}(p),Z^{\prime}_{j}(p) \bigr),\qquad r_{ij}^{\prime}(p,q)=\operatorname{Cov} \bigl(Z^{\prime}_{i}(p),Z^{\prime}_{j}(q)\bigr). $$

Write

$$\beta_{ij}(p)=\max \bigl(\bigl|r_{ij}^{0}(p)\bigr|,\bigl|r_{ij}^{\prime}(p)\bigr| \bigr),\qquad \beta_{ij}(p,q)=\max \bigl(\bigl|r_{ij}^{0}(p,q)\bigr|,\bigl|r_{ij}^{\prime}(p,q)\bigr| \bigr). $$

Assume that (1.6) holds. Let \(\boldsymbol {u}_{i} =(u_{i}(1),u_{i}(2),\ldots,u_{i}(d))\) for \(i\geq1\) be real vectors such that \(n(1-\Phi(u_{n}(p)))\) is bounded for each \(p=1, 2, \ldots, d\), where Φ is the standard normal distribution function. If

$$\mathop{\max_{1\leq i< j\leq n}}_{1\leq p\leq d} \beta_{ij}(p)< 1 \quad\textit{and}\quad \mathop{\max_{1\leq i< j\leq n}}_{1\leq p\neq q\leq d} \beta_{ij}(p,q)< 1, $$

then

$$\begin{aligned} &\bigl|\mathbb{P}(\boldsymbol {Z}_{j}\leq \boldsymbol {u}_{j},\forall j=1,2,\ldots,n)-\mathbb{P}\bigl( \boldsymbol {Z}^{\prime}_{j}\leq\boldsymbol {u}_{j},\forall j=1,2, \ldots,n\bigr)\bigr| \\ &\quad\leq K_{1}\sum_{p=1}^{d} \sum_{1\leq i< j\leq n}\bigl|r_{ij}^{0}(p)-r_{ij}^{\prime}(p) \bigr|\exp \biggl(-\frac{u_{i}^{2}(p)+u^{2}_{j}(p)}{2(1+\beta_{ij}(p))} \biggr) \\ &\qquad{}+K_{2}\sum_{1\leq p\neq q\leq d}\sum _{1\leq i< j\leq n}\bigl|r_{ij}^{0}(p,q)-r_{ij}^{\prime}(p,q) \bigr|\exp \biggl(-\frac{u_{i}^{2}(p)+u^{2}_{j}(q)}{2(1+\beta_{ij}(p,q))} \biggr), \end{aligned}$$

where \(K_{1}\), \(K_{2}\) are absolute constants.

Proof

The result is Lemma 3.1 of [15]. □

Lemma 2

Let \(\{\boldsymbol {Z}_{n}:n\geq1\}\) be a standardized stationary Gaussian vector sequence such that condition (1.6) holds, and suppose further that \(n(1-\Phi(u_{n}(p)))\) is bounded for \(p=1, 2, \ldots, d\) and that \(\max_{p\neq q} (\sup_{n\geq0}|r_{n}(p,q)| )<1\). Then, for some \(\varepsilon>0\),

$$ \sup_{1\leq k\leq n}k\sum_{p=1}^{d} \sum_{j=1}^{n}\bigl|r_{j}(p)- \rho_{n} \bigr|\exp \biggl(-\frac{u^{2}_{n}(p)}{2(1+\omega_{j})} \biggr)\ll(\ln D_{n})^{-(1+\varepsilon)}, $$
(2.1)

and

$$ \sup_{1\leq k\leq n}k\sum_{1\leq p\neq q\leq d} \sum_{j=1}^{n}\bigl|r_{j}(p,q)- \rho_{n} \bigr|\exp \biggl(-\frac{u_{k}^{2}(p)+u^{2}_{n}(q)}{2(1+\omega_{j}^{\prime })} \biggr)\ll(\ln D_{n})^{-(1+\varepsilon)}, $$
(2.2)

where \(\omega_{j}=\max\{|r_{j}(p)|,\rho_{n}\}\) and \(\omega_{j}^{\prime}=\max\{|r_{j}(p,q)|,\rho_{n}\}\).

Proof

Using (1.6) and Lemma 2.1 in [17], we get the desired result. □

Lemma 3

Let \(\{\tilde{\boldsymbol {Z}}_{n}:n\geq1\}\) be a standardized stationary Gaussian vector sequence with constant covariances \(\rho_{n}(p)=r/\ln n\) for \(p=1, 2, \ldots, d\), and let \(\{\boldsymbol {Z}_{n}:n\geq1\}\) satisfy the conditions of Theorem 1. Denote \(\tilde{\boldsymbol {M}}_{n}=\max_{i\leq n}\tilde{\boldsymbol {Z}}_{i}\) and \(\boldsymbol {M}_{n}=\max_{i\leq n}\boldsymbol {Z}_{i}\). Assume that \(n(1 -\Phi (u_{n}(p)))\) is bounded for \(p=1, 2, \ldots, d\) and that (1.6) is satisfied. Then

$$ \bigl\vert \mathbb{E} \bigl(I(\boldsymbol {M}_{n}\leq \boldsymbol {u}_{n}) -I(\tilde{\boldsymbol {M}}_{n}\leq \boldsymbol {u}_{n}) \bigr)\bigr\vert \ll(\ln D_{n})^{-(1+\varepsilon)} \quad \textit{for some } \varepsilon>0. $$
(2.3)

Proof

The claim follows directly from Lemmas 1 and 2. □

Lemma 4

Let \(\{\boldsymbol {Z}_{n}:n\geq1\}\) be a standardized stationary Gaussian vector sequence with covariances satisfying (1.6). Suppose that the assumptions of Lemma  1 hold. Then

$$ \lim_{n\rightarrow\infty} \mathbb{P} \bigl( \boldsymbol {a}_{n} (\boldsymbol {M}_{n}-\boldsymbol {b}_{n})\leq \boldsymbol {x} \bigr)=\prod^{d}_{p=1}\int _{\mathbb{R}}\exp\bigl(-\mathrm{e}^{-x(p)-r+\sqrt{2r}z}\bigr)\,\mathrm{d} \Phi(z), $$
(2.4)

where \(\boldsymbol {x}=(x(1), x(2), \ldots, x(d))\in\mathbb{R}^{d}\).

Proof

For \(p=1, 2, \ldots, d\), let \(\{Z_{1}^{\prime}(p), Z_{2}^{\prime}(p), \ldots, Z_{n}^{\prime}(p)\}\) have the same distribution as \(\{Z_{1}(p), Z_{2}(p), \ldots, Z_{n}(p)\}\), but let \(\{Z_{1}^{\prime}(p), Z_{2}^{\prime}(p), \ldots, Z_{n}^{\prime}(p)\}\) be independent of \(\{Z_{1}^{\prime}(q), Z_{2}^{\prime}(q), \ldots, Z_{n}^{\prime}(q)\}\) for \(p\neq q\). Further, let \(M_{n}(p)=\max_{1\leq i\leq n}Z_{i}(p)\) and \(M^{\prime}_{n}(p)=\max_{1\leq i\leq n}Z^{\prime}_{i}(p)\). By Lemma 1, we have

$$\begin{aligned} & \Biggl|\mathbb{P} \bigl(\boldsymbol {a}_{n}(\boldsymbol {M}_{n}-\boldsymbol {b}_{n})\leq \boldsymbol {x} \bigr)-\prod^{d}_{p=1} \mathbb{P} \bigl(a_{n}\bigl(M_{n}(p)-b_{n}\bigr)\leq x(p) \bigr) \Biggr| \\ &\quad= \bigl|\mathbb{P} \bigl(a_{n}\bigl(M_{n}(p)-b_{n}\bigr)\leq x(p), p=1, 2, \ldots, d \bigr) \\ &\qquad{}-\mathbb{P} \bigl(a_{n}\bigl(M^{\prime}_{n}(p)-b_{n}\bigr)\leq x(p), p=1, 2, \ldots, d \bigr) \bigr| \\ &\quad\leq K_{1}\sum_{p=1}^{d} \sum_{1\leq i< j\leq n}\bigl|r_{ij}^{0}(p)-r_{ij}^{\prime}(p)\bigr|\exp \biggl(-\frac{u_{i}^{2}(p)+u^{2}_{j}(p)}{2(1+\beta_{ij}(p))} \biggr) \\ &\qquad{}+K_{2}\sum_{1\leq p\neq q\leq d}\sum_{1\leq i< j\leq n}\bigl|r_{ij}^{0}(p,q)-r_{ij}^{\prime}(p,q)\bigr|\exp \biggl(-\frac{u_{i}^{2}(p)+u^{2}_{j}(q)}{2(1+\beta_{ij}(p,q))} \biggr) \\ &\quad=:A_{1}+A_{2}, \end{aligned}$$
(2.5)

where \(u_{n}(p)=x(p)/a_{n}+b_{n}\).

Since \(\{Z_{1}^{\prime}(p), Z_{2}^{\prime}(p), \ldots, Z_{n}^{\prime}(p)\}\) has the same distribution as \(\{Z_{1}(p), Z_{2}(p), \ldots, Z_{n}(p)\}\), we have \(r_{ij}^{0}(p)=r_{ij}^{\prime}(p)\). Therefore \(A_{1}=0\).

Notice that \(\{Z_{1}^{\prime}(p), Z_{2}^{\prime}(p), \ldots, Z_{n}^{\prime}(p)\}\) is independent of \(\{Z_{1}^{\prime}(q), Z_{2}^{\prime}(q), \ldots, Z_{n}^{\prime}(q)\}\) for \(p\neq q\); thus \(r_{ij}^{\prime}(p,q)=0\). Using Lemma 3.2 in [15], we have

$$\begin{aligned} A_{2}\ll \sum_{1\leq p\neq q\leq d}\sum_{1\leq i< j\leq n}\bigl|r_{ij}^{0}(p,q)\bigr|\exp \biggl(-\frac{u_{i}^{2}(p)+u^{2}_{j}(q)}{2(1+\beta_{ij}(p,q))} \biggr) \ll(\ln D_{n})^{-(1+\varepsilon)}\rightarrow0. \end{aligned}$$

By (2.5), we get

$$ \lim_{n\rightarrow\infty} \mathbb{P} \bigl( \boldsymbol {a}_{n}(\boldsymbol {M}_{n}-\boldsymbol {b}_{n}) \leq \boldsymbol {x} \bigr)=\lim_{n\rightarrow\infty} \prod^{d}_{p=1} \mathbb{P} \bigl(a_{n}\bigl(M_{n}(p)-b_{n}\bigr)\leq x(p) \bigr). $$
(2.6)

From Theorem 6.5.1 of [17], we obtain

$$ \lim_{n\rightarrow\infty}\prod^{d}_{p=1} \mathbb{P} \bigl(a_{n}\bigl(M_{n}(p)-b_{n}\bigr)\leq x(p) \bigr)=\prod^{d}_{p=1}\int _{\mathbb{R}}\exp\bigl(-\mathrm{e}^{-x(p)-r+\sqrt {2r}z}\bigr)\,\mathrm{d} \Phi(z). $$
(2.7)

Combining (2.6) and (2.7) completes the proof. □

Lemma 5

Let \(\zeta_{1}, \zeta_{2}, \ldots\) be a sequence of bounded random variables. If

$$ \operatorname{Var} \Biggl(\sum^{n}_{k=1}d_{k} \zeta_{k} \Biggr)=O \biggl( \frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}} \biggr) \quad \textit{for some } \varepsilon>0, $$
(2.8)

then

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum _{k=1} ^{n}d_{k}(\zeta_{k}- \mathbb{E}\zeta_{k})=0 \quad\textit{a.s.} $$
(2.9)

Proof

See Lemma 2.2 of Wu and Chen [18]. □
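As a toy illustration of Lemma 5, with hypothetical data unrelated to the maxima studied here, take \(\zeta_{k}\) i.i.d. Bernoulli(1/2): then \(\operatorname{Var}(\sum^{n}_{k=1}d_{k}\zeta_{k})=\frac{1}{4}\sum^{n}_{k=1}d_{k}^{2}\) stays bounded (compare (3.6) below), so (2.8) holds trivially, and the weighted averages in (2.9) should drift to zero.

```python
# Toy check of Lemma 5 with zeta_k i.i.d. Bernoulli(1/2) (bounded).
import numpy as np

rng = np.random.default_rng(2)     # illustrative seed
alpha, n = 0.3, 200000             # illustrative choices

k = np.arange(1, n + 1)
d = np.exp(np.log(k) ** alpha) / k
D = np.cumsum(d)

zeta = rng.integers(0, 2, size=n)        # zeta_k in {0, 1}
avg = np.cumsum(d * (zeta - 0.5)) / D    # (1/D_n) sum d_k (zeta_k - E zeta_k)

print(avg[999], avg[9999], avg[-1])      # should shrink toward 0
```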

3 Proof of the main result

Proof of Theorem 1

Let \(u_{n}(p)=x(p)/a_{n}+b_{n}\), so that \(n(1-\Phi(u_{n}(p)))\rightarrow\tau_{p}\) for \(x(p)\in\mathbb{R}\), with \(0\leq\tau_{p}<\infty\) and \(p=1,2,\ldots,d\). By Lemma 4 and the Toeplitz lemma, (1.8) is equivalent to

$$ \lim_{n\rightarrow\infty}\frac{1}{D_{n}}\sum _{k=1} ^{n}d_{k} \bigl(I( \boldsymbol {M}_{k}\leq \boldsymbol {u}_{k})-\mathbb{P}( \boldsymbol {M}_{k}\leq \boldsymbol {u}_{k}) \bigr)=0 \quad \mbox{a.s.} $$
(3.1)

By Lemma 5, applied to \(\zeta_{k}=I(\boldsymbol {M}_{k}\leq \boldsymbol {u}_{k})\), in order to prove (3.1) it suffices to show that

$$ \operatorname{Var} \Biggl(\sum^{n}_{k=1}d_{k}I( \boldsymbol {M}_{k}\leq \boldsymbol {u}_{k}) \Biggr)= O \biggl( \frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}} \biggr) \quad \mbox{for some } \varepsilon>0. $$
(3.2)

Let \(\boldsymbol {\lambda}, \boldsymbol {\lambda}_{1}, \boldsymbol {\lambda}_{2}, \ldots\) be independent d-dimensional standardized Gaussian vectors, independent also of \(\{\boldsymbol {Z}_{k}:k\geq1\}\). Obviously, for each fixed k, the sequence \((1-\rho_{k})^{1/2}\boldsymbol {\lambda}_{1} + \rho_{k}^{1/2}\boldsymbol {\lambda}, (1-\rho_{k})^{1/2}\boldsymbol {\lambda}_{2} + \rho_{k}^{1/2}\boldsymbol {\lambda}, \ldots\) has constant covariance \(\rho_{k}=\frac{r}{\ln k}\). Define

$$\begin{aligned} \boldsymbol {M}_{k}(\rho_{k}) =&\max_{1\leq i\leq k} \bigl((1-\rho_{k})^{1/2}\boldsymbol {\lambda}_{i} + \rho_{k}^{1/2}\boldsymbol {\lambda} \bigr) \\ =&(1-\rho_{k})^{1/2}\max(\boldsymbol {\lambda}_{1}, \boldsymbol {\lambda}_{2}, \ldots, \boldsymbol {\lambda}_{k})+ \rho_{k}^{1/2}\boldsymbol {\lambda} \\ =:&(1-\rho_{k})^{1/2}\boldsymbol {M}_{k}(0)+ \rho_{k}^{1/2}\boldsymbol {\lambda}. \end{aligned}$$

Obviously, \(\{\boldsymbol {M}_{k}(\rho_{k}):k\geq1\}\) and \(\{\boldsymbol {M}_{k}:k\geq1\}\) are independent.

Using the well-known \(c_{r}\)-inequality (here in the form \(\operatorname{Var}(X+Y)\leq2\operatorname{Var}X+2\operatorname{Var}Y\)), the left-hand side of (3.2) can be bounded as

$$\begin{aligned} &\operatorname{Var} \Biggl(\sum^{n}_{k=1}d_{k}I( \boldsymbol {M}_{k}\leq \boldsymbol {u}_{k})-\sum ^{n}_{k=1}d_{k}I\bigl(\boldsymbol {M}_{k}( \rho_{k})\leq \boldsymbol {u}_{k}\bigr)+\sum ^{n}_{k=1}d_{k}I\bigl(\boldsymbol {M}_{k}( \rho_{k})\leq \boldsymbol {u}_{k}\bigr) \Biggr) \\ &\quad\ll\operatorname{Var} \Biggl(\sum^{n}_{k=1}d_{k}I \bigl(\boldsymbol {M}_{k}(\rho_{k})\leq \boldsymbol {u}_{k} \bigr) \Biggr) \\ &\qquad{}+\operatorname{Var} \Biggl(\sum^{n}_{k=1}d_{k}I( \boldsymbol {M}_{k}\leq \boldsymbol {u}_{k})-\sum ^{n}_{k=1}d_{k}I\bigl(\boldsymbol {M}_{k}( \rho_{k})\leq \boldsymbol {u}_{k}\bigr) \Biggr) \\ &\quad=:L_{1}+L_{2}. \end{aligned}$$
(3.3)

We will show \(L_{i}\ll \frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}\), \(i=1,2\). Let \(\boldsymbol {z}=(z(1), z(2), \ldots, z(d))\) be a real vector. Clearly,

$$\begin{aligned} L_{1} =&\mathbb{E} \Biggl(\sum^{n}_{k=1}d_{k} \bigl(I\bigl(\boldsymbol {M}_{k}(\rho_{k})\leq \boldsymbol {u}_{k}\bigr)-\mathbb{P}\bigl(\boldsymbol {M}_{k}(\rho_{k})\leq \boldsymbol {u}_{k}\bigr) \bigr) \Biggr)^{2} \\ =&\mathbb{E} \Biggl(\sum^{n}_{k=1}d_{k} \bigl(I\bigl(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}\bigl(\boldsymbol {u}_{k}-\rho_{k}^{1/2}\boldsymbol {\lambda}\bigr) \bigr) \\ &{}-\mathbb{P}\bigl(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}\bigl(\boldsymbol {u}_{k}-\rho_{k}^{1/2}\boldsymbol {\lambda}\bigr) \bigr) \bigr) \Biggr)^{2} \\ =&\int_{\mathbb{R}^{d}}\mathbb{E} \Biggl(\sum^{n}_{k=1}d_{k} \bigl(I\bigl(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}\bigl(\boldsymbol {u}_{k}-\rho_{k}^{1/2}\boldsymbol {z}\bigr) \bigr) \\ &{}-\mathbb{P}\bigl(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}\bigl(\boldsymbol {u}_{k}-\rho_{k}^{1/2}\boldsymbol {z}\bigr) \bigr) \bigr) \Biggr)^{2}\,\mathrm{d}\Phi(\boldsymbol {z}) \\ =&\int_{\mathbb{R}^{d}}\mathbb{E} \Biggl(\sum^{n}_{k=1}d_{k}\eta_{k} \Biggr)^{2}\,\mathrm{d}\Phi(\boldsymbol {z}), \end{aligned}$$
(3.4)

where \(\eta_{k}=I(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}(\boldsymbol {u}_{k} -\rho_{k}^{1/2}\boldsymbol {z})) -\mathbb{P}(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}(\boldsymbol {u}_{k}-\rho _{k}^{1/2}\boldsymbol {z}))\).

Write

$$\begin{aligned} \mathbb{E} \Biggl(\sum^{n}_{k=1}d_{k} \eta_{k} \Biggr)^{2}\leq \sum^{n}_{k=1}d_{k}^{2} \mathbb{E}|\eta_{k}|^{2}+2\sum_{1\leq k< l\leq n}d_{k}d_{l}\bigl| \mathbb{E}(\eta_{k}\eta_{l})\bigr| =:H_{1}+H_{2}. \end{aligned}$$
(3.5)

Note that \(|\eta_{k}|\leq1\), so \(\mathbb{E}|\eta_{k}|^{2}\leq1\). Moreover, from the representation \(\exp(\ln^{\alpha}x)=\exp (\int^{x}_{1}\frac{\alpha(\ln u)^{\alpha-1}}{u}\,\mathrm{d}u )\), in which \(\alpha(\ln u)^{\alpha-1}\rightarrow0\) as \(u\rightarrow\infty\), it follows that \(\exp(\ln^{\alpha}x)\) (\(\alpha<1/2\)) is a slowly varying function at infinity. Hence,

$$ H_{1 }\leq\sum^{n}_{k=1}d_{k}^{2}= \sum^{n}_{k=1}\frac{\exp(2\ln^{\alpha} k)}{k^{2}}\leq\sum ^{\infty}_{k=1}\frac{\exp(2\ln^{\alpha} k)}{k^{2}}< \infty. $$
(3.6)
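A quick numerical check of this convergence, with the illustrative value \(\alpha=0.4\):

```python
# Partial sums of sum_k exp(2 ln^alpha k)/k^2, which should stabilize.
import numpy as np

alpha = 0.4
k = np.arange(1, 10**6 + 1, dtype=float)
partial = np.cumsum(np.exp(2.0 * np.log(k) ** alpha) / k**2)

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, partial[n - 1])
```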

Using the inequality \(x^{n-i}-x^{n}\leq\frac{i}{n}\) for \(0< x<1\), \(i\leq n\) (the maximum of \(x^{n-i}-x^{n}\) over \(0<x<1\) is attained at \(x_{*}=((n-i)/n)^{1/i}\) and equals \(x_{*}^{n-i}\cdot\frac{i}{n}\leq\frac{i}{n}\)), and writing \(\boldsymbol {M}_{k,l}(0)=\max_{k<m\leq l}\boldsymbol {\lambda}_{m}\) (and, below, \(\boldsymbol {M}_{k,l}=\max_{k<m\leq l}\boldsymbol {Z}_{m}\)) for the partial maxima over the indices \(k+1,\ldots,l\), we get

$$\begin{aligned} \bigl|\mathbb{E}(\eta_{k}\eta_{l})\bigr| \leq&\bigl|\operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{k}(0)\leq(1-\rho_{k})^{-1/2}\bigl(\boldsymbol {u}_{k}-\rho_{k}^{1/2}\boldsymbol {z}\bigr)\bigr), \\ &{}I\bigl(\boldsymbol {M}_{l}(0)\leq(1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr)-I\bigl(\boldsymbol {M}_{k,l}(0)\leq(1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr) \bigr)\bigr| \\ \ll&\mathbb{E}\bigl|I\bigl(\boldsymbol {M}_{l}(0)\leq(1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr) \\ &{}-I\bigl(\boldsymbol {M}_{k,l}(0)\leq(1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr)\bigr| \\ =&\mathbb{P}\bigl(\boldsymbol {M}_{k,l}(0)\leq(1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr)-\mathbb{P}\bigl(\boldsymbol {M}_{l}(0)\leq(1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr) \\ =&\Phi^{l-k}\bigl((1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr)-\Phi^{l}\bigl((1-\rho_{l})^{-1/2}\bigl(\boldsymbol {u}_{l}-\rho_{l}^{1/2}\boldsymbol {z}\bigr)\bigr) \\ \leq&\frac{k}{l}. \end{aligned}$$

So, we have

$$\begin{aligned} H_{2}\leq \sum_{1\leq k< l\leq n}d_{k}d_{l} \frac{k}{l} =\mathop{\sum_{1\leq k< l\leq n}}_{\frac{l}{k}\geq \ln^{2}D_{n}}d_{k}d_{l} \frac{k}{l}+\mathop{\sum_{1\leq k< l\leq n}}_{\frac{l}{k}< \ln^{2}D_{n}}d_{k}d_{l} \frac{k}{l} =:T_{1}+T_{2}. \end{aligned}$$
(3.7)

For \(T_{1}\), we have

$$ T_{1}\leq\sum_{1\leq k< l\leq n} \frac{d_{k}d_{l}}{\ln^{2}D_{n}}\leq \frac{D_{n}^{2}}{\ln^{2}D_{n}}. $$
(3.8)

According to Wu and Chen [18], for \(\alpha>0\) and sufficiently large n, we have

$$ D_{n}\sim\frac{1}{\alpha} \bigl(\ln^{1-\alpha}n \exp\bigl(\ln^{\alpha}n\bigr) \bigr),\qquad \ln D_{n}\sim \ln^{\alpha}n, \qquad\exp\bigl(\ln^{\alpha}n\bigr)\sim\frac{\alpha D_{n}}{(\ln D_{n})^{\frac{1-\alpha}{\alpha}}}. $$
(3.9)
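These asymptotics can be probed numerically, although the lower-order factors hidden in ∼ decay very slowly, so the ratio below approaches 1 only for extremely large n; at the printed values it is still visibly below 1, and only the monotone trend should be read off. An illustrative Python sketch with \(\alpha=0.3\):

```python
# Compare D_n with (1/alpha) ln^{1-alpha}(n) exp(ln^alpha n) from (3.9).
import numpy as np

alpha = 0.3
k = np.arange(1, 10**6 + 1, dtype=float)
D = np.cumsum(np.exp(np.log(k) ** alpha) / k)

for n in [10**2, 10**4, 10**6]:
    approx = np.log(n) ** (1 - alpha) * np.exp(np.log(n) ** alpha) / alpha
    print(n, D[n - 1] / approx)   # slowly drifts toward 1
```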

Since \(\alpha<1/2\) implies \((1-\alpha)/\alpha>1\), we may choose \(0<\varepsilon<(1-\alpha)/\alpha-1\); then, for sufficiently large n, we get

$$\begin{aligned} T_{2} \leq& \sum^{n}_{k=1}d_{k} \sum_{l=k}^{k\ln^{2}D_{n}}\frac{\exp(\ln^{\alpha}l)}{l} \\ \leq&\exp\bigl(\ln^{\alpha}n\bigr)\sum^{n}_{k=1}d_{k} \sum_{l=k}^{k\ln^{2}D_{n}}\frac{1}{l} \ll\exp\bigl(\ln^{\alpha}n\bigr)D_{n}\ln\ln D_{n} \\ \ll&\frac{D_{n}^{2}\ln\ln D_{n}}{(\ln D_{n})^{\frac{1-\alpha}{\alpha}}} \ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(3.10)

Combining (3.7)-(3.10), we get

$$ H_{2}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. $$
(3.11)

By (3.5), (3.6) and (3.11), we have

$$ L_{1}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. $$
(3.12)

Clearly,

$$\begin{aligned} L_{2} =&\operatorname{Var} \Biggl(\sum ^{n}_{k=1}d_{k} \bigl(I\bigl( \boldsymbol {M}_{k}(\rho_{k})\leq \boldsymbol {u}_{k} \bigr)-I(\boldsymbol {M}_{k}\leq \boldsymbol {u}_{k}) \bigr) \Biggr) \\ \leq&\sum^{n}_{k=1}d^{2}_{k} \operatorname{Var} \bigl(I(\boldsymbol {M}_{k}\leq \boldsymbol {u}_{k})-I \bigl(\boldsymbol {M}_{k}(\rho_{k})\leq \boldsymbol {u}_{k} \bigr) \bigr) \\ &{}+2 \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \operatorname{Cov} \bigl(I(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})-I \bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i} \bigr),I(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j})-I\bigl( \boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \\ =:&J_{1}+2J_{2}. \end{aligned}$$
(3.13)

Similarly to (3.6), we find that \(J_{1}\leq\sum^{\infty}_{k=1}d_{k}^{2}<\infty\). Note that

$$\begin{aligned} J_{2} \leq& \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \operatorname{Cov} \bigl(I(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})-I \bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i} \bigr),I(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j})-I\bigl( \boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \\ &{}-\bigl(I(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})-I\bigl( \boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \bigr) \biggr| \\ &{}+ \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \operatorname{Cov} \bigl(I(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})-I \bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i} \bigr), \bigl(I(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})-I\bigl( \boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \bigr) \biggr| \\ =:&J_{21}+J_{22}. \end{aligned}$$
(3.14)

For \(J_{21}\), we have

$$\begin{aligned} J_{21} \leq&\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \bigl|\operatorname{Cov} \bigl(I(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})-I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j})-I(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j}) \bigr) \bigr| \\ &{}+ \bigl|\operatorname{Cov} \bigl(I(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})-I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{j}( \rho_{j})\leq \boldsymbol {u}_{j}\bigr)-I\bigl( \boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \bigr| \bigr\} \\ \leq&2\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \mathbb{E} \bigl|I(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j})-I( \boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j}) \bigr| \\ &{}+\mathbb{E} \bigl|I\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr)-I\bigl(\boldsymbol {M}_{i,j}( \rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \bigr\} \\ =&2\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \bigl(\mathbb {P}(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})- \mathbb{P}(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j}) \bigr) \\ &{}+ \bigl(\mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho _{j})\leq \boldsymbol {u}_{j}\bigr)-\mathbb{P}\bigl(\boldsymbol {M}_{j}( \rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \bigr\} \\ \leq&2\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \bigl|\mathbb {P}(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})- \mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \\ &{}+ \bigl|\mathbb{P}(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j})- \mathbb{P}\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| +2 \bigl|\mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr)-\mathbb{P}\bigl(\boldsymbol {M}_{j}( \rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \bigr\} . \end{aligned}$$
(3.15)

By Lemma 3 and (3.9), for \(\alpha>0\), we have

$$\begin{aligned} &2\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \bigl|\mathbb{P}(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})- \mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \\ &\qquad{}+ \bigl|\mathbb{P}(\boldsymbol {M}_{j}\leq \boldsymbol {u}_{j})- \mathbb{P}\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \bigr\} \\ &\quad\ll\sum_{1\leq i< j\leq n}d_{i}d_{j}( \ln D_{j})^{-(1+\varepsilon)}=\sum_{j=1}^{n} \frac{\exp(\ln^{\alpha}j)}{j(\ln D_{j})^{1+\varepsilon}}\sum_{i=1}^{j}d_{i} \\ &\quad=\sum_{j=1}^{n}\frac{\exp(\ln^{\alpha}j)}{j(\ln D_{j} )^{1+\varepsilon}}D_{j} \ll\sum_{j=1}^{n}\frac{\exp(2\ln^{\alpha}j)(\ln j)^{1-\alpha}}{j(\ln j)^{(1+\varepsilon)\alpha}} \\ &\quad\sim\int^{n}_{e}\frac{\exp(2\ln^{\alpha}x)(\ln x)^{1-\alpha}}{(\ln x)^{\alpha+\alpha\varepsilon}}\, \mathrm{d}\ln x \\ &\quad=\int^{\ln n}_{1}\exp\bigl(2y^{\alpha} \bigr)y^{1-2\alpha-\alpha\varepsilon}\,\mathrm{d}y \\ &\quad\sim\int^{\ln n}_{1}\exp\bigl(2y^{\alpha} \bigr)y^{1-2\alpha-\alpha\varepsilon }+\frac{2-3\alpha-\alpha\varepsilon}{2\alpha}\exp\bigl(2y^{\alpha } \bigr)y^{1-3\alpha-\alpha\varepsilon}\,\mathrm{d}y \\ &\quad=\frac{1}{2\alpha}\exp\bigl(2y^{\alpha}\bigr)y^{2-3\alpha-\alpha\varepsilon} \Big|^{\ln n}_{1} \\ &\quad\ll\exp\bigl(2\ln^{\alpha}n\bigr) (\ln n)^{2-3\alpha-\alpha\varepsilon}\ll \frac{D_{n}^{2}}{(\ln D_{n})^{\frac{\alpha+\alpha\varepsilon}{\alpha}}} \\ &\quad\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(3.16)

By (3.7)-(3.11), we obtain

$$\begin{aligned} &\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl| \mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr)-\mathbb{P}\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \\ &\quad=\sum_{1\leq i< j\leq n}d_{i}d_{j} \biggl(\int_{\mathbb{R}^{d}} \bigl(\mathbb{P}\bigl(\boldsymbol {M}_{i,j}(0)\leq(1-\rho_{j})^{-1/2}\bigl(\boldsymbol {u}_{j}-\rho_{j}^{1/2}\boldsymbol {z}\bigr)\bigr) \\ &\qquad{}-\mathbb{P}\bigl(\boldsymbol {M}_{j}(0)\leq(1-\rho_{j})^{-1/2}\bigl(\boldsymbol {u}_{j}-\rho_{j}^{1/2}\boldsymbol {z}\bigr)\bigr) \bigr)\,\mathrm{d}\Phi(\boldsymbol {z}) \biggr) \\ &\quad=\sum_{1\leq i< j\leq n}d_{i}d_{j} \biggl(\int_{\mathbb{R}^{d}} \bigl(\Phi^{j-i} \bigl((1-\rho_{j})^{-1/2}\bigl(\boldsymbol {u}_{j}-\rho_{j}^{1/2}\boldsymbol {z}\bigr) \bigr) \\ &\qquad{}-\Phi^{j} \bigl((1-\rho_{j})^{-1/2}\bigl(\boldsymbol {u}_{j}-\rho_{j}^{1/2}\boldsymbol {z}\bigr) \bigr) \bigr)\,\mathrm{d}\Phi(\boldsymbol {z}) \biggr) \\ &\quad\leq\sum_{1\leq i< j\leq n}d_{i}d_{j} \biggl(\int_{\mathbb{R}^{d}}\frac{i}{j}\,\mathrm{d}\Phi(\boldsymbol {z}) \biggr) =\sum_{1\leq i< j\leq n}d_{i}d_{j}\cdot\frac{i}{j} \\ &\quad\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(3.17)

By (3.15)-(3.17), we have

$$ J_{21}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. $$
(3.18)

For \(J_{22}\), note that \(\{\boldsymbol {M}_{i}:i\geq1\}\) and \(\{\boldsymbol {M}_{i}(\rho_{i}):i\geq1\}\) are independent, so the cross-covariances vanish. Expanding the two remaining covariances, and using \(|\mathbb{P}(A)\mathbb{P}(B)-\mathbb{P}(A^{\prime})\mathbb{P}(B^{\prime})|\leq|\mathbb{P}(A)-\mathbb{P}(A^{\prime})|+|\mathbb{P}(B)-\mathbb{P}(B^{\prime})|\) together with Lemma 3 and (3.16), we get

$$\begin{aligned} J_{22} =& \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \operatorname{Cov} \bigl(I(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i}),I(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j}) \bigr) \\ &{}+\operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \bigr\} \biggr| \\ =& \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl(\mathbb{P}(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i}, \boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})+\mathbb{P}\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}, \boldsymbol {M}_{i,j}(\rho_{j})\leq\boldsymbol {u}_{j}\bigr) \\ &{}-\mathbb{P}(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})\mathbb{P}(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})-\mathbb{P}\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr)\mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \\ \leq& \sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \bigl|\mathbb{P}(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i}, \boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})-\mathbb{P}\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}, \boldsymbol {M}_{i,j}(\rho_{j})\leq\boldsymbol {u}_{j}\bigr) \bigr| \\ &{}+ \bigl|\mathbb{P}(\boldsymbol {M}_{i}\leq \boldsymbol {u}_{i})-\mathbb{P}\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr) \bigr|+ \bigl|\mathbb{P}(\boldsymbol {M}_{i,j}\leq \boldsymbol {u}_{j})-\mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq\boldsymbol {u}_{j}\bigr) \bigr| \bigr\} \\ &{}+2 \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j}\operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \\ \ll&\sum_{1\leq i< j\leq n}d_{i}d_{j}(\ln D_{j})^{-(1+\varepsilon)} + \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j}\operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \\ \ll&\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}+ \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j}\operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr|. \end{aligned}$$
(3.19)

By (3.17), we have

$$\begin{aligned} & \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \\ &\quad= \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl\{ \operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr)-I\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \\ &\qquad{}+\operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \bigr\} \biggr| \\ &\quad\ll\sum_{1\leq i< j\leq n}d_{i}d_{j} \mathbb{E} \bigl|I\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr)-I\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr| \\ &\qquad{}+ \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \\ &\quad\ll\sum_{1\leq i< j\leq n}d_{i}d_{j} \bigl(\mathbb{P}\bigl(\boldsymbol {M}_{i,j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr)-\mathbb{P}\bigl(\boldsymbol {M}_{j}(\rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \\ &\qquad{}+\operatorname{Var} \Biggl(\sum_{i=1}^{n}d_{i}I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr) \Biggr)+\sum_{i=1}^{n}d_{i}^{2} \operatorname{Var} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr) \bigr) \\ &\quad\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}} +\operatorname{Var} \Biggl(\sum_{i=1}^{n}d_{i}I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr) \Biggr). \end{aligned}$$
(3.20)

By (3.4)-(3.12), we have

$$ \operatorname{Var} \Biggl(\sum_{i=1}^{n}d_{i}I \bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i} \bigr) \Biggr)\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. $$
(3.21)

Combining (3.20) and (3.21), we obtain

$$\begin{aligned} \biggl|\sum_{1\leq i< j\leq n}d_{i}d_{j} \operatorname{Cov} \bigl(I\bigl(\boldsymbol {M}_{i}(\rho_{i})\leq \boldsymbol {u}_{i}\bigr),I\bigl(\boldsymbol {M}_{i,j}( \rho_{j})\leq \boldsymbol {u}_{j}\bigr) \bigr) \biggr| \ll \frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. \end{aligned}$$
(3.22)

Hence, by (3.19) and (3.22), we have

$$ J_{22}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. $$
(3.23)

By (3.13), (3.14), (3.18) and (3.23), for \(\alpha>0\), we get

$$ L_{2}\ll\frac{D_{n}^{2}}{(\ln D_{n})^{1+\varepsilon}}. $$
(3.24)

Thus (3.3), (3.12) and (3.24) together establish (3.2), and hence (3.1) holds, completing the proof of (1.8). □