
Almost Sure Convergence for the Maximum of Nonstationary Random Fields

Journal of Theoretical Probability

Abstract

We obtain an almost sure limit theorem for the maximum of nonstationary random fields under certain dependence conditions. The result is then applied to Gaussian random fields.


References

  1. Berkes, I., Csáki, E.: A universal result in almost sure central limit theory. Stoch. Process. Appl. 94, 105–134 (2001)


  2. Brosamler, G.A.: An almost everywhere central limit theorem. Math. Proc. Camb. Philos. Soc. 104, 561–574 (1988)


  3. Chen, S., Lin, Z.: Almost sure max-limits for nonstationary Gaussian sequence. Stat. Probab. Lett. 76, 1175–1184 (2006)


  4. Cheng, S., Peng, L., Qi, Y.: Almost sure convergence in extreme value theory. Math. Nachr. 190, 43–50 (1998)


  5. Choi, H.: Central limit theory and extremes of random fields. Ph.D. Dissertation, University of North Carolina at Chapel Hill (2002)

  6. Choi, H.: Almost sure limit theorem for stationary Gaussian random fields. J. Korean Stat. Soc. 39, 475–482 (2010)


  7. Csáki, E., Gonchigdanzan, K.: Almost sure limit theorems for the maximum of stationary Gaussian sequences. Stat. Probab. Lett. 58, 195–203 (2002)


  8. Fahrner, I., Stadtmüller, U.: On almost sure max-limit theorems. Stat. Probab. Lett. 37, 229–236 (1998)


  9. Hüsler, J.: Extremes of nonstationary random sequences. J. Appl. Probab. 23, 937–950 (1986)


  10. Lacey, M.T., Philipp, W.: A note on the almost sure central limit theorem. Stat. Probab. Lett. 9, 201–205 (1990)


  11. Leadbetter, M.R., Rootzén, H.: On extreme values in stationary random fields. In: Stochastic Processes and Related Topics, pp. 275–285. Trends in Mathematics. Birkhäuser, Boston (1998)

  12. Pereira, L., Ferreira, H.: Extremes of quasi-independent random fields and clustering of high values. In: Proceedings of 8th WSEAS International Conference on Applied Mathematics, pp. 104–109 (2005)

  13. Pereira, L., Ferreira, H.: Limiting crossing probabilities of random fields. J. Appl. Probab. 43, 884–891 (2006)


  14. Peng, Z., Nadarajah, S.: Almost sure limit theorems for Gaussian sequences. Theory Probab. Appl. 55, 361–367 (2011)


  15. Schatte, P.: On strong versions of the central limit theorem. Math. Nachr. 137, 249–256 (1988)


  16. Tan, Z., Wang, Y.: Almost sure asymptotics for extremes of non-stationary Gaussian random fields. Chin. Ann. Math. Ser. B 35, 125–138 (2014)



Acknowledgments

The authors would like to thank the referee for several corrections and important suggestions which significantly improved this paper. Pereira’s work was supported by the Portuguese Foundation for Science and Technology (FCT) through the project UID/MAT/00212/2013. Tan’s work was supported by the National Natural Science Foundation of China (No. 11501250) and the Natural Science Foundation of Zhejiang Province of China (Nos. LQ14A010012 and LY15A010019).

Author information

Correspondence to Luísa Pereira.

Appendices

Appendix 1: Proofs for Sect. 2

Let \(B_\mathbf{k}(\mathbf{R_k})=\bigcap _\mathbf{i\in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \) and \(\overline{B}_\mathbf{k}(\mathbf{R_k})=\bigcup _\mathbf{i\in R_k}\left\{ X_\mathbf{i}> u_\mathbf{k,i}\right\} \). For \(\mathbf{k,l\in R_n}\) such that \(\mathbf{k}\ne \mathbf{l}\) and \(u_{\mathbf{l},\mathbf{i}}\ge u_{\mathbf{k},\mathbf{i}}\), let \(m_{l_{i}}=\log l_{i}\), \(i=1,2\). Note that \(k_{1}k_{2}\le l_{1}l_{2}\). Let \(\mathbf {M^{*}}=\mathbf {M^{*}}_{\mathbf {kl}}=\mathbf {R_{k}}\cap \mathbf {R_{l}}\) and \(\mathbf {M}_{\mathbf {kl}}=\{(x_{1},x_{2}): (x_{1},x_{2})\in \mathbf {N}^{2}, 0\le x_{i}\le \sharp (\prod _{i}(\mathbf {M}^{*}))+m_{l_{i}}, i=1,2\}\), where \(\sharp \) denotes cardinality and \(\prod _{i}\) denotes the projection onto the \(i\)th coordinate. Note that \(\mathbf {M^{*}}\subset \mathbf {M_{kl}}\).

The proof of Theorem 2.1 will be given by means of several lemmas.

Lemma 4.1

Let \(\mathbf{X}\) be a nonstationary random field satisfying condition \(D^*(u_{\mathbf {n},\mathbf {i}})\) over \(\mathcal {F}\). Assume that \(\left\{ n_{1}n_{2}\max \left\{ P\left( X_{\mathbf {i}}>u_{\mathbf {n},\mathbf {i} }\right) :\mathbf {i}\le \mathbf {n}\right\} \right\} _{\mathbf {n}\ge \mathbf {1}}\) is bounded and that \(\alpha _{\mathbf {l},m_{l_1},m_{l_2}}\ll (\log l_1 \log l_2)^{-(\epsilon +1)}\) for some \(\epsilon >0\). Then, for \(\mathbf{k,l\in R_n}\) such that \(\mathbf{k}\ne \mathbf{l}\) and \(u_{\mathbf{l},\mathbf{i}}\ge u_{\mathbf{k},\mathbf{i}}\),

$$\begin{aligned} \left| Cov\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i\in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }, \mathbbm {1}_{\left\{ \bigcap _\mathbf{i\in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| \ll \alpha _{\mathbf {l},m_{l_1},m_{l_2}}+\frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}. \end{aligned}$$

Proof

Write

$$\begin{aligned}&\left| Cov\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i\in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }, \mathbbm {1}_{\left\{ \bigcap _\mathbf{i\in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| \\&\quad =\left| P(B_\mathbf{k}(\mathbf{R_k})\cap B_\mathbf{l}(\mathbf{R_l-R_k}))-P(B_\mathbf{k}(\mathbf{R_k}))P(B_\mathbf{l}(\mathbf R_l-R_k))\right| \\&\quad \le \left| P(B_\mathbf{k}(\mathbf{R_k})\cap B_\mathbf{l}(\mathbf{R_l-R_k}))-P(B_{\mathbf{k}}(\mathbf{R_k})\cap B_\mathbf{l}(\mathbf R_l-M_{kl}))\right| \\&\qquad +\left| P(B_\mathbf{k}(\mathbf{R_k})\cap B_\mathbf{l}(\mathbf{R_l-M_{kl}}))-P(B_{\mathbf{k}}(\mathbf{R_k}))P(B_\mathbf{l}(\mathbf R_l-M_{kl}))\right| \\&\qquad +\left| P(B_{\mathbf{k}}(\mathbf{R_k}))P(B_\mathbf{l}(\mathbf{R_l-M_{kl}}))-P(B_{\mathbf{k}}(\mathbf{R_k}))P(B_\mathbf{l}(\mathbf R_l-R_{k}))\right| \\&\quad =:I_1+I_2+I_{3}. \end{aligned}$$

Using the condition that \(\left\{ n_{1}n_{2}\max \left\{ P\left( X_{\mathbf {i}}>u_{\mathbf {n},\mathbf {i} }\right) :\mathbf {i}\le \mathbf {n}\right\} \right\} _{\mathbf {n}\ge \mathbf {1}}\) is bounded, we get

$$\begin{aligned} I_1= & {} \left| P(B_\mathbf{k}(\mathbf{R_k})\cap B_\mathbf{l}(\mathbf{R_l-R_k}))-P(B_\mathbf{k}(\mathbf{R_k})\cap B_{\mathbf{l}}(\mathbf R_l-M_{kl}))\right| \\\le & {} \left| P(B_\mathbf{l}(\mathbf{R_l-R_k}))-P(B_\mathbf{l}(\mathbf R_l-M_{kl}))\right| \\\le & {} P(\overline{B}_\mathbf{l}((\mathbf R_l-R_k)-(\mathbf R_l-M_{kl})))\\\le & {} P(\overline{B}_\mathbf{l}((\mathbf M_{kl}-R_{k})))\\\le & {} (m_{l_1}k_2+m_{l_2}k_1)\max \left\{ P\left( X_{\mathbf {i}}>u_{\mathbf {l},\mathbf {i} }\right) :\mathbf {i}\le \mathbf {l}\right\} \\\ll & {} \frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}. \end{aligned}$$

Similarly, we have

$$\begin{aligned} I_3 \ll \frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}. \end{aligned}$$

Condition \(D^*(u_{\mathbf {n},\mathbf {i}})\) implies

$$\begin{aligned} I_2=\left| P(B_\mathbf{k}(\mathbf{R_k})\cap B_\mathbf{l}(\mathbf{R_l-M_{kl}}))-P(B_{\mathbf{k}}(\mathbf{R_k}))P(B_\mathbf{l}(\mathbf R_l-M_{kl}))\right| \le \alpha _{\mathbf {l},m_{l_1},m_{l_2}}. \end{aligned}$$

Combining the bounds for \(I_1\), \(I_2\), and \(I_3\), we obtain

$$\begin{aligned} \left| Cov\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i\in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }, \mathbbm {1}_{\left\{ \bigcap _\mathbf{i\in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| \ll \alpha _{\mathbf {l},m_{l_1},m_{l_2}}+\frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}. \end{aligned}$$

Lemma 4.2

Let \(\mathbf{X}\) be a nonstationary random field such that \(\big \{ n_{1}n_{2}\max \big \{ P\left( X_{\mathbf {i}}>u_{\mathbf {n},\mathbf {i} }\right) :\mathbf {i}\le \mathbf {n}\big \} \big \} _{\mathbf {n}\ge \mathbf {1}}\) is bounded. Then, for \(\mathbf k,l\in R_n\) such that \(\mathbf k\ne \mathbf l\) and \(u_\mathbf{l, i}\ge u_\mathbf{k, i}\),

$$\begin{aligned} E\left| \mathbbm {1}_{\left\{ \cap _\mathbf{i\in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }-\mathbbm {1}_{\left\{ \cap _\mathbf{i\in R_l}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right| \ll \frac{l_1l_2-\sharp (\mathbf R_l-R_k)}{l_1l_2}. \end{aligned}$$

Proof

Using the condition that \(\left\{ n_{1}n_{2}\max \left\{ P\left( X_{\mathbf {i}}>u_{\mathbf {n},\mathbf {i} }\right) :\mathbf {i}\le \mathbf {n}\right\} \right\} _{\mathbf {n}\ge \mathbf {1}}\) is bounded, we get

$$\begin{aligned}&E\left| \mathbbm {1}_{\left\{ \cap _\mathbf{i\in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }-\mathbbm {1}_{\left\{ \cap _\mathbf{i\in R_l}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right| \\&\quad =P\left( \bigcap _{\mathbf{i}\in \mathbf{R_l}-\mathbf{R_k}}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right) -P\left( \bigcap _{\mathbf{i}\in \mathbf{R_l}}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right) \\&\quad \le \sum _\mathbf{i\in R_l-(R_l-R_k)}P(X_\mathbf{i}>u_\mathbf{l,i})\\&\quad \le \left[ l_1l_2-\sharp (\mathbf R_l-R_k)\right] \max \left\{ P\left( X_{\mathbf {i}}>u_{\mathbf {l},\mathbf {i} }\right) :\mathbf {i}\le \mathbf {l}\right\} \\&\quad \ll \frac{l_1l_2-\sharp (\mathbf R_l-R_k)}{l_1l_2}. \end{aligned}$$

The following lemma is from Tan and Wang [16].

Lemma 4.3

Let \(\eta _\mathbf{i}\), \(\mathbf{i}\in \mathbb {Z}_+^2\), be uniformly bounded random variables. Assume that

$$\begin{aligned} \mathrm{Var}\left( \frac{1}{\log n_1 \log n_2}\sum _\mathbf{k \in R_n}\frac{1}{k_1k_2}\eta _\mathbf{k}\right) \ll \frac{1}{(\log n_1 \log n_2)^{\epsilon +1}}. \end{aligned}$$

Then

$$\begin{aligned} \frac{1}{\log n_1 \log n_2}\sum _{\mathbf{k}\in \mathbf{R_n}}\frac{1}{k_1k_2}(\eta _\mathbf{k}-E(\eta _\mathbf{k}))\rightarrow 0 \ \ \hbox {a.s.} \end{aligned}$$

Proof of Theorem 2.1

Let \(\eta _\mathbf{k}=\mathbbm {1}_{\left\{ \bigcap _{\mathbf{i}\le \mathbf{k}}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }-E\left( \mathbbm {1}_{\left\{ \bigcap _{\mathbf{i}\le \mathbf{k}}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }\right) \). Then

$$\begin{aligned}&\hbox {Var}\left( \frac{1}{\log n_1 \log n_2}\sum _\mathbf{k \in R_n}\frac{1}{k_1k_2}\mathbbm {1}_{\left\{ \bigcap _{\mathbf{i}\le \mathbf{k}}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }\right) \\&\quad =E\left( \frac{1}{\log n_1 \log n_2}\sum _\mathbf{k \in R_n}\frac{\eta _\mathbf{k}}{k_1k_2}\right) ^2\\&\quad =\frac{1}{\log ^2n_1 \log ^2n_2}\left( \sum _\mathbf{k \in R_n}\frac{E(\eta _\mathbf{k}^2)}{k_1^2k_2^2}+ \sum _{\mathbf{k,l \in R_n},\mathbf{k}\ne \mathbf{l}}\frac{E(\eta _\mathbf{k}\eta _\mathbf{l})}{k_1k_2l_1l_2}\right) \\&\quad =:T_1+T_2. \end{aligned}$$

Since \(|\eta _\mathbf{k}|\le 1\), it follows that

$$\begin{aligned} T_1\le \frac{1}{\log ^2n_1 \log ^2n_2}\sum _{\mathbf{k}\in \mathbf{R_n}}\frac{1}{k_1^2 k_2^2}\le \frac{K}{\log ^2n_1 \log ^2n_2}. \end{aligned}$$
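The constant \(K\) here can be taken absolute, since the double sum factorizes into two convergent series:

$$\begin{aligned} \sum _{\mathbf{k}\in \mathbf{R_n}}\frac{1}{k_1^2k_2^2}=\left( \sum _{k_1=1}^{n_1}\frac{1}{k_1^2}\right) \left( \sum _{k_2=1}^{n_2}\frac{1}{k_2^2}\right) \le \left( \sum _{k=1}^{\infty }\frac{1}{k^2}\right) ^2=\frac{\pi ^4}{36}, \end{aligned}$$

so the bound on \(T_1\) holds uniformly in \(\mathbf{n}\).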

Note that for \(\mathbf{k}\ne \mathbf{l}\) such that \(u_\mathbf{k,i}\le u_\mathbf{l,i}\),

$$\begin{aligned} |E(\eta _\mathbf{k}\eta _\mathbf{l})|= & {} \left| \hbox {Cov}\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} },\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| \\\le & {} \left| \hbox {Cov}\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} },\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }-\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l-R_{k}}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| \\&+\left| \hbox {Cov}\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} },\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l-R_{k}}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| \\\le & {} E\left| \mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }-\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l-R_{k}}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right| \\&+\left| \hbox {Cov}\left( \mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} },\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right) \right| . \end{aligned}$$

By Lemma 4.2 we get

$$\begin{aligned} E\left| \mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }-\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l-R_{k}}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} }\right| \ll \frac{l_1l_2-\sharp (\mathbf{R_l-R_k})}{l_1l_2} \end{aligned}$$

and from Lemma 4.1 we obtain

$$\begin{aligned} \left| Cov(\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} },\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_l-R_k}\left\{ X_\mathbf{i}\le u_\mathbf{l,i}\right\} \right\} })\right| \ll \alpha _{\mathbf {l},m_{l_1},m_{l_2}}+\frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}. \end{aligned}$$

Hence

$$\begin{aligned} |E(\eta _\mathbf{k}\eta _\mathbf{l})|\ll \frac{l_1l_2-\sharp (\mathbf{R_l-R_k})}{l_1l_2}+\alpha _{\mathbf {l},m_{l_1},m_{l_2}}+\frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}. \end{aligned}$$

In order to consider \(T_2\), we define \(A_\mathbf{m}=\big \{\left( \mathbf{k},\mathbf{l}\right) \in \mathbf{R_n\times R_n}:(2m_j-1)(k_j-l_j)\ge 0, \ j=1,2, \ \mathbf{k}\ne \mathbf{l} \big \}\) for \(\mathbf{m}\in \Lambda =\left\{ (m_1,m_2):m_1,m_2\in \left\{ 0,1\right\} , \mathbf{m\ne 1}\right\} \). Then, we have

$$\begin{aligned} T_2\le & {} \frac{1}{(\log n_1 \log n_2)^2}\sum _{{\mathbf{m} \in \Lambda }}\sum _{(\mathbf{k},\mathbf{l})\in A_\mathbf{m}}\frac{l_1l_2-\sharp (\mathbf R_l-R_k)}{l_1^2l_2^2k_1k_2}\\&+\frac{1}{(\log n_1 \log n_2)^2}\sum _{{\mathbf{m} \in \Lambda }}\sum _{(\mathbf{k},\mathbf{l})\in A_\mathbf{m}}\frac{\alpha _{\mathbf {l},m_{l_1},m_{l_2}}+\frac{m_{l_1}k_{2}}{l_1l_{2}}+\frac{m_{l_2}k_{1}}{l_{1}l_2}}{k_1k_2l_1l_2}=:T_{21}+T_{22}. \end{aligned}$$

Since

$$\begin{aligned} T_{21}= & {} \frac{1}{\log ^2n_1 \log ^2n_2}\underset{\underset{1\le k_2\le l_2\le n_2, \mathbf{k\ne l}}{1\le k_1\le l_1\le n_1}}{\sum }\bigg [\frac{k_1k_2}{l_1l_2}\times \frac{1}{k_1k_2l_1l_2}+\frac{1}{k_1k_2l_1l_2} \times \frac{k_1}{l_1}+\frac{1}{k_1k_2l_1l_2}\times \frac{k_2}{l_2}\bigg ]\\\le & {} \frac{K}{\log ^2n_1 \log ^2n_2}\bigg [\prod _{i=1}^2\underset{1\le k_i\le l_i\le n_i}{\sum }\frac{1}{l_i^2}+\underset{1\le k_1< l_1\le n_1}{\sum }\frac{1}{l_1^2}\underset{1\le l_2< k_2\le n_2}{\sum }\frac{1}{k_2l_2}\\&+\underset{1\le k_2< l_2\le n_2}{\sum }\frac{1}{l_2^2}\underset{1\le l_1< k_1\le n_1}{\sum }\frac{1}{k_1l_1}\bigg ]\\\le & {} K\left( \frac{1}{\log n_1 \log n_2}+\frac{\log n_2}{\log n_1 \log n_2}+\frac{\log n_1}{\log n_1 \log n_2}\right) \end{aligned}$$

and

$$\begin{aligned} T_{22}\le & {} \frac{K}{(\log n_1 \log n_2)^2}\bigg [\underset{\underset{1\le k_2\le l_2\le n_2, \mathbf{k\ne l}}{1\le k_1\le l_1\le n_1}}{\sum }\frac{1}{k_1k_2l_1l_2(\log l_1 \log l_2)^{\epsilon _1+1}}\\&+\underset{1\le k_2\le l_2\le n_2}{\sum }\frac{1}{k_2l_2(\log l_2)^{\epsilon _1}}\underset{1\le l_1\le k_1\le n_1}{\sum }\frac{1}{k_1l_1(\log l_1)^{\epsilon _1+1}}\\&+\underset{1\le k_1\le l_1\le n_1}{\sum }\frac{1}{k_1l_1(\log l_1)^{\epsilon _1}}\underset{1\le l_2\le k_2\le n_2}{\sum }\frac{1}{k_2l_2(\log l_2)^{\epsilon _1+1}}\bigg ]\\\le & {} K(\log n_1 \log n_2)^{-(\epsilon _1+1)} \end{aligned}$$

we have

$$\begin{aligned} T_2\le K\left( \frac{1}{\log n_1 \log n_2}+\frac{\log n_2}{\log n_1 \log n_2}+\frac{\log n_1}{\log n_1 \log n_2}+\frac{1}{(\log n_1 \log n_2)^{\epsilon _1+1}}\right) \end{aligned}$$

and hence

$$\begin{aligned} T_2\le K \frac{1}{(\log n_1\log n_2)^{\epsilon +1}}, \ \ {\text{ for } \text{ some }} \ \epsilon >0. \end{aligned}$$

So

$$\begin{aligned} \hbox {Var}\left( \frac{1}{\log n_1 \log n_2}\sum _\mathbf{k \in R_n}\frac{1}{k_1k_2}\mathbbm {1}_{\left\{ \bigcap _\mathbf{i \in R_k}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} } \right) \le \frac{K}{(\log n_1\log n_2)^{\epsilon +1}}. \end{aligned}$$

The result follows by Lemma 4.3 and Proposition 1.2.
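Spelling out the last step: Lemma 4.3, applied with the bounded variables \(\eta _\mathbf{k}\) defined above, yields

$$\begin{aligned} \frac{1}{\log n_1 \log n_2}\sum _{\mathbf{k}\in \mathbf{R_n}}\frac{1}{k_1k_2}\left( \mathbbm {1}_{\left\{ \bigcap _{\mathbf{i}\in \mathbf{R_k}}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right\} }-P\left( \bigcap _{\mathbf{i}\in \mathbf{R_k}}\left\{ X_\mathbf{i}\le u_\mathbf{k,i}\right\} \right) \right) \rightarrow 0 \ \ \hbox {a.s.}, \end{aligned}$$

and Proposition 1.2, which is not reproduced in this appendix, then gives the conclusion of Theorem 2.1.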

Appendix 2: Proofs for Sect. 3

The proof of Theorem 3.2 will be given through a technical lemma showing that (2) implies that

$$\begin{aligned} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n}}(\mathbf {R_{k}},\mathbf {R}_\mathbf{n}):= & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\sum \limits _{\mathop {\mathbf {i}\in {\mathbf {R_{k}}},\mathbf {j}\in {\mathbf {R_{n}}}}\limits _{\mathbf {i}\le \mathbf {j}, \mathbf {i}\ne \mathbf {j}}}\left| r_{\mathbf {i},\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k},\mathbf {i}}^{2}+u_{\mathbf {n},\mathbf {j}}^{2}\right) }{ 1+\left| r_{{\mathbf {i},\mathbf {j}}}\right| }\right) \nonumber \\\ll & {} (\log n_1 n_2)^{-(1+\epsilon )} \end{aligned}$$
(3)

Lemma 4.4

Suppose that the covariance functions \(r_{\mathbf{i}, \mathbf{j}}\) satisfy \(\left| r_{\mathbf{i}, \mathbf{j}}\right| <\rho _{\left| \mathbf{i}-\mathbf{j}\right| }\) for some sequence \(\left\{ \rho _\mathbf{n}\right\} _{\mathbf{n}\in \mathbb {N}^2-\left\{ \mathbf 0\right\} }\) satisfying (2) for some \(\epsilon >0\). Let the constants \(\{u_{\mathbf {n,i}},\mathbf {i}\le \mathbf {n}\}_{\mathbf {n}\ge \mathbf {1}}\) be such that \(\left\{ n_{1}n_{2}(1-\Phi (\lambda _{\mathbf {n}}))\right\} _{\mathbf {n}\ge \mathbf {1}}\) is bounded, where \(\lambda _{\mathbf {n}}=\min _{\mathbf {i}\in \mathbf {R_{n}}}u_{\mathbf {n,i}}\). Then (3) holds.

We omit the proof, since it follows similar arguments to those of Lemmas 3.3–3.5 of Tan and Wang [16].

Proof of Theorem 3.2

We will denote the event \(\left\{ X_{\mathbf {i}}\le u_{\mathbf {n,i} }\right\} \) by \(A_{\mathbf {i,n}}\). Using the normal comparison lemma we obtain

$$\begin{aligned}&\alpha _{\mathbf {n,k},m_{n_1},m_{n_2}} \\&\quad =\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\underset{\left( \mathbf {I},\mathbf {J}\right) \in S(m_{n_{1}},m_{n_{2}})}{\sup }\left| P\left( \underset{\mathbf {i}\in \mathbf {I\wedge j} \in \mathbf {J}}{\bigcap }A_{\mathbf {i,k}}A_{\mathbf {j,n}}\right) -P\left( \underset{\mathbf {i}\in \mathbf {I}}{\bigcap }A_{\mathbf {i,k}}\right) P\left( \underset{\mathbf {j}\in \mathbf {J}}{\bigcap }A_{\mathbf {j,n}}\right) \right| \\&\quad \le \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\underset{\left( \mathbf {I},\mathbf {J}\right) \in S(m_{n_{1}},m_{n_{2}})}{\sup }\underset{\mathbf {i}\in {\mathbf {I}},\mathbf {j}\in {\mathbf {J}}}{ \sum }\left| r_{\mathbf {i},\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k},\mathbf {i}}^{2}+u_{\mathbf {n},\mathbf {j}}^{2}\right) }{ 1+\left| r_{{\mathbf {i},\mathbf {j}}}\right| }\right) \\&\quad \le \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\underset{\left( \mathbf {I},\mathbf {J}\right) \subseteq \mathbf {R_{k}}\times \mathbf {R_{n}}}{\sup }\underset{\mathbf {i}\in {\mathbf {I}},\mathbf {j}\in {\mathbf {J}}}{ \sum }\left| r_{\mathbf {i},\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k},\mathbf {i}}^{2}+u_{\mathbf {n},\mathbf {j}}^{2}\right) }{ 1+\left| r_{{\mathbf {i},\mathbf {j}}}\right| }\right) \\&\quad \le C\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\sum \limits _{\mathop {\mathbf {i}\in {\mathbf {R_{k}}},\mathbf {j}\in {\mathbf {R_{n}}}}\limits _{\mathbf {i}\le \mathbf {j}, \mathbf {i}\ne \mathbf {j}}}\left| r_{\mathbf {i},\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k},\mathbf {i}}^{2}+u_{\mathbf {n},\mathbf {j}}^{2}\right) }{ 1+\left| r_{{\mathbf {i},\mathbf {j}}}\right| }\right) \\&\quad =C\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n}}(\mathbf {R_{k}},\mathbf {R}_\mathbf{n}), \end{aligned}$$

where C is a constant. So, \(D^*(u_{\mathbf {n},\mathbf {i}})\) follows from Lemma 4.4. Next, we show condition \(D'(u_{\mathbf {n},\mathbf {i}})\) holds. To that end, let \(\mathbf I \in \mathcal {E}(u_{\mathbf {n},\mathbf {i}})\). Then, we have

$$\begin{aligned}&k_{n_{1}}k_{n_{2}}\underset{\mathbf {i,j}\in \mathbf {I}}{\sum }P(\overline{A }_{\mathbf {i,n}}\overline{A}_{\mathbf {j,n}}) \\&\quad \le k_{n_{1}}k_{n_{2}}\underset{\mathbf {i,j}\in \mathbf {I}}{\sum }\left| P(\overline{A}_{\mathbf {i,n}}\overline{A}_{\mathbf {j,n}})-P(\overline{A}_{ \mathbf {i,n}})P(\overline{A}_{\mathbf {j,n}})\right| +k_{n_{1}}k_{n_{2}} \underset{\mathbf {i,j}\in \mathbf {I}}{\sum }P(\overline{A}_{\mathbf {i,n}})P( \overline{A}_{\mathbf {j,n}}) \\&\quad \le k_{n_{1}}k_{n_{2}}S_{\mathbf {n}}(\mathbf {I},\mathbf {I})+k_{n_{1}}k_{n_{2}} \underset{\mathbf {i,j}\in \mathbf {I}}{\sum }\left( 1-\Phi (u_{\mathbf {n}, \mathbf {i}}) \right) \left( 1-\Phi (u_{\mathbf {n},\mathbf {j}}) \right) \\&\quad \le k_{n_{1}}k_{n_{2}}S_{\mathbf {n}}(\mathbf {R_{n}},\mathbf {R_{n}})+k_{n_{1}}k_{n_{2}}\left( \underset{\mathbf {i}\in \mathbf {I}}{\sum }\left( 1-\Phi (u_{\mathbf {n},\mathbf {i}})\right) \right) ^{2} \\&\quad \le k_{n_{1}}k_{n_{2}}S_{\mathbf {n}}(\mathbf {R_{n}},\mathbf {R_{n}})+\frac{1}{k_{n_{1}}k_{n_{2}}} \left( \underset{\mathbf {i}\le \mathbf {n}}{\sum }\left( 1-\Phi (u_{\mathbf {n},\mathbf {i}})\right) \right) ^{2}\xrightarrow [\mathbf {n}\rightarrow {\varvec{\infty }}]{}0, \end{aligned}$$

which completes the proof of Theorem 3.2.

To prove Example 3.1 we need the following facts from Choi [5]: the covariance function \(\gamma _{n}\) satisfies

$$\begin{aligned} \sum _{m=0}^{n}|\gamma _{m}|^{2}\le C n^{1-1/\log _{2}^{3}}\ \ \ \text{ and }\ \ \ \sum _{m=0}^{n}|\gamma _{m}|^{2}\ge C \frac{n^{1-1/\log _{2}^{3}}}{\log n} \end{aligned}$$
(4)

where \(C\) denotes a positive constant whose value may change from line to line. From (4) and the definition of \(\gamma _{\mathbf {n}}\), it is easy to see that

$$\begin{aligned} \sum _{\mathbf {m}\in \mathbf {R_{n}}}|\gamma _{\mathbf {m}}|^{2}\le C (n_{1}n_{2})^{(1-1/\log _{2}^{3})}\ \ \ \text{ and }\ \ \ \sum _{\mathbf {m}\in \mathbf {R_{n}}}|\gamma _{\mathbf {m}}|^{2}\ge C \frac{n_{1}^{1-1/\log _{2}^{3}}}{\log n_{1}}\frac{n_{2}^{1-1/\log _{2}^{3}}}{\log n_{2}}. \end{aligned}$$
(5)

Proof of Example 3.1

We only need to show that conditions \(D'(u_{\mathbf {n}})\) and \(D^*(u_{\mathbf {n}})\) hold. The verification of condition \(D'(u_{\mathbf {n}})\) is the same as in the proof of Theorem 3.2, so we omit it. We will denote the event \(\left\{ X_{\mathbf {i}}\le u_{\mathbf {n} }\right\} \) by \(B_{\mathbf {i,n}}\). Using the normal comparison lemma, as in the proof of Theorem 3.2, we obtain

$$\begin{aligned}&\alpha _{\mathbf {n},m_{n_1},m_{n_2}}\\&\quad =\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\alpha _{\mathbf {n,k},m_{n_1},m_{n_2}}\\&\quad =\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\underset{\left( \mathbf {I},\mathbf {J}\right) \in S(m_{n_{1}},m_{n_{2}})}{\sup }\left| P\left( \underset{\mathbf {i}\in \mathbf {I\wedge j}\in \mathbf {J}}{\bigcap }B_{\mathbf {i,k}}B_{\mathbf {j,n}}\right) -P\left( \underset{\mathbf {i}\in \mathbf {I}}{\bigcap }B_{\mathbf {i,k}}\right) P\left( \underset{\mathbf {j}\in \mathbf {J}}{\bigcap }B_{\mathbf {j,n}}\right) \right| \\&\quad \le C\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}\sum \limits _{\mathop {\mathbf {i}\in {\mathbf {R_{k}}},\mathbf {j}\in {\mathbf {R_{n}}}}\limits _{\mathbf {i}\le \mathbf {j}, \mathbf {i}\ne \mathbf {j}}}\left| \gamma _{\mathbf {i},\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2}\left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{{\mathbf {i},\mathbf {j}}}\right| }\right) \\&\quad \le C\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\underset{\mathbf {0}\le \mathbf {j}\le \mathbf {n}, \mathbf {j}\ne \mathbf {0}}{ \sum }\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2}\left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{{\mathbf {j}}}\right| }\right) \\&\quad =:C\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n}}^{*}(\mathbf {R_{k}},\mathbf {R_{n}}). \end{aligned}$$

Let \(\delta =\sup _{\mathbf {m}\ge \mathbf {0}, \mathbf {m}\ne \mathbf {0}}|\gamma _{\mathbf {m}}|<1\) and \(\theta _{\mathbf {n}}=\exp (\alpha u_{\mathbf {n}}^{2})\), where \(\alpha \) is a constant satisfying \(0<\alpha <\frac{1-\delta }{4(1+\delta )}\). Split the term \(S_{\mathbf {n}}^{*}(\mathbf {R_{k}},\mathbf {R_{n}})\) into two parts as:

$$\begin{aligned} S_{\mathbf {n}}^{*}(\mathbf {R_{k}},\mathbf {R_{n}})=\sum \limits _{\mathop {\mathbf {0}\le \mathbf {j}\le \mathbf {n}, \mathbf {j}\ne \mathbf {0},}\limits _{ \chi (\mathbf {j})\le \theta _{\mathbf {n}}}} +\sum \limits _{\mathop {\mathbf {0}\le \mathbf {j}\le \mathbf {n}, \mathbf {j}\ne \mathbf {0},}\limits _{\chi (\mathbf {j})> \theta _{\mathbf {n}}}}=:S_{\mathbf {n},1}^{*}+S_{\mathbf {n},2}^{*}, \end{aligned}$$

where \(\chi (\mathbf {j})=\max (j_{1},1)\times \max (j_{2},1)\). For sufficiently large \(\mathbf {n}\), we have

$$\begin{aligned} \exp \left( -\frac{u_{\mathbf {n}}^{2}}{2}\right) \thicksim C\frac{u_{\mathbf {n}}}{n_{1}n_{2}}\ \ \ \text{ and }\ \ \ u_{\mathbf {n}}\thicksim \sqrt{2\log (n_{1}n_{2})}, \end{aligned}$$
(6)
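A brief sketch of where (6) comes from, under the additional assumption (natural in this setting, though not restated here) that \(n_{1}n_{2}(1-\Phi (u_{\mathbf {n}}))\) is bounded away from zero as well as bounded: the Mills ratio gives \(1-\Phi (u)\thicksim \varphi (u)/u\) as \(u\rightarrow \infty \), where \(\varphi \) is the standard normal density, so

$$\begin{aligned} n_{1}n_{2}(1-\Phi (u_{\mathbf {n}}))\asymp 1 \quad \text{ together } \text{ with }\quad 1-\Phi (u_{\mathbf {n}})\thicksim \frac{\exp (-u_{\mathbf {n}}^{2}/2)}{\sqrt{2\pi }\,u_{\mathbf {n}}} \end{aligned}$$

yield \(\exp (-u_{\mathbf {n}}^{2}/2)\thicksim C u_{\mathbf {n}}/(n_{1}n_{2})\); taking logarithms and noting \(\log u_{\mathbf {n}}=o(u_{\mathbf {n}}^{2})\) gives \(u_{\mathbf {n}}^{2}/2\thicksim \log (n_{1}n_{2})\), i.e., \(u_{\mathbf {n}}\thicksim \sqrt{2\log (n_{1}n_{2})}\).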

Relations (6) will be used repeatedly in what follows. For the term \(S_{\mathbf {n},1}^{*}\), using (6), we have

$$\begin{aligned} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n},1}^{*}= & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum \limits _{\mathop {\mathbf {0}\le \mathbf {j}\le \mathbf {n}, \mathbf {j}\ne \mathbf {0},}\limits _{\chi (\mathbf {j})\le \theta _{\mathbf {n}}}} \left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \\\le & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum \limits _{\mathop {\mathbf {0}\le \mathbf {j}\le \mathbf {n}, \mathbf {j}\ne \mathbf {0},}\limits _{\chi (\mathbf {j})\le \theta _{\mathbf {n}}}} \delta \exp \left( -\frac{\frac{1}{2}\left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{1+\delta }\right) \\\ll & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\theta _{\mathbf {n}}^{2}\exp \left( -\frac{u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}}{2}\right) ^{1/(1+\delta )} \\\ll & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\theta _{\mathbf {n}}^{2}\left( \frac{u_{\mathbf {k}}}{k_{1}k_{2}}\frac{u_{\mathbf {n}}}{n_{1}n_{2}}\right) ^{1/(1+\delta )}\\\le & {} (n_{1}n_{2})^{1+4\alpha -2/(1+\delta )}(\log n_{1}n_{2})^{1/(1+\delta )}. \end{aligned}$$

Since \(1+4\alpha -2/(1+\delta )<0\), we get \(\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n},1}^{*}\ll (n_{1}n_{2})^{-\kappa }\) for some \(\kappa >0\).
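The negativity of this exponent is exactly what the choice of \(\alpha \) guarantees; reading the constraint as \(0<\alpha <\frac{1-\delta }{4(1+\delta )}\), we have \(4\alpha <\frac{1-\delta }{1+\delta }\), and hence

$$\begin{aligned} 1+4\alpha -\frac{2}{1+\delta }<1+\frac{1-\delta }{1+\delta }-\frac{2}{1+\delta }=\frac{(1+\delta )+(1-\delta )-2}{1+\delta }=0. \end{aligned}$$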

We split the term \(S_{\mathbf {n},2}^{*}\) into three parts: the first for \(\mathbf {j}>\mathbf {0}\), the second for \(j_{1}=0\wedge j_{2}>0\), and the third for \(j_{2}=0\wedge j_{1}>0\). We denote them by \(S^{*}_{\mathbf {n},2i}\), \(i=1,2,3\), respectively.

To deal with the first case \(\mathbf {j}>\mathbf {0}\), let

$$\begin{aligned} \mathbf {A}_{\mathbf {n}}=\left\{ \mathbf {m}|\mathbf {1}\le \mathbf {m}\le \mathbf {n}, \chi (\mathbf {m})>\theta _{\mathbf {n}}, |\gamma _{\mathbf {m}}|>\frac{1}{(\log m_{1}m_{2})^{3}}\right\} . \end{aligned}$$

Now, we have

$$\begin{aligned} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n},21}^{*}= & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum _{\mathbf {j}\in \mathbf {A}_{\mathbf {n}}^{c}}\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \\&+\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum _{\mathbf {j}\in \mathbf {A}_{\mathbf {n}}}\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \\=: & {} S_{1}+S_{2}. \end{aligned}$$

Since

$$\begin{aligned} \max _{\mathbf {j}\in \mathbf {A}_{\mathbf {n}}^{c}}|\gamma _{\mathbf {j}}|\le \frac{1}{(\log \theta _{\mathbf {n}})^{3}}, \end{aligned}$$

by the same arguments as for \(\mathbf {S}^{*}_{\mathbf {n},1}\), we have

$$\begin{aligned} S_{1}\le & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}n_{1}n_{2}\frac{1}{(\log \theta _{\mathbf {n}})^{3}}\exp \left( -\frac{u_{k}^{2}+u_{n}^{2}}{2(1+\frac{1}{(\log \theta _{\mathbf {n}})^{3}})}\right) \\\ll & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}n_{1}n_{2}\frac{1}{u_{\mathbf {n}}^{6}}\left( \frac{u_{\mathbf {k}}}{k_{1}k_{2}}\frac{u_{\mathbf {n}}}{n_{1}n_{2}}\right) ^{1+\frac{1}{\alpha ^{3}u_{\mathbf {n}}^{6}}}\\\ll & {} (n_{1}n_{2})^{-\frac{2}{\alpha ^{3}u_{\mathbf {n}}^{6}}}(u_{\mathbf {n}})^{-4+\frac{2}{\alpha ^{3}u_{\mathbf {n}}^{6}}}\\\ll & {} (\log n_{1}n_{2})^{-2}. \end{aligned}$$

Now we consider the term \(S_{2}\). Let \(\beta =1-1/(2\log _{2}^{3})\). From the definition of \(\gamma _{\mathbf {m}}\), we have

$$\begin{aligned} \delta ':= & {} \sup _{\mathbf {m}\in \mathbf {A}_{\mathbf {n}}}|\gamma _{\mathbf {m}}|\le \sup _{\mathbf {m}\in \mathbf {A}_{\mathbf {n}}}\left( \frac{1}{\log m_{1}\log m_{2}}\right) ^{1/2}\nonumber \\\le & {} \sup _{\mathbf {m}\in \mathbf {A}_{\mathbf {n}}}\left( \frac{1}{\log m_{1}m_{2}}\right) ^{1/2}\le \left( \frac{1}{\log \theta _{\mathbf {n}}}\right) ^{1/2} \end{aligned}$$

As in Choi [5], we claim that \(\hbox {card}(\mathbf {A}_{\mathbf {n}})=O((n_{1}n_{2})^{\beta })\). If not, \(|\gamma _{\mathbf {m}}|>\frac{1}{(\log m_{1}m_{2})^{3}}\) holds on a set of size exceeding \(C(n_{1}n_{2})^{\beta }\), and thus

$$\begin{aligned} \sum _{\mathbf {m}\in \mathbf {R_{n}}}|\gamma _{\mathbf {m}}|^{2}\ge \sum _{\mathbf {m}\in \mathbf {A}_{\mathbf {n}}}|\gamma _{\mathbf {m}}|^{2}\ge C\frac{(n_{1}n_{2})^{\beta }}{(\log n_{1}n_{2})^{6}} \end{aligned}$$

which, since \(\beta =1-1/(2\log _{2}^{3})>1-1/\log _{2}^{3}\), eventually exceeds the upper bound in (5), a contradiction. Hence

$$\begin{aligned} S_{2}= & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum _{\mathbf {j}\in \mathbf {A}_{\mathbf {n}}}\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \\\le & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}(n_{1}n_{2})^{\beta }\frac{1}{(\log \theta _{\mathbf {n}})^{1/2}}\exp \left( -\frac{u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}}{2(1+\delta ')}\right) \\\ll & {} (n_{1}n_{2})^{1+\beta -\frac{2}{1+\delta '}}(u_{\mathbf {n}})^{\frac{2}{1+\delta '}-1}\\\ll & {} (n_{1}n_{2})^{2-\frac{1}{2\log _{2}^{3}}-\frac{2}{1+\delta '}}(u_{\mathbf {n}})^{\frac{2}{1+\delta '}-1}\\\ll & {} (n_{1}n_{2})^{-\varepsilon }, \end{aligned}$$

for some \(\varepsilon >0\).
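To see why the last exponent is eventually negative, note that \(\delta '\le (\log \theta _{\mathbf {n}})^{-1/2}\rightarrow 0\) as \(\mathbf {n}\rightarrow \varvec{\infty }\), so

$$\begin{aligned} 2-\frac{1}{2\log _{2}^{3}}-\frac{2}{1+\delta '}\longrightarrow 2-\frac{1}{2\log _{2}^{3}}-2=-\frac{1}{2\log _{2}^{3}}<0, \end{aligned}$$

and hence the exponent stays below \(-\varepsilon \) for all large \(\mathbf {n}\), for any fixed \(0<\varepsilon <1/(2\log _{2}^{3})\).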

Next, we deal with the second case \(j_{1}=0\wedge j_{2}>0\). If \(n_{2}\le \theta _{\mathbf {n}}\), by the same argument as for \(\mathbf {S}^{*}_{\mathbf {n},1}\), we can show

$$\begin{aligned} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n},22}^{*}\le (n_{1}n_{2})^{-\varepsilon } \end{aligned}$$

for some \(\varepsilon >0\). If \(n_{2}>\theta _{\mathbf {n}}\), let

$$\begin{aligned} \mathbf {B}_{\mathbf {n}}=\left\{ (0,m_{2})|1\le m_{2}\le n_{2}, m_{2}>\theta _{\mathbf {n}}, |\gamma _{(0,m_{2})}|>\frac{1}{(\log m_{2})^{3}}\right\} . \end{aligned}$$

Now, we have

$$\begin{aligned} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}S_{\mathbf {n},22}^{*}= & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum _{\mathbf {j}\in \mathbf {B}_{\mathbf {n}}^{c}}\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \nonumber \\&+\sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum _{\mathbf {j}\in \mathbf {B}_{\mathbf {n}}}\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2} \left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \\=: & {} S_{3}+S_{4}. \end{aligned}$$

Since

$$\begin{aligned} \max _{\mathbf {j}\in \mathbf {B}_{\mathbf {n}}^{c}}|\gamma _{\mathbf {j}}|\le \frac{1}{(\log \theta _{\mathbf {n}})^{3}}, \end{aligned}$$

by the same arguments as for \(S_{1}\), we have

$$\begin{aligned} S_{3}\le & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}n_{2}\frac{1}{(\log \theta _{\mathbf {n}})^{3}}\exp \left( -\frac{u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}}{2(1+\frac{1}{(\log \theta _{\mathbf {n}})^{3}})}\right) \\< & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}n_{1}n_{2}\frac{1}{u_{\mathbf {n}}^{6}}\left( \frac{u_{\mathbf {k}}}{k_{1}k_{2}}\frac{u_{\mathbf {n}}}{n_{1}n_{2}}\right) ^{1+\frac{1}{\alpha ^{3}u_{\mathbf {n}}^{6}}}\\\ll & {} (n_{1}n_{2})^{-\frac{2}{\alpha ^{3}u_{\mathbf {n}}^{6}}}(u_{\mathbf {n}})^{-4+\frac{2}{\alpha ^{3}u_{\mathbf {n}}^{6}}}\\\ll & {} (\log n_{1}n_{2})^{-2}. \end{aligned}$$
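The passage from the exponential to the polynomial bound in the second line rests on the standard normal tail estimate; the following is a sketch, assuming (as is standard in this setting) that the levels are chosen so that \(n_{1}n_{2}(1-\Phi (u_{\mathbf {n}}))\) stays bounded:

```latex
% Mills' ratio for the standard normal tail:
%   1 - \Phi(u) \sim \phi(u)/u = e^{-u^2/2}/(\sqrt{2\pi}\,u), as u -> infinity.
% If n_1 n_2 (1 - \Phi(u_n)) = O(1), then
\[
  e^{-u_{\mathbf {n}}^{2}/2}
  \;\asymp\; u_{\mathbf {n}}\,\bigl(1-\Phi (u_{\mathbf {n}})\bigr)
  \;\ll\; \frac{u_{\mathbf {n}}}{n_{1}n_{2}},
\]
% and raising this to the slightly inflated power 1 + 1/(\alpha^3 u_n^6)
% (coming from the correlation term in the denominator of the exponent)
% yields the displayed polynomial bound.
```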

Now we consider the term \(S_{4}\). Noting that \(\gamma _{\mathbf {m}}=\gamma _{m_{1}}\gamma _{m_{2}}\) and \(\gamma _{0}=1\), we have

$$\begin{aligned} \delta '':=\sup _{\mathbf {m}\in \mathbf {B}_{\mathbf {n}}}|\gamma _{\mathbf {m}}|\le \sup _{\mathbf {m}\in \mathbf {B}_{\mathbf {n}}}\left( \frac{1}{\log m_{2}}\right) ^{1/2}\le \left( \frac{1}{\log \theta _{\mathbf {n}}}\right) ^{1/2}. \end{aligned}$$

As in Choi [5], we claim that \(\hbox {card}(\mathbf {B}_{\mathbf {n}})=O((n_{2})^{\beta })\). If not, \(|\gamma _{\mathbf {m}}|>\frac{1}{(\log m_{2})^{3}}\) would hold on a set of size larger than \(C(n_{2})^{\beta }\), and thus

$$\begin{aligned} \sum _{m_{2}=1}^{n_{2}}|\gamma _{m_{2}}|^{2}=\sum _{m_{2}=1}^{n_{2}}|\gamma _{(0,m_{2})}|^{2}\ge \sum _{\mathbf {m}\in \mathbf {B}_{\mathbf {n}}}|\gamma _{\mathbf {m}}|^{2}\ge C\frac{(n_{2})^{\beta }}{(\log n_{2})^{6}} \end{aligned}$$

contradicting (4). Hence

$$\begin{aligned} S_{4}= & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}\sum _{\mathbf {j}\in \mathbf {B}_{\mathbf {n}}}\left| \gamma _{\mathbf {j}}\right| \exp \left( -\frac{\frac{1}{2}\left( u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}\right) }{ 1+\left| \gamma _{\mathbf {j}}\right| }\right) \\\le & {} \sup _{\mathbf {1}\le \mathbf {k}\le \mathbf {n}}k_{1}k_{2}(n_{2})^{\beta }\frac{1}{(\log \theta _{\mathbf {n}})^{1/2}}\exp \left( -\frac{u_{\mathbf {k}}^{2}+u_{\mathbf {n}}^{2}}{2(1+\delta '')}\right) \\< & {} (n_{1}n_{2})^{1+\beta -\frac{2}{1+\delta ''}}(u_{\mathbf {n}})^{\frac{2}{1+\delta ''}-1}\\\ll & {} (n_{1}n_{2})^{2-\frac{1}{2\log _{2}^{3}}-\frac{2}{1+\delta ''}}(u_{\mathbf {n}})^{\frac{2}{1+\delta ''}-1}\\\ll & {} (n_{1}n_{2})^{-\varepsilon }, \end{aligned}$$

for some \(\varepsilon >0\). Likewise we can bound the third case \(j_{2}=0\wedge j_{1}>0\). Thus, condition \(D^{*}(u_{\mathbf {n}})\) holds.

Cite this article

Pereira, L., Tan, Z. Almost Sure Convergence for the Maximum of Nonstationary Random Fields. J Theor Probab 30, 996–1013 (2017). https://doi.org/10.1007/s10959-015-0663-3
