
Circular block bootstrap for coefficients of autocovariance function of almost periodically correlated time series


Abstract

In this paper the consistency of the circular block bootstrap for the coefficients of the autocovariance function of almost periodically correlated time series is proved. Pointwise and simultaneous bootstrap equal-tailed confidence intervals for these coefficients are constructed. An application of the results to detecting the second-order significant frequencies is provided. Simulation and real data examples are also presented.
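For readers who want to experiment, the resampling scheme studied here can be sketched in a few lines. This is an illustrative sketch only, not the paper's code: the estimator of \(a(\lambda ,\tau )\) below, its edge conventions, the naive percentile interval, and the toy amplitude-modulated series are all my assumptions.

```python
import numpy as np

def acf_coeff(x, lam, tau):
    # Estimator of a(lambda, tau): n^{-1} sum_t X_t X_{t+tau} exp(-i lambda t).
    # The summation range and normalisation are assumed conventions.
    n = len(x)
    t = np.arange(n - tau)
    return np.sum(x[t] * x[t + tau] * np.exp(-1j * lam * t)) / n

def cbb_sample(x, b, rng):
    # Circular block bootstrap: wrap the series into a circle, draw
    # ceil(n/b) blocks of length b with uniformly random start points,
    # concatenate and trim to length n.
    n = len(x)
    xc = np.concatenate([x, x[:b - 1]])          # circular extension
    starts = rng.integers(0, n, size=int(np.ceil(n / b)))
    return np.concatenate([xc[s:s + b] for s in starts])[:n]

rng = np.random.default_rng(0)
n, b, lam, tau = 512, 32, 2 * np.pi / 8, 1
t = np.arange(n)
# toy periodically correlated series: periodic amplitude modulation
x = (1 + 0.5 * np.cos(2 * np.pi * t / 8)) * rng.standard_normal(n)

boot = np.array([acf_coeff(cbb_sample(x, b, rng), lam, tau) for _ in range(500)])
# naive equal-tailed percentile interval for the real part; the paper's
# intervals are centred at the bootstrap mean, which is omitted here
lo, hi = np.quantile(boot.real, [0.025, 0.975])
```

A simultaneous interval, as in the paper, would replace the two marginal quantiles by quantiles of a maximum over several \((\lambda ,\tau )\) pairs.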

(Figs. 1–7 appear in the published version of the article.)


References

  • Antoni J (2009) Cyclostationarity by examples. Mech Syst Sig Process 23(4):987–1036


  • Antoni J, Randall RB (2006) The spectral kurtosis: application to the vibratory surveillance and diagnostics of rotating machines. Mech Syst Sig Process 20(2):308–331


  • Araujo A, Giné E (1980) The central limit theorem for real and Banach valued random variables. Wiley, New York


  • Besicovitch AS (1932) Almost periodic functions. University Press, Cambridge


  • Dehay D, Dudek A, Leśkow J (2014) Subsampling for continuous-time nonstationary stochastic processes. J Stat Plan Inf 150:142–158


  • Dehay D, Leśkow J (1996) Functional limit theory for the spectral covariance estimator. J Appl Probab 33:1077–1092


  • Doukhan P (1994) Mixing. Properties and examples. In: Lecture notes in statistics, vol 85. Springer, New York

  • Dudek AE, Leśkow J, Maiz S (2014a) Block bootstrap for the autocovariance coefficients of periodically correlated time series. In: Akritas MG, Lahiri SN, Politis DN (eds) Topics in nonparametric statistics. Proceedings of the first conference of the international society for nonparametric statistics, Springer, New York (to appear)

  • Dudek AE, Leśkow J, Politis D, Paparoditis E (2014b) A generalized block bootstrap for seasonal time series. J Time Ser Anal 35:89–114

  • Gardner WA, Napolitano A, Paura L (2006) Cyclostationarity: half a century of research. Signal Process 86(4):639–697


  • Hurd H (1989) Nonparametric time series analysis for periodically correlated processes. IEEE Trans Inf Theory 35:350–359


  • Hurd H (1991) Nonparametric time series analysis for periodically correlated processes. J Multivar Anal 37(1):24–45


  • Hurd H, Leśkow J (1992a) Estimation of the Fourier coefficient functions and their spectral densities for \(\varphi \)-mixing almost periodically correlated processes. Stat Probab Lett 14(4):299–306


  • Hurd H, Leśkow J (1992b) Strongly consistent and asymptotically normal estimation of the covariance for almost periodically correlated processes. Stat Decis 10(3):201–225


  • Kim T (1994) Moment bounds for non-stationary dependent sequences. J Appl Probab 31(3):731–742


  • Künsch H (1989) The jackknife and the bootstrap for general stationary observations. Ann Stat 17:1217–1241


  • Lahiri SN (2003) Resampling methods for dependent data. Springer series in statistics. Springer, New York


  • Lenart Ł (2013) Non-parametric frequency identification and estimation in mean function for almost periodically correlated time series. J Multivar Anal 115:252–269


  • Lenart Ł, Leśkow J, Synowiecki R (2008) Subsampling in testing autocovariance for periodically correlated time series. J Time Ser Anal 29:995–1018


  • Leśkow J, Synowiecki R (2010) On bootstrapping periodic random arrays with increasing period. Metrika 71:253–279


  • Liu R, Singh K (1992) Moving block jackknife and bootstrap capture weak dependence. Exploring the limits of bootstrap. Wiley series in probability and mathematical statistics. Wiley, New York


  • Napolitano A (2012) Generalizations of cyclostationary signal processing: spectral analysis and applications. Wiley-IEEE Press, New York


  • Politis DN, Romano JP (1992) A circular block-resampling procedure for stationary data. Exploring the limits of bootstrap. Wiley series in probability and mathematical statistics. Wiley, New York


  • Rio E (2000) Théorie asymptotique pour des processus aléatoires faiblement dépendants. Mathématiques et Applications. Springer, Berlin


  • Synowiecki R (2007) Consistency and application of moving block bootstrap for nonstationary time series with periodic and almost periodic structure. Bernoulli 13:1151–1178


  • Synowiecki R (2008) Metody resamplingowe w dziedzinie czasu dla niestacjonarnych szeregów czasowych o strukturze okresowej i prawie okresowej [Resampling methods in the time domain for nonstationary time series with periodic and almost periodic structure]. Dissertation, AGH University of Science and Technology, Krakow, Poland. http://winntbg.bg.agh.edu.pl/rozprawy2/10012/full10012


Acknowledgments

The author is grateful to Sofiane Maiz for help with the simulation study and the real data example, and to LASPI (Laboratoire d’Analyse des Signaux et des Processus Industriels) in Roanne, France, for making the real data set available. The author expresses her sincere gratitude to the Editor and an Associate Editor, who helped to improve the presentation of the paper.

Author information

Corresponding author

Correspondence to A. E. Dudek.

Additional information

Research was partially supported by the Polish Ministry of Science and Higher Education and AGH local grant.

Appendix


First we present the definitions and lemmas that will be used several times in the proofs.

Definition 1

(Synowiecki 2008, Definition 2.2) Time series \(\{X_t, t\in \mathbb {Z}\}\), \(\{Y_t, t\in \mathbb {Z}\}\) are called jointly almost periodically correlated (JAPC) if for each \(t\in \mathbb {Z}\), \(\mathrm {E}|X_t|^2<\infty \), \(\mathrm {E}|Y_t|^2<\infty \) and the function \(B_{XY}(t,\tau )=\mathrm {Cov}(X_t,Y_{t+\tau })\) is an almost periodic function of \(t\) for any fixed \(\tau \in \mathbb {Z}\).

Lemma 1

(Synowiecki 2008, Lemma 2.11) Let \(\{X_t, t\in \mathbb {Z}\}\) and \(\{Y_t, t\in \mathbb {Z}\}\) be JAPC. Assume that the covariance function is uniformly summable, i.e. \(\left| B_{XY}(t,\tau )\right| \le c_{\tau },\) where the sequence \(\{c_{\tau }\}_{\tau =0}^{\infty }\) is summable. Then for each sequence \(\{b_n\}\) diverging to infinity

$$\begin{aligned} \sup _{t\in \mathbb {Z}}\left| \mathrm {Cov}\left( \frac{1}{\sqrt{b_n}}\sum _{j=t}^{b_n+t-1}X_j,\frac{1}{\sqrt{b_n}}\sum _{j=t}^{b_n+t-1}Y_j\right) -\sigma _{XY}\right| \rightarrow 0 \quad \text {as}\quad n\rightarrow \infty , \end{aligned}$$

where \(\sigma _{XY}=M_t\left( \sum _{\tau =-\infty }^{\infty }B_{XY}(t,\tau )\right) .\)

In fact, Lemma 1 is an extended version of Lemma 2 from Lenart et al. (2008), which is stated for jointly periodically correlated time series.
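To make Lemma 1 concrete, consider the toy model \(X_t=\sigma (t)\varepsilon _t\) with i.i.d. standard normal \(\varepsilon _t\) and periodic \(\sigma ^2(t)\) (model, period and amplitude are my illustrative choices). Then \(B_{XX}(t,\tau )=\sigma ^2(t)\mathbf{1}_{\{\tau =0\}}\), so \(\sigma _{XX}=M_t(\sigma ^2(t))\), and by independence the variance of the normalised block sum is available in closed form, so the convergence can be checked without simulation:

```python
import numpy as np

def sigma2(t):
    # periodic variance: sigma^2(t) = 1 + 0.5 cos(2 pi t / 8), so M_t(sigma^2) = 1
    return 1.0 + 0.5 * np.cos(2 * np.pi * np.asarray(t) / 8)

def sup_gap(b):
    # For independent summands,
    #   Var(b^{-1/2} sum_{j=t}^{t+b-1} X_j) = b^{-1} sum_{j=t}^{t+b-1} sigma^2(j),
    # so the quantity inside Lemma 1's supremum is computable exactly;
    # taking t over one period (here 8, padded to 64) realises the sup.
    return max(abs(sigma2(np.arange(t, t + b)).mean() - 1.0) for t in range(64))

gaps = [sup_gap(b) for b in (11, 101, 1001)]  # shrinks as the block length grows
```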

Proof of Theorem 2

First we show the consistency for the real part of \(\widehat{a}^*_n\left( \lambda ,\tau \right) \) i.e.

$$\begin{aligned}&\sup _{x\in \mathbb {R}}\left| P\left( \sqrt{n}\left( \mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( a\left( \lambda ,\tau \right) \right) \right) \le x\right) \right. \\&\left. -P^*\left( \sqrt{n}\left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right) \right) \le x \right) \right| \mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$

Without loss of generality we assume that the sample size \(n\) is an integer multiple of the block length \(b\) (\(n=lb\), \(l\in \mathbb {N}\)). The proof proceeds in two steps. First we define a new bootstrap estimator of \(\mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) \); in the second step the consistency of this new estimator is shown.

By \(\widetilde{Z}_{t,b}\) we denote

$$\begin{aligned} \widetilde{Z}_{t,b}=X_t X_{t+\tau }\cos (\lambda t)+\dots +X_{t+b-\tau -1} X_{t+b-1}\cos (\lambda (t+b-\tau -1)), \end{aligned}$$

so this is a part of the sum (5) that can be obtained from elements contained in the block \(B_t=\{X_t,\dots ,X_{t+b-1}\}\).

The new bootstrap estimator is defined as

$$\begin{aligned} \mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) =\frac{1}{n}\sum _{k=0}^{l-1}\widetilde{Z}^*_{1+bk,b}, \end{aligned}$$

where \(\widetilde{Z}^*_{j,b}\) are conditionally independent and have the common distribution

$$\begin{aligned} P^*\left( \widetilde{Z}^*_{j,b}=\widetilde{Z}_{t,b}\right) =\frac{1}{n}\quad \text { for } t=1,\dots ,n. \end{aligned}$$
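In code, \(\widetilde{Z}_{t,b}\) and one draw of the auxiliary estimator \(\mathfrak {R}(\widetilde{a}^*_n(\lambda ,\tau ))\) could look as follows. This is a sketch under my own conventions: 0-based indexing and a circular extension of the sample so that \(\widetilde{Z}_{t,b}\) is defined for every \(t\).

```python
import numpy as np

def Z_tilde(x, t, b, lam, tau):
    # Part of the sum defining R(a_n(lam, tau)) that is computable from the
    # block {X_t, ..., X_{t+b-1}} alone (indices wrap around circularly).
    n = len(x)
    i = np.arange(b - tau)
    return np.sum(x[(t + i) % n] * x[(t + i + tau) % n] * np.cos(lam * (t + i)))

def a_tilde_star_real(x, b, lam, tau, rng):
    # One bootstrap draw: l = n/b conditionally i.i.d. copies of Z~, each
    # uniform on {Z~_{t,b} : t = 0, ..., n-1}, summed and divided by n.
    n = len(x)
    starts = rng.integers(0, n, size=n // b)     # n = l b assumed
    return sum(Z_tilde(x, t, b, lam, tau) for t in starts) / n
```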

Note that \(\widetilde{a}^*_n\left( \lambda ,\tau \right) \) is not equal to (6), because the sum does not contain summands based on observations belonging to two different blocks. In the first step we show the asymptotic equivalence of \(\mathfrak {R}(\widetilde{a}^*_n\left( \lambda ,\tau \right) )\) and \(\mathfrak {R}(\widehat{a}^*_n\left( \lambda ,\tau \right) )\), i.e.

$$\begin{aligned} \sqrt{n}\left| \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right) + \mathrm {E}^*\left( \mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) \right) \right| \mathop {\longrightarrow }\limits ^{P^*}0. \end{aligned}$$

By Chebyshev’s inequality, it is enough to show that

$$\begin{aligned} n\mathrm {Var}^*\left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) \right) \mathop {\longrightarrow }\limits ^{P^*}0. \end{aligned}$$
(10)

Note that

$$\begin{aligned} \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) =\frac{1}{n}\sum _{t\in C^*_{b,\tau }} X^*_t X^*_{t+\tau } \cos (\lambda t^*), \end{aligned}$$
(11)

where \(C^*_{b,\tau }\) is the set of indices \(t\) such that \(X^*_t\) and \(X^*_{t+\tau }\) belong to different blocks of size \(b\), i.e.

$$\begin{aligned} C^*_{b,\tau }=\left\{ t\in \{1,\dots ,n\}: X_t^*\in B^*_i,\ X_{t+\tau }^*\in B^*_{i+b} \text { for } i\in \{1,b+1,\dots ,(l-1)b+1\}\right\} . \end{aligned}$$

By \(B^*_i\) we denote the block of the form \(B^*_i=(X^*_i,\dots ,X^*_{i+b-1}).\) The set \(C^*_{b,\tau }\) contains \(\tau (l-1)\) elements.
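The cardinality \(\tau (l-1)\) can be checked by direct enumeration (a sketch with 0-based indices; a pair \((t,t+\tau )\) straddles a boundary iff the two indices fall in different length-\(b\) blocks, and each of the \(l-1\) interior boundaries contributes \(\tau \) such \(t\)):

```python
def straddling_count(n, b, tau):
    # number of t with both t and t + tau inside the sample but in
    # different consecutive length-b blocks (n = l b assumed)
    assert n % b == 0 and 0 < tau < b
    return sum(1 for t in range(n - tau) if t // b != (t + tau) // b)

# agrees with tau * (l - 1) for every admissible tau
checks = [straddling_count(n, b, tau) == tau * (n // b - 1)
          for n, b in [(24, 6), (40, 8)] for tau in range(1, b)]
```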

Now condition (10) is equivalent to

$$\begin{aligned} \frac{1}{n}\mathrm {Var}^*\left( \sum _{t\in C^*_{b,\tau }} X^*_t X^*_{t+\tau } \cos (\lambda t^*)\right) \mathop {\longrightarrow }\limits ^{P^*}0. \end{aligned}$$

The left-hand side of the expression above can be rewritten as

$$\begin{aligned}&\frac{2}{n}\sum _{k=0}^{l-2}\sum _{k'\ge k}\mathrm {Cov}^*\left( \sum _{i=1}^{\tau } X^*_{(k+1)b-\tau +i} X^*_{(k+1)b+i} \cos (\lambda ((k+1)b-\tau +i)^*), \right. \nonumber \\&\quad \left. \sum _{j=1}^{\tau }X^*_{(k'+1)b-\tau +j} X^*_{(k'+1)b+j} \cos (\lambda ((k'+1)b-\tau +j)^*)\right) \!. \end{aligned}$$
(12)

In this sum the covariances are non-zero only when \(k'=k\) or \(k'=k+1.\) In the first case we have

$$\begin{aligned}&\mathrm {Var}^*\left( \sum _{i=1}^{\tau } X^*_{(k+1)b-\tau +i} X^*_{(k+1)b+i} \cos (\lambda ((k+1)b-\tau +i)^*)\right) \\&\quad =\mathrm {E}^*\left( \sum _{i=1}^{\tau } X^*_{(k+1)b-\tau +i} X^*_{(k+1)b+i} \cos \left( \lambda ((k+1)b-\tau +i)^*\right) \right) ^2\\&\qquad -\left( \mathrm {E}^*\left( \sum _{i=1}^{\tau } X^*_{(k+1)b-\tau +i} X^*_{(k+1)b+i} \cos (\lambda ((k+1)b-\tau +i)^*)\right) \right) ^2\\&\quad =\frac{1}{n}\sum _{j_1=1}^{n}\frac{1}{n}\sum _{j_2=1}^{n}\left( \sum _{i=1}^{\tau } X_{j_1+b-\tau +i-1} X_{j_2+i-1} \cos \left( \lambda (j_1+b-\tau +i-1)\right) \right) ^2\\&\qquad -\left( \frac{1}{n}\sum _{j_3=1}^{n}\frac{1}{n}\sum _{j_4=1}^{n}\left( \sum _{i=1}^{\tau } X_{j_3+b-\tau +i-1} X_{j_4+i-1} \cos (\lambda (j_3+b-\tau +i-1))\right) \right) ^2. \end{aligned}$$

Under assumption (ii) the absolute expected value of the last expression can be bounded from above by \(C_1\tau ^2,\) where \(C_1\) is some positive constant independent of \(n\). For \(k'=k+1\) we get

$$\begin{aligned}&\mathrm {E}^*\left( \sum _{i_1=1}^{\tau } X^*_{(k+1)b-\tau +i_1} X^*_{(k+1)b+i_1} \cos \left( \lambda ((k+1)b-\tau +i_1)^*\right) \right. \\&\qquad \left. \cdot \sum _{i_2=1}^{\tau } X^*_{(k+2)b-\tau +i_2} X^*_{(k+2)b+i_2} \cos \left( \lambda ((k+2)b-\tau +i_2)^*\right) \right) \\&\quad =\frac{1}{n}\sum _{j_1=1}^{n}\frac{1}{n}\sum _{j_2=1}^{n}\frac{1}{n}\sum _{j_3=1}^{n}\left( \sum _{i_1=1}^{\tau } X_{j_1+b-\tau +i_1-1} X_{j_2+i_1-1} \cos \left( \lambda (j_1+b-\tau +i_1-1)\right) \right. \\&\qquad \left. \cdot \sum _{i_2=1}^{\tau } X_{j_2+b-\tau +i_2-1} X_{j_3+i_2-1} \cos \left( \lambda (j_2+b-\tau +i_2-1)\right) \right) \end{aligned}$$

and again the absolute expected value of the right-hand side is less than or equal to \(C_2\tau ^2,\) where \(C_2\) is some positive constant independent of \(n\). Finally, the absolute expected value of (12) is \(O(1/b)\) and hence we get (10). Thus, by the conditional Slutsky theorem (see Lahiri 2003, p. 77), it is enough to prove that

$$\begin{aligned}&\sup _{x\in \mathbb {R}}\left| P\left( \sqrt{n}\left( \mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( a\left( \lambda ,\tau \right) \right) \right) \le x\right) \right. \nonumber \\&\quad \left. -P^*\left( \sqrt{n}\left( \mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\left( \mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) \right) \right) \le x \right) \right| \mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$
(13)

Without loss of generality we consider the transformed \(\widetilde{Z}_{t,b}\) variable

$$\begin{aligned} Z_{t,b}=\widetilde{Z}_{t,b}-\mathrm {E}\left( \widetilde{Z}_{t,b}\right) \end{aligned}$$

and its bootstrap version

$$\begin{aligned} Z^*_{t,b}=\widetilde{Z}^*_{t,b}-\mathrm {E}^*\left( \widetilde{Z}^*_{t,b}\right) . \end{aligned}$$

By Corollary 2.4.8 in Araujo and Giné (1980) to get condition (13) we need to show that for any \(\nu >0\)

$$\begin{aligned}&\sum _{k=0}^{l-1}P^*\left( \frac{1}{\sqrt{n}}\left| Z^*_{1+kb,b}\right| >\nu \right) \mathop {\longrightarrow }\limits ^{P}0, \end{aligned}$$
(14)
$$\begin{aligned}&\sum _{k=0}^{l-1}\mathrm {E}^*\left( \frac{1}{\sqrt{n}}Z^*_{1+kb,b}\mathbf{1}_{\left\{ \left| Z^*_{1+kb,b}\right| >\sqrt{n}\nu \right\} }\right) \mathop {\longrightarrow }\limits ^{P}0, \end{aligned}$$
(15)
$$\begin{aligned}&\sum _{k=0}^{l-1}\mathrm {Var}^*\left( \frac{1}{\sqrt{n}}Z^*_{1+kb,b}\mathbf{1}_{\left\{ \left| Z^*_{1+kb,b}\right| \le \sqrt{n}\nu \right\} }\right) \mathop {\longrightarrow }\limits ^{P}\sigma _1^2. \end{aligned}$$
(16)

In order to prove (14), notice that

$$\begin{aligned} \sum _{k=0}^{l-1}P^*\left( \frac{1}{\sqrt{n}}\left| Z^*_{1+kb,b}\right| >\nu \right) =\sum _{k=0}^{l-1}\frac{1}{n}\sum _{s=1}^{n}\mathbf{1}_{\left\{ \left| Z_{s,b}\right| >\sqrt{n}\nu \right\} }=\frac{1}{b}\sum _{s=1}^{n}\mathbf{1}_{\left\{ \left| Z_{s,b}\right| >\sqrt{n}\nu \right\} } \end{aligned}$$

and

$$\begin{aligned}&\mathrm {E}\left| \frac{1}{b}\sum _{s=1}^{n}\mathbf{1}_{\left\{ \left| Z_{s,b}\right| >\sqrt{n}\nu \right\} }\right| \le \frac{1}{b}\sum _{s=1}^{n} P\left( \left| Z_{s,b}\right| >\sqrt{n}\nu \right) \le \frac{1}{b}\sum _{s=1}^{n}\frac{\mathrm {E}\left| Z_{s,b}\right| ^3}{n^{3/2}\nu ^3}. \end{aligned}$$

Since under the assumptions of the theorem \(X_t X_{t+\tau }\) has uniformly bounded moments of order \(4+\delta \) and an \(\alpha \)-mixing function \(\alpha (k)=\alpha _X(\max \{0,k-\tau \})\), we obtain

$$\begin{aligned} \sup _s\mathrm {E}\left| \frac{1}{\sqrt{b-\tau }}\left( Z_{s,b}-\mathrm {E}\left( Z_{s,b}\right) \right) \right| ^4 <\infty . \end{aligned}$$

For more details see Kim (1994).

Additionally, using Lemma A.5. from Synowiecki (2007) it follows that

$$\begin{aligned} \sup _s\mathrm {E}\left| \frac{1}{\sqrt{b-\tau }}Z_{s,b}\right| ^4 <\infty . \end{aligned}$$
(17)

Thus,

$$\begin{aligned}&\mathrm {E}\left| \frac{1}{b}\sum _{s=1}^{n}\mathbf{1}_{\left\{ \left| Z_{s,b}\right| >\sqrt{n}\nu \right\} }\right| \le \frac{1}{b}\sum _{s=1}^{n}\frac{\left( b-\tau \right) ^{3/2}\mathrm {E}\left| \frac{1}{\sqrt{b-\tau }}Z_{s,b}\right| ^3}{n^{3/2}\nu ^3}\le D_1\sqrt{\frac{b}{n}}=\frac{D_1}{\sqrt{l}}, \end{aligned}$$

where \(D_1\) is some positive constant independent of \(n\). This completes the proof of (14).

To get (15), notice that

$$\begin{aligned} \sum _{k=0}^{l-1}\mathrm {E}^*\left( \frac{1}{\sqrt{n}}Z^*_{1+kb,b}\mathbf{1}_{\left\{ \left| Z^*_{1+kb,b}\right| >\sqrt{n}\nu \right\} }\right)&= \sum _{k=0}^{l-1}\frac{1}{n}\sum _{t=1}^{n}\frac{1}{\sqrt{n}}Z_{t,b}\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} }\\&= \frac{\sqrt{b-\tau }}{b\sqrt{n}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b}\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} }. \end{aligned}$$

Using Hölder’s inequality, the expected absolute value of the last expression is less than or equal to

$$\begin{aligned} \frac{\sqrt{b-\tau }}{b\sqrt{n}}\sum _{t=1}^{n}\sqrt{\mathrm {E}\left| \frac{1}{\sqrt{b-\tau }}Z_{t,b}\right| ^2 P\left( \left| Z_{t,b}\right| >\sqrt{n}\nu \right) }. \end{aligned}$$
(18)

Additionally, using (17) we have

$$\begin{aligned} P\left( \left| Z_{t,b}\right| >\sqrt{n}\nu \right) \le \frac{(b-\tau )^{3/2}\mathrm {E}\left| \frac{1}{\sqrt{b-\tau }}Z_{t,b}\right| ^3}{n^{3/2}\nu ^3}\le \frac{D_2}{l^{3/2}}, \end{aligned}$$
(19)

where \(D_2\) is some positive constant independent of \(n\). Thus, (18) is at most

$$\begin{aligned} D_3\frac{n\sqrt{b}}{\sqrt{n}bl^{3/4}}=\frac{D_3}{l^{1/4}}, \end{aligned}$$

where \(D_3\) is some positive constant independent of \(n\). This completes the proof of condition (15).

To get (16), notice that

$$\begin{aligned}&\sum _{k=0}^{l-1}\mathrm {Var}^*\left( \frac{1}{\sqrt{n}}Z^*_{1+kb,b}\mathbf{1}_{\left\{ \left| Z^*_{1+kb,b}\right| \le \sqrt{n}\nu \right\} }\right) =\sum _{k=0}^{l-1}\frac{1}{n}\sum _{t=1}^{n}\frac{1}{n}Z_{t,b}^2\mathbf{1}_{\left\{ \left| Z_{t,b}\right| \le \sqrt{n}\nu \right\} }\\&\quad -\sum _{k=0}^{l-1}\left( \frac{1}{n}\sum _{t=1}^{n}\frac{1}{\sqrt{n}}Z_{t,b} \mathbf{1}_{\left\{ \left| Z_{t,b}\right| \le \sqrt{n}\nu \right\} } \right) ^2 =\frac{b-\tau }{bn}\sum _{t=1}^{n}\frac{1}{b-\tau }Z_{t,b}^2\mathbf{1}_{\left\{ \left| Z_{t,b}\right| \le \sqrt{n}\nu \right\} }\\&\quad -\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\!\frac{1}{\sqrt{b-\tau }}Z_{t,b} \mathbf{1}_{\left\{ \left| \!Z_{t,b}\!\right| \le \sqrt{n}\nu \right\} } \right) ^2\!. \end{aligned}$$

Denote the summands on the right-hand side by \(I\) and \(II\), respectively. First we show the convergence of \(II\) to zero in probability. We have

$$\begin{aligned} P\left( |II|>\varepsilon \right)&\le \frac{1}{\varepsilon }\mathrm {E}\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b} \mathbf{1}_{\left\{ \left| Z_{t,b}\right| \le \sqrt{n}\nu \right\} } \right) ^2\\&\le \frac{2}{\varepsilon }\mathrm {E}\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b} \right) ^2\\&\quad +\frac{2}{\varepsilon }\mathrm {E}\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b}\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} } \right) ^2. \end{aligned}$$

The second term can be bounded as follows:

$$\begin{aligned}&\sqrt{\mathrm {E}\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b}\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} } \right) ^2}\\&\quad \le \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\sqrt{\mathrm {E}\left( \frac{1}{\sqrt{b-\tau }}Z_{t,b}\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} } \right) ^2}\\&\quad \le \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\sqrt{\frac{b-\tau }{n\nu ^2}\mathrm {E}\left( \frac{1}{\sqrt{b-\tau }}Z_{t,b} \right) ^4}=O\left( \frac{1}{\sqrt{l}}\right) . \end{aligned}$$

For the first term we have

$$\begin{aligned}&\mathrm {E}\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b} \right) ^2\\&\quad =\mathrm {Var}\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b} \right) +\mathrm {E}^2\left( \frac{\sqrt{b-\tau }}{n\sqrt{b}}\sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b} \right) \\&\quad \le \frac{b-\tau }{n^2 b}\sum _{t=1}^{n}\sum _{s=0}^{n-1}\left| \mathrm {Cov}\left( \frac{Z_{t,b}}{\sqrt{b-\tau }},\frac{Z_{t+s,b}}{\sqrt{b-\tau }}\right) \right| + \frac{b-\tau }{n^2 b}\mathrm {E}^2\left( \sum _{t=1}^{n}\frac{Z_{t,b}}{\sqrt{b-\tau }} \right) \\&\quad \le \frac{8(b-\tau )}{n^2 b}\sum _{t=1}^{n}\sum _{s=0}^{n-1}\left( \sup _{t}\mathrm {E}\left| \frac{1}{\sqrt{b-\tau }}Z_{t,b} \right| ^4\right) ^{1/2}\alpha _X^{1/2}\left( \max \{0,s-b+1\}\right) \\&\qquad +\frac{b-\tau }{n^2 b}\mathrm {E}^2\left( \sum _{t=1}^{n}\frac{1}{\sqrt{b-\tau }}Z_{t,b} \right) . \end{aligned}$$

The last inequality is a consequence of the inequality for \(\alpha \)-mixing sequences with bounded fourth moments (see Doukhan 1994).

For some \(\zeta >0.5\), the first summand on the right-hand side is bounded by

$$\begin{aligned}&\frac{D_4(b-\tau )}{n b}\left( \sum _{s=b}^{n-1}\alpha _X^{1/2}\left( s-b+1\right) +\sum _{s=0}^{b-1}\alpha _X^{1/2}(0)\right) \!\le \!\frac{D_4(b-\tau )}{n b}\sum _{s=1}^{n-1}\!\frac{1}{s^\zeta }+O\left( \!\frac{1}{l}\!\right) , \end{aligned}$$

where \(D_4\) is some positive constant independent of \(n\). From the Toeplitz lemma we get the convergence of \(1/n \sum _{s=1}^{n-1}1/s^\zeta \) to zero.
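The Cesàro convergence invoked here is easy to confirm numerically: for \(\zeta >0.5\) the partial sums \(\sum _{s=1}^{n-1}s^{-\zeta }\) grow like \(n^{1-\zeta }\), so the average is \(O(n^{-\zeta })\). A quick check (with \(\zeta =0.6\), my choice):

```python
import numpy as np

zeta = 0.6  # any zeta > 0.5 works
avgs = [np.sum(np.arange(1.0, n) ** -zeta) / n for n in (10**3, 10**4, 10**5)]
# the partial sum behaves like n^{1-zeta}/(1-zeta), hence avgs decay like n^{-zeta}
```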

Moreover, because

$$\begin{aligned} \mathrm {E}\left( \frac{1}{b-\tau }Z_{t,b} \right) =O\left( \frac{1}{b-\tau }\right) \end{aligned}$$
(20)

(see for example Synowiecki 2007), the second summand of the right-hand side is \(O(1/b)\) and simultaneously we obtain the convergence of \(II\) to zero in probability.

To show the convergence of \(I\) to \(\sigma _1^2\) in probability we use the decomposition

$$\begin{aligned} I=\frac{b-\tau }{bn}\sum _{t=1}^{n}\frac{1}{b-\tau }Z_{t,b}^2 -\frac{b-\tau }{bn}\sum _{t=1}^{n}\frac{1}{b-\tau }Z_{t,b}^2\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} }. \end{aligned}$$

We show that the first and the second term on the right-hand side tend to \(\sigma _1^2\) and to zero in probability, respectively. For the first term we use Lemma 5 from Leśkow and Synowiecki (2010) for the array \(\{Q_{n,t}, t=1,\dots ,n\}\), where

$$\begin{aligned} Q_{n,t}=\frac{1}{b-\tau }Z_{t,b}^2. \end{aligned}$$

As a consequence of Lemma A.6 from Synowiecki (2007) and (20), we have

$$\begin{aligned} \frac{1}{n}\sum _{t=1}^{n}\mathrm {E}\left( Q_{n,t}\right) =\frac{1}{n}\sum _{t=1}^{n}\left[ \mathrm {Var}\left( \frac{1}{\sqrt{b-\tau }}Z_{t,b}\right) +\mathrm {E}^2\left( \frac{1}{\sqrt{b-\tau }}Z_{t,b}\right) \right] \rightarrow \sigma _1^2. \end{aligned}$$

Moreover, \(\mathrm {E}\left| Q_{n,t}\right| ^2\) is uniformly bounded and the considered array is \(\alpha \)-mixing with \(\alpha _{Q}(\omega )\le \alpha _X\left( \max \{0,\omega -b+1\}\right) ,\) which means that we get the desired convergence to \(\sigma _1^2\).

The expected absolute value of the second term is less than or equal to

$$\begin{aligned} \frac{b-\tau }{bn}\sum _{t=1}^{n}\mathrm {E}\left| \frac{1}{b-\tau }Z_{t,b}^2\mathbf{1}_{\left\{ \left| Z_{t,b}\right| >\sqrt{n}\nu \right\} }\right| . \end{aligned}$$

Using Hölder’s inequality and (19), this expression can be bounded from above by

$$\begin{aligned} \frac{b-\tau }{bn}\sum _{t=1}^{n}\sqrt{\mathrm {E}\left| \frac{Z_{t,b}}{\sqrt{b-\tau }}\right| ^4 P\left( \left| Z_{t,b}\right| >\sqrt{n}\nu \right) }\le \frac{b-\tau }{bn}\sum _{t=1}^{n}\sqrt{\mathrm {E}\left| \frac{Z_{t,b}}{\sqrt{b-\tau }}\right| ^4 \frac{D_2}{l^{3/2}}}, \end{aligned}$$

which is \(O(1/l^{3/4}).\) This means that \(I\) tends to \(\sigma _1^2\) in probability, which gives the consistency of our bootstrap method in the estimation of \(\mathfrak {R}(a(\lambda ,\tau ))\).

The proof for the imaginary part is the same, so it is omitted.

Finally, to finish the proof the Cramér–Wold device needs to be used. It is enough to prove that for any \(b_1, b_2 \in \mathbb {R}\)

$$\begin{aligned}&\sup _{x\in \mathbb {R}}\left| \varPhi \left( x,\sigma ^2\right) -P^*\left( \sqrt{n}\left( \widetilde{aw}^*_n\left( \lambda ,\tau \right) -\mathrm {E}^*\left( \widetilde{aw}^*_n\left( \lambda ,\tau \right) \right) \right) \le x \right) \right| \mathop {\longrightarrow }\limits ^{P}0, \end{aligned}$$

where \(\widetilde{aw}^*_n=b_1\mathfrak {R}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) +b_2 \mathfrak {I}\left( \widetilde{a}^*_n\left( \lambda ,\tau \right) \right) \) and \(\varPhi \) is the cumulative distribution function of the zero mean normal distribution with variance \(\sigma ^2=b_1^2\sigma ^2_1+2b_1b_2\sigma _{12}+b_2^2\sigma _2^2\).

Moreover, one may notice that the series \(\{X_t X_{t+\tau } \cos (\lambda t), t \in \mathbb {Z}\}\) and \(\{X_t X_{t+\tau } \sin (\lambda t), t \in \mathbb {Z}\}\) are JAPC (see Definition 1). Applying the same arguments as in the real-part case together with Lemma 1 gives the desired convergence. \(\square \)

Proof of Corollary 1

By the same reasoning as in the proof of Theorem 2 we obtain the CBB consistency. The modified assumption (i), together with the WAP(3) property of the considered time series, is essential to obtain the joint almost periodic correlation of the components of \(\widehat{a}_n\left( \lambda ,\tau \right) \). \(\square \)

Proof of Theorem 3

Without loss of generality it is enough to take \(r=2\). We omit the details, because again the Cramér–Wold device and the same reasoning as in the proof of Theorem 2 need to be used. \(\square \)

Proof of Corollaries 2 and 3

All the proofs follow the same reasoning as the previous results. The only change is that condition (17) is now a consequence of Theorem 2.2 in Rio (2000). \(\square \)

Proof of Theorem 4

Without loss of generality we present the proof in the one-dimensional case, only for \(\mathfrak {R}(a(\lambda ,\tau ))\). The reasoning in the multidimensional case is analogous after replacing the absolute value by a norm.

Following arguments provided by Lahiri (2003, Theorem 4.1) and Synowiecki (2008, Theorem 3.5) we decompose

$$\begin{aligned}&\sqrt{n}\left( H\left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right) -H\left( \mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right) \right) \\&\quad =\sqrt{n}H'\left( \mathfrak {R}\left( a\left( \lambda ,\tau \right) \right) \right) \left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right) +R_n^*. \end{aligned}$$

Using the conditional Slutsky theorem (see Lahiri 2003, Lemma 4.1), it is enough to show that for any \(\varepsilon >0\)

$$\begin{aligned} P^*\left( \left| R_n^*\right| >\varepsilon \right) \mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$

Put \(t_n=\left| \mathrm {E}^* \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( a\left( \lambda ,\tau \right) \right) \right| \) and \(T_n^*=\sqrt{n}\left( \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right) \).

Using the Lagrange mean value theorem, \(R_n^*\) can be bounded from above on the set

$$\begin{aligned} \left\{ t_n\le \eta \right\} \cap \left\{ \left| \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right| \le \eta \right\} \end{aligned}$$

by

$$\begin{aligned} C\left( \left| \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right| ^{\kappa }+t_n^{\kappa }\right) \left| T_n^*\right| , \end{aligned}$$

where \(C\) is some positive constant independent of \(n\) and \(\zeta \in [0,1].\)

Moreover,

$$\begin{aligned} \left\{ \left| R_n^*\right| >\varepsilon \right\}&\subset \left\{ \left| R_n^*\right| >\varepsilon ,\left| \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right| \le \eta ,t_n\le \eta \right\} \\&\!\!\!\!\!\!\cup \left\{ \left| \mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \right| >\eta \right\} \cup \left\{ t_n>\eta \right\} . \end{aligned}$$

Finally,

$$\begin{aligned} P^*\left( \left| R_n^*\right| >\varepsilon \right)&\le P^*\left( C n^{-\frac{\kappa }{2}}\left| T_n^*\right| ^{1+\kappa }>\frac{\varepsilon }{2}\right) \\&+P^*\left( C t_n^{\kappa }\left| T_n^*\right| >\frac{\varepsilon }{2}\right) + P^*\left( n^{-\frac{1}{2}}\left| T_n^*\right| >\eta \right) +\mathbf{1}_{t_n>\eta }. \end{aligned}$$

For more details see the proof of Theorem 3.5 in Synowiecki (2008).

Since

$$\begin{aligned} \left\{ C t_n^{\kappa }\left| T_n^*\right| >\frac{\varepsilon }{2}\right\} \subset \left\{ t_n^{\kappa }>\frac{\eta }{\log n}\right\} \cup \left\{ \left| T_n^*\right| >\frac{\varepsilon }{2C\eta }\log n\right\} , \end{aligned}$$

we have

$$\begin{aligned}&P^*\left( \left| R_n^*\right| >\varepsilon \right) \le 3 P^*\left( \left| T_n^*\right| >C\left( \varepsilon ,\kappa ,\eta \right) \log n\right) +2\cdot \mathbf{1}_{t_n>\frac{\eta }{\log n}}. \end{aligned}$$

From the consistency of our bootstrap method the first summand on the right-hand side is \(o_P(1).\) Additionally, we have

$$\begin{aligned}&\mathrm {E}\left( \mathbf{1}_{\left\{ t_n>\frac{\eta }{\log n}\right\} }\right) \le P\left( t_n>\frac{\eta }{\log n}\right) \le \frac{\log ^2 n}{\eta ^2}\mathrm {E}\left( t_n^2\right) \end{aligned}$$

and

$$\begin{aligned} \mathrm {E}\left( t_n^2\right) \le 2 \mathrm {E}\left| \mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \!-\!\mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) \right| ^2+ 2 \mathrm {E}\left| \mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( a\left( \lambda ,\tau \right) \right) \right| ^2. \end{aligned}$$

The second term on the right-hand side is \(O(1/n).\) Moreover, assume that \(n=lb+r,\) where \(0\le r<b\). Using the notation from the proof of Theorem 2 we have

$$\begin{aligned} \mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) \!=\!\frac{1}{n}\mathrm {E}^*\!\left( \!\sum _{t\in C^*_{b,\tau }}\!X_t^*X^*_{t+\tau }\!\cos \left( \!\lambda t^*\!\right) \!\right) \!+\!\frac{1}{n}\mathrm {E}^*\!\left( \!\sum _{t \not \in C^*_{b,\tau }}\!X_t^*X^*_{t+\tau }\!\cos \left( \!\lambda t^*\!\right) \!\right) . \end{aligned}$$

The set \(C^*_{b,\tau }\) contains \(\tau l\) elements if \(\tau \le r\), and \(\tau (l-1)+r\) elements otherwise. Without loss of generality we restrict our considerations to the first case.

Using the notation introduced in the proof of Theorem 2 we have

$$\begin{aligned}&\frac{1}{n}\mathrm {E}^*\left( \sum _{t \not \in C^*_{b,\tau }}X_t^*X^*_{t+\tau }\cos \left( \lambda t^*\right) \right) =\frac{1}{n}\sum _{k=0}^{l-1}\mathrm {E}^*\left( \widetilde{Z}^*_{1+bk,b}\right) +\frac{1}{n}\mathrm {E}^*\left( \widetilde{Z}^*_{1+bl,r}\right) \\&\quad =\frac{1}{n}\sum _{k=0}^{l-1}\frac{1}{n}\sum _{t=1}^{n}\widetilde{Z}_{t,b}+\frac{1}{n}\frac{1}{n}\sum _{s=1}^{n}\widetilde{Z}_{s,r}=\frac{1}{n}\sum _{s=1}^{n} \mathfrak {R}\left( \widetilde{a}_n^s\left( \lambda ,\tau \right) \right) , \end{aligned}$$

where \(\mathfrak {R}\left( \widetilde{a}_n^s\left( \lambda ,\tau \right) \right) =\frac{1}{n}\left( \sum _{k=0}^{l-1}\widetilde{Z}_{s+kb,b}+\widetilde{Z}_{s+lb,r}\right) \).

Moreover,

$$\begin{aligned}&\frac{1}{n}\mathrm {E}^*\left( \sum _{t\in C^*_{b,\tau }}X_t^*X^*_{t+\tau }\cos \left( \lambda t^*\right) \right) \\&\quad =\frac{1}{n}\sum _{k=0}^{l-1}\mathrm {E}^*\left( \sum _{i=1}^{\tau } X^*_{(k+1)b-\tau +i} X^*_{(k+1)b+i} \cos (\lambda ((k+1)b-\tau +i)^*)\right) \\&\quad =\frac{1}{n}\sum _{k=0}^{l-1}\frac{1}{n}\sum _{j_1=1}^{n}\frac{1}{n}\sum _{j_2=1}^{n}\left( \sum _{i=1}^{\tau } X_{j_1+b-\tau +i-1} X_{j_2+i-1} \cos (\lambda (j_1+b-\tau +i-1))\right) . \end{aligned}$$

Since \(X_t\) has uniformly bounded eighth moments we get

$$\begin{aligned}&\mathrm {E}\left| \mathrm {E}^*\mathfrak {R}\left( \widehat{a}^*_n\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) \right| ^2\le 2 \mathrm {E}\left| \frac{1}{n}\sum _{s=1}^{n} \mathfrak {R}\left( \widetilde{a}_n^s\left( \lambda ,\tau \right) \right) -\mathfrak {R}\left( \widehat{a}_n\left( \lambda ,\tau \right) \right) \right| ^2\\&\qquad + 2 \mathrm {E}\left| \frac{1}{n^3}\sum _{k=0}^{l-1}\!\sum _{j_1=1}^{n}\sum _{j_2=1}^{n}\left( \sum _{i=1}^{\tau } X_{j_1+b-\tau +i-1} X_{j_2+i-1} \cos (\!\lambda (j_1+b-\tau +i-1)\!)\right) \!\right| ^2\\&\quad =2 \mathrm {E}\left| \frac{1}{n}\sum _{s=1}^{n}\frac{1}{n}\sum _{k=0}^{l-1}\!\sum _{i=1}^{\tau }\! X_{s+(k+1)b-\tau +i}X_{s+(k+1)b+i}\! \cos (\!\lambda (s+(k+1)b-\tau +i)\!)\right| ^2\\&\qquad + O\left( \frac{l^2}{n^2}\right) = O\left( \frac{l^2}{n^2}\right) + O\left( \frac{l^2}{n^2}\right) , \end{aligned}$$

which means that

$$\begin{aligned} \mathbf{1}_{\left\{ t_n>\frac{\eta }{\log n}\right\} }=o_P(1). \end{aligned}$$

This completes the proof of the theorem. \(\square \)

Cite this article

Dudek, A.E. Circular block bootstrap for coefficients of autocovariance function of almost periodically correlated time series. Metrika 78, 313–335 (2015). https://doi.org/10.1007/s00184-014-0505-9
