
On change-points tests based on two-samples U-Statistics for weakly dependent observations

Abstract

We study change-points tests based on U-statistics for absolutely regular observations. Our method avoids some technical assumptions on the data and the kernel. The asymptotic properties of the U-statistics are studied under the null hypothesis, under fixed alternatives and under a sequence of local alternatives. The asymptotic distributions of the test statistics under the null hypothesis and under the local alternatives are given explicitly and the tests are shown to be consistent. A small set of simulations is done for evaluating the performance of the tests in detecting changes in the mean, variance and autocorrelation of some simple time series.
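
To fix ideas, the following minimal sketch (ours, not the authors' simulation code) computes the scan statistic underlying such tests, \(\sup _{\lambda }|n^{-3/2}\sum _{i\le [n\lambda ]}\sum _{j>[n\lambda ]}h(X_i,X_j)|\), for the illustrative kernel \(h(x,y)=y-x\), which targets a change in mean; the AR(1) data-generating process and all function names are assumptions made for the example.

```python
import numpy as np

def u_stat_scan(x, h):
    """sup_k |n^{-3/2} sum_{i<=k} sum_{j>k} h(x_i, x_j)| over k = 1, ..., n-1."""
    n = len(x)
    z = np.empty(n - 1)
    for k in range(1, n):
        # two-sample U-statistic comparing x[:k] with x[k:]
        z[k - 1] = np.sum(h(x[:k, None], x[None, k:]))
    return np.max(np.abs(z)) / n**1.5

rng = np.random.default_rng(0)

def ar1(n, rho=0.5, shift=0.0, tau=None):
    """AR(1) path (weakly dependent), with an optional mean shift after time tau."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    if tau is not None:
        x[tau:] += shift
    return x

h_mean = lambda xi, xj: xj - xi                           # kernel for a change in mean
print(u_stat_scan(ar1(200), h_mean))                      # null: no change
print(u_stat_scan(ar1(200, shift=1.0, tau=100), h_mean))  # change at n/2
```

Large values of the statistic, relative to null quantiles, indicate a change; a kernel such as \(h(x,y)=y^2-x^2\) would target the variance instead.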

References

  • Amano T (2012) Asymptotic optimality of estimating function estimator for CHARN model. Adv Decis Sci

  • Bardet J-M, Kengne W (2014) Monitoring procedure for parameter change in causal time series. J Multivar Anal 125:204–221

  • Bardet J-M, Wintenberger O (2009) Asymptotic normality of the quasi-maximum likelihood estimator for multidimensional causal processes. Ann Stat 37(5B):2730–2759

  • Bardet J-M, Kengne W, Wintenberger O (2012) Multiple breaks detection in general causal time series using penalized quasi-likelihood. Electron J Stat 6:435–477

  • Bhattacharyya GK, Johnson R (1968) Nonparametric tests for shifts at an unknown time point. Ann Math Stat 39:1731–1743

  • Bhattacharya P, Zhou H (2017) Nonparametric stopping rules for detecting small changes in location and scale families. In: From statistics to mathematical finance. Springer, Cham

  • Billingsley P (1999) Convergence of probability measures, 2nd edn. Wiley, New York

  • Chen KM, Cohen A, Sackrowitz H (2011) Consistent multiple testing for change points. J Multivar Anal 102:1339–1343

  • Chernoff H, Zacks S (1964) Estimating the current mean of a normal distribution which is subjected to changes in time. Ann Math Stat 35:999–1018

  • Ciuperca G (2011) A general criterion to determine the number of change-points. Stat Probab Lett 81(8):1267–1275

  • Csörgő M, Horváth L (1987) Nonparametric tests for the changepoint problem. J Stat Plann Inference 17:1–9

  • Csörgő M, Horváth L (1988) Invariance principles for changepoint problems. J Multivar Anal 27:151–168

  • Dehling H, Fried R, Garcia I, Wendler M (2015) Change-point detection under dependence based on two-sample U-statistics. In: Asymptotic laws and methods in stochastics. Springer, New York

  • Dehling H, Rooch A, Taqqu M (2013) Non-parametric change-point tests for long-range dependent data. Scand J Stat 40:153–173

  • Dehling H, Franke B, Woerner J (2017a) Estimating drift parameters in a fractional Ornstein–Uhlenbeck process with periodic mean. Stat Inference Stoch Process 20:1–14

  • Dehling H, Rooch A, Taqqu M (2017b) Power of change-point tests for long-range dependent data. Electron J Stat 11:2168–2198

  • Döring M (2010) Multiple change-point estimation with U-statistics. J Stat Plann Inference 140(7):2003–2017

  • Döring M (2011) Convergence in distribution of multiple change point estimators. J Stat Plann Inference 141(7):2238–2248

  • Enikeeva F, Munk A, Werner F (2018) Bump detection in heterogeneous Gaussian regression. Bernoulli 24(2):1266–1306

  • Fotopoulos SB, Jandhyala VK, Tan L (2009) Asymptotic study of the change-point MLE in multivariate Gaussian families under contiguous alternatives. J Stat Plann Inference 139(3):1190–1202

  • Francq C, Zakoïan JM (2012) Strict stationarity testing and estimation of explosive and stationary generalized autoregressive conditional heteroscedasticity models. Econometrica 80(2):821–861

  • Gardner LA (1969) On detecting changes in the mean of normal variates. Ann Math Stat 40:116–126

  • Gombay E (2008) Change detection in autoregressive time series. J Multivar Anal 99(3):451–464

  • Gombay E, Serban D (2009) Monitoring parameter change in AR(p) time series models. J Multivar Anal 100(4):715–725

  • Haccou P, Meelis E, van de Geer S (1988) The likelihood ratio test for a change point in a sequence of independent exponentially distributed random variables. Stoch Process Appl 30:121–139

  • Härdle W, Tsybakov A (1997) Local polynomial estimators of the volatility function in nonparametric autoregression. J Econometrics 81(1):223–242

  • Härdle W, Tsybakov A, Yang L (1998) Nonparametric vector autoregression. J Stat Plann Inference 68(2):221–245

  • Harel M, Puri ML (1989) Limiting behavior of U-statistics, V-statistics and one-sample rank order statistics for nonstationary absolutely regular processes. J Multivar Anal 30:180–204

  • Harel M, Puri ML (1994) Law of the iterated logarithm for perturbed empirical distribution functions evaluated at a random point for nonstationary random variables. J Theor Probab 7(4):831–855

  • Hlávka Z, Hušková M, Meintanis S (2020) Change-point methods for multivariate time-series: paired vectorial observations. Stat Pap 61:1351–1383

  • Horváth L, Hušková M (2005) Testing for changes using permutations of U-statistics. J Stat Plann Inference 128:351–371

  • Huh J (2010) Detection of a change point based on local-likelihood. Statistics 101:1–17

  • Imhof JP (1961) Computing the distribution of quadratic forms in normal variables. Biometrika 48:419–426

  • Kander Z, Zacks S (1966) Test procedures for possible changes in parameters of statistical distributions occurring at unknown time points. Ann Math Stat 37:1196–1210

  • Kengne WC (2012) Testing for parameter constancy in general causal time-series models. J Time Ser Anal 33(3):503–518

  • Khakhubia TG (1987) A limit theorem for a maximum likelihood estimate of the disorder time. Theory Probab Appl 31:141–144

  • Ma L, Grant J, Sofronov G (2020) Multiple change point detection and validation in autoregressive time series data. Stat Pap 61:1507–1528

  • MacNeill I (1974) Tests for change of parameter at unknown times and distributions of some related functionals on Brownian motion. Ann Stat 2:950–962

  • Matthews DA, Farewell VT, Pyke R (1985) Asymptotic score-statistic processes and tests for constant hazard against a changepoint alternative. Ann Stat 13:583–591

  • Meintanis SG (2016) A review of testing procedures based on the empirical characteristic function. S Afr Stat J 50:1–14

  • Mohr M, Selk L (2020) Estimating change points in nonparametric time series regression models. Stat Pap 61:1437–1463

  • Ngatchou-Wandji J (2009) Testing for symmetry in multivariate distributions. Stat Methodol 6:230–250

  • Oodaira H, Yoshihara K (1972) Functional central limit theorems for strictly stationary processes satisfying the strong mixing condition. Kodai Math Semin Rep 24:259–269

  • Page ES (1954) Continuous inspection schemes. Biometrika 41:100–115

  • Page ES (1955) A test for a change in a parameter occurring at an unknown point. Biometrika 42:523–526

  • Pettitt AN (1979) A non-parametric approach to the change-point problem. Appl Stat 28:126–135

  • Phillips P, Durlauf S (1986) Multiple time series regression with integrated processes. Rev Econ Stud 53(4):473–495

  • Pycke J (2001) Une généralisation du développement de Karhunen–Loève du pont brownien [A generalization of the Karhunen–Loève expansion of the Brownian bridge]. C R Acad Sci Ser I 333(7):685–688

  • Rackauskas A, Wendler M (2020) Convergence of U-processes in Hölder spaces with application to robust detection of a changed segment. Stat Pap 61:1409–1435

  • Riesz F, Nagy B (1972) Leçons d'analyse fonctionnelle, 6th edn. Gauthier-Villars, Paris

  • Sen A, Srivastava MS (1975) On tests for detecting changes in mean. Ann Stat 3:98–108

  • Shorack G, Wellner J (1986) Empirical processes with applications to statistics. Wiley Series in Probability and Mathematical Statistics. Wiley, New York

  • Wang Q, Phillips PC (2012) A specification test for nonlinear nonstationary models. Ann Stat 40:727–758

  • Wolfe DA, Schechtman E (1984) Nonparametric statistical procedures for the changepoint problem. J Stat Plann Inference 9:389–396

  • Yang Y, Song Q (2014) Jump detection in time series nonparametric regression models: a polynomial spline approach. Ann Inst Stat Math 66:325–344

  • Yang Q, Li Y-N, Zang Y (2020) Change point detection for nonparametric regression under strongly mixing process. Stat Pap 61:1465–1506

  • Yao YC, Davis RA (1986) The asymptotic behavior of the likelihood ratio statistic for testing a shift in mean in a sequence of independent normal variates. Sankhyā Ser A 48:339–353

  • Yoshihara K (1976) Limiting behavior of U-statistics for stationary absolutely regular processes. Z Wahrscheinlichkeitstheorie Verw Gebiete 35:237–252

  • Zhou Z (2014) Nonparametric specification for non-stationary time series regression. Bernoulli 20(1):78–108


Appendix: proofs of the results

1.1 Preliminary results

In this subsection, we prove some preliminary results needed in the proofs of Theorems 1 and 2.

Proposition 1

Under the conditions of Theorem 1, we have, in probability

$$\begin{aligned} n^{-3/2}\sup _{0\le \lambda \le 1}\bigg | \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j})\bigg | \longrightarrow _{n\rightarrow \infty }0. \end{aligned}$$

Under the conditions of Theorem 2, we have, in probability

$$\begin{aligned} n^{-3/2}\sup _{0\le \lambda \le 1}\bigg | \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_{G^{(n)}}(X_{i},Y_{nj})\bigg | \longrightarrow _{n\rightarrow \infty }0. \end{aligned}$$

Proof

We only prove the first part. This needs two lemmas that we first state and prove.

Lemma 1

Under the conditions of Theorem 1, there exists a constant \(Cst>0\) such that

$$\begin{aligned} {\mathbb {E}}\left\{ \left[ \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j})\right] ^{2} \right\} \le Cst[n\lambda ](n-[n\lambda ]). \end{aligned}$$

Proof of Lemma 1

We can write

$$\begin{aligned} {\mathbb {E}}\left\{ \left[ \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j})\right] ^{2} \right\}\le & {} \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n} {\mathbb {E}}\left\{ \left[ g_F(X_{i},X_{j})\right] ^{2} \right\} \\&+2\sum _{1\le i_{1}<i_{2}\le [n\lambda ]}\sum _{[n\lambda ]+1\le j_{1}<j_{2}\le n} H\left[ (i_{1},j_{1}),(i_{2},j_{2}) \right] \\= & {} A_{n}(\lambda )+2B_{n}(\lambda ). \end{aligned}$$

where

$$\begin{aligned} H\left[ (i_{1},j_{1}),(i_{2},j_{2}) \right]= & {} {\mathbb {E}}\bigg \{ \left[ g_F(X_{i_{1}},X_{j_{1}})-h_{F,1}(X_{i_{1}})-h_{F,2}(X_{j_{1}})+\theta (F,F) \right] \\&\times \left[ g_F(X_{i_{2}},X_{j_{2}})-h_{F,1}(X_{i_{2}})-h_{F,2}(X_{j_{2}})+\theta (F,F)\right] \bigg \}. \end{aligned}$$

From the integrability condition, we have

$$\begin{aligned} \sup _{i,j\in {\mathbb {N}}} {\mathbb {E}}\left\{ \left[ g_F(X_{i},X_{j})\right] ^{2} \right\} \le Cst, \end{aligned}$$

then

$$\begin{aligned} A_{n}(\lambda )\le Cst[n\lambda ](n-[n\lambda ]). \end{aligned}$$

Since

$$\begin{aligned} \int _{{\mathbb {R}}}\left[ g_F(x,y)-h_{F,1}(x)-h_{F,2}(y)+\theta (F,F)\right] dF(x)=0 \end{aligned}$$

from Lemma 1 of Yoshihara (1976) we have the following inequalities:

(a) If \(1\le i_{1}<i_{2}\le [n\lambda ], \ [n\lambda ]+1\le j_{1}<j_{2}\le n\) and \(i_{2}-i_{1}\ge j_{2}-j_{1}\), then

$$\begin{aligned} H\left[ (i_{1},j_{1}),(i_{2},j_{2}) \right] \le Cst \beta ^{\frac{\delta }{2+\delta }}(i_{2}-i_{1}). \end{aligned}$$

Then we deduce

$$\begin{aligned} \sum _{1\le i_{1}<i_{2}\le [n\lambda ]}\sum _{[n\lambda ]+1\le j_{1}<j_{2}\le n} H\left[ (i_{1},j_{1}),(i_{2},j_{2}) \right] \le Cst [n\lambda ](n-[n\lambda ]) \sum _{k=1}^{n}k\beta ^{\frac{\delta }{2+\delta }}(k)\le Cst[n\lambda ](n-[n\lambda ]), \end{aligned}$$

where \(k=i_{2}-i_{1}\).

Indeed, for fixed \(k\), there are at most \([n\lambda ]\) ways to choose \(i_{1}\), and once \(i_{1}\) is chosen, \(i_{2}=i_{1}+k\) is determined. There are at most \(n-[n\lambda ]\) ways to choose \(j_{1}\), and for each \(j_{1}\), \(j_{2}\) must lie in the interval \((j_{1},j_{1}+k]\), which contains exactly \(k\) integers.

(b) Similarly, if \(1\le i_{1}<i_{2}\le [n\lambda ], \ [n\lambda ]+1\le j_{1}<j_{2}\le n\) and \(i_{2}-i_{1}\le j_{2}-j_{1}\), then

$$\begin{aligned} H\left[ (i_{1},j_{1}),(i_{2},j_{2}) \right] \le M_{0}\beta ^{\frac{\delta }{2+\delta }}(j_{2}-j_{1}). \end{aligned}$$

Thus, we deduce that

$$\begin{aligned} B_{n}(\lambda )\le Cst[n\lambda ](n-[n\lambda ]) \end{aligned}$$

and Lemma 1 is proved. \(\square \)

We now define the process \(\{{\mathcal {G}}_{n}(\lambda ),\ 0\le \lambda \le 1\}\) by

$$\begin{aligned} {\mathcal {G}}_{n}(\lambda )=n^{-3/2}\sum _{i=1}^{[n\lambda ]} \sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j}). \end{aligned}$$

Lemma 2

Under the conditions of Theorem 1, we have

$$\begin{aligned} {\mathbb {E}}\Big (\Big | {\mathcal {G}}_{n}(\lambda )-{\mathcal {G}}_{n}(\lambda ^{\prime })\Big |^{2}\Big )\le \frac{Cst}{n}(\lambda -\lambda ^{\prime }), \text{ for } \text{ all } \ 0\le \lambda ^{\prime }\le \lambda \le 1. \end{aligned}$$

Proof of Lemma 2

We can write

$$\begin{aligned} {\mathbb {E}}\big (\left| {\mathcal {G}}_{n}(\lambda )-{\mathcal {G}}_{n}(\lambda ^{\prime })\right| ^{2}\big )\le & {} 2n^{-3}{\mathbb {E}}\left\{ \left[ \sum _{i=1}^{[n\lambda ^{\prime }]}\sum _{j=[n\lambda ^{\prime }]+1}^{[n\lambda ]}g_F(X_{i},X_{j})\right] ^{2} \right\} \\&+2n^{-3}{\mathbb {E}}\left\{ \left[ \sum _{i=1+[n\lambda ^{\prime }]}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j})\right] ^{2} \right\} . \end{aligned}$$

From Lemma 1, we deduce that

$$\begin{aligned} {\mathbb {E}}\left( \left| {\mathcal {G}}_{n}(\lambda )-{\mathcal {G}}_{n}(\lambda ^{\prime })\right| ^{2}\right)\le & {} \frac{Cst}{n^{3}}\big \{[n\lambda ^{\prime }]([n\lambda ]-[n\lambda ^{\prime }]) +([n\lambda ]-[n\lambda ^{\prime }])(n-[n\lambda ^{\prime }])\big \}\\\le & {} \frac{Cst}{n}(\lambda -\lambda ^{\prime }) \end{aligned}$$

and Lemma 2 is proved. \(\square \)

From Lemma 2, we deduce that

$$\begin{aligned} {\mathbb {P}}\left( \left| {\mathcal {G}}_{n}(\lambda )-{\mathcal {G}}_{n}(\lambda ^{\prime })\right| \ge \epsilon \right) \le \frac{Cst}{\epsilon ^{2}n}(\lambda -\lambda ^{\prime }) \end{aligned}$$

for all \(\epsilon >0\). This implies, for \(0\le l_{1}\le l_{2}\le n\) with \(l_{1},l_{2},n\in {\mathbb {N}}\),

$$\begin{aligned} {\mathbb {P}}\left( \left| {\mathcal {G}}_{n}\Big (\frac{l_{2}}{n}\Big )-{\mathcal {G}}_{n}\Big (\frac{l_{1}}{n}\Big )\right| \ge \epsilon \right) \le \frac{Cst}{\epsilon ^{2}n^{2}}(l_{2}-l_{1})\le \frac{Cst}{\epsilon ^{2}n^{\frac{5}{3}}}(l_{2}-l_{1})^{\frac{4}{3}}. \end{aligned}$$

Consider the partial sum process defined by \(S_{0}=0\) and \(S_{i}=\sum _{j=1}^{i}A_{j}\) where \(A_{j}={\mathcal {G}}_{n}(\frac{j}{n})-{\mathcal {G}}_{n}(\frac{j-1}{n})\) if \(1\le j\le n-1\) and 0 otherwise. It results that \(S_{i}={\mathcal {G}}_{n}(\frac{i}{n})\).

The last inequality is equivalent to

$$\begin{aligned} {\mathbb {P}}\big (\left| S_{l_{2}}-S_{l_{1}}\right| \ge \epsilon \big )\le \frac{Cst}{\epsilon ^{2}}\left( \frac{l_{2}-l_{1}}{n^{\frac{5}{4}}}\right) ^{\frac{4}{3}}. \end{aligned}$$

From Theorem 10.2 of Billingsley (1999), we easily deduce that

$$\begin{aligned} {\mathbb {P}}\Big (\max _{1\le i\le n-1}\left| S_{i}\right| \ge \epsilon \Big )\le \frac{Cst}{\epsilon ^{2}}\left( \frac{n}{n^{\frac{5}{4}}}\right) ^{\frac{4}{3}}= \frac{Cst}{\epsilon ^{2}n^{\frac{1}{3}}} \end{aligned}$$

which implies that, in probability,

$$\begin{aligned} n^{-3/2}\sup _{0\le \lambda \le 1}\bigg | \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j})\bigg | \longrightarrow _{_{n\rightarrow \infty }}0. \end{aligned}$$

This completes the proof of Proposition 1. \(\square \)

We need the following result proved by Oodaira and Yoshihara (1972).

Let \(\xi _1,\xi _2,\ldots ,\xi _n,\ldots \) be a strictly stationary sequence of zero-mean random variables, and let

$$\begin{aligned} \sigma _{*}^{2}={\mathbb {E}}(\xi _1^{2})+2\sum _{i=1}^{\infty } {\mathbb {E}} \left( \xi _1\xi _{i+1} \right) . \end{aligned}$$

Proposition 2

Assume that \( {\mathbb {E}} \left( \left| \xi _{i}\right| ^{2+\delta } \right) <\infty \) for some positive \(\delta \), and that \(\xi _1,\xi _2,\ldots ,\xi _n,\ldots \) is \(\alpha \)-mixing with mixing rate satisfying

$$\begin{aligned} \sum _{i=1}^{\infty }[\alpha (i)]^{\frac{\delta }{2+\delta }}<\infty . \end{aligned}$$

Then \(\sigma _{*}^{2}<\infty .\)

If \(\sigma _{*}>0\), then the sequence of processes

$$\begin{aligned} S_{n}(\lambda )=\frac{1}{\sigma _{*}\sqrt{n}}\sum _{i=1}^{[n\lambda ]}\xi _{i}, \lambda \in [0,1] \end{aligned}$$

converges weakly to the Wiener measure on \((D,{\mathcal {D}})\), where \({\mathcal {D}}\) is the \(\sigma \)-field of Borel sets for the Skorohod topology.

Proof

See the proof of Theorem 2 of Oodaira and Yoshihara (1972).
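
As a concrete illustration (not part of the original proof), let \(\xi _i\) be a stationary Gaussian AR(1) sequence, \(\xi _{i+1}=\rho \xi _i+\varepsilon _{i+1}\) with \(|\rho |<1\), which is mixing at a geometric rate, so the summability condition above holds; the series defining \(\sigma _{*}^{2}\) can then be summed in closed form:

$$\begin{aligned} {\mathbb {E}}(\xi _1\xi _{1+i})=\rho ^{i}\,{\mathbb {E}}(\xi _1^{2}), \qquad \sigma _{*}^{2}={\mathbb {E}}(\xi _1^{2})\Big (1+2\sum _{i=1}^{\infty }\rho ^{i}\Big )={\mathbb {E}}(\xi _1^{2})\,\frac{1+\rho }{1-\rho }. \end{aligned}$$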

Proposition 3

Under the conditions of Theorem 1, we have

$$\begin{aligned} \left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{[n\lambda ]} \begin{pmatrix} h_{F,1}(X_{i}) \\ h_{F,2}(X_{i}) \end{pmatrix} \right\} _{0\le \lambda \le 1}\longrightarrow _{n\rightarrow \infty } \left\{ \begin{pmatrix} W_{1}(\lambda ) \\ W_{2}(\lambda ) \end{pmatrix} \right\} _{0\le \lambda \le 1}. \end{aligned}$$
(10)

Under the conditions of Theorem 2, we have

$$\begin{aligned} \left\{ \frac{1}{\sqrt{n}}\sum _{i=1}^{[n\lambda ]} \begin{pmatrix} h_{F,1}(X_{i}) \\ h_{G^{(n)},2}(Y_{ni}) \end{pmatrix} \right\} _{0\le \lambda \le 1}\longrightarrow _{n\rightarrow \infty } \left\{ \begin{pmatrix} W_{1}(\lambda ) \\ W_{2}(\lambda ) \end{pmatrix} \right\} _{0\le \lambda \le 1}. \end{aligned}$$
(11)

Proof

We only prove (11). The part relating to (10) involves a sequence which is not a triangular array, and is easier to handle than (11).

For establishing (11), we need a finite-dimensional convergence result and a tightness result.

Starting with the finite-dimensional convergence, by the Cramér–Wold device it suffices to show that for any \(k\in {\mathbb {N}}^{*}\) and any \(a_{j},b_{j},\lambda _{j}\in {\mathbb {R}}\) with \(a_{1}<\cdots <a_{k}\), \(b_{1}<\cdots <b_{k}\), \(0=\lambda _{0}<\lambda _{1}<\cdots <\lambda _{k}=1\), the quantity

$$\begin{aligned} \sum _{j=1}^{k}\frac{1}{\sqrt{n}}\sum _{i=[n\lambda _{j-1}]+1}^{[n\lambda _{j}]}\left[ a_{j}h_{F,1}(X_{i})+b_{j}h_{G^{(n)},2}(Y_{ni}) \right] \end{aligned}$$

converges in distribution to a Gaussian random variable.

For that, we need the following lemma.

Lemma 3

(Harel and Puri 1989) Let \(\{X_{ni}\}\) be a sequence of zero-mean absolutely regular random variables (rv)’s with rates satisfying

$$\begin{aligned} \sum _{n\ge 1}\big [\beta (n)\big ]^{\delta /(2+\delta )} <\infty \ \text{ for } \text{ some } \ \delta > 0. \end{aligned}$$
(12)

Suppose that for any \(\kappa \), there exists a sequence \(\{Y^\kappa _{ni}\}\) of rv’s satisfying (12) such that

$$\begin{aligned} \sup _{n\in {\mathbb {N}}}\max _{0\le i\le n}|Y^\kappa _{ni}|\le B_\kappa < \infty , \end{aligned}$$
(13)

where \(B_\kappa \) is some positive constant

$$\begin{aligned}&\sup _{n\in {\mathbb {N}}}\max _{0\le i\le n} {\mathbb {E}}\big (|X_{ni}-Y^\kappa _{ni}|^{2+\delta }\big )\longrightarrow 0 \ \text{ as } \ \kappa \rightarrow \infty \end{aligned}$$
(14)
$$\begin{aligned}&\frac{1}{n} {\mathbb {E}}\bigg [\Big (\sum ^n_{i=1}X_{ni}\Big )^2\bigg ]\longrightarrow c \ \text{ as } \ n\rightarrow \infty , \end{aligned}$$
(15)

where c is some positive constant

$$\begin{aligned} \frac{1}{n} {\mathbb {E}}\bigg [\Big (\sum ^n_{i=1}\big [Y^\kappa _{ni}-{\mathbb {E}}(Y^\kappa _{ni})\big ]\Big )^2\bigg ]\longrightarrow c_\kappa \ \text{ as } \ n\rightarrow \infty , \end{aligned}$$
(16)

where \(c_\kappa \) is some positive constant, and

$$\begin{aligned} c_\kappa \longrightarrow c \ \text{ as } \ \kappa \rightarrow \infty . \end{aligned}$$
(17)

Then

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum ^n_{i=1}X_{ni} \end{aligned}$$

converges in distribution to the normal distribution with mean 0 and variance c.
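
A small numerical check of the truncation device used below may help the reader; it is an illustration only, with a stand-in process and a stand-in \(\psi \) (a centered function of an AR(1) path), showing that the \((2+\delta )\)-moment appearing in (14) vanishes as the truncation level \(\kappa \) grows.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n, delta = 0.5, 5000, 0.5

# stand-in AR(1) path (approximately stationary after the first steps)
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t]

# stand-in psi: centered, since E[x_t^2] = 1 / (1 - rho^2) at stationarity
psi = x**2 - 1.0 / (1.0 - rho**2)

for kappa in (1.0, 2.0, 5.0, 10.0):
    trunc = np.where(np.abs(psi) <= kappa, psi, 0.0)   # psi^kappa as in the proof
    gap = np.mean(np.abs(psi - trunc) ** (2 + delta))  # empirical analogue of (14)
    print(kappa, gap)                                  # decreases to 0 as kappa grows
```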

Without loss of generality, we take \(k=2\) and \(0=\lambda _{0}<\lambda _{1}<\lambda _{2}=1\), \(a_{1}<a_{2}\), \(b_{1}<b_{2}\).

Assumption (12) readily follows from (5).

Define, for \(j=1,2\),

$$\begin{aligned} \psi _{ni}^{(j)}=a_{j}h_{F,1}(X_{i})+b_{j}h_{G^{(n)},2}(Y_{ni}). \end{aligned}$$

For establishing (15), we need to prove that, as \(n\) tends to infinity,

$$\begin{aligned} {\mathbb {E}} \left[ \left( \frac{1}{\sqrt{n}}\left\{ \sum _{i=1}^{[n\lambda _{1}]} \psi _{ni}^{(1)} +\sum _{i=[n\lambda _{1}]+1}^{n} \psi _{ni}^{(2)}\right\} \right) ^{2} \right] \end{aligned}$$

tends to some positive constant c.

We have

$$\begin{aligned}&{\mathbb {E}}\left\{ \left[ \frac{1}{\sqrt{n}}\left( \sum _{i=1}^{[n\lambda _{1}]}\psi _{ni}^{(1)}+\sum _{i=[n\lambda _{1}]+1}^{n}\psi _{ni}^{(2)}\right) \right] ^{2} \right\} = \frac{1}{n}\bigg \{{\mathbb {E}}\left[ \left( \sum _{i=1}^{[n\lambda _{1}]}\psi _{ni}^{(1)}\right) ^{2} \right] \\&\ \ +2\,{\mathbb {E}}\left[ \left( \sum _{i=1}^{[n\lambda _{1}]} \psi _{ni}^{(1)}\right) \left( \sum _{i=[n\lambda _{1}]+1}^{n}\psi _{ni}^{(2)}\right) \right] +{\mathbb {E}}\left[ \left( \sum _{i=[n\lambda _{1}]+1}^{n}\psi _{ni}^{(2)}\right) ^{2}\right] \bigg \}. \end{aligned}$$

Since the random variables \(\psi _{ni}^{(1)}\) and \(\psi _{ni}^{(2)}\) are centered, we obtain

$$\begin{aligned} {\mathbb {E}}\left[ \bigg (\sum _{i=1}^{[n\lambda _{1}]}\psi _{ni}^{(1)}\bigg )^{2} \right] =[n\lambda _{1}] {\mathbb {E}}\left[ \big (\psi _{n1}^{(1)}\big )^{2} \right] +2\sum _{i=1}^{[n\lambda _{1}]} \sum _{j=1}^{[n\lambda _{1}]-i}{\mathbb {E}}\left[ \big (\psi _{ni}^{(1)}\psi _{n,i+j}^{(1)}\big ) \right] . \end{aligned}$$

From the conditions of Theorem 2, we deduce that \({\mathbb {E}}\big (\big |\psi _{n1}^{(1)}\big |^{2+\delta }\big )<\infty \), which implies that

$$\begin{aligned} \sup _{n, i,j\ge 1}\left| {\mathbb {E}}\left( \psi _{ni}^{(1)}\psi _{n,i+j}^{(1)}\right) \right| \le \beta ^{\frac{\delta }{2+\delta }}(j)\big \{{\mathbb {E}}\left( \big |\psi _{ni}^{(1)}\big |^{2+\delta }\right) \big \}^{\frac{1}{2+\delta }} \big \{{\mathbb {E}}\left( \big |\psi _{n,i+j}^{(1)}\big |^{2+\delta }\right) \big \}^{\frac{1}{2+\delta }}. \end{aligned}$$

We get

$$\begin{aligned} {\mathbb {E}}\left[ \bigg (\sum _{i=1}^{[n\lambda _{1}]}\psi _{ni}^{(1)}\bigg )^{2} \right] \le [n\lambda _{1}] {\mathbb {E}}\left[ \left( \psi _{n1}^{(1)}\right) ^{2} \right] +2[n\lambda _{1}]\sum _{j=1}^{[n\lambda _{1}]}\beta ^{\frac{\delta }{2+\delta }}(j)M^{2}, \end{aligned}$$

where \(M=\sup _{n \ge 1}\big \{ {\mathbb {E}}\big (\big |\psi _{n1}^{(1)}\big |^{2+\delta }\big )\big \} ^{\frac{1}{2+\delta }}\).

It results that

$$\begin{aligned}&\lim _{n\rightarrow \infty }\frac{1}{n}\bigg \{[n\lambda _{1}]{\mathbb {E}}\left[ \big (\psi _{n1}^{(1)}\big )^{2} \right] +2\sum _{i=1}^{[n\lambda _{1}]}\sum _{j=1}^{[n\lambda _{1}]-i}\Big |{\mathbb {E}}\big (\psi _{ni}^{(1)}\psi _{n,i+j}^{(1)}\big )\Big |\bigg \} \nonumber \\&\quad \le \lambda _1\Big \{{\mathbb {E}}\left[ \big (\psi _{n1}^{(1)}\big )^{2}\right] +2\sum _{j=1}^{\infty }\beta ^{\frac{\delta }{2+\delta }}(j)M^{2}\Big \}. \end{aligned}$$
(18)

We also have

$$\begin{aligned} {\mathbb {E}}\left[ \left( \sum _{i=1}^{[n\lambda _{1}]}\psi _{ni}^{(1)}\right) \left( \sum _{i=[n\lambda _{1}]+1}^{n}\psi _{ni}^{(2)}\right) \right] =\sum _{i=1}^{[n\lambda _{1}]}\sum _{j=[n\lambda _{1}]+1}^{n}{\mathbb {E}}\big (\psi _{ni}^{(1)}\psi _{nj}^{(2)}\big ). \end{aligned}$$

From

$$\begin{aligned} \sup _{n \ge 1}\sup _{i,j\ge 1}\left| {\mathbb {E}}\big (\psi _{ni}^{(1)}\psi _{nj}^{(2)}\big )\right| \le \beta ^{\frac{\delta }{2+\delta }}(j-i)MM^{*}, \end{aligned}$$

where \(M^{*}=\sup _{n \ge 1}\big \{ {\mathbb {E}}\big (\big |\psi _{n1}^{(2)}\big |^{2+\delta }\big )\big \} ^{\frac{1}{2+\delta }}\), it results that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^{[n\lambda _{1}]}\sum _{j=[n\lambda _{1}]+1}^{n}\left| {\mathbb {E}}\big (\psi _{ni}^{(1)}\psi _{nj}^{(2)}\big )\right| \le \lambda _{1}\sum _{j=1}^{\infty }\beta ^{\frac{\delta }{2+\delta }}(j)MM^{*}. \end{aligned}$$
(19)

Similarly, we get

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}{\mathbb {E}}\left[ \bigg (\sum _{i=[n\lambda _{1}]+1}^{n}\psi _{ni}^{(2)}\bigg )^{2}\right] \le (1-\lambda _{1})\Big \{{\mathbb {E}}\left[ \big (\psi _{n1}^{(2)}\big )^2 \right] +2\sum _{j=1}^{\infty }\beta ^{\frac{\delta }{2+\delta }}(j)(M^{*})^{2}\Big \}. \end{aligned}$$
(20)

From (18)-(20), we deduce (15).

Now, we turn to proving (14). For all \(i\ge 1\), and for any \(\kappa >0\), define

$$\begin{aligned} \psi _{ni}^{(j),\kappa }= \left\{ \begin{array}{ll} \psi _{ni}^{(j)} &{} \text{ if } \big |\psi _{ni}^{(j)}\big | \le \kappa \\ 0 &{} \text{ if } \big |\psi _{ni}^{(j)}\big | > \kappa , \end{array} \right. \quad j=1,2. \end{aligned}$$

It is immediate that

$$\begin{aligned} \sup _{n\ge 1}\sup _{i\ge 1}\big | \psi _{ni}^{(j),\kappa }\big | \le \kappa <\infty . \end{aligned}$$

It results from the integrability condition in Theorem 2 that the sequences \(\{|\psi _{ni}^{(j)}|^{2+\delta }; \ n,i\ge 1\}\), \(j=1,2\), are uniformly integrable.

Whence

$$\begin{aligned} \sup _{n\ge 1}\sup _{i\ge 1}{\mathbb {E}}\left( \Big | \psi _{ni}^{(j)}-\psi _{ni}^{(j),\kappa }\Big |^{2+\delta }\right) \longrightarrow 0 \ \text { as } \ \kappa \rightarrow \infty , \ j=1,2 \end{aligned}$$

and (14) is proved.

The proof of (16), that is

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbb {E}}\left[ \left( \frac{1}{\sqrt{n}}\left\{ \sum _{i=1}^{[n\lambda _{1}]}\left[ \psi _{ni}^{(1),\kappa }-{\mathbb {E}}\left( \psi _{ni}^{(1),\kappa }\right) \right] +\sum _{i=[n\lambda _{1}]+1}^{n}\left[ \psi _{ni}^{(2),\kappa }-{\mathbb {E}}\left( \psi _{ni}^{(2),\kappa }\right) \right] \right\} \right) ^{2} \right] =c_\kappa , \end{aligned}$$

where \(c_\kappa \) is some positive constant, is similar to that of (15).

It remains to prove (17).

For any \(i\ge 1\) and \(j=1,2\), denote by \(\psi _{i}^{(j),\kappa }\) the counterpart of \(\psi _{ni}^{(j),\kappa }\) obtained by replacing the \(Y_{ni}\)'s with the \(X_i\)'s.

We have

$$\begin{aligned} c_\kappa= & {} \lambda _{1} {\mathbb {E}}\left\{ \left[ \psi _{1}^{(1),\kappa }- {\mathbb {E}}\big (\psi _{1}^{(1),\kappa }\big )\right] ^{2} \right\} +2\lambda _{1}\sum _{i=1}^{\infty } {\mathbb {E}}\left\{ \left[ \psi _{1}^{(1),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(1),\kappa }\big )\right] \left[ \psi _{i+1}^{(1),\kappa }-{\mathbb {E}}\big (\psi _{i+1}^{(1),\kappa }\big )\right] \right\} \\&+\lambda _{1}\sum _{i=1}^{\infty }{\mathbb {E}}\left\{ \left[ \psi _{1}^{(1),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(1),\kappa }\big )\right] \left[ \psi _{i+1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{i+1}^{(2),\kappa }\big )\right] \right\} \\&+(1-\lambda _{1}) {\mathbb {E}}\left\{ \left[ \psi _{1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(2),\kappa }\big )\right] ^{2} \right\} \\&+2(1-\lambda _{1})\sum _{i=1}^{\infty }{\mathbb {E}}\left\{ \left[ \psi _{1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(2),\kappa }\big )\right] \left[ \psi _{i+1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{i+1}^{(2),\kappa }\big )\right] \right\} . \end{aligned}$$

By the Lebesgue dominated convergence theorem, one obtains

$$\begin{aligned}&{\mathbb {E}}\left\{ \left[ \psi _{1}^{(1),\kappa }-{\mathbb {E}}\left( \psi _{1}^{(1),\kappa }\right) \right] ^{2} \right\} \longrightarrow {\mathbb {E}}\left[ \left( \psi _{1}^{(1)}\right) ^{2} \right] \text{ as } \kappa \rightarrow \infty , \\&{\mathbb {E}}\left\{ \left[ \psi _{1}^{(1),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(1),\kappa }\big )\right] \left[ \psi _{i+1}^{(1),\kappa }-{\mathbb {E}}\big (\psi _{i+1}^{(1),\kappa }\big )\right] \right\} \longrightarrow {\mathbb {E}} \left( \psi _{1}^{(1)}\psi _{i+1}^{(1)}\right) \text{ as } \kappa \rightarrow \infty ,\\&{\mathbb {E}}\left\{ \left[ \psi _{1}^{(1),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(1),\kappa }\big )\right] \left[ \psi _{i+1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{i+1}^{(2),\kappa }\big )\right] \right\} \longrightarrow {\mathbb {E}}\left( \psi _{1}^{(1)}\psi _{i+1}^{(2)}\right) \text{ as } \kappa \rightarrow \infty , \\&{\mathbb {E}}\left\{ \left[ \psi _{1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(2),\kappa }\big )\right] ^{2}\right\} \longrightarrow {\mathbb {E}}\left[ \left( \psi _{1}^{(2)}\right) ^{2} \right] \text{ as } \kappa \rightarrow \infty \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}\left\{ \left[ \psi _{1}^{(2),\kappa }-{\mathbb {E}}\big (\psi _{1}^{(2),\kappa }\big )\right] \left[ \psi _{i+1}^{(2),\kappa } -{\mathbb {E}}\big (\psi _{i+1}^{(2),\kappa }\big )\right] \right\} \longrightarrow {\mathbb {E}}\left( \psi _{1}^{(2)} \psi _{i+1}^{(2)} \right) \text{ as } \kappa \rightarrow \infty . \end{aligned}$$

Therefore

$$\begin{aligned} \lim _{\kappa \rightarrow \infty }c_\kappa =c \end{aligned}$$

and (17) is proved. Whence, the finite-dimensional convergence is established.

For proving the tightness, we need the following Lemma.

Lemma 4

(Phillips and Durlauf 1986) Probability measures on a product space are tight iff all the marginal probability measures are tight on the component spaces.

It results from this lemma that it suffices to prove the tightness of each component of the sequence of processes in (11). It is immediate from Proposition 2 that the first component is tight. For the second, define

$$\begin{aligned} {\mathcal {M}}_n(\lambda )=\sigma _{22}^{-1/2}\frac{1}{\sqrt{n}}\sum _{i=1}^{[n\lambda ]} h_{G^{(n)},2}(Y_{ni}). \end{aligned}$$

If \(\lambda _1\le \lambda \le \lambda _2\), from the integrability conditions and condition (5), there exists a constant C such that

$$\begin{aligned} {\mathbb {E}}\left( |{\mathcal {M}}_n(\lambda )-{\mathcal {M}}_n(\lambda _1)|^2|{\mathcal {M}}_n(\lambda _2)-{\mathcal {M}}_n(\lambda )|^2 \right)\le & {} C{1 \over n^2}([n\lambda ]-[n\lambda _1])([n\lambda _2]-[n\lambda ])\\\le & {} C{1 \over n^2}([n\lambda _2]-[n\lambda _1])^2 \\\le & {} C(\lambda _2-\lambda _1)^2. \end{aligned}$$

If \(\lambda _2-\lambda _1\ge 1/n\), the last inequality holds; if \(\lambda _2-\lambda _1<1/n\), then either \(\lambda _1\) and \(\lambda \) lie in the same subinterval \([(i-1)/n,i/n]\), or else \(\lambda \) and \(\lambda _2\) do. In either of these cases, the left-hand side of the last inequality vanishes. From Theorem 13.5 of Billingsley (1999), the process \({\mathcal {M}}_n\) is tight. This ends the proof of Proposition 3. \(\square \)

1.2 Proof of Theorem 1

Using the Hoeffding decomposition, we can write \(Z_{n}(\lambda )\) as

$$\begin{aligned} Z_{n}(\lambda )= & {} n^{-3/2}\sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n} \left[ h_{F,1}(X_{i})+h_{F,2}(X_{j})+g_F(X_{i},X_{j})\right] \nonumber \\= & {} n^{-3/2}\left[ (n-[n\lambda ])\sum _{i=1}^{[n\lambda ]}h_{F,1}(X_{i})+[n\lambda ] \sum _{j=[n\lambda ]+1}^{n}h_{F,2}(X_{j})\right] \nonumber \\&+n^{-3/2}\sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j}). \end{aligned}$$
(21)
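
For readability, we recall the form of the decomposition (4) that this computation assumes (our transcription of the standard two-sample Hoeffding projections; see the authors' definitions for the exact notation): with \(\theta (F,F)=\int \int h(x,y)\,dF(x)dF(y)\),

$$\begin{aligned} h(x,y)=\theta (F,F)+h_{F,1}(x)+h_{F,2}(y)+g_F(x,y), \end{aligned}$$

where \(h_{F,1}(x)=\int h(x,y)\,dF(y)-\theta (F,F)\) and \(h_{F,2}(y)=\int h(x,y)\,dF(x)-\theta (F,F)\) are centered, and the remainder \(g_F\) is degenerate, \(\int g_F(x,y)\,dF(x)=\int g_F(x,y)\,dF(y)=0\), which is what Proposition 1 exploits.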

From Proposition 1, we have

$$\begin{aligned} n^{-3/2}\sup _{0\le \lambda \le 1}\bigg |\sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_F(X_{i},X_{j})\bigg | \longrightarrow _{_{n\rightarrow \infty }}0 \end{aligned}$$

in probability.

Thus, by Slutsky’s lemma, it suffices to show that the sum of the first two terms

$$\begin{aligned} \Big \{n^{-3/2}(n-[n\lambda ])\sum _{i=1}^{[n\lambda ]} h_{F,1}(X_{i})+ n^{-3/2} [n\lambda ] \sum _{j=[n\lambda ]+1}^{n} h_{F,2}(X_{j})\Big \}_{0\le \lambda \le 1} \end{aligned}$$

converges in distribution to the desired limit process.

It results from Proposition 2 that the process

$$\begin{aligned} \Big \{\frac{1}{\sqrt{n}}\sum _{i=1}^{[n\lambda ]}h_{F,1}(X_{i})\Big \}_{0\le \lambda \le 1} \end{aligned}$$

converges weakly to a Brownian motion \(\{W(\lambda )\}_{0\le \lambda \le 1}\).

Proposition 3 yields

$$\begin{aligned} \Big \{\frac{1}{\sqrt{n}}\sum _{i=1}^{[n\lambda ]} \begin{pmatrix} h_{F,1}(X_{i}) \\ h_{F,2}(X_{i}) \end{pmatrix} \Big \}_{0\le \lambda \le 1}\longrightarrow _{n\rightarrow \infty } \Big \{\begin{pmatrix} W_{1}(\lambda ) \\ W_{2}(\lambda ) \end{pmatrix} \Big \}_{0\le \lambda \le 1} \end{aligned}$$

in distribution on the space \((D[0,1])^{2}\).

Now, we consider the mapping defined by

$$\begin{aligned} \begin{pmatrix} x_{1}(\lambda ) \\ x_{2}(\lambda ) \end{pmatrix} \mapsto (1-\lambda )x_{1}(\lambda )+\lambda (x_{2}(1)-x_{2}(\lambda )), \ 0\le \lambda \le 1. \end{aligned}$$

This is a continuous mapping from \((D[0,1])^{2}\) to \(D[0,1]\). Whence,

$$\begin{aligned}&\left\{ n^{-3/2}(n-[n\lambda ])\displaystyle \sum _{i=1}^{[n\lambda ]}h_{F,1}(X_{i})+n^{-3/2}[n\lambda ] \displaystyle \sum _{j=[n\lambda ]+1}^{n}h_{F,2}(X_{j})\right\} _{0\le \lambda \le 1} \longrightarrow _{_{n\rightarrow \infty }} \left\{ Z(\lambda )\right\} _{0\le \lambda \le 1}, \end{aligned}$$

where for any \(\lambda \in [0,1]\),

$$\begin{aligned} Z(\lambda )= (1-\lambda )W_{1}(\lambda )+\lambda [W_{2}(1)-W_{2}(\lambda )]. \end{aligned}$$

Whence, Theorem 1 is proved. \(\square \)
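
In practice, the null quantiles of functionals such as \(\sup _{\lambda }|Z(\lambda )|\) can be approximated by Monte Carlo. The sketch below is ours, not the authors': it simulates the limit process on a grid, assuming \((W_1,W_2)\) is a bivariate Brownian motion whose \(2\times 2\) increment covariance matrix \(\Sigma \), determined by Proposition 3 and estimated separately in applications, is supplied as an input.

```python
import numpy as np

def sup_Z_quantile(Sigma, m=1000, reps=5000, q=0.95, seed=0):
    """Approximate the q-quantile of sup_l |Z(l)| for
    Z(l) = (1 - l) W1(l) + l (W2(1) - W2(l)),
    where (W1, W2) is a bivariate Brownian motion with
    increment covariance Sigma (assumed known here)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    lam = np.arange(1, m + 1) / m                        # grid on (0, 1]
    sups = np.empty(reps)
    for r in range(reps):
        dW = rng.normal(size=(m, 2)) @ L.T / np.sqrt(m)  # Brownian increments
        W = np.cumsum(dW, axis=0)                        # W1, W2 on the grid
        Z = (1 - lam) * W[:, 0] + lam * (W[-1, 1] - W[:, 1])
        sups[r] = np.max(np.abs(Z))
    return np.quantile(sups, q)

# illustrative covariance: unit variances, correlation 0.5
print(sup_Z_quantile(np.array([[1.0, 0.5], [0.5, 1.0]])))
```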

1.3 Proof of Theorem 2

Now we prove Theorem 2. Under the conditions of Theorem 2, we have the following equality

$$\begin{aligned} Z_{n}(\lambda )= & {} Z_{n}^{*}(\lambda )-n^{-3/2}[n\lambda ](n-[n\lambda ])\theta (F,F) \\= & {} n^{-3/2}[n\lambda ](n-[n\lambda ])\theta (F,G^{(n)})-n^{-3/2}[n\lambda ](n-[n\lambda ])\theta (F,F) \\&+\frac{[n\lambda ](n-[n\lambda ])}{n^{3/2}}\frac{1}{[n\lambda ]}\sum _{i=1}^{[n\lambda ]}h_{F,1}(X_{i}) \\&+\frac{[n\lambda ](n-[n\lambda ])}{n^{3/2}}\frac{1}{(n-[n\lambda ])}\sum _{j=[n\lambda ]+1}^{n}h_{G^{(n)},2}(Y_{nj}) \\&+\frac{1}{n^{3/2}}\sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_{G^{(n)}}(X_{i},Y_{nj}). \end{aligned}$$

From Proposition 1, we deduce that

$$\begin{aligned} n^{-3/2}\sup _{0\le \lambda \le 1}\bigg | \sum _{i=1}^{[n\lambda ]}\sum _{j=[n\lambda ]+1}^{n}g_{G^{(n)}}(X_{i},Y_{nj})\bigg | \longrightarrow _{n\rightarrow \infty }0 \end{aligned}$$

in probability.

From Proposition 2, we deduce that

$$\begin{aligned} n^{-1/2}\sum _{i=1}^{[n\lambda ]}h_{F,1}(X_{i}) \end{aligned}$$

converges weakly to the Brownian process \(\{W_1(\lambda )\}_{_{0\le \lambda \le 1}}\) and

$$\begin{aligned} n^{-1/2}\sum _{j=[n\lambda ]+1}^{n}h_{G^{(n)},2}(Y_{nj}) \end{aligned}$$

converges weakly to the process \(\{W_2(1)-W_2(\lambda )\}_{_{0\le \lambda \le 1}}\).

We also have, from the definition of the local alternatives \({\mathcal {H}}_{1,n}\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\Big \{ n^{-3/2}[n\lambda ](n-[n\lambda ])\theta (F,G^{(n)})-n^{-3/2}[n\lambda ](n-[n\lambda ])\theta (F,F)\Big \}=\lambda (1-\lambda )A. \end{aligned}$$

From Proposition 3, we obtain that

$$\begin{aligned}&\left\{ n^{-3/2}(n-[n\lambda ])\displaystyle \sum _{i=1}^{[n\lambda ]}h_{F,1}(X_{i})+n^{-3/2}[n\lambda ] \displaystyle \sum _{j=[n\lambda ]+1}^{n}h_{G^{(n)},2}(Y_{nj})\right\} _{0\le \lambda \le 1}\\&\quad \longrightarrow _{_{n\rightarrow \infty }} \left\{ {{\widetilde{Z}}}(\lambda )\right\} _{0\le \lambda \le 1}, \end{aligned}$$

where for any \(\lambda \in [0,1]\),

$$\begin{aligned} {{\widetilde{Z}}}(\lambda )=(1-\lambda )W_{1}(\lambda )+\lambda (W_{2}(1)-W_{2}(\lambda )). \end{aligned}$$

This establishes Theorem 2. \(\square \)

1.4 Proof of Theorem 3

Let \(1\le [(n+1)t]\le [n\lambda _0]\). Then

$$\begin{aligned} Z_n^*(t)= & {} n^{-3/2}\sum _{1\le i<j\le [n\lambda _0]}h(X_i,X_j) + n^{-3/2}\sum _{i=1}^{[n\lambda _0]}\sum _{j=[n\lambda _0]+1}^nh(X_i,X_j)\\&-n^{-3/2}\bigg [\sum _{1\le i<j\le [(n+1)t]}h(X_i,X_j) +\sum _{[(n+1)t] +1\le i<j\le [n\lambda _0]}h(X_i,X_j)\\&+\sum _{[(n+1)t]+1\le i\le [n\lambda _0]}\sum _{[n\lambda _0]+1\le j\le n}h(X_i,X_j)\bigg ] \\= & {} R_n^{(1)} + R_n^{(2)} - \left( R_n^{(3)} + R_n^{(4)} + R_n^{(5)}\right) . \end{aligned}$$

First we prove that

$$\begin{aligned} n^{-1/2}R_n^{(1)}{\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } \lambda ^2_0\theta (F,F)/2. \end{aligned}$$

From the Hoeffding decomposition (4), we have

$$\begin{aligned} n^{-1/2}R_n^{(1)}= & {} \frac{1}{2n^{2}}U_{[n\lambda _0]}\nonumber \\= & {} \frac{[n\lambda _0]([n\lambda _0]-1)}{2n^2}\theta (F,F)+ \frac{[n\lambda _0]([n\lambda _0]-1)}{n^2}\frac{2}{[n\lambda _0]}\sum _{i=1}^{[n\lambda _0]}h_{F,1}(X_i)\nonumber \\&+ \frac{[n\lambda _0]([n\lambda _0]-1)}{2n^2} U_{[n\lambda _0]}^{(2)}. \end{aligned}$$
(22)
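
The deterministic coefficient in (22) is handled by elementary arithmetic: since \([n\lambda _0]/n\longrightarrow _{n\rightarrow \infty }\lambda _0\),

$$\begin{aligned} \frac{[n\lambda _0]([n\lambda _0]-1)}{2n^{2}}\longrightarrow _{n\rightarrow \infty }\frac{\lambda _0^{2}}{2}, \end{aligned}$$

so the first term of (22) tends to \(\lambda _0^{2}\theta (F,F)/2\), and it remains to show that the two random terms vanish almost surely.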

As \((h_{F,1}(X_{i}))_{i\ge 1}\) is stationary and ergodic, we have

$$\begin{aligned} \frac{1}{[n\lambda _0]}\sum _{i=1}^{[n\lambda _0]}h_{F,1}(X_i){\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } 0. \end{aligned}$$

For any \(\varepsilon >0\), put

$$\begin{aligned} A_{[n\lambda _0]}={\mathbb {P}}\Big (\Big |U_{[n\lambda _0]}^{(2)}\Big |>\varepsilon \Big ). \end{aligned}$$

One has, from the Markov inequality and Lemma 2 of Yoshihara (1976),

$$\begin{aligned} {\mathbb {P}}\Big (\Big |U_{[n\lambda _0]}^{(2)}\Big |>\varepsilon \Big )\le & {} \frac{1}{\varepsilon ^2} {\mathbb {E}}\left[ \left( U_{[n\lambda _0]}^{(2)}\right) ^2 \right] \\= & {} {\mathcal {O}}([n\lambda _0]^{-1-\gamma }), \ \gamma >0, \end{aligned}$$

which implies

$$\begin{aligned} \sum _{n=1}^{\infty }A_{[n\lambda _0]}<\infty . \end{aligned}$$

Then, from the Borel–Cantelli lemma,

$$\begin{aligned} \frac{[n\lambda _0]([n\lambda _0]-1)}{2n^2} U_{[n\lambda _0]}^{(2)}{\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } 0. \end{aligned}$$

Then from (22), we have

$$\begin{aligned} n^{-1/2}R_n^{(1)}{\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } \lambda ^2_0\theta (F,F)/2. \end{aligned}$$

Similarly, we prove

$$\begin{aligned} n^{-1/2}R_n^{(3)}{\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } t^2\theta (F,F)/2 \end{aligned}$$

and

$$\begin{aligned} n^{-1/2}R_n^{(4)}{\mathop {=}\limits ^{ {\mathcal {D}} }}n^{-2}\sum _{1\le i < j \le [n\lambda _0] -[(n+1)t]}h(X_i,X_j){\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty }(t-\lambda _0)^2\theta (F,F)/2. \end{aligned}$$

Now, we establish that

$$\begin{aligned} n^{-1/2}R_n^{(2)}{\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } \lambda _0(1-\lambda _0)\theta (F,G). \end{aligned}$$

From (21), we have

$$\begin{aligned} n^{-1/2}R_n^{(2)}= & {} \frac{1}{n^2}\sum _{i=1}^{[n\lambda _0]}\sum _{j=[n\lambda _0]+1}^n\{\theta (F,G)+ h_{F,1}(X_i)+ h_{G,2}(Y_j)+ g_F(X_i,Y_j)\}\nonumber \\= & {} \frac{[n\lambda _0](n-[n\lambda _0])}{n^{2}}\theta (F,G) +\frac{[n\lambda _0](n-[n\lambda _0])}{n^{2}}\frac{1}{[n\lambda _0]} \sum _{i=1}^{[n\lambda _0]}h_{F,1}(X_{i})\nonumber \\&+\frac{[n\lambda _0](n-[n\lambda _0])}{n^{2}}\frac{1}{(n-[n\lambda _0])} \sum _{j=[n\lambda _0]+1}^{n}h_{G,2}(Y_{j})\nonumber \\&+\frac{1}{n^{2}}\sum _{i=1}^{[n\lambda _0]}\sum _{j=[n\lambda _0]+1}^{n}g_F(X_{i},Y_{j}), \end{aligned}$$
(23)

where we recall that the \(Y_{j}\)’s are random variables with cumulative distribution function G and satisfy (5).

From the ergodic theorem, we have that

$$\begin{aligned} \frac{1}{[n\lambda _0]}\sum _{i=1}^{[n\lambda _0]}h_{F,1}(X_i){\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty } 0. \end{aligned}$$

and

$$\begin{aligned} \frac{1}{(n-[n\lambda _0])}\sum _{j=[n\lambda _0]+1}^{n}h_{G,2}(Y_{j}){\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty }0. \end{aligned}$$

From Lemma 1, we deduce that

$$\begin{aligned} {\mathbb {E}}\left\{ \left[ \frac{1}{n^{2}}\sum _{i=1}^{[n\lambda _0]}\sum _{j=[n\lambda _0]+1}^{n}g_F(X_{i},Y_{j})\right] ^{2} \right\} \le Cst[n\lambda _0](n-[n\lambda _0])n^{-4}. \end{aligned}$$

From Markov inequality, we deduce for any \(\epsilon >0\) that

$$\begin{aligned} {\mathbb {P}}\left( \left| \frac{1}{n^{2}}\sum _{i=1}^{[n\lambda _0]}\sum _{j=[n\lambda _0]+1}^{n}g_F(X_{i},Y_{j})\right| >\epsilon \right) ={\mathcal {O}}(n^{-2}). \end{aligned}$$

Also, by the Borel–Cantelli lemma, one has

$$\begin{aligned} \frac{1}{n^{2}}\sum _{i=1}^{[n\lambda _0]}\sum _{j=[n\lambda _0]+1}^{n}g_F(X_{i},Y_{j}){\mathop {\longrightarrow }\limits ^{ a.s. }}_{n\rightarrow \infty }0. \end{aligned}$$

Similarly, we prove that

$$\begin{aligned} n^{-1/2}R_n^{(5)}{\mathop {\longrightarrow }\limits ^{{\mathbb {P}} }}(\lambda _0-t)(1-\lambda _0)\theta (F,G). \end{aligned}$$
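
For the reader's convenience, collecting the five limits above (the last one holding in probability) gives, for \(t\le \lambda _0\),

$$\begin{aligned} n^{-1/2}Z_n^{*}(t)\longrightarrow & {} \frac{\lambda _0^{2}}{2}\theta (F,F)+\lambda _0(1-\lambda _0)\theta (F,G)-\frac{t^{2}}{2}\theta (F,F)-\frac{(\lambda _0-t)^{2}}{2}\theta (F,F)-(\lambda _0-t)(1-\lambda _0)\theta (F,G)\\= & {} t(\lambda _0-t)\theta (F,F)+t(1-\lambda _0)\theta (F,G). \end{aligned}$$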

These observations clearly imply the first part of (7). The proof of its second part is similar. \(\square \)

Cite this article

Ngatchou-Wandji, J., Elharfaoui, E. & Harel, M. On change-points tests based on two-samples U-Statistics for weakly dependent observations. Stat Papers 63, 287–316 (2022). https://doi.org/10.1007/s00362-021-01242-3
