
Consistent nonparametric tests for detecting gradual changes in the marginals and the copula of multivariate time series

  • Regular Article
  • Published in Statistical Papers

A Publisher Correction to this article was published on 12 September 2019


Abstract

From a series of observations \({\mathbf Y}_1, \ldots , {\mathbf Y}_n\) in \(\mathbb {R}^d\) taken sequentially, a natural question is whether a significant change occurred in their stochastic behavior. The problem has been widely investigated for both univariate and multivariate observations, where the null hypothesis states that \(F_1 = \cdots = F_n\), with \(F_j({\mathbf y}) = \mathrm{P}({\mathbf Y}_j \le {\mathbf y})\). In most of the work done so far, the alternative hypothesis is that of an abrupt change at some unknown time K, i.e. \(F_j = D_1\) for \(j \le K\) and \(F_j = D_2\) for \(j > K\). This assumption is unrealistic in applications where changes tend to occur gradually. In this paper, a more general gradual-change model is proposed that admits the existence of times \(K_1 < K_2\) between which the distribution smoothly changes from \(D_1\) to \(D_2\). A general class of consistent test statistics for the detection of gradual changes is introduced and their large-sample behavior is investigated under a general \(\alpha \)-mixing condition. The proposed framework allows the detection of changes in the marginal series as well as in the copula. Monte Carlo simulations indicate the good sampling properties of the tests, and their usefulness is illustrated on climatic data.
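As a quick illustration, the gradual-change model can be simulated by letting \(F_j\) be the mixture \((1-w_j) D_1 + w_j D_2\) with weights \(w_j\) increasing from 0 to 1 between \(K_1\) and \(K_2\). The linear transition, the Gaussian choices of \(D_1\), \(D_2\) and all parameter values below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def simulate_gradual_change(n, K1, K2, d1_sampler, d2_sampler, rng):
    """Draw Y_1, ..., Y_n with F_j = (1 - w_j) D1 + w_j D2, where the mixture
    weight w_j is 0 up to K1, 1 from K2 on, and linear in between (the linear
    transition is an illustrative choice of smooth change)."""
    j = np.arange(1, n + 1)
    w = np.clip((j - K1) / (K2 - K1), 0.0, 1.0)  # transition weights w_j
    from_d2 = rng.random(n) < w                  # component choice ~ Bernoulli(w_j)
    return np.where(from_d2, d2_sampler(n, rng), d1_sampler(n, rng))

rng = np.random.default_rng(42)
y = simulate_gradual_change(
    n=500, K1=150, K2=350,
    d1_sampler=lambda n, r: r.normal(0.0, 1.0, n),  # D1 = N(0, 1) (illustrative)
    d2_sampler=lambda n, r: r.normal(2.0, 1.0, n),  # D2 = N(2, 1) (illustrative)
    rng=rng,
)
```

Before \(K_1 = 150\) the draws come from \(D_1\), after \(K_2 = 350\) from \(D_2\), with a gradual mixture in between.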


Change history

  • 12 September 2019

    Unfortunately, due to a technical error, the articles published in issues 60:2 and 60:3 received incorrect pagination. Please find here the corrected Tables of Contents. We apologize to the authors of the articles and the readers.

References

  • Ali MM, Giaccotto C (1982) The identical distribution hypothesis for stock market prices: location- and scale-shift alternatives. J Am Stat Assoc 77(377):19–28

  • Antoch J, Hušková M (2001) Permutation tests in change point analysis. Stat Probab Lett 53(1):37–46

  • Aue A, Steinebach J (2002) A note on estimating the change-point of a gradually changing stochastic process. Stat Probab Lett 56(2):177–191

  • Bhattacharyya GK, Johnson RA (1968) Nonparametric tests for shift at an unknown time point. Ann Math Stat 39:1731–1743

  • Bissell AF (1984) The performance of control charts and cusums under linear trend. Appl Stat 33(2):145–151

  • Bücher A, Kojadinovic I (2016) A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing. Bernoulli 22(2):927–968

  • Bücher A, Ruppert M (2013) Consistent testing for a constant copula under strong mixing based on the tapered block multiplier technique. J Multivar Anal 116:208–229

  • Bücher A, Kojadinovic I, Rohmer T, Segers J (2014) Detecting changes in cross-sectional dependence in multivariate time series. J Multivar Anal 132:111–128

  • Bühlmann P (1993) The blockwise bootstrap in time series and empirical processes. Ph.D. thesis, ETH Zürich

  • Carlstein E (1988) Nonparametric change-point estimation. Ann Stat 16(1):188–197

  • Carrasco M, Chen X (2002) Mixing and moment properties of various GARCH and stochastic volatility models. Econom Theory 18(1):17–39

  • Cohen J, Barlow M (2005) The NAO, the AO, and global warming: how closely related? J Clim 18(21):4498–4513

  • Csörgő M, Horváth L (1997) Limit theorems in change-point analysis. Wiley series in probability and statistics. Wiley, Chichester

  • Gan F (1992) Cusum control charts under linear drift. The Statistician 41(1):71–84

  • Genest C, Nešlehová JG (2014) On tests of radial symmetry for bivariate copulas. Stat Papers 55(4):1107–1119

  • Gombay E, Horváth L (1999) Change-points and bootstrap. Environmetrics 10(6):725–736

  • Holmes M, Kojadinovic I, Quessy J-F (2013) Nonparametric tests for change-point detection à la Gombay and Horváth. J Multivar Anal 115:16–32

  • Horváth L, Hušková M (2005) Testing for changes using permutations of U-statistics. J Stat Plan Inference 128(2):351–371

  • Horváth L, Kokoszka P, Steinebach J (1999) Testing for changes in multivariate dependent observations with an application to temperature changes. J Multivar Anal 68(1):96–119

  • Hušková M (1999) Gradual changes versus abrupt changes. J Stat Plan Inference 76(1–2):109–125

  • Inoue A (2001) Testing for distributional change in time series. Econom Theory 17(1):156–187

  • Jones P, Raper S, Wigley T (1986) Southern hemisphere surface air temperature variations: 1851–1984. J Clim Appl Meteorol 25(9):1213–1230

  • Kosorok M (2008) Introduction to empirical processes and semiparametric inference. Springer, New York

  • Lombard F (1983) Asymptotic distributions of rank statistics in the change-point problem. S Afr Stat J 17(1):83–105

  • Lombard F (1987) Rank tests for changepoint problems. Biometrika 74(3):615–624

  • Pettitt A (1979) A non-parametric approach to the change point problem. Appl Stat 28(2):126–135

  • Philipp W, Pinzur L (1980) Almost sure approximation theorems for the multivariate empirical process. Z Wahrsch Verw Gebiete 54(1):1–13

  • Quessy J-F (2016) A general framework for testing homogeneity hypotheses about copulas. Electron J Stat 10(1):1064–1097

  • Quessy J-F, Saïd M, Favre A-C (2013) Multivariate Kendall’s tau for change-point detection in copulas. Can J Stat 41(1):65–82

  • Rémillard B, Scaillet O (2009) Testing for equality between two copulas. J Multivar Anal 100(3):377–386

  • Rio E (2000) Théorie asymptotique des processus aléatoires faiblement dépendants. Mathématiques et Applications. Springer, Berlin

  • Salvadori G, De Michele C, Kottegoda NT, Rosso R (2007) Extremes in nature: an approach using copulas. Springer, Berlin

  • Scaillet O (2005) A Kolmogorov–Smirnov type test for positive quadrant dependence. Can J Stat 33(3):415–427

  • Schechtman E (1982) A nonparametric test for detecting changes in location. Commun Stat Theory Methods 11(13):1475–1482

  • Vogt M, Dette H (2015) Detecting gradual changes in locally stationary processes. Ann Stat 43(2):713–740

  • Wied D, Dehling H, van Kampen M, Vogel D (2014) A fluctuation test for constant Spearman’s rho with nuisance-free limit distribution. Comput Stat Data Anal 76:723–736

  • Zähle H (2014) Qualitative robustness of von Mises statistics based on strongly mixing data. Stat Papers 55(1):157–167

  • Zhang Q, Yang W, Hu S (2014) On Bahadur representation for sample quantiles under \(\alpha \)-mixing sequence. Stat Papers 55(2):285–299

  • Zou C, Liu Y, Qin P, Wang Z (2007) Empirical likelihood ratio test for the change-point problem. Stat Probab Lett 77(4):374–382


Acknowledgements

Two referees are gratefully acknowledged for their suggestions that led to an improvement of this work. Ph.D. student Félix Camirand Lemyre is also gratefully acknowledged for his help on the simulations and for suggesting the recursive formulas for the computation of the test statistics. This research was supported in part by an individual grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) and by the Canadian Statistical Science Institute (CANSSI).

Author information


Corresponding author

Correspondence to Jean-François Quessy.

Appendices

Appendix A: Proofs

1.1 Proof of Proposition 1

The next lemma will be useful in the sequel; its proof is postponed to “Proof of Lemma 1” at the end of this appendix.

Lemma 1

For any \(\mathbf{b} = (b_1, \ldots , b_n) \in \mathbb {R}^n\),

$$\begin{aligned} \langle \varvec{\omega }({\mathbf K}), \mathbf{b} \rangle = { 1 \over K_2-K_1 } \sum _{j=K_1}^{K_2-1} j \left( \bar{b}_{1:j} - \bar{b}_{1:n} \right) , \end{aligned}$$

where \(\bar{b}_{1:j}\) is the mean of \(b_1, \ldots , b_j\).

From Eq. (6),

$$\begin{aligned} \sqrt{n} \, L_{{\mathbf K},n}({\mathbf y}) = { 1 \over K_2 - K_1 } \sum _{j=K_1}^{K_2-1} \mathbb {F}_n \left( {j\over n}, {\mathbf y}\right) , \end{aligned}$$

where for \((s,{\mathbf y}) \in [0,1] \times \mathbb {R}^d\),

$$\begin{aligned} \mathbb {F}_n(s,{\mathbf y}) = { \lfloor ns \rfloor \over \sqrt{n} } \left\{ F_{1:\lfloor ns \rfloor }({\mathbf y}) - F_{1:n}({\mathbf y}) \right\} \end{aligned}$$

is the sequential empirical process. From Theorem 2 of Philipp and Pinzur (1980),

$$\begin{aligned} \sup _{(s,{\mathbf y}) \in [0,1] \times \mathbb {R}^d} \left| \mathbb {F}_n \left( {\lfloor ns \rfloor \over n}, {\mathbf y}\right) - \mathbb {F}(s,{\mathbf y}) \right| {\mathop {\longrightarrow }\limits ^{\mathrm{P}}}0, \end{aligned}$$

where \(\mathbb {F}\) is a tight centered Gaussian process such that \(\mathrm{Cov}\left\{ \mathbb {F}(s,{\mathbf y}), \mathbb {F}(s',{\mathbf y}') \right\} = \{ \min (s,s') - ss' \} \, \varGamma ({\mathbf y},{\mathbf y}')\) for each \((s,{\mathbf y}),(s',{\mathbf y}') \in [0,1] \times \mathbb {R}^d\) and

$$\begin{aligned} \varGamma ({\mathbf y},{\mathbf y}') = \sum _{\ell \in \mathbb {Z}} \mathrm{Cov}\left\{ \mathbb {I}\left( {\mathbf Y}_0 \le {\mathbf y}\right) , \mathbb {I}\left( {\mathbf Y}_\ell \le {\mathbf y}' \right) \right\} . \end{aligned}$$

Letting \(\varvec{\kappa }= (\kappa _1,\kappa _2) \in (0,1)^2\) be such that \({\mathbf K}= \lfloor n\varvec{\kappa } \rfloor \), one can then write

$$\begin{aligned} \sqrt{n} \, L_{{\mathbf K},n}({\mathbf y}) = { n \over \lfloor n\kappa _2 \rfloor - \lfloor n\kappa _1 \rfloor } \int _{\kappa _1}^{\kappa _2} \mathbb {F}_n \left( { \lfloor ns \rfloor \over n }, {\mathbf y}\right) \mathrm{d}s. \end{aligned}$$
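As a numerical sanity check of this representation (a univariate sketch with \(\kappa _i = K_i/n\)), the integral form can be compared with the average \((K_2-K_1)^{-1} \sum _{j=K_1}^{K_2-1} \mathbb {F}_n(j/n,{\mathbf y})\); here \(\mathbb {F}_n\) is coded directly from its definition, and the sample size, change times and evaluation point are illustrative:

```python
import numpy as np

def F_n(Y, j, y):
    # sequential empirical process at (j/n, y): (j/sqrt(n)) * (F_{1:j}(y) - F_{1:n}(y))
    n = len(Y)
    return j / np.sqrt(n) * (np.mean(Y[:j] <= y) - np.mean(Y <= y))

rng = np.random.default_rng(7)
n, K1, K2, y = 100, 30, 70, 0.25       # illustrative values
Y = rng.normal(size=n)

# average form: (1/(K2-K1)) * sum_{j=K1}^{K2-1} F_n(j/n, y)
avg_form = sum(F_n(Y, j, y) for j in range(K1, K2)) / (K2 - K1)

# integral form with kappa_i = K_i/n: midpoint Riemann sum of F_n(floor(ns)/n, y)
# over s in [K1/n, K2/n), using that floor(ns) is constant on each cell [j/n, (j+1)/n)
m = 200                                            # grid points per cell of length 1/n
s = (np.arange(K1 * m, K2 * m) + 0.5) / (n * m)    # cell midpoints
integral = np.mean([F_n(Y, int(n * si), y) for si in s]) * (K2 - K1) / n
int_form = n / (K2 - K1) * integral
```

Because \(\lfloor ns \rfloor \) is piecewise constant, the two forms agree up to floating-point error.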

If one defines

$$\begin{aligned} \mathbb {L}(\varvec{\kappa },{\mathbf y}) = \int _{\kappa _1}^{\kappa _2} { \mathbb {F}(s,{\mathbf y}) \over \kappa _2 - \kappa _1 } \, \mathrm{d}s, \end{aligned}$$

then

$$\begin{aligned} \left| \sqrt{n} \, L_{{\mathbf K},n} - \mathbb {L}(\varvec{\kappa },\cdot ) \right|\le & {} { 1 \over \kappa _2-\kappa _1 } \left| \int _{\kappa _1}^{\kappa _2} \left\{ \mathbb {F}_n \left( {\lfloor ns \rfloor \over n}, \cdot \right) - \mathbb {F}(s,\cdot ) \right\} \mathrm{d}s \right| \\&\quad + \, \left| \int _{\kappa _1}^{\kappa _2} \mathbb {F}_n \left( {\lfloor ns \rfloor \over n}, \cdot \right) \mathrm{d}s \right| ~ \left| { n \over \lfloor n\kappa _2 \rfloor - \lfloor n\kappa _1 \rfloor } - { 1 \over \kappa _2 - \kappa _1 } \right| . \end{aligned}$$

The first summand on the right-hand side is bounded above by

$$\begin{aligned} \sup _{s \in [0,1]} \left| \mathbb {F}_n \left( {\lfloor ns \rfloor \over n}, \cdot \right) - \mathbb {F}(s,\cdot ) \right| . \end{aligned}$$

Since \(\int _{\kappa _1}^{\kappa _2} \mathbb {F}_n(\lfloor ns \rfloor /n,\cdot ) \mathrm{d}s\) is tight, the second summand converges to zero as \(n\rightarrow \infty \). As a consequence,

$$\begin{aligned} \sup _{(\varvec{\kappa },{\mathbf y}) \in \Delta \times \mathbb {R}^d} \left| \sqrt{n} \, L_{{\mathbf K},n}({\mathbf y}) - \mathbb {L}(\varvec{\kappa },{\mathbf y}) \right| {\mathop {\longrightarrow }\limits ^{\mathrm{P}}}0. \end{aligned}$$

For the computation of the covariance structure, note that

$$\begin{aligned} \mathrm{Cov}\left\{ \mathbb {L}(\varvec{\kappa },{\mathbf y}), \mathbb {L}(\varvec{\kappa }',{\mathbf y}') \right\}= & {} \mathrm{E}\left\{ \int _{\kappa _1}^{\kappa _2} { \mathbb {F}(s,{\mathbf y}) \over \kappa _2-\kappa _1 } \, \mathrm{d}s \times \int _{\kappa _1'}^{\kappa _2'} { \mathbb {F}(s',{\mathbf y}') \over \kappa _2'-\kappa _1' } \, \mathrm{d}s' \right\} \\= & {} \int _{\kappa _1}^{\kappa _2} \int _{\kappa _1'}^{\kappa _2'} { \mathrm{E}\left\{ \mathbb {F}(s,{\mathbf y}) \times \mathbb {F}(s',{\mathbf y}') \right\} \over (\kappa _2-\kappa _1) (\kappa _2'-\kappa _1') } \, \mathrm{d}s' \, \mathrm{d}s, \end{aligned}$$

where the last equality follows from Fubini’s Theorem. Since from a simple computation, \(\mathrm{E}\{ \mathbb {F}(s,{\mathbf y}) \times \mathbb {F}(s',{\mathbf y}') \} = \{ \min (s,s') - s s' \} \, \varGamma ({\mathbf y},{\mathbf y}')\),

$$\begin{aligned} \mathrm{Cov}\left\{ \mathbb {L}(\varvec{\kappa },{\mathbf y}), \mathbb {L}(\varvec{\kappa }',{\mathbf y}') \right\}= & {} \left\{ \int _{\kappa _1}^{\kappa _2} \int _{\kappa _1'}^{\kappa _2'} { \left\{ \min (s,s') - s s' \right\} \over (\kappa _2-\kappa _1) (\kappa _2'-\kappa _1') } \, \mathrm{d}s' \, \mathrm{d}s \right\} \varGamma ({\mathbf y},{\mathbf y}') \\= & {} \varLambda (\varvec{\kappa },\varvec{\kappa }') \, \varGamma ({\mathbf y},{\mathbf y}'). \end{aligned}$$

1.2 Proof of Proposition 2

Since by assumption, \(\Psi (rg) = |r| \Psi (g)\), one can write \(\sqrt{n} \, \Psi (L_{{\mathbf K},n}) = \Psi ( \sqrt{n} \, L_{{\mathbf K},n} )\). Hence, in view of the conclusion of Proposition 1 and from the Continuous Mapping Theorem, \(\sqrt{n} \, \Psi (L_{{\mathbf K},n}) \rightsquigarrow \Psi \left\{ \mathbb {L}(\varvec{\kappa },\cdot ) \right\} \). Then, upon noting that

$$\begin{aligned} S_n^{\Psi } = \int _{[0,1]^2} \mathbb {I}\left( \lfloor n\kappa _1 \rfloor < \lfloor n\kappa _2 \rfloor \right) \eta ( \lfloor n\varvec{\kappa } \rfloor / n) \, \Psi ( \sqrt{n} \, L_{\lfloor n\varvec{\kappa } \rfloor ,n}) \, \mathrm{d}\varvec{\kappa }, \end{aligned}$$

it readily follows that

$$\begin{aligned} S_n^{\Psi } \rightsquigarrow \mathbb {S}^{\Psi } = \int _\Delta \eta (\varvec{\kappa }) \, \Psi \left\{ \mathbb {L}(\varvec{\kappa },\cdot ) \right\} \mathrm{d}\varvec{\kappa }. \end{aligned}$$

Similarly,

$$\begin{aligned} T_n^{\Psi } = \sup _{\varvec{\kappa }\in \Delta } \eta (\varvec{\kappa }) \, \Psi ( \sqrt{n} \, L_{\lfloor n\varvec{\kappa } \rfloor ,n} ) \rightsquigarrow \mathbb {T}^{\Psi } = \sup _{\varvec{\kappa }\in \Delta } \eta (\varvec{\kappa }) \, \Psi \left\{ \mathbb {L}(\varvec{\kappa },\cdot ) \right\} . \end{aligned}$$

1.3 Proof of Proposition 3

Suppose \({\mathbf Y}_1, \ldots , {\mathbf Y}_n\) have distributions \(F_1, \ldots , F_n\) that follow the gradual-change model with distributions \(D_1 \ne D_2\) and times of change \({\mathbf G}= (\lfloor n\gamma _1 \rfloor ,\lfloor n\gamma _2 \rfloor ) \in \Delta _n\). Then, for \({\mathbf K}\in \Delta _n\), define the empirical process

$$\begin{aligned} \widetilde{L}_{{\mathbf K},n}({\mathbf y}) = { 1 \over \sqrt{n} } \sum _{j=1}^n \left\{ \omega _j({\mathbf K}) - \bar{\omega }({\mathbf K}) \right\} \left\{ \mathbb {I}({\mathbf Y}_j \le {\mathbf y}) - F_j({\mathbf y}) \right\} . \end{aligned}$$

Using arguments similar to those in the proof of Proposition 1, one can show that \(\widetilde{L}_{{\mathbf K},n}({\mathbf y})\) converges weakly, uniformly in \(\Delta \times \mathbb {R}^d\), to a centered Gaussian process, say \(\widetilde{\mathbb {L}}(\varvec{\kappa },{\mathbf y})\). For \(\mathbf {F} = (F_1, \ldots , F_n)\), one can write

$$\begin{aligned} L_{{\mathbf K},n}({\mathbf y}) = { \widetilde{L}_{{\mathbf K},n}({\mathbf y}) \over \sqrt{n} } + { \langle \varvec{\omega }({\mathbf K}), \mathbf {F}({\mathbf y}) \rangle \over n } \, , \end{aligned}$$

so that in view of (7),

$$\begin{aligned} L_{{\mathbf K},n}({\mathbf y}) {\mathop {\longrightarrow }\limits ^{\mathrm{P}}}\lim _{n\rightarrow \infty } { \langle \varvec{\omega }({\mathbf K}), \mathbf {F}({\mathbf y}) \rangle \over n } = \left\{ D_1({\mathbf y}) - D_2({\mathbf y}) \right\} \varLambda (\varvec{\kappa },\varvec{\gamma }). \end{aligned}$$

As a consequence, \(\Psi (L_{{\mathbf K},n}) {\mathop {\longrightarrow }\limits ^{\mathrm{P}}}\Psi (D_1-D_2) \, \varLambda (\varvec{\kappa },\varvec{\gamma }) > 0\) as \(n \rightarrow \infty \) for each \({\mathbf K}\in \Delta _n\). Therefore, \(\sqrt{n} \, \Psi (L_{{\mathbf K},n}) \longrightarrow + \infty \) in probability and thus the tests based on \(S_n^{\Psi }\) and \(T_n^{\Psi }\) are consistent.

1.4 Proof of Proposition 4

From the proof of Proposition 3 and the fact that \(\Psi (D_1-D_2) > 0\),

$$\begin{aligned} \varvec{\gamma }_n^{\Psi } \overset{\mathrm{P}}{\longrightarrow }\underset{ \varvec{\kappa }\in \Delta }{\mathrm {argsup}} \, { \Psi (D_1-D_2) \, \varLambda (\varvec{\kappa },\varvec{\gamma }) \over \{ \varLambda (\varvec{\kappa },\varvec{\kappa }) \}^{1/2} } = \underset{ \varvec{\kappa }\in \Delta }{\mathrm {argsup}} \, { \varLambda (\varvec{\kappa },\varvec{\gamma }) \over \{ \varLambda (\varvec{\kappa },\varvec{\kappa }) \}^{1/2} } \, . \end{aligned}$$

Since \(\varLambda (\varvec{\kappa },\varvec{\gamma })\) is the limit as \(n \rightarrow \infty \) of \(\langle \varvec{\omega }({\mathbf K}), \varvec{\omega }({\mathbf G}) \rangle / n\), the Cauchy–Schwarz inequality entails \(\varLambda (\varvec{\kappa },\varvec{\gamma }) / \{ \varLambda (\varvec{\kappa },\varvec{\kappa }) \}^{1/2} \le \{ \varLambda (\varvec{\gamma },\varvec{\gamma }) \}^{1/2}\), with equality if and only if \(\varvec{\kappa }= \varvec{\gamma }\). One concludes that \(\varvec{\gamma }_n^{\Psi } \overset{\mathrm{P}}{\longrightarrow }\varvec{\gamma }\), because

$$\begin{aligned} \underset{ \varvec{\kappa }\in \Delta }{\mathrm {argsup}} \, { \varLambda (\varvec{\kappa },\varvec{\gamma }) \over \{ \varLambda (\varvec{\kappa },\varvec{\kappa }) \}^{1/2} } = \varvec{\gamma }. \end{aligned}$$
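This argmax property can be checked numerically. The sketch below approximates \(\varLambda \) by a midpoint Riemann sum of \(\min (s,s') - ss'\) over the relevant rectangle (which equals the normalized double integral) and maximizes over a coarse grid; the value of \(\varvec{\gamma }\) and the grid are illustrative:

```python
import numpy as np

def Lam(k, kp, m=300):
    # midpoint Riemann approximation of Lambda(kappa, kappa'): the mean of
    # min(s, s') - s s' over [k1, k2] x [k1', k2']
    s = k[0] + (np.arange(m) + 0.5) * (k[1] - k[0]) / m
    t = kp[0] + (np.arange(m) + 0.5) * (kp[1] - kp[0]) / m
    S, T = np.meshgrid(s, t, indexing="ij")
    return float(np.mean(np.minimum(S, T) - S * T))

gamma = (0.3, 0.7)                                  # illustrative true times of change
grid = [round(0.1 * i, 1) for i in range(1, 10)]
candidates = [(a, b) for a in grid for b in grid if a < b]
best = max(candidates, key=lambda k: Lam(k, gamma) / np.sqrt(Lam(k, k)))
```

Over this grid, the objective \(\varLambda (\varvec{\kappa },\varvec{\gamma }) / \{ \varLambda (\varvec{\kappa },\varvec{\kappa }) \}^{1/2}\) is maximized at \(\varvec{\kappa }= \varvec{\gamma }\), in line with the Cauchy–Schwarz argument.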

1.5 Proof of Proposition 5

From Bücher and Ruppert (2013), \((\mathbb {F}_n,{\widehat{\mathbb {F}}}_n)\) converges weakly to \((\mathbb {F},{\widetilde{\mathbb {F}}})\), where \({\widetilde{\mathbb {F}}}\) is an independent copy of \(\mathbb {F}\). Then, note that for \(\kappa _1\), \(\kappa _2\) such that \(K_1 = \lfloor n\kappa _1 \rfloor \) and \(K_2 = \lfloor n\kappa _2 \rfloor \),

$$\begin{aligned} {\widehat{L}}_{{\mathbf K},n}({\mathbf y}) = {n \over K_2-K_1 } \int _{\kappa _1}^{\kappa _2} {\widehat{\mathbb {F}}}_n \left( { \lfloor ns \rfloor \over n}, {\mathbf y}\right) \mathrm{d}s. \end{aligned}$$

From arguments similar to those in the proof of Proposition 1, one can conclude that \({\widehat{L}}_{{\mathbf K},n}\) converges weakly to an independent copy of

$$\begin{aligned} \mathbb {L}_{\mathbf K}({\mathbf y}) = {1 \over \kappa _2 - \kappa _1 } \int _{\kappa _1}^{\kappa _2} \mathbb {F}(s,{\mathbf y}) \, \mathrm{d}s. \end{aligned}$$

1.6 Proof of Lemma 1

First note that for all \(j \in \{ 1, \ldots , n \}\),

$$\begin{aligned} \omega _j({\mathbf K}) = \left\{ K_2 - \max (j, K_1) \over K_2 - K_1 \right\} \mathbb {I}(j<K_2). \end{aligned}$$

The remainder of the proof is a computation:

$$\begin{aligned} \langle \varvec{\omega }({\mathbf K}), \mathbf{b} \rangle= & {} \sum _{j=1}^n \left\{ \omega _j({\mathbf K}) - \bar{\omega }({\mathbf K}) \right\} \left( b_j - \bar{b}_{1:n} \right) \\= & {} \sum _{j=1}^n \left( b_j - \bar{b}_{1:n} \right) \omega _j({\mathbf K}) \\= & {} \sum _{j=1}^n \left( b_j - \bar{b}_{1:n} \right) \left\{ K_2 - \max (j, K_1) \over K_2 - K_1 \right\} \mathbb {I}(j<K_2) \\= & {} { 1 \over K_2-K_1 } \sum _{j=1}^n \sum _{i=\max (j,K_1)}^{K_2-1} \left( b_j - \bar{b}_{1:n} \right) \mathbb {I}(j<K_2) \\= & {} { 1 \over K_2-K_1 } \sum _{j=1}^n \sum _{i=1}^n \left( b_j - \bar{b}_{1:n} \right) \mathbb {I}\left\{ \max (j,K_1) \le i< K_2 \right\} \mathbb {I}(j<K_2) \\= & {} { 1 \over K_2-K_1 } \sum _{i=K_1}^{K_2-1} \sum _{j \le i} \left( b_j - \bar{b}_{1:n} \right) \\= & {} { 1 \over K_2-K_1 } \sum _{j=K_1}^{K_2-1} j \left( \bar{b}_{1:j} - \bar{b}_{1:n} \right) . \end{aligned}$$
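The identity of Lemma 1 is easy to verify numerically; the sketch below uses the explicit expression for \(\omega _j({\mathbf K})\) displayed above, with illustrative values of \(n\), \(K_1\), \(K_2\):

```python
import numpy as np

n, K1, K2 = 20, 5, 13                    # illustrative values
rng = np.random.default_rng(0)
b = rng.normal(size=n)

j = np.arange(1, n + 1)
omega = (K2 - np.maximum(j, K1)) / (K2 - K1) * (j < K2)  # weights omega_j(K)

# left-hand side: <omega(K), b> = sum_j (omega_j - mean(omega)) (b_j - mean(b))
lhs = np.sum((omega - omega.mean()) * (b - b.mean()))

# right-hand side: (K2-K1)^{-1} sum_{j=K1}^{K2-1} j (mean(b_{1:j}) - mean(b_{1:n}))
rhs = sum(jj * (b[:jj].mean() - b.mean()) for jj in range(K1, K2)) / (K2 - K1)
```

The two quantities agree up to floating-point error, as the swap of summation order in the proof predicts.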

Appendix B: Explicit expressions

1.1 Test statistics

First define \(A, Z, Z^\star \in \mathbb {R}^{n\times n}\) such that

$$\begin{aligned} A_{ji} = \mathbb {I}({\mathbf Y}_j \le {\mathbf Y}_i), \quad Z_{\ell \cdot } = \sum _{j \le \ell } A_{j\cdot } \quad \text{ and } \quad Z^\star _{\ell \cdot } = \sum _{j \le \ell } j \, A_{j\cdot }. \end{aligned}$$

Then, observe that

$$\begin{aligned} W_{\mathbf K}= \left( \sqrt{n} \, L_{{\mathbf K},n}({\mathbf Y}_1), \ldots , \sqrt{n} \, L_{{\mathbf K},n}({\mathbf Y}_n) \right) = { \varvec{\omega }({\mathbf K}) \, A - \bar{\omega }({\mathbf K}) \, \mathbf{1} \, A \over \sqrt{n} } \,. \end{aligned}$$

One has \(\bar{\omega }({\mathbf K}) \, \mathbf{1} \, A = \bar{\omega }({\mathbf K}) \, Z_{n\cdot }\), where \(\bar{\omega }({\mathbf K}) = (K_1+K_2-1) / (2n)\), and from the definition of \(\varvec{\omega }({\mathbf K})\) and straightforward computations, one obtains

$$\begin{aligned} \varvec{\omega }({\mathbf K}) \, A = \sum _{j=1}^n \omega _j({\mathbf K}) \, A_{j\cdot } = \left( K_2 \, Z_{K_2-1,\cdot } - K_1 \, Z_{K_1,\cdot } \over K_2 - K_1 \right) - \left( Z^\star _{K_2-1,\cdot } - Z^\star _{K_1,\cdot } \over K_2 - K_1 \right) . \end{aligned}$$

The computation of Z and \(Z^\star \) can exploit the fact that \(Z_{1\cdot } = Z^\star _{1\cdot } = A_{1\cdot }\) and for \(\ell \in \{ 2, \ldots , n \}\), \(Z_{\ell \cdot } = Z_{\ell -1,\cdot } + A_{\ell \cdot }\) and \(Z^\star _{\ell \cdot } = Z^\star _{\ell -1,\cdot } + \ell \, A_{\ell \cdot }\). Finally,

$$\begin{aligned} \Psi ^\mathrm{CvM}(\sqrt{n} \, L_{{\mathbf K},n})= & {} \sqrt{ {1 \over n } \sum _{i=1}^n W_{{\mathbf K},i}^2 }, \\ \Psi ^\mathrm{KS}(\sqrt{n} \, L_{{\mathbf K},n})= & {} \max _{1 \le i \le n} \left| W_{{\mathbf K},i} \right| , \\ \Psi ^\mathrm{Kui}(\sqrt{n} \, L_{{\mathbf K},n})= & {} \max _{1 \le i \le n} W_{{\mathbf K},i} - \min _{1 \le i \le n} W_{{\mathbf K},i}. \end{aligned}$$
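These formulas translate directly into vectorized code. In the sketch below, cumulative sums play the role of the recursions for \(Z\) and \(Z^\star \); the function name and the example values are illustrative:

```python
import numpy as np

def gradual_change_stats(Y, K1, K2):
    """Compute Psi^CvM, Psi^KS, Psi^Kui of sqrt(n) L_{K,n} for one K = (K1, K2),
    using the matrices A, Z, Z* defined above."""
    n = Y.shape[0]
    # A[j, i] = 1{ Y_j <= Y_i } componentwise
    A = np.all(Y[:, None, :] <= Y[None, :, :], axis=2).astype(float)
    jcol = np.arange(1, n + 1)[:, None]
    Z = np.cumsum(A, axis=0)            # Z_{l.} = Z_{l-1,.} + A_{l.}
    Zs = np.cumsum(jcol * A, axis=0)    # Z*_{l.} = Z*_{l-1,.} + l A_{l.}
    # omega(K) A via the closed form (1-based row indices -> 0-based offsets)
    wA = (K2 * Z[K2 - 2] - K1 * Z[K1 - 1] - (Zs[K2 - 2] - Zs[K1 - 1])) / (K2 - K1)
    wbar = (K1 + K2 - 1) / (2 * n)
    W = (wA - wbar * Z[n - 1]) / np.sqrt(n)
    cvm = float(np.sqrt(np.mean(W ** 2)))
    ks = float(np.max(np.abs(W)))
    kui = float(W.max() - W.min())
    return cvm, ks, kui

rng = np.random.default_rng(1)
Y = rng.normal(size=(15, 2))            # illustrative bivariate sample
cvm, ks, kui = gradual_change_stats(Y, K1=4, K2=11)
```

The closed form for \(\varvec{\omega }({\mathbf K}) \, A\) can be checked against the direct computation \(\sum _j \{ \omega _j({\mathbf K}) - \bar{\omega }({\mathbf K}) \} A_{j\cdot } / \sqrt{n}\).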

1.2 Multiplier versions

In order to derive expressions for the multiplier versions of the test statistics, let \(A^{(j)} \in \mathbb {R}^{j \times n}\) be the matrix of the first j rows of A and define

$$\begin{aligned} \varvec{\gamma }^{(j)} = \left( { \xi _1 \over \bar{\xi }_{1:j} } - 1, \ldots , { \xi _j \over \bar{\xi }_{1:j} } - 1 \right) . \end{aligned}$$

One can then write

$$\begin{aligned} \left( {\widehat{\mathbb {F}}}_n \left( { j \over n }, {\mathbf Y}_1 \right) , \ldots , {\widehat{\mathbb {F}}}_n \left( { j \over n }, {\mathbf Y}_n \right) \right) = { 1 \over \sqrt{n} } \left( \varvec{\gamma }^{(j)} A^{(j)} - { j \over n } \, \varvec{\gamma }^{(n)} A \right) . \end{aligned}$$

Let \(\widehat{Z}, \widehat{Z}^\star \in \mathbb {R}^{n\times n}\) be such that \(\widehat{Z}_{\ell \cdot } = \varvec{\gamma }^{(\ell )} A^{(\ell )}\) and \(\widehat{Z}^\star _{\ell \cdot } = \sum _{j\le \ell } \varvec{\gamma }^{(j)} A^{(j)}\). With this notation,

$$\begin{aligned} \widehat{W}_{\mathbf K}= \left( {\widehat{L}}_{{\mathbf K},n}({\mathbf Y}_1), \ldots , {\widehat{L}}_{{\mathbf K},n}({\mathbf Y}_n) \right)= & {} { 1 \over \sqrt{n} (K_2-K_1) } \left\{ \widehat{Z}^\star _{K_2 \cdot } - \widehat{Z}^\star _{K_1 \cdot }\right. \\&\quad \left. -\,\left( { 1 \over n } \sum _{j=K_1+1}^{K_2} j \right) \widehat{Z}_{n\cdot } \right\} . \end{aligned}$$

Observe that \(\widehat{Z}_{1\cdot } = \widehat{Z}^\star _{1\cdot } = \varvec{\gamma }^{(1)} A^{(1)}\) and for \(\ell \in \{ 2, \ldots , n \}\), \(\widehat{Z}_{\ell \cdot } = \widehat{Z}_{\ell -1,\cdot } + \varvec{\gamma }^{(\ell )}_\ell \, A_{\ell \cdot }\) and \(\widehat{Z}^\star _{\ell \cdot } = \widehat{Z}^\star _{\ell -1,\cdot } + \widehat{Z}_{\ell \cdot }\). Finally,

$$\begin{aligned} \Psi ^\mathrm{CvM}({\widehat{L}}_{{\mathbf K},n})= & {} \sqrt{ {1 \over n } \sum _{i=1}^n \widehat{W}_{{\mathbf K},i}^2 }, \\ \Psi ^\mathrm{KS}({\widehat{L}}_{{\mathbf K},n})= & {} \max _{1 \le i \le n} \left| \widehat{W}_{{\mathbf K},i} \right| , \\ \Psi ^\mathrm{Kui}({\widehat{L}}_{{\mathbf K},n})= & {} \max _{1 \le i \le n} \widehat{W}_{{\mathbf K},i} - \min _{1 \le i \le n} \widehat{W}_{{\mathbf K},i}. \end{aligned}$$
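A direct (non-recursive) reference implementation of the multiplier statistics can be written from the definitions of \(\varvec{\gamma }^{(j)}\) and \({\widehat{\mathbb {F}}}_n\) alone. The averaging over \(j = K_1, \ldots , K_2-1\) below mirrors the representation of \(\sqrt{n} \, L_{{\mathbf K},n}\) used earlier (indexing conventions may differ by one from the displayed recursion), and the iid exponential multipliers are only an illustrative placeholder for the dependent multipliers required under mixing:

```python
import numpy as np

def multiplier_stats(Y, K1, K2, xi):
    """Multiplier statistics computed directly from the definitions of
    gamma^{(j)} and hat F_n (no recursions); hat L_{K,n} is taken as the
    average of hat F_n(j/n, .) over j = K1, ..., K2 - 1."""
    n = Y.shape[0]
    A = np.all(Y[:, None, :] <= Y[None, :, :], axis=2).astype(float)

    def Fhat_row(j):
        # (1/sqrt(n)) * (gamma^{(j)} A^{(j)} - (j/n) gamma^{(n)} A)
        gj = xi[:j] / xi[:j].mean() - 1.0
        gn = xi / xi.mean() - 1.0
        return (gj @ A[:j] - (j / n) * (gn @ A)) / np.sqrt(n)

    W = sum(Fhat_row(j) for j in range(K1, K2)) / (K2 - K1)
    cvm = float(np.sqrt(np.mean(W ** 2)))
    ks = float(np.max(np.abs(W)))
    kui = float(W.max() - W.min())
    return cvm, ks, kui

rng = np.random.default_rng(3)
Y = rng.normal(size=(30, 2))
xi = rng.exponential(1.0, size=30)       # iid positive multipliers (placeholder)
cvm, ks, kui = multiplier_stats(Y, 10, 22, xi)
```

With constant multipliers, \(\varvec{\gamma }^{(j)} = \mathbf{0}\) and all three statistics vanish, a useful sanity check for any implementation of the recursions above.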


Cite this article

Quessy, JF. Consistent nonparametric tests for detecting gradual changes in the marginals and the copula of multivariate time series. Stat Papers 60, 717–746 (2019). https://doi.org/10.1007/s00362-016-0846-8
