
Gradual change-point analysis based on Spearman matrices for multivariate time series


Abstract

The behavior of a multivariate time series may be such that its underlying joint distribution moves gradually from one distribution to another between unknown times of change. In this context of a possible gradual change, tests of change-point detection in the dependence structure of multivariate series are developed around the associated sequence of Spearman matrices. It is formally established that the proposed test statistics are asymptotically marginal-free under a general strong-mixing assumption, with limiting distributions that can be written as functions of integrated Brownian bridges. Consistent estimators of the pair of times of change, as well as of the before-the-change and after-the-change Spearman matrices, are also proposed. A simulation study examines the sampling properties of the introduced tools, and the methodologies are illustrated on a synthetic dataset.


References

  • Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica, 59(3), 817–858.

  • Aue, A., Steinebach, J. (2002). A note on estimating the change-point of a gradually changing stochastic process. Statistics & Probability Letters, 56(2), 177–191.

  • Bissell, A. F. (1984). The performance of control charts and cusums under linear trend. Journal of the Royal Statistical Society: Series C (Applied Statistics), 33(2), 145–151.

  • Brodsky, B. E., Darkhovsky, B. S. (1993). Nonparametric methods in change-point problems. Mathematics and its Applications, Vol. 243. Kluwer Academic Publishers Group, Dordrecht.

  • Bücher, A., Kojadinovic, I., Rohmer, T., Segers, J. (2014). Detecting changes in cross-sectional dependence in multivariate time series. Journal of Multivariate Analysis, 132, 111–128.

  • Bücher, A., Ruppert, M. (2013). Consistent testing for a constant copula under strong mixing based on the tapered block multiplier technique. Journal of Multivariate Analysis, 116, 208–229.

  • Carlstein, E. (1988). Nonparametric change-point estimation. The Annals of Statistics, 16(1), 188–197.

  • Dehling, H., Vogel, D., Wendler, M., Wied, D. (2017). Testing for changes in Kendall’s tau. Econometric Theory, 33(6), 1352–1386.

  • Dehling, H., Vuk, K., Wendler, M. (2022). Change-point detection based on weighted two-sample U-statistics. Electronic Journal of Statistics, 16(1), 862–891.

  • Fermanian, J. -D., Radulović, D., Wegkamp, M. H. (2004). Weak convergence of empirical copula processes. Bernoulli, 10, 847–860.

  • Gan, F. (1992). Cusum control charts under linear drift. Journal of the Royal Statistical Society: Series D (The Statistician), 41(1), 71–84.

  • Gombay, E., Horváth, L. (1995). An application of \(U\)-statistics to change-point analysis. Acta Universitatis Szegediensis. Acta Scientiarum Mathematicarum, 60(1–2), 345–357.

  • Gombay, E., Horváth, L. (1999). Change-points and bootstrap. Environmetrics, 10(6), 725–736.

  • Hušková, M. (1999). Gradual changes versus abrupt changes. Journal of Statistical Planning and Inference, 76(1–2), 109–125.

  • Hušková, M., Meintanis, S. G. (2006a). Change point analysis based on empirical characteristic functions. Metrika, 63(2), 145–168.

  • Hušková, M., Meintanis, S. G. (2006b). Change-point analysis based on empirical characteristic functions of ranks. Sequential Analysis. Design Methods & Applications, 25(4), 421–436.

  • Inoue, A. (2001). Testing for distributional change in time series. Econometric Theory, 17(1), 156–187.

  • Kander, Z., Zacks, S. (1966). Test procedures for possible changes in parameters of statistical distributions occurring at unknown time points. Annals of Mathematical Statistics, 37, 1196–1210.

  • Kojadinovic, I., Quessy, J. -F., Rohmer, T. (2016). Testing the constancy of Spearman’s rho in multivariate time series. Annals of the Institute of Statistical Mathematics, 68(5), 929–954.

  • Lombard, F. (1987). Rank tests for changepoint problems. Biometrika, 74(3), 615–624.

  • Nasri, B. R., Rémillard, B. N., Bahraoui, T. (2022). Change-point problems for multivariate time series using pseudo-observations. Journal of Multivariate Analysis, 187, 104857.

  • Nelsen, R. B. (2006). An introduction to copulas. Springer Series in Statistics, second edition, Springer, New York.

  • Page, E. S. (1955). A test for a change in a parameter occurring at an unknown point. Biometrika, 42, 523–527.

  • Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33, 1065–1076.

  • Pettitt, A. (1979). A non-parametric approach to the change point problem. Applied Statistics, 28(2), 126–135.

  • Quessy, J. -F. (2019). Consistent nonparametric tests for detecting gradual changes in the marginals and the copula of multivariate time series. Statistical Papers, 60(3), 367–396.

  • Quessy, J. -F., Saïd, M., Favre, A. -C. (2013). Multivariate Kendall’s tau for change-point detection in copulas. Canadian Journal of Statistics, 41(1), 65–82.

  • Rio, E. (2000). Théorie asymptotique des processus aléatoires faiblement dépendants. Mathématiques et Applications. Berlin: Springer-Verlag.

  • Sklar, A. (1959). Fonctions de répartition à \(n\) dimensions et leurs marges. Publications de l’Institut de statistique de l’Université de Paris, 8, 229–231.

  • van der Vaart, A. W., Wellner, J. A. (1996). Weak convergence and empirical processes: With applications to statistics. Springer Series in Statistics. New York: Springer-Verlag.

  • Vogt, M., Dette, H. (2015). Detecting gradual changes in locally stationary processes. The Annals of Statistics, 43(2), 713–740.

  • Wied, D., Dehling, H., van Kampen, M., Vogel, D. (2014). A fluctuation test for constant Spearman’s rho with nuisance-free limit distribution. Computational Statistics & Data Analysis, 76, 723–736.


Acknowledgements

The author acknowledges financial support by individual grants from the Natural Sciences and Engineering Research Council of Canada (NSERC).

Author information

Corresponding author

Correspondence to Jean-François Quessy.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A Proofs of the main results

A.1 Proof of Proposition 1

From the definition of \({\mathscr {S}}_{\ell \ell '}^\textrm{Sp} = (\rho ^\textrm{Sp}_{\ell \ell ',1}, \ldots , \rho ^\textrm{Sp}_{\ell \ell ',T})\) and in view of (4), a direct computation yields

$$\begin{aligned} \varUpsilon _{\ell \ell '}^\textbf{k}= \langle \varvec{\textbf{w}}^\textbf{k}, \mathscr {S}_{\ell \ell '}^\textrm{Sp} \rangle= & {} {1\over T} \sum\nolimits_{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \rho ^\textrm{Sp}_{\ell \ell ',t} \\= & {} {1 \over T} \sum\nolimits_{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \left\{ \left( 1 - w_t^{\textbf{k}_0} \right) \rho ^\textrm{Sp}_{F_{\ell \ell '}} + w_t^{\textbf{k}_0} \rho ^\textrm{Sp}_{G_{\ell \ell '}} \right\} \\= & {} {1\over T} \sum\nolimits_{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \left\{ \rho ^\textrm{Sp}_{F_{\ell \ell '}} + w_t^{\textbf{k}_0} \left( \rho ^\textrm{Sp}_{G_{\ell \ell '}} - \rho ^\textrm{Sp}_{F_{\ell \ell '}} \right) \right\} \\= & {} \left( \rho ^\textrm{Sp}_{G_{\ell \ell '}} - \rho ^\textrm{Sp}_{F_{\ell \ell '}} \right) {1\over T} \sum\nolimits_{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) w_t^{\textbf{k}_0} \\= & {} \left( \rho ^\textrm{Sp}_{G_{\ell \ell '}} - \rho ^\textrm{Sp}_{F_{\ell \ell '}} \right) \left\langle \varvec{\textbf{w}}^\textbf{k}, \varvec{\textbf{w}}^{\textbf{k}_0} \right\rangle . \end{aligned}$$

It readily follows that \(\varUpsilon ^\textbf{k}= (\varSigma _G^\textrm{Sp} - \varSigma _F^\textrm{Sp}) \, \langle \varvec{\textbf{w}}^\textbf{k}, \varvec{\textbf{w}}^{\textbf{k}_0} \rangle \). \(\square \)
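
As a quick numerical illustration of this identity, the sketch below builds a Spearman sequence under the gradual-change model and checks that the centered inner product on the left coincides with \((\rho ^\textrm{Sp}_{G_{\ell \ell '}} - \rho ^\textrm{Sp}_{F_{\ell \ell '}}) \langle \textbf{w}^\textbf{k}, \textbf{w}^{\textbf{k}_0} \rangle \). The piecewise-linear shape chosen for the weight vectors is only an assumption made for this check; the definition (4) of \(\textbf{w}^\textbf{k}\) is not reproduced in this appendix.

```python
import numpy as np

def centered_inner(w, s):
    """Inner product used in the proof: (1/T) * sum_t (w_t - mean(w)) * s_t."""
    return np.mean((w - w.mean()) * s)

def ramp(T, k1, k2):
    """Illustrative piecewise-linear weights; the paper's w^k from (4) may differ,
    this shape is only assumed for the numerical check."""
    t = np.arange(1, T + 1) / T
    return np.clip((t - k1) / (k2 - k1), 0.0, 1.0)

T = 500
w_k0 = ramp(T, 0.3, 0.6)            # weights driving the gradual change
w_k = ramp(T, 0.4, 0.8)             # weights used by the statistic
rho_F, rho_G = 0.2, 0.7             # before/after Spearman values for one pair (l, l')

# Spearman sequence under the gradual-change model of Proposition 1
rho_t = (1.0 - w_k0) * rho_F + w_k0 * rho_G

lhs = centered_inner(w_k, rho_t)
rhs = (rho_G - rho_F) * centered_inner(w_k, w_k0)
print(lhs, rhs)                     # the two values coincide up to rounding
```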

A.2 Proof of Proposition 2

First recall that for any \(\ell <\ell ' \in \{1,\ldots ,d\}\), one can associate \(j \in \{ 1, \ldots , L \}\) via the rule \(j = (\ell -1) d + \ell ' - \binom{\ell +1}{2}\). Letting \({{\widetilde{\textbf{u}}}}_j = (\textbf{1}_{\ell -1},u_\ell ,\textbf{1}_{\ell '-\ell -1},u_{\ell '},\textbf{1}_{d-\ell '})\), one can write

$$\begin{aligned} \sqrt{T} \, {\widehat{\varUpsilon }}^\textbf{k}_{\ell \ell '} = \sqrt{T} \, {\widehat{\varUpsilon }}^\textbf{k}_j= & {} {12\over \sqrt{T}} \sum\nolimits_{t=1}^T \left( \textbf{w}_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) (1-{\widehat{U}}_{t\ell }) (1-{\widehat{U}}_{t\ell '}) \nonumber \\= & {} {12\over \sqrt{T}} \sum\nolimits_{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \int _{[0,1]^2} \mathbb {I}\left( {\widehat{U}}_{t\ell } \le u_\ell , {\widehat{U}}_{t\ell '} \le u_{\ell '} \right) \textrm{d}u_\ell \textrm{d}u_{\ell '} \nonumber \\= & {} 12 \int _{[0,1]^2} \left\{ {1\over \sqrt{T}} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \mathbb {I}\left( {\widehat{\textbf{U}}}_t \le {{\widetilde{\textbf{u}}}}_j \right) \right\} \textrm{d}u_\ell \textrm{d}u_{\ell '} \nonumber \\= & {} 12 \int _{[0,1]^2} {\widehat{\mathbb {L}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_j) \, \textrm{d}u_\ell \textrm{d}u_{\ell '}, \end{aligned}$$
(13)

where \({\widehat{\mathbb {L}}}^\textbf{k}\) is the empirical process defined for \(\textbf{u}\in [0,1]^d\) by

$$\begin{aligned} {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) = {1\over \sqrt{T}} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \, \mathbb {I}\left( {\widehat{\textbf{U}}}_t \le \textbf{u}\right) . \end{aligned}$$

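As a small side check of the index rule \(j = (\ell -1) d + \ell ' - \binom{\ell +1}{2}\) stated above, the following sketch (with a hypothetical helper `pair_index`) verifies that the pairs \(\ell < \ell '\) are mapped bijectively onto \(\{1, \ldots , L\}\), where \(L = d(d-1)/2\).

```python
from math import comb

def pair_index(l, lp, d):
    """Index j attached to the pair l < lp by the rule j = (l-1)d + lp - C(l+1, 2)."""
    return (l - 1) * d + lp - comb(l + 1, 2)

d = 5
L = d * (d - 1) // 2
indices = [pair_index(l, lp, d) for l in range(1, d) for lp in range(l + 1, d + 1)]
assert sorted(indices) == list(range(1, L + 1))   # bijection onto {1, ..., L}
print(indices)   # [1, 2, ..., 10]: pairs in lexicographic order map to consecutive j
```
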
It will now be shown that \({\widehat{\mathbb {L}}}^\textbf{k}\) is asymptotically equivalent to

$$\begin{aligned} {{\widetilde{\mathbb {L}}}}^\textbf{k}(\textbf{u}) = {1\over \sqrt{T}} \sum\nolimits_{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \, \mathbb {I}\left( \textbf{U}_t \le \textbf{u}\right) . \end{aligned}$$

Letting \(\widehat{\textbf{F}}^{-1}(\textbf{u}) = ({\widehat{F}}_1^{-1}(u_1), \ldots , {\widehat{F}}_d^{-1}(u_d))\), one can write

$$\begin{aligned} \sup _{\textbf{u}\in [0,1]^d} \left| {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\textbf{u}) \right| \le \sup _{\textbf{u}\in [0,1]^d} \left| {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\widehat{\textbf{F}}^{-1}(\textbf{u})) \right| + \sup _{\textbf{u}\in [0,1]^d} \left| {{\widetilde{\mathbb {L}}}}^\textbf{k}(\widehat{\textbf{F}}^{-1}(\textbf{u})) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\textbf{u}) \right| . \end{aligned}$$
(14)

Under the assumption that the \(\alpha \)-mixing coefficients of \((\textbf{U}_t)_{t\in \mathbb {Z}}\) are such that \(\alpha (r) = O(r^{-4-d(1+\epsilon )})\) for some \(\epsilon \in (0,1/4]\), one can invoke Proposition 1 of Quessy (2019) and conclude that \({{\widetilde{\mathbb {L}}}}^\textbf{k}\) converges weakly in the space \(\ell ^\infty (\varDelta \times [0,1]^d)\) to a centered Gaussian process \(\mathbb {L}^\textbf{k}\) such that for \((\textbf{k},\textbf{u}),(\textbf{k}',\textbf{u}') \in \varDelta \times [0,1]^d\),

$$\begin{aligned} \textrm{E}\left\{ \mathbb {L}^\textbf{k}(\textbf{u}) \, \mathbb {L}^{\textbf{k}'}(\textbf{u}') \right\} = \varLambda (\textbf{k},\textbf{k}') \, \sum\nolimits_{t\in \mathbb {Z}} \textrm{cov}\left\{ \mathbb {I}(\textbf{U}_0 \le \textbf{u}), \mathbb {I}(\textbf{U}_t \le \textbf{u}') \right\} , \end{aligned}$$

where for \(\textbf{k}= (k_1,k_2)\) and \(\textbf{k}' = (k_1',k_2')\),

$$\begin{aligned} \varLambda (\textbf{k},\textbf{k}') = \int _{k_1}^{k_2} \int _{k_1'}^{k_2'} \left\{ \min (s,s') - s s' \over (k_2-k_1) (k_2'-k_1') \right\} \textrm{d}s \, \textrm{d}s'. \end{aligned}$$
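
In passing, \(\varLambda (\textbf{k},\textbf{k}')\) can be evaluated numerically, since it is simply the Brownian-bridge covariance \(\min (s,s') - ss'\) averaged over the two windows; the sketch below uses a midpoint Riemann sum (the function name `Lambda` and the grid size are illustrative choices only).

```python
import numpy as np

def Lambda(k, kp, n=2000):
    """Midpoint Riemann-sum evaluation of Lambda(k, k'): the Brownian-bridge
    covariance min(s, s') - s*s', averaged over the windows [k1, k2] and [k1', k2']."""
    (k1, k2), (k1p, k2p) = k, kp
    s = k1 + (np.arange(n) + 0.5) * (k2 - k1) / n
    sp = k1p + (np.arange(n) + 0.5) * (k2p - k1p) / n
    S, Sp = np.meshgrid(s, sp, indexing="ij")
    # averaging over the grid already divides by (k2 - k1)(k2' - k1')
    return (np.minimum(S, Sp) - S * Sp).mean()

print(Lambda((0.2, 0.5), (0.4, 0.9)))    # covariance between two overlapping windows
print(Lambda((0.2, 0.5), (0.2, 0.5)))    # variance term Lambda(k, k)
```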

Since \(\sqrt{T}(\widehat{\textbf{F}}^{-1}(\textbf{u})-\textbf{u})\) converges weakly and \({{\widetilde{\mathbb {L}}}}^\textbf{k}\) is asymptotically continuous, the second expression on the right of inequality (14) converges in probability to zero. To deal with the first expression on the right of inequality (14), note that

$$\begin{aligned} {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\widehat{\textbf{F}}^{-1}(\textbf{u})) = {1\over \sqrt{T}} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \left\{ \mathbb {I}\left( \widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}\right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}) \right) \right\} . \end{aligned}$$

Invoking Lemma 1 in Quessy (2019), which establishes that

$$\begin{aligned} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) a_t = {1 \over K_2-K_1} \sum\nolimits_{j=K_1}^{K_2-1} \left( \sum\nolimits_{t=1}^j a_t - {j\over T} \sum\nolimits _{t=1}^T a_t \right) , \end{aligned}$$

where \(K_1 = \lfloor T k_1 \rfloor \) and \(K_2 = \lfloor T k_2 \rfloor \), one has

$$\begin{aligned} \left| {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\widehat{\textbf{F}}^{-1}(\textbf{u})) \right|= & {} {1 \over \sqrt{T} (K_2-K_1)} \left| \sum\nolimits_{j=K_1}^{K_2-1} \left[ \sum\nolimits_{t=1}^j \left\{ \mathbb {I}\left( \widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}\right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}) \right) \right\} \right. \right. \\{} & {} \left. \left. - \, {j\over T} \sum\nolimits_{t=1}^T \left\{ \mathbb {I}\left( \widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}\right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}) \right) \right\} \right] \right| \\\le & {} {1 \over \sqrt{T} (K_2-K_1)} \sum\nolimits_{j=K_1}^{K_2-1} \left| \sum\nolimits_{t=1}^j \left\{ \mathbb {I}\left( \widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}\right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}) \right) \right\} \right| \\{} & {} + \, {1 \over \sqrt{T} (K_2-K_1)} \sum\nolimits_{j=K_1}^{K_2-1} {j\over T} \left| \sum\nolimits _{t=1}^T \left\{ \mathbb {I}\left( \widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}\right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}) \right) \right\} \right| . \end{aligned}$$

Now proceeding as in Fermanian et al. (2004), define the grid \(G_T = \{(i_1,...,i_d)/T: i_1, \ldots , i_d \in \{1,\ldots ,T\} \}\) and observe that \(\mathbb {I}(\widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}) = \mathbb {I}(\textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}))\) when \(\textbf{u}\in G_T\). From the fact that

$$\begin{aligned} \left| \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{s}) \right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{s}-\textbf{1}/T) \right) \right|= & {} \left| \sum\nolimits_{\ell =1}^d \prod _{\ell '=1}^{\ell -1} \mathbb {I}\left( U_{t\ell '} \le {\widehat{F}}_{\ell '}^{-1}(s_{\ell '}) \right) \prod _{\ell '=\ell +1}^d \mathbb {I}\left( U_{t\ell '} \le {\widehat{F}}_{\ell '}^{-1}(s_{\ell '}-1/T) \right) \right. \\{} & {} \left. \times \, \left\{ \mathbb {I}\left( U_{t\ell } \le {\widehat{F}}_{\ell }^{-1}(s_\ell ) \right) - \mathbb {I}\left( U_{t\ell } \le {\widehat{F}}_{\ell }^{-1}(s_\ell -1/T) \right) \right\} \right| \\\le & {} \sum\nolimits_{\ell =1}^d \left| \mathbb {I}\left( U_{t\ell } \le \widehat{F}_{\ell }^{-1}(s_\ell ) \right) - \mathbb {I}\left( U_{t\ell } \le \widehat{F}_{\ell }^{-1}(s_\ell -1/T) \right) \right| , \end{aligned}$$

one can write

$$\begin{aligned} \sup _{\textbf{u}\in [0,1]^d} \left| \sum\nolimits_{t=1}^j \left\{ \mathbb {I}\left( \widehat{\textbf{F}}(\textbf{U}_t) \le \textbf{u}\right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{u}) \right) \right\} \right|\le & {} \max _{\textbf{s} \in G_T} \left| \sum\nolimits_{t=1}^j \left\{ \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{s}) \right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{s}-\textbf{1}/T) \right) \right\} \right| \\\le & {} \max _{\textbf{s} \in G_T} \sum\nolimits_{t=1}^j \left| \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{s}) \right) - \mathbb {I}\left( \textbf{U}_t \le \widehat{\textbf{F}}^{-1}(\textbf{s}-\textbf{1}/T) \right) \right| \\\le & {} d. \end{aligned}$$

It follows that

$$\begin{aligned} \sup _{\textbf{u}\in [0,1]^d} \left| {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\widehat{\textbf{F}}^{-1}(\textbf{u})) \right|\le & {} {d\over \sqrt{T}(K_2-K_1)} \sum\nolimits_{j=K_1}^{K_2-1} \left( 1 + {j\over T} \right) \\= & {} {d\over \sqrt{T}} \left( 1 + { K_1+K_2-1 \over 2 T } \right) \approx {d\over \sqrt{T}} \left( 1 + { k_1 + k_2 \over 2 } \right) . \end{aligned}$$

This shows that as \(T\rightarrow \infty \)

$$\begin{aligned} \sup _{\textbf{u}\in [0,1]^d} \left| {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) - {{\widetilde{\mathbb {L}}}}^\textbf{k}(\textbf{u}) \right| {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0. \end{aligned}$$

As a consequence, referring back to (13),

$$\begin{aligned} \sqrt{T} \, {\widehat{\varUpsilon }}^\textbf{k}_{\ell \ell '} = \sqrt{T} \, {\widehat{\varUpsilon }}^\textbf{k}_j = 12 \int _{[0,1]^2} {{\widetilde{\mathbb {L}}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_j) \, \textrm{d}u_\ell \textrm{d}u_{\ell '} + o_\mathbb {P}(1). \end{aligned}$$

One can then write

$$\begin{aligned} { \sqrt{T} \, \mathscr {V}\left( {\widehat{\varUpsilon }}^\textbf{k}\right) \over 12 } = \left( \int _{[0,1]^2} {{\widetilde{\mathbb {L}}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_1) \, \textrm{d}u_1 \textrm{d}u_2, \ldots , \int _{[0,1]^2} {{\widetilde{\mathbb {L}}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_L) \, \textrm{d}u_{d-1} \textrm{d}u_d \right) + o_\mathbb {P}(1). \end{aligned}$$

An application of the continuous mapping theorem then ensures that \(\sqrt{T} \, \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k}) / 12\) converges weakly to a vector of processes of the form \(\mathbb {V}(\textbf{k}) = \left( \mathbb {V}_1(\textbf{k}), \ldots , \mathbb {V}_L(\textbf{k}) \right) \), where

$$\begin{aligned} \mathbb {V}_j(\textbf{k}) = \int\nolimits_{[0,1]^2} \mathbb {L}^\textbf{k}({{\widetilde{\textbf{u}}}}_j) \, \textrm{d}u_\ell \, \textrm{d}u_{\ell '}, \quad j \in \{1, \ldots , L\}. \end{aligned}$$

Therefore, \(\mathbb {V}\) is a vector of centered Gaussian processes and, for \(j' = (m-1) d + m' - \binom{m+1}{2}\) associated with \(m<m' \in \{1,\ldots ,d\}\), its covariance structure is given for \(\textbf{k}, \textbf{k}' \in \varDelta \) by

$$\begin{aligned} \textrm{E}\left\{ \mathbb {V}_j(\textbf{k}) \, \mathbb {V}_{j'}(\textbf{k}') \right\}= & {} \int\nolimits_{[0,1]^4} \textrm{E}\left\{ \mathbb {L}^\textbf{k}({{\widetilde{\textbf{u}}}}_j) \, \mathbb {L}^{\textbf{k}'}({{\widetilde{\textbf{u}}}}_{j'}) \right\} \textrm{d}u_\ell \, \textrm{d}u_{\ell '} \, \textrm{d}u_m \, \textrm{d}u_{m'} \\= & {} \varLambda (\textbf{k},\textbf{k}') \int\nolimits_{[0,1]^4} \sum\nolimits _{t\in \mathbb {Z}} \textrm{cov}\left\{ \mathbb {I}(\textbf{U}_0 \le {{\widetilde{\textbf{u}}}}_j), \mathbb {I}(\textbf{U}_t \le {{\widetilde{\textbf{u}}}}_{j'}) \right\} \, \textrm{d}u_\ell \, \textrm{d}u_{\ell '} \, \textrm{d}u_m \, \textrm{d}u_{m'} \\= & {} \varLambda (\textbf{k},\textbf{k}') \sum\nolimits _{t\in \mathbb {Z}} \textrm{cov}\left\{ \int _{[0,1]^2} \mathbb {I}(\textbf{U}_0 \le {{\widetilde{\textbf{u}}}}_j) \,\right. \\{} & {} \left. \textrm{d}u_\ell \, \textrm{d}u_{\ell '}, \int _{[0,1]^2} \mathbb {I}(\textbf{U}_t \le {{\widetilde{\textbf{u}}}}_{j'}) \, \textrm{d}u_m \, \textrm{d}u_{m'} \right\} \\= & {} \varLambda (\textbf{k},\textbf{k}') \sum\nolimits_{t\in \mathbb {Z}} \textrm{cov}\left\{ (1-U_{0\ell })(1-U_{0\ell '}), (1-U_{tm})(1-U_{tm'}) \right\} \\= & {} \varLambda (\textbf{k},\textbf{k}') \, \varOmega _{jj'}. \end{aligned}$$

Hence, \(\textrm{E}\{ \mathbb {V}(\textbf{k})^\top \mathbb {V}(\textbf{k}') \} = \varLambda (\textbf{k},\textbf{k}') \, \varOmega \) and

$$\begin{aligned} \textrm{E}\left\{ \left( \mathbb {V}(\textbf{k}) \, \varOmega ^{-1/2} \right) ^\top \left( \mathbb {V}(\textbf{k}') \, \varOmega ^{-1/2} \right) \right\} = \varOmega ^{-1/2} \, \textrm{E}\left\{ \mathbb {V}(\textbf{k})^\top \mathbb {V}(\textbf{k}') \right\} \varOmega ^{-1/2} = \varLambda (\textbf{k},\textbf{k}') \, I_L. \end{aligned}$$

Since the covariance function of the integrated Brownian bridge is \(\textrm{E}\{ {{\widetilde{\mathbb {B}}}}(\textbf{k}) \, {{\widetilde{\mathbb {B}}}}(\textbf{k}') \} = \varLambda (\textbf{k},\textbf{k}')\), the covariance structure \(\varLambda (\textbf{k},\textbf{k}') \, I_L\) is the same as that of a vector \(({{\widetilde{\mathbb {B}}}}_1(\textbf{k}), \ldots , {{\widetilde{\mathbb {B}}}}_L(\textbf{k}))\) of independent integrated Brownian bridges. It follows that \(\sqrt{T} \, \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k}) \, \varOmega ^{-1/2} / 12\) converges weakly to \(({{\widetilde{\mathbb {B}}}}_1(\textbf{k}), \ldots , {{\widetilde{\mathbb {B}}}}_L(\textbf{k}))\). \(\square \)
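
As a sanity check on this limit, one representation consistent with the covariance \(\varLambda \) above is the average of a Brownian-bridge path over the window \((k_1,k_2]\); the sketch below simulates such window averages and compares their Monte Carlo covariance with the value of \(\varLambda (\textbf{k},\textbf{k}')\) obtained earlier by quadrature. The discretization and simulation sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_rep = 500, 5000
s = (np.arange(n_grid) + 1.0) / n_grid

# Brownian-bridge paths on the grid: B(s) = W(s) - s * W(1)
W = np.cumsum(rng.normal(scale=np.sqrt(1.0 / n_grid), size=(n_rep, n_grid)), axis=1)
B = W - np.outer(W[:, -1], s)

def window_average(paths, k1, k2):
    """Average of each path over (k1, k2]; across windows, its covariance is Lambda(k, k')."""
    mask = (s > k1) & (s <= k2)
    return paths[:, mask].mean(axis=1)

k, kp = (0.2, 0.5), (0.4, 0.9)
Bk, Bkp = window_average(B, *k), window_average(B, *kp)
print(np.mean(Bk * Bkp))    # Monte Carlo estimate; compare with Lambda(k, k') above
```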

A.3 Proof of Proposition 3

From the conclusion of Proposition 2 and the continuous mapping theorem,

$$\begin{aligned} \sqrt{T} \, S_{T,p} = \sup _{\textbf{k}\in \varDelta } \eta (\textbf{k}) \left\| \sqrt{T} \, \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k}) \, \varOmega ^{-1/2} \over 12 \right\| _p \rightsquigarrow \sup _{\textbf{k}\in \varDelta } \eta (\textbf{k}) \left\| \left( {{\widetilde{\mathbb {B}}}}_1(\textbf{k}), \ldots , {{\widetilde{\mathbb {B}}}}_L(\textbf{k}) \right) \right\| _p, \end{aligned}$$

which completes the proof. \(\square \)
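
In practice, approximate critical values for \(S_{T,p}\) can be obtained by simulating the limit above. The sketch below does so under simplifying assumptions that are not taken from the paper: the weight \(\eta (\textbf{k})\) is set to one, \(\varDelta \) is replaced by a coarse stand-in grid of windows, and \(L = 3\), \(p = 2\) are arbitrary; the integrated Brownian bridges are represented as window averages of simulated Brownian-bridge paths, consistently with the covariance \(\varLambda \).

```python
import numpy as np

rng = np.random.default_rng(1)
L, p = 3, 2                          # number of pairs and p-norm; both are stand-ins
n_grid, n_rep = 400, 2000
s = (np.arange(n_grid) + 1.0) / n_grid
# stand-in grid for Delta; the paper's exact Delta and weight eta(k) are not reproduced here
windows = [(k1, k2) for k1 in np.arange(0.1, 0.9, 0.1)
                    for k2 in np.arange(0.1, 1.0, 0.1) if k2 > k1 + 1e-9]
masks = [(s > k1) & (s <= k2) for k1, k2 in windows]

sup_stats = np.empty(n_rep)
for r in range(n_rep):
    # L independent Brownian bridges on the grid
    W = np.cumsum(rng.normal(scale=np.sqrt(1.0 / n_grid), size=(L, n_grid)), axis=1)
    B = W - np.outer(W[:, -1], s)
    vals = [np.linalg.norm(B[:, m].mean(axis=1), ord=p) for m in masks]
    sup_stats[r] = max(vals)         # eta(k) = 1 assumed throughout

print(np.quantile(sup_stats, 0.95))  # approximate 95% critical value under these assumptions
```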

A.4 Proof of Proposition 4

The first step of the proof consists in writing

$$\begin{aligned} {\widehat{\varUpsilon }}^\textbf{k}_{\ell \ell '}= & {} 12 \int _{[0,1]^2} \left[ {1\over T} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \left\{ \mathbb {I}({\widehat{\textbf{U}}}_t \le {{\widetilde{\textbf{u}}}}_j) - \textrm{E}\left( \mathbb {I}(\textbf{U}_t \le \textbf{u}) \right) + \textrm{E}\left( \mathbb {I}(\textbf{U}_t \le \textbf{u}) \right) \right\} \right] \textrm{d}u_\ell \textrm{d}u_{\ell '} \\= & {} {12 \over \sqrt{T}} \int _{[0,1]^2} {\widehat{\mathbb {L}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_j) \, \textrm{d}u_\ell \textrm{d}u_{\ell '} + 12 \, \langle \textbf{w}^\textbf{k}, \textbf{w}^{\textbf{k}_0} \rangle \left\{ (\varSigma _G)_{\ell \ell '} - (\varSigma _F)_{\ell \ell '} \right\} , \end{aligned}$$

where \({\widehat{\mathbb {L}}}^\textbf{k}\) is the empirical process defined for \(\textbf{u}\in [0,1]^d\) by

$$\begin{aligned} {\widehat{\mathbb {L}}}^\textbf{k}(\textbf{u}) = {1\over \sqrt{T}} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \left\{ \mathbb {I}({\widehat{\textbf{U}}}_t \le \textbf{u}) - \textrm{E}\left( \mathbb {I}(\textbf{U}_t \le \textbf{u}) \right) \right\} . \end{aligned}$$

By arguments similar to those in the proof of Proposition 2, \({\widehat{\mathbb {L}}}^\textbf{k}\) is asymptotically equivalent to

$$\begin{aligned} {{\widetilde{\mathbb {L}}}}^\textbf{k}(\textbf{u}) = {1\over \sqrt{T}} \sum\nolimits _{t=1}^T \left( w_t^\textbf{k}- {\bar{w}}^\textbf{k}\right) \left\{ \mathbb {I}(\textbf{U}_t \le \textbf{u}) - \textrm{E}\left( \mathbb {I}(\textbf{U}_t \le \textbf{u}) \right) \right\} . \end{aligned}$$

It is easy to see that \({{\widetilde{\mathbb {L}}}}^\textbf{k}\) converges weakly to a non-degenerate centered Gaussian process whose covariance structure is characterized by the finite-dimensional distributions of \({{\widetilde{\mathbb {L}}}}^\textbf{k}\). As a consequence,

$$\begin{aligned} \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k})= & {} {12 \over \sqrt{T}} \left( \int _{[0,1]^2} {{\widetilde{\mathbb {L}}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_1) \, \textrm{d}u_1 \textrm{d}u_2, \ldots , \int _{[0,1]^2} {{\widetilde{\mathbb {L}}}}^\textbf{k}({{\widetilde{\textbf{u}}}}_L) \, \textrm{d}u_{d-1} \textrm{d}u_d \right) \\{} & {} + \, 12 \, \left\langle \textbf{w}^\textbf{k}, \textbf{w}^{\textbf{k}_0} \right\rangle \, \left\{ \mathscr {V}\left( \varSigma _G^\textrm{Sp} - \varSigma _F^\textrm{Sp} \right) \right\} + o_\mathbb {P}(1/\sqrt{T}). \end{aligned}$$

Since \(\varSigma _F^\textrm{Sp} \ne \varSigma _G^\textrm{Sp}\), one has in probability that

$$\begin{aligned} \left\| \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k}) \right\| \longrightarrow 12 \, \varLambda (\textbf{k},\textbf{k}_0) \, \left\| \mathscr {V}\left( \varSigma _G^\textrm{Sp} - \varSigma _F^\textrm{Sp} \right) \right\| > 0, \end{aligned}$$

where \(\varLambda (\textbf{k},\textbf{k}_0) = \lim _{T\rightarrow \infty } \langle \textbf{w}^\textbf{k}, \textbf{w}^{\textbf{k}_0} \rangle \). Invoking the argmax continuous mapping theorem (see Theorem 3.2.2 of van der Vaart and Wellner 1996), one can conclude that in probability,

$$\begin{aligned} {\widehat{\textbf{k}}}_0 = \underset{\varvec{\textbf{k}}\in \varDelta ^\star }{\textrm{argsup}} \, { \Vert \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k}) \Vert \over \langle \textbf{w}^\textbf{k}, \textbf{w}^\textbf{k}\rangle ^{1/2} } \longrightarrow \underset{\varvec{\textbf{k}}\in \varDelta ^\star }{\textrm{argsup}} \left\{ 12 \left\| \mathscr {V}\left( \varSigma _G^\textrm{Sp} - \varSigma _F^\textrm{Sp} \right) \right\| \, { \left| \varLambda (\textbf{k},\textbf{k}_0) \right| \over \{ \varLambda (\textbf{k},\textbf{k}) \}^{1/2} } \right\} = \textbf{k}_0. \end{aligned}$$

\(\square \)
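
To make the estimator of Proposition 4 concrete, the following sketch implements the grid search \({\widehat{\textbf{k}}}_0 = \textrm{argsup}_{\textbf{k}} \, \Vert \mathscr {V}({\widehat{\varUpsilon }}^\textbf{k}) \Vert / \langle \textbf{w}^\textbf{k}, \textbf{w}^\textbf{k}\rangle ^{1/2}\), with \({\widehat{\varUpsilon }}^\textbf{k}_{\ell \ell '}\) computed from normalized ranks as in the representation preceding (13) and \(\langle \cdot ,\cdot \rangle \) taken as the centered inner product used in the proof of Proposition 1. Several ingredients are assumptions rather than the paper's choices: the piecewise-linear form of \(\textbf{w}^\textbf{k}\), the rank convention \({\widehat{U}}_{t\ell } = R_{t\ell }/(T+1)\), the Euclidean norm, the search grid \(\varDelta ^\star \), and the toy data-generating process.

```python
import numpy as np

def ramp(T, k1, k2):
    """Assumed piecewise-linear weights w^k; the paper's definition (4) is not
    reproduced here, so this shape (0 before k1*T, 1 after k2*T) is illustrative."""
    t = np.arange(1, T + 1) / T
    return np.clip((t - k1) / (k2 - k1), 0.0, 1.0)

def upsilon_vector(U, w):
    """V(hat{Upsilon}^k): entries (12/T) sum_t (w_t - mean(w))(1 - U_{t,l})(1 - U_{t,l'}), l < l'."""
    T, d = U.shape
    wc = w - w.mean()
    return np.array([12.0 / T * np.sum(wc * (1 - U[:, l]) * (1 - U[:, lp]))
                     for l in range(d - 1) for lp in range(l + 1, d)])

def estimate_change_window(X, grid):
    """Grid search for (k1, k2) maximising ||V(hat{Upsilon}^k)|| / <w^k, w^k>^{1/2}."""
    T, _ = X.shape
    U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1.0) / (T + 1)   # normalised ranks
    best, k_hat = -np.inf, None
    for k1 in grid:
        for k2 in grid:
            if k2 <= k1:
                continue
            w = ramp(T, k1, k2)
            crit = np.linalg.norm(upsilon_vector(U, w)) / np.sqrt(np.mean((w - w.mean()) * w))
            if crit > best:
                best, k_hat = crit, (k1, k2)
    return k_hat

# toy series: the correlation between the two margins ramps up between 30% and 60% of the sample
T = 400
rng = np.random.default_rng(2)
w0 = ramp(T, 0.3, 0.6)
Z = rng.normal(size=(T, 2))
X = np.column_stack([Z[:, 0], w0 * Z[:, 0] + np.sqrt(1 - w0 ** 2) * Z[:, 1]])
print(estimate_change_window(X, np.round(np.linspace(0.05, 0.95, 19), 2)))
```

On this toy series, the reported pair should land near the window \((0.3, 0.6)\) used to generate the data, in line with the consistency statement of Proposition 4; note that the grid search is quadratic in the grid size, so a coarser grid may be preferred for long series.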

About this article

Cite this article

Quessy, JF. Gradual change-point analysis based on Spearman matrices for multivariate time series. Ann Inst Stat Math 76, 423–446 (2024). https://doi.org/10.1007/s10463-023-00891-5
