
Estimating non-simultaneous changes in the mean of vectors


Abstract

A sequence of independent vectors with correlated components is considered. It is supposed that there is one change point in the mean of each component and that the changes need not occur simultaneously. The asymptotic distribution of the change point estimators is studied. If the true change points are well separated, the explicit asymptotic distribution of the change point estimators is presented. In the case where the true change points coincide, it is shown that the limit distribution of properly standardized change point estimates exists. It depends not only on the underlying time series dependence structure but also on the ratio of the sizes of the changes. The asymptotic distribution function is not known explicitly, but due to the invariance principle it can be obtained by simulations.


References

  • Antoch J, Jarušková D (2013) Testing for multiple change points. Comput Stat 28:2161–2183

  • Bai J, Perron P (1998) Estimating and testing linear models with multiple structural changes. Econometrica 66:47–78

  • Bhattacharya PK, Brockwell PJ (1976) The minimum of an additive process with application to signal estimation and storage theory. Z Wahrsch Verw Gebiete 37:51–75

  • Camuffo D, Jones P (2002) Improved understanding of past climatic variability from early daily European instrumental sources. Clim Change 53:1–3

  • Csörgő M, Horváth L (1997) Limit theorems in change point analysis. Wiley, New York

  • Fotopoulos SB, Jandhyala VK (2001) Maximum likelihood estimation of a change point for exponentially distributed random variables. Stat Probab Lett 51:423–429

  • Fotopoulos SB, Jandhyala VK, Khapalova E (2010) Exact asymptotic distribution of change-point MLE for change in the mean of Gaussian sequences. Ann Appl Stat 4:1081–1104

  • Hušková M (1995) Estimators for epidemic alternatives. Comment Math Univ Carol 36:279–291

  • Jandhyala V, Fotopoulos S, MacNeill I, Liu P (2013) Inference for single and multiple change-points. J Time Ser Anal 34:423–446

  • Jarušková D (2015) Detecting non-simultaneous changes in means of vectors. Test 24:681–700

  • Seber GAF, Wild CJ (1989) Nonlinear regression. Wiley, New York

  • Stryhn H (1996) The location of the maximum of asymmetric two-sided Brownian motion with triangular drift. Stat Probab Lett 29:279–284

  • Yao Y-C (1987) Approximating the distribution of the maximum likelihood estimate of the change-point in a sequence of independent random variables. Ann Stat 15:1321–1328

Acknowledgements

The work of the author was supported by Grant GAČR 403-15-09663S.


Corresponding author

Correspondence to Daniela Jarušková.

Ethics declarations

Conflict of interest

The author declares that there is no conflict of interest.

Appendix

After transforming \(\big (X_1(i),X_2(i)\big )\) into \(\big (X_1(i), Z(i)\big )\), our problem is a special case of change point estimation in linear regression in which the regression parameters are connected by linear constraints. We have to prove that under the assumptions of Theorem 1 the asymptotic distribution of the estimates of \(k_1^*\) and \(k_2^*\) is the same whether the parameters \(a_n\) and \(b_n\) are known or unknown. The steps of the proof are similar to those in Hušková (1995).

We prove Theorem 1 for Model II. The proofs for Model I, and for Models I and II with \(L=a_n/b_n\) known, go along the same lines.

To see how far \(\widehat{k}_1\) is from \(k_1^*\), resp. \(\widehat{k}_2\) from \(k_2^*\), we study \(\chi ^2(k_1^*,k_2^*)-\chi ^2(k_1,k_2)\). As in Hušková (1995), we can express

$$\begin{aligned} \chi ^2(k_1,k_2)=A_1(k_1,k_2)+2\,A_2(k_1,k_2)+A_3(k_1,k_2), \end{aligned}$$

where for Model II and \(k_1 \le k_2\) we define:

$$\begin{aligned} A_1(k_1,k_2)&=\frac{k_2}{k_1(k_2-\rho ^2k_1)}\left( S_{ev}(k_1,k_2)\right) ^2+\frac{1}{k_2} \left( S_v(k_2)\right) ^2,\\ A_2(k_1,k_2)&= \frac{k_2}{k_1(k_2-\rho ^2k_1)}S_{ev}(k_1,k_2)m_{xz}(k_1,k_2)+\frac{1}{k_2} S_v(k_2) m_z(k_2),\\ A_3(k_1,k_2)&= \frac{k_2}{k_1(k_2-\rho ^2k_1)}\left( m_{xz}(k_1,k_2)\right) ^2+\frac{1}{k_2} \left( m_z(k_2)\right) ^2,\\ S_{ev}(k_1,k_2)&=\sqrt{1-\rho ^2}S_{e}(k_1)-\rho \left( S_v(k_1)-\frac{k_1}{k_2}S_v(k_2)\right) ,\\ m_{xz}(k_1,k_2)&=\sqrt{1-\rho ^2}m_{x1}(k_1)-\rho \left( m_z(k_1)-\frac{k_1}{k_2}m_z(k_2)\right) , \end{aligned}$$

and for \(k_2 < k_1\) we define:

$$\begin{aligned} A_1(k_1,k_2)&=\frac{1}{(k_1-\rho ^2k_2)}\left( S_{ev}(k_1,k_2)\right) ^2+\frac{1}{k_2} \left( S_v(k_2)\right) ^2,\\ A_2(k_1,k_2)&= \frac{1}{(k_1-\rho ^2 k_2)}S_{ev}(k_1,k_2)m_{xz}(k_1,k_2)+\frac{1}{k_2} S_v(k_2) m_z(k_2),\\ A_3(k_1,k_2)&= \frac{1}{(k_1-\rho ^2 k_2)}\left( m_{xz}(k_1,k_2)\right) ^2+\frac{1}{k_2} \left( m_z(k_2)\right) ^2,\\ S_{ev}(k_1,k_2)&= \sqrt{1-\rho ^2}S_{e}(k_1)-\rho \left( S_v(k_1)-S_v(k_2)\right) ,\\ m_{xz}(k_1,k_2)&= \sqrt{1-\rho ^2}m_{x1}(k_1)-\rho \left( m_z(k_1)-m_z(k_2)\right) . \end{aligned}$$

The proof of Theorem 1 is split into several lemmas.

Lemma A.1

As \(n \rightarrow \infty \), it holds

$$\begin{aligned} \max _{\begin{array}{c} {[}\beta n{]} \le k_1 \le n\\ {[}\beta n{]} \le k_2 \le n \end{array}} |A_1(k_1,k_2)| = O_P(1), \quad \max _{\begin{array}{c} [\beta n] \le k_1 \le n\\ {[}\beta n{]} \le k_2 \le n \end{array}} |A_2(k_1,k_2)| = O_P(\Delta _n \sqrt{n}). \end{aligned}$$
(A-1)

Proof

The equalities

$$\begin{aligned} \max _{[\beta n] \le k \le n} \frac{1}{\sqrt{k}} \sum _{i=1}^k e_i(1)=O_P(1)\quad \text {and}\quad \max _{[\beta n] \le k \le n} \frac{1}{\sqrt{k}} \sum _{i=1}^k v_i=O_P(1) \end{aligned}$$

yield the assertion of Lemma A.1.

Now we need to estimate the difference \(A_3(k_1,k_2)-A_3(k_1^*,k_2^*)\). The set of all possible values \(\mathcal {A}=\{(k_1,k_2); [\beta n] \le k_1 \le n, [\beta n] \le k_2 \le n\}\) can be expressed as \(\mathcal {A}= \mathcal {A}_1 \cup \cdots \cup \mathcal {A}_6 \cup \mathcal {B}_1 \cup \cdots \cup \mathcal {B}_6\), where

$$\begin{aligned} \begin{array}{llll} &{}\mathcal {A}_{1}=\{(k_1,k_2) \in \mathcal {A}, k_1 \le k_2 \le k_1^*< k_2^*\}, &{}&{}\, \mathcal {A}_{4}=\{(k_1,k_2) \in \mathcal {A}, k_1^* \le k_1 \le k_2 \le k_2^*\}, \\ &{}\mathcal {A}_{2}=\{(k_1,k_2) \in \mathcal {A}, k_1 \le k_1^* \le k_2 \le k_2^*\}, &{}&{}\, \mathcal {A}_{5}=\{(k_1,k_2) \in \mathcal {A}, k_1^* \le k_1 \le k_2^*< k_2\}, \\ &{}\mathcal {A}_{3}=\{(k_1,k_2) \in \mathcal {A}, k_1 \le k_1^*< k_2^* \le k_2\}, &{}&{}\, \mathcal {A}_{6}=\{(k_1,k_2) \in \mathcal {A}, k_1^*< k_2^* \le k_1 \le k_2\}, \\ &{}\mathcal {B}_{1}=\{(k_1,k_2) \in \mathcal {A}, k_2 \le k_1 \le k_1^*< k_2^*\}, &{}&{}\, \mathcal {B}_{4}=\{(k_1,k_2) \in \mathcal {A}, k_1^* \le k_2 \le k_1 \le k_2^*\}, \\ &{}\mathcal {B}_{2}=\{(k_1,k_2) \in \mathcal {A}, k_2 \le k_1^* \le k_1 \le k_2^*\}, &{}&{}\, \mathcal {B}_{5}=\{(k_1,k_2) \in \mathcal {A}, k_1^* \le k_2 \le k_2^*< k_1\}, \\ &{}\mathcal {B}_{3}=\{(k_1,k_2) \in \mathcal {A}, k_2 \le k_1^*< k_2^* \le k_1\}, &{}&{}\, \mathcal {B}_{6}=\{(k_1,k_2) \in \mathcal {A}, k_1^* < k_2^* \le k_2 \le k_1\}. \end{array} \end{aligned}$$
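For illustration, this partition can be coded directly. The sketch below is our own (the function name and the tie-breaking on region boundaries are our choices; boundary pairs belong to several regions, and the first matching label is returned); it classifies a pair \((k_1,k_2)\) relative to \(k_1^* < k_2^*\).

```python
def region(k1, k2, k1s, k2s):
    """Label (k1, k2) by a region A1..A6 (k1 <= k2) or B1..B6 (k2 < k1)
    relative to the true change points k1s < k2s.  Boundary points may
    lie in several regions; the first matching label is returned."""
    assert k1s < k2s
    if k1 <= k2:
        if k2 <= k1s:
            return "A1"              # k1 <= k2 <= k1* < k2*
        if k1 <= k1s and k2 <= k2s:
            return "A2"              # k1 <= k1* <= k2 <= k2*
        if k1 <= k1s:
            return "A3"              # k1 <= k1* < k2* <= k2
        if k2 <= k2s:
            return "A4"              # k1* <= k1 <= k2 <= k2*
        if k1 <= k2s:
            return "A5"              # k1* <= k1 <= k2* < k2
        return "A6"                  # k1* < k2* <= k1 <= k2
    if k1 <= k1s:
        return "B1"                  # k2 <= k1 <= k1* < k2*
    if k2 <= k1s and k1 <= k2s:
        return "B2"                  # k2 <= k1* <= k1 <= k2*
    if k2 <= k1s:
        return "B3"                  # k2 <= k1* < k2* <= k1
    if k1 <= k2s:
        return "B4"                  # k1* <= k2 <= k1 <= k2*
    if k2 <= k2s:
        return "B5"                  # k1* <= k2 <= k2* < k1
    return "B6"                      # k1* < k2* <= k2 <= k1
```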

Lemma A.2

There exists a constant \(C_1>0\) such that for \((k_1,k_2) \in \mathcal {B}_1 \cup \cdots \cup \mathcal {B}_6 \cup \mathcal {A}_1 \cup \mathcal {A}_6\) it holds

$$\begin{aligned} A_3(k_1^*,k_2^*)-A_3(k_1,k_2) \ge C_1\, \Delta _n^2\, n, \end{aligned}$$
(A-2)

and there exists a constant \(C_2>0\) such that for \((k_1,k_2) \in \mathcal {A}_2 \cup \cdots \cup \mathcal {A}_5\) it holds

$$\begin{aligned} A_3({k}_1^*,{k}_2^*)-A_3 (k_1,k_2) \ge C_2 \,\Delta _n^2\,\left( |k_1-k_1^*| +|k_2-k_2^*|\right) . \end{aligned}$$
(A-3)

Proof

First, it holds

$$\begin{aligned} A_3(k_1^*,k_2^*)=\frac{a^2k_1^* +b^2k_2^*-2\rho a b k_1^*}{1-\rho ^2 }. \end{aligned}$$

The assertion (A-2) is a consequence of the inequalities

$$\begin{aligned}&A_3(k_1,k_2) \le A_3(k_1^*,k_1^*) \qquad \text {for} \,(k_1,k_2) \in \mathcal {A}_1 \cup \mathcal {B}_1 \cup \mathcal {B}_2,\\&A_3(k_1,k_2) \le A_3(k_2^*,k_1^*) \qquad \text {for} \,(k_1,k_2) \in \mathcal {B}_3,\\&A_3(k_1,k_2) \le \max \big (A_3(k_1^*,k_1^*),A_3(k_2^*,k_2^*)\big ) \qquad \text {for} \,(k_1,k_2) \in \mathcal {B}_4,\\&A_3(k_1,k_2) \le A_3(k_2^*,k_2^*) \qquad \text {for} \, (k_1,k_2) \in \mathcal {A}_6 \cup \mathcal {B}_5 \cup \mathcal {B}_6,\\ \end{aligned}$$

and the equalities

$$\begin{aligned}&A_3(k_1^*,k_2^*)-A_3(k_1^*,k_1^*)= \frac{b^2(k_2^*-k_1^*)}{1-\rho ^2},\\&A_3(k_1^*,k_2^*)-A_3(k_2^*,k_2^*)= \frac{a^2k_1^*(k_2^*-k_1^*)\rho ^2}{k_2^*(1-\rho ^2)},\\&A_3(k_1^*,k_2^*)-A_3(k_2^*,k_1^*)=\frac{k_1^*(k_2^*-k_1^*)(a^2+b^2+2\rho ab)+b^2(k_2^*-k_1^*)^2}{k_2^*-\rho ^2 k_1^*}. \end{aligned}$$

The assertion (A-3) has to be proved for each of \(\mathcal {A}_2, \ldots , \mathcal {A}_5\) separately. We obtain it using

$$\begin{aligned}&A_3(k_1,k_2)\\&\quad =\frac{1}{1-\rho ^2}\left( \frac{(k_1k_2-2k_1k_1^*\rho ^2+k_1^{*2}\rho ^2)a^2}{(k_2-\rho ^2k_1)}+k_2b^2- 2\,k_1^*\rho a b\right) \quad \text {for} \, (k_1,k_2) \in \mathcal {A}_2,\\&\quad =\frac{1}{1-\rho ^2}\Bigg (\frac{k_1k_2-2\rho ^2k_1k_1^*+\rho ^2k_1^{*2}}{k_2-\rho ^2k_1}\,a^2+ \frac{k_2^{*2}-2\rho ^2k_1k_2^*+k_1k_2\rho ^2}{k_2-\rho ^2 k_1}\,b^2\\&\qquad -2\,\frac{k_1k_2-\rho ^2k_1k_1^*-k_1k_2^*+k_1^*k_2^*}{k_2-\rho ^2 k_1}\rho a b\Bigg ) \quad \text {for} \, (k_1,k_2) \in \mathcal {A}_3,\\&\quad =\frac{1}{1-\rho ^2}\left( \frac{k_1^{*2} a^2}{k_1}+k_2b^2-2k_1^* \rho ab\right) \quad \text {for} \, (k_1,k_2) \in \mathcal {A}_4,\\&\quad =\frac{1}{1-\rho ^2}\left( \frac{k_1^{*2}a^2}{k_1}+\frac{\rho ^2k_1(k_2-k_2^*)^2+k_2^{*2}(k_2-\rho ^2k_1)}{k_2(k_2-\rho ^2k_1)}b^2 -2k_1^*\rho ab \right) \quad \text {for} \,(k_1,k_2) \in \mathcal {A}_5. \end{aligned}$$

\(\square \)
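In region \(\mathcal {A}_4\), for instance, the cross terms cancel and the difference reduces to the exact identity \(A_3(k_1^*,k_2^*)-A_3(k_1,k_2)=\big (a^2k_1^*(k_1-k_1^*)/k_1+b^2(k_2^*-k_2)\big )/(1-\rho ^2)\), which dominates a multiple of \(\Delta _n^2\big (|k_1-k_1^*|+|k_2-k_2^*|\big )\) because \(k_1^*/k_1 \ge \beta \). A small numerical check of this identity (our own sketch; the parameter values are arbitrary):

```python
def A3_star(a, b, rho, k1s, k2s):
    # A_3(k1*, k2*) from the proof of Lemma A.2
    return (a**2 * k1s + b**2 * k2s - 2 * rho * a * b * k1s) / (1 - rho**2)

def A3_on_A4(a, b, rho, k1, k2, k1s):
    # A_3(k1, k2) for (k1, k2) in A_4, i.e. k1* <= k1 <= k2 <= k2*
    return (k1s**2 * a**2 / k1 + k2 * b**2 - 2 * k1s * rho * a * b) / (1 - rho**2)

a, b, rho = 0.5, 0.3, 0.4
k1s, k2s = 100, 200
for k1 in range(k1s, 161, 10):
    for k2 in range(max(k1, 150), k2s + 1, 10):
        diff = A3_star(a, b, rho, k1s, k2s) - A3_on_A4(a, b, rho, k1, k2, k1s)
        closed = (a**2 * k1s * (k1 - k1s) / k1 + b**2 * (k2s - k2)) / (1 - rho**2)
        # the difference is non-negative and matches the closed form exactly
        assert diff >= 0.0 and abs(diff - closed) < 1e-9
```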

Lemma A.3

Let \(\{\varepsilon _n\}\) be a sequence such that \(\varepsilon _n \rightarrow 0\) and \(\Delta _n \sqrt{n} \varepsilon _n \rightarrow \infty \). Then,

$$\begin{aligned}&\max _{\begin{array}{c} |k_1-k_1^*| \ge n \varepsilon _n \,\cup \, |k_2-k_2^*| \,\ge \,n \varepsilon _n \end{array}} \frac{A_1(k_1,k_2)}{\left( A_3(k_1,k_2)-A_3(k_1^*,k_2^*)\right) } \rightarrow 0, \end{aligned}$$
(A-4)
$$\begin{aligned}&\max _{\begin{array}{c} |k_1-k_1^*|\, \ge \,n \varepsilon _n \, \cup \, |k_2-k_2^*|\, \ge \, n \varepsilon _n \end{array}} \frac{A_2(k_1,k_2)}{\big (A_3(k_1,k_2)-A_3(k_1^*,k_2^*)\big )} \rightarrow 0. \end{aligned}$$
(A-5)

Consequently,

$$\begin{aligned} P\left( \left| \widehat{k}_1-k_1^*\right| \le \varepsilon _n\,n\right) \rightarrow 1 \quad \mathrm {and} \quad P\left( \left| \widehat{k}_2-k_2^*\right| \le \varepsilon _n\,n\right) \rightarrow 1. \end{aligned}$$
(A-6)

Proof

The equalities (A-4) and (A-5) follow from (A-1), (A-2) and (A-3). \(\square \)

In what follows we will use the equalities:

$$\begin{aligned}&A_1(k_1,k_2)-A_1\left( k_1^*,k_2^*\right) \nonumber \\&\quad =\frac{k_2n}{k_1(k_2-\rho ^2k_1)}\left( \frac{S_{ev}(k_1,k_2)}{\sqrt{n}}+\frac{S_{ev}(k_1^*,k_2^*)}{\sqrt{n}}\right) \left( \frac{S_{ev}(k_1,k_2)}{\sqrt{n}}-\frac{S_{ev}(k_1^*,k_2^*)}{\sqrt{n}}\right) \nonumber \\&\qquad +\left( \frac{S_{ev}(k_1^*,k_2^*)}{\sqrt{n}}\right) ^2\left( \frac{k_2n}{k_1(k_2-\rho ^2 k_1)}-\frac{k_2^* n}{k_1^*(k_2^*-\rho ^2 k_1^*)}\right) \nonumber \\&\qquad +\frac{n}{k_2}\left( \frac{S_v(k_2)}{\sqrt{n}}+\frac{S_v(k_2^*)}{\sqrt{n}}\right) \left( \frac{S_v(k_2)}{\sqrt{n}}-\frac{S_v(k_2^*)}{\sqrt{n}}\right) \nonumber \\&\qquad +\left( \frac{S_v(k_2^*)}{\sqrt{n}}\right) ^2\left( \frac{n}{k_2}-\frac{n}{k_2^*}\right) , \end{aligned}$$
(A-7)
$$\begin{aligned}&A_2(k_1,k_2)-A_2\left( k_1^*,k_2^*\right) \nonumber \\&\quad =\frac{k_2n}{k_1(k_2-\rho ^2k_1)}\left( \frac{m_{xz}(k_1,k_2)}{\sqrt{n}}- \frac{m_{xz}(k_1^*,k_2^*)}{\sqrt{n}}\right) \frac{S_{ev}(k_1,k_2)}{\sqrt{n}}\nonumber \\&\qquad +\frac{k_2n}{k_1(k_2-\rho ^2k_1)} \frac{m_{xz}(k_1^*,k_2^*)}{\sqrt{n}}\left( \frac{S_{ev}(k_1,k_2)}{\sqrt{n}}-\frac{S_{ev}(k_1^*,k_2^*)}{\sqrt{n}}\right) \nonumber \\&\qquad +\left( \frac{k_2n}{k_1(k_2-\rho ^2k_1)} - \frac{k_2^*n}{k_1^*(k_2^*-\rho ^2k_1^*)}\right) \frac{m_{xz}(k_1^*,k_2^*)}{\sqrt{n}}\frac{S_{ev}(k_1^*,k_2^*)}{\sqrt{n}} \nonumber \\&\qquad +\frac{n}{k_2}\left( \frac{m_{z}(k_2)}{\sqrt{n}}- \frac{m_{z}(k_2^*)}{\sqrt{n}}\right) \frac{S_v(k_2)}{\sqrt{n} }+ \frac{n}{k_2}\frac{m_{z}(k_2^*)}{\sqrt{n}}\left( \frac{S_v(k_2)}{\sqrt{n}}-\frac{S_v(k_2^*)}{\sqrt{n}}\right) \nonumber \\&\qquad +\left( \frac{n}{k_2} - \frac{n}{k_2^*}\right) \frac{m_{z}(k_2^*)}{\sqrt{n}}\frac{S_v(k_2^*)}{\sqrt{n}}. \end{aligned}$$
(A-8)

Lemma A.4

Let \(\{\varepsilon _n\}\) be a sequence such that \(\varepsilon _n \rightarrow 0\) and \(\Delta _n \sqrt{n} \varepsilon _n \rightarrow \infty \).

$$\begin{aligned}&\max _{\begin{array}{c} |k_1-k_1^*| \le \varepsilon _n n\\ |k_2-k_2^*| \le \varepsilon _n n \end{array}} \left| \chi ^2(k_1,k_2)-\chi ^2(k_1^*,k_2^*)\right. \nonumber \\&\left. \quad -\left( A_3(k_1,k_2)-A_3(k_1^*,k_2^*)\right) -2\left( A_2(k_1,k_2)-A_2(k_1^*,k_2^*)\right) \right| = o_p(1). \end{aligned}$$
(A-9)

Proof

For any sequence \(\{e(i)\}\) of i.i.d. random variables with zero mean and unit variance it holds

$$\begin{aligned} \max _{ |l-l^*| \le \varepsilon _n n} \frac{1}{\sqrt{\varepsilon _n n} } \left| \sum _{i=\min (l,l^*)+1}^{\max (l,l^*)} e(i)\right| = O_p(1). \end{aligned}$$

Using this bound for \(\{e_1(i)\}\) and \(\{v(i)\}\) with \(k_1^*\) and \(k_2^*\) we get that

$$\begin{aligned} \max _{\begin{array}{c} |k_1-k_1^*| \le \varepsilon _n n\\ |k_2-k_2^*| \le \varepsilon _n n \end{array}} \left| A_1(k_1,k_2) - A_1(k_1^*,k_2^*)\right| = O_P(\sqrt{\varepsilon _n}) \end{aligned}$$
(A-10)

which proves (A-9). \(\square \)

Let \(\{q_n\}\) be a sequence such that \(q_n \rightarrow \infty \). We introduce the set \(\mathcal {C}=\{(k_1,k_2);\, |k_1-k_1^*| \le \varepsilon _n n, |k_2-k_2^*| \le \varepsilon _n n\, \mathrm {such\, that}\, |k_1-k_1^*| \ge q_n/\Delta _n^2\, \mathrm {or}\, |k_2-k_2^*| \ge q_n/\Delta _n^2\}\). The set \(\mathcal {C}= \mathcal {C}_1 \cup \mathcal {C}_2 \cup \mathcal {C}_3\), where \(\mathcal {C}_1=\{(k_1,k_2) \in \mathcal {A}, q_n/\Delta _n^2 \le |k_1-k_1^*| \le \varepsilon _n n, |k_2-k_2^*| < q_n/\Delta _n^2\}\), \(\mathcal {C}_2=\{(k_1,k_2) \in \mathcal {A}, q_n/\Delta _n^2 \le |k_2-k_2^*| \le \varepsilon _n n, |k_1-k_1^*| < q_n/\Delta _n^2\}\), \(\mathcal {C}_3=\{(k_1,k_2) \in \mathcal {A}, q_n/\Delta _n^2 \le |k_1-k_1^*| \le \varepsilon _n n, q_n/\Delta _n^2 \le |k_2-k_2^*| \le \varepsilon _n n\}\).

Lemma A.5

It holds

$$\begin{aligned}&\max _{(k_1,k_2) \in \mathcal {C}} \frac{\left| A_1(k_1,k_2)-A_1(k_1^*,k_2^*)\right| }{\Delta _n^2\left( \left| k_1-k_1^*\right| +\left| k_2-k_2^*\right| \right) }\,=\,o_P(1), \end{aligned}$$
(A-11)
$$\begin{aligned}&\max _{(k_1,k_2) \in \mathcal {C}} \frac{|A_2(k_1,k_2)-A_2(k_1^*,k_2^*)|}{\Delta _n^2\left( \left| k_1-k_1^*\right| +\left| k_2-k_2^*\right| \right) }\,=\,o_P(1). \end{aligned}$$
(A-12)

Proof

Consider \((k_1,k_2) \in \mathcal {C}_1\); then

$$\begin{aligned}&\frac{1}{\Delta _n} \max _{q_n/\Delta _n^2 \le |k_1-k_1^*| \le \varepsilon _n n} \frac{1}{|k_1-k_1^*|} \sum _{i=\min (k_1,k_1^*)+1}^{\max (k_1,k_1^*)} e_1(i) = o_p(1),\\&\frac{1}{\Delta _n} \max _{|k_2-k_2^*| \le q_n/\Delta _n^2} \frac{\Delta _n^2}{q_n} \sum _{i=\min (k_2,k_2^*)+1}^{\max (k_2,k_2^*)} v(i) = o_p(1). \end{aligned}$$

Similar equalities hold for \(\mathcal {C}_2\) and \(\mathcal {C}_3\). Therefore, it holds

$$\begin{aligned} \max _{(k_1,k_2) \in \mathcal {C}} \frac{\left| S_{ev}(k_1,k_2)-S_{ev}(k_1^*,k_2^*)\right| }{\Delta _n\left( |k_1-k_1^*|+|k_2-k_2^*|\right) } \,=\, o_p(1). \end{aligned}$$
(A-13)

Relation (A-13) yields (A-11), and (A-13) together with

$$\begin{aligned} \max _{(k_1,k_2) \in \mathcal {C}} \frac{\big |m_{xz}(k_1,k_2)-m_{xz}(k_1^*,k_2^*)\big |}{\Delta _n\big (|k_1-k_1^*|+|k_2-k_2^*|\big )} =O(1) \end{aligned}$$
(A-14)

yields (A-12). \(\square \)

Lemma A.6

It holds

$$\begin{aligned} \Delta _n^2\left| \widehat{k}_1 - k_1^*\right| = O_P(1) \quad \mathrm {and} \quad \Delta _n^2\left| \widehat{k}_2 - k_2^*\right| = O_P(1). \end{aligned}$$

Proof

The assertions are a consequence of the previous lemmas. \(\square \)

For any \(C>0\) we introduce a set \(\mathcal {M}_C=\{(k_1,k_2); |k_1-k_1^*|\le C/\Delta _n^2 \cap |k_2-k_2^*|\le C/\Delta _n^2\}\).

Lemma A.7

For any C it holds

$$\begin{aligned}&\max _{(k_1,k_2) \in \mathcal {M}_C}\Bigg |\chi ^2(k_1,k_2)-\chi ^2(k_1^*,k_2^*)-\Bigg (-\frac{a_n^2|k_1-k_1^*|}{1-\rho ^2}-\frac{b_n^2|k_2-k_2^*|}{1-\rho ^2}\nonumber \\&\quad +\frac{2 a_n}{\sqrt{1-\rho ^2}}{\text {sign}}(k_1-k_1^*)\sum _{i=\min (k_1,k_1^*)+1}^{\max (k_1,k_1^*)} \Big (\sqrt{1-\rho ^2}e_1(i) -\rho v(i)\Big ) \nonumber \\&\quad +\frac{2 b_n}{\sqrt{1-\rho ^2}} {\text {sign}}(k_2-k_2^*)\sum _{i=\min (k_2,k_2^*)+1}^{\max (k_2,k_2^*)} v(i)\Bigg )\Bigg |=o_P(1). \end{aligned}$$
(A-15)

Proof

For \(C>0\) it holds

$$\begin{aligned}&\max _{(k_1,k_2) \in \mathcal {M}_C}\left| A_3(k_1,k_2)-A_3(k_1^*,k_2^*)- \left( -\frac{a_n^2|k_1-k_1^*|}{1-\rho ^2}-\frac{b_n^2|k_2-k_2^*|}{1-\rho ^2}\right) \right| \\&\quad = O\left( \Delta _n^2 \frac{(k_1-k_1^*)^2+(k_2-k_2^*)^2}{n}\right) = o(1).\\&\max _{(k_1,k_2) \in \mathcal {M}_C}\left| A_2(k_1,k_2)-A_2(k_1^*,k_2^*) \right. \\&\left. \qquad -\frac{a_n}{\sqrt{1-\rho ^2}}{\text {sign}}(k_1-k_1^*)\sum _{i=\min (k_1,k_1^*)+1}^{\max (k_1,k_1^*)} \left( \sqrt{1-\rho ^2}e_1(i) -\rho v(i)\right) \right. \nonumber \\&\left. \qquad -\frac{b_n}{\sqrt{1-\rho ^2}} {\text {sign}}(k_2-k_2^*)\sum _{i=\min (k_2,k_2^*)+1}^{\max (k_2,k_2^*)} v(i)\right| =o_p(1). \end{aligned}$$

\(\square \)

Proof of Theorem 1.

As explained in Csörgő and Horváth (1997), the assertion of Theorem 1 follows from the convergence of partial sums and the fact that the variables \(\{\sqrt{1-\rho ^2}e_1(i) -\rho v(i), i=\lfloor k_1^*-C/\Delta _n^2\rfloor ,\ldots ,\lceil k_1^*+C/\Delta _n^2 \rceil \}\) and \(\{v(i), i=\lfloor k_2^*-C/\Delta _n^2\rfloor ,\ldots ,\lceil k_2^*+C/\Delta _n^2 \rceil \}\) are independent for n large enough.
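The limit variable is the location of the maximum of a two-sided random walk with triangular drift (cf. Yao 1987, and Stryhn 1996 for the Brownian analogue), and its law is easy to approximate by Monte Carlo. The sketch below is our own illustration under simplified assumptions (standard normal increments, drift slope 1/2), not the paper's exact standardization:

```python
import numpy as np

def sample_argmax(M=100, n_rep=5000, seed=0):
    """Sample argmax_{|k| <= M} { W(k) - |k|/2 }, where W is a two-sided
    random walk with i.i.d. N(0,1) increments and W(0) = 0.  This mimics
    the limit of a standardized change point estimator; truncation at M
    is harmless since the drift makes large |k| exponentially unlikely."""
    rng = np.random.default_rng(seed)
    ks = np.arange(-M, M + 1)
    drift = -np.abs(ks) / 2.0
    out = np.empty(n_rep, dtype=int)
    for r in range(n_rep):
        right = np.cumsum(rng.standard_normal(M))   # W(1), ..., W(M)
        left = np.cumsum(rng.standard_normal(M))    # W(-1), ..., W(-M)
        W = np.concatenate((left[::-1], [0.0], right))
        out[r] = ks[np.argmax(W + drift)]
    return out

samples = sample_argmax()
```

By construction the empirical law is symmetric about 0 and puts substantial mass at 0, as expected for an argmax of this type.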

Proof of Theorem 2.

Theorem 2 is a consequence of the following two lemmas. For any \(C>0\) denote \(\mathcal {M}_C=\{(k_1,k_2); |k_1-k^*| \le C/\Delta _n^2 \cap |k_2-k^*| \le C/\Delta _n^2\}\).

Lemma A.8

Denote \(D_{A3}(k_1,k_2,k^*)=A_3(k_1,k_2)-A_3(k^*,k^*)\). For any \(C>0\) it holds

$$\begin{aligned}&\max _{\begin{array}{c} (k_1,k_2) \in \mathcal {M}_C\\ k_1 \le k_2 \le k^* \end{array}} \left| D_{A3}(k_1,k_2,k^*) - \left( -\frac{a_n^2(k^*-k_1)}{1-\rho ^2}-\frac{(b_n^2-2\rho a_nb_n)(k^*-k_2)}{1-\rho ^2}\right) \right| =o_p(1),\\&\max _{\begin{array}{c} (k_1,k_2) \in \mathcal {M}_C\\ k_2 \le k_1 \le k^* \end{array}} \left| D_{A3}(k_1,k_2,k^*) - \left( -\frac{(a_n^2-2\rho a_n b_n)(k^*-k_1)}{1-\rho ^2}-\frac{b_n^2(k^*-k_2)}{1-\rho ^2}\right) \right| =o_p(1),\\&\max _{\begin{array}{c} (k_1,k_2) \in \mathcal {M}_C\\ k_2 \le k^* \le k_1 \end{array}} \left| D_{A3}(k_1,k_2,k^*) - \left( -\frac{a_n^2(k_1-k^*)}{1-\rho ^2}-\frac{b_n^2(k^*-k_2)}{1-\rho ^2}\right) \right| =o_p(1),\\&\max _{\begin{array}{c} (k_1,k_2) \in \mathcal {M}_C\\ k^* \le k_2 \le k_1 \end{array}} \left| D_{A3}(k_1,k_2,k^*) - \left( -\frac{a_n^2(k_1-k^*)}{1-\rho ^2}-\frac{(b_n^2-2\rho a_nb_n)(k_2-k^*)}{1-\rho ^2}\right) \right| =o_p(1),\\&\max _{\begin{array}{c} (k_1,k_2) \in \mathcal {M}_C\\ k^* \le k_1 \le k_2 \end{array}} \left| D_{A3}(k_1,k_2,k^*) - \left( -\frac{(a_n^2-2\rho a_n b_n)(k_1-k^*)}{1-\rho ^2}-\frac{b_n^2(k_2-k^*)}{1-\rho ^2}\right) \right| =o_p(1),\\&\max _{\begin{array}{c} (k_1,k_2) \in \mathcal {M}_C\\ k_1 \le k^* \le k_2 \end{array}} \left| D_{A3}(k_1,k_2,k^*) - \left( -\frac{a_n^2(k^*-k_1)}{1-\rho ^2}-\frac{b_n^2(k_2-k^*)}{1-\rho ^2}\right) \right| =o_p(1). \end{aligned}$$

Lemma A.9

For any \(C>0\) it holds

$$\begin{aligned}&\max _{(k_1,k_2) \in \mathcal {M}_C}\Big |A_2(k_1,k_2)-A_2(k^*,k^*) \\&\quad -\frac{a_n}{\sqrt{1-\rho ^2}}{\text {sign}}(k_1-k^*)\sum _{i=\min (k_1,k^*)+1}^{\max (k_1,k^*)} \Big (\sqrt{1-\rho ^2}e_1(i) -\rho v(i)\Big ) \nonumber \\&\quad -\frac{b_n}{\sqrt{1-\rho ^2}} {\text {sign}}(k_2-k^*)\sum _{i=\min (k_2,k^*)+1}^{\max (k_2,k^*)} v(i)\Big |=o_p(1). \nonumber \end{aligned}$$

Proof of Theorem 3.

Theorem 3 is a multivariate version of Theorem 1. Again we need to express \(\chi ^2(k_1,\ldots ,k_p)\) as a sum of three terms.

We decompose the matrix \(\varvec{\Sigma }^{-1}=||d_{jl}||=\varvec{R}\varvec{R}^T\) with \(\varvec{R}=||r_{jl}||\) being an upper triangular matrix and introduce vectors \(\varvec{Z}(i)=\big (Z_1(i),\ldots ,Z_p(i)\big )^T=\varvec{R}^T\varvec{X}(i)\) and \(\varvec{v}(i)=\big (v_1(i),\ldots ,v_p(i)\big )^T=\varvec{R}^T\varvec{e}(i)\) with uncorrelated components. The estimates \(\widehat{k}_1,\ldots ,\widehat{k}_p\) maximize

$$\begin{aligned} \chi ^2(k_1,\ldots ,k_p)=\widetilde{\varvec{Z}}^T \varvec{B}^{-1} \widetilde{\varvec{Z}}, \end{aligned}$$

where \(\widetilde{\varvec{Z}}=\left( r_{11}\sum _{i=1}^{k_1}Z_1(i)+\cdots +r_{1p}\sum _{i=1}^{k_1}Z_p(i),\ldots ,r_{pp}\sum _{i=1}^{k_p} Z_p(i)\right) ^T\) and the matrix \(\varvec{B}=\varvec{B}(\varvec{k})=||k_{jl}d_{jl}||\) with \(k_{jl}=\min (k_j,k_l)\). Denoting \(\widetilde{\varvec{m}}=E \widetilde{\varvec{Z}}= \left( \widetilde{m}_1(k_1),\ldots ,\widetilde{m}_p(k_p)\right) ^T\) and \(\widetilde{\varvec{v}}=\left( \widetilde{v}_1(k_1),\ldots ,\widetilde{v}_p(k_p)\right) ^T\), where \(\widetilde{v}_1(k_1)=r_{11}\sum _{i=1}^{k_1} v_1(i) +\cdots + r_{1p}\sum _{i=1}^{k_1} v_p(i)\), ...,\(\widetilde{v}_p(k_p)=r_{pp}\sum _{i=1}^{k_p} v_p(i)\), we may express

$$\begin{aligned} \chi ^2(k_1,\ldots ,k_p)=A_1(k_1,\ldots ,k_p)+2\,A_2(k_1,\ldots ,k_p)+A_3(k_1,\ldots ,k_p), \end{aligned}$$

with \(A_1(k_1,\ldots ,k_p)=\widetilde{\varvec{v}}^T\varvec{B}^{-1}\widetilde{\varvec{v}}\), \(A_2(k_1,\ldots ,k_p)=\widetilde{\varvec{m}}^T\varvec{B}^{-1}\widetilde{\varvec{v}}\), \(A_3(k_1,\ldots ,k_p)=\widetilde{\varvec{m}}^T\varvec{B}^{-1}\widetilde{\varvec{m}}\).
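This decomposition is just the bilinear expansion of the quadratic form around \(E\widetilde{\varvec{Z}}\). A short numerical check with arbitrary values (our own illustration, not tied to the model):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
m = rng.standard_normal(p)          # plays the role of m-tilde = E Z-tilde
v = rng.standard_normal(p)          # plays the role of v-tilde (noise part)
A = rng.standard_normal((p, p))
B = A @ A.T + p * np.eye(p)         # some positive definite B(k)
Binv = np.linalg.inv(B)

z = m + v                           # Z-tilde = m-tilde + v-tilde
chi2 = z @ Binv @ z
A1 = v @ Binv @ v
A2 = m @ Binv @ v
A3 = m @ Binv @ m
# chi2 equals A1 + 2*A2 + A3 by bilinearity of the quadratic form
```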

We notice that for \(|B_{qq}(k_1,\ldots ,k_q)|=\det (\,||k_{jl}\,d_{jl}||_{j,l=1}^q\,)\), \(q=1,\ldots ,p\), it holds that

$$\begin{aligned} \min _{\begin{array}{c} {[\beta n]} \le k_1 \le n\\ \vdots \\ {[\beta n]} \le k_q \le n\\ \end{array}} |B_{qq}(k_1,\ldots ,k_q)| \ge const\, n^q. \end{aligned}$$

We may express \(\varvec{B}=\varvec{U}^T\varvec{U}\), where \(\varvec{U}=||u_{jl}||\) is an upper triangular matrix with \(u_{jj}^2=|B_{jj}|/|B_{j-1,j-1}|\). The inverse is \(\varvec{B}^{-1}=\varvec{U}^{-1}\big (\varvec{U}^T\big )^{-1}\), where \(\big (\varvec{U}^T\big )^{-1}=||t_{jl}(k_1,\ldots ,k_p)||\) is a lower triangular matrix. From the algorithm of the Cholesky decomposition we obtain that all \(t_{jl}(k_1,\ldots ,k_p)\), \(1 \le j \le p\), \(1 \le l \le j\), may be expressed in the form

$$\begin{aligned} t_{jl}(k_1,\ldots ,k_p)=\frac{P_{jl}^h(k_1,\ldots ,k_p)}{\sqrt{P_{jl}^d(k_1,\ldots ,k_p)}}, \end{aligned}$$

where \(P_{jl}^h(k_1,\ldots ,k_p)\) and \(P_{jl}^d(k_1,\ldots ,k_p)\) are polynomials in \(k_1,\ldots ,k_p\) such that \({\text {deg}}(P_{jl}^d) \ge 2\,{\text {deg}}(P_{jl}^h)+1\), where \({\text {deg}}(\cdot )\) denotes the degree of the polynomial. Moreover, \(P_{jl}^d\) is a product of \(|B_{11}|^{q_1},\ldots , |B_{pp}|^{q_p}\) for some \(q_1,\ldots , q_p \in \{0\} \cup \varvec{N}\).

It follows that there exists a constant C such that for \(k_1,k_1^*,\ldots ,k_p,k^*_p \in [\beta n,n]\) and all \(j=1,\ldots ,p\), \(l=1,\ldots ,j\) it holds

$$\begin{aligned}&|t_{jl}(k_1,\ldots ,k_p)| \le \frac{C}{\sqrt{n}},\\&|t_{jl}(k_1,\ldots ,k_p)-t_{jl}(k^*_1,\ldots ,k^*_p)| \le \frac{C}{n^{3/2}} \Big (|k_1-k_1^*|+\cdots +|k_p-k^*_p|\Big ). \end{aligned}$$
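These bounds can be illustrated numerically: for change points that are fixed fractions of n, the entries of \(\big (\varvec{U}^T\big )^{-1}\) decay like \(n^{-1/2}\), so \(\sqrt{n}\,\max _{j,l}|t_{jl}|\) is stable in n. A sketch with an arbitrary positive definite \(\varvec{\Sigma }\) (the numerical values are our own choices):

```python
import numpy as np

def t_factor(ks, Sigma):
    """(U^T)^{-1} for B = U^T U with B_{jl} = min(k_j, k_l) d_{jl},
    D = Sigma^{-1} = ||d_{jl}||.  numpy's Cholesky gives B = L L^T,
    hence U = L^T and (U^T)^{-1} = L^{-1}, lower triangular."""
    D = np.linalg.inv(Sigma)
    B = np.minimum.outer(ks, ks) * D
    L = np.linalg.cholesky(B)
    return np.linalg.inv(L)

Sigma = np.array([[1.0, 0.4, 0.2],
                  [0.4, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
scaled = []
for n in (1000, 4000, 16000):
    ks = np.array([int(0.3 * n), int(0.5 * n), int(0.8 * n)])
    T = t_factor(ks, Sigma)
    scaled.append(np.sqrt(n) * np.abs(T).max())   # roughly constant in n
```

Since \(\varvec{B}\) scales linearly in n when the \(k_j\) are fixed fractions of n, the Cholesky factor grows like \(\sqrt{n}\) and its inverse decays like \(1/\sqrt{n}\), in line with the bound \(|t_{jl}| \le C/\sqrt{n}\).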

To prove consistency of the estimates \(\widehat{k}_1,\ldots ,\widehat{k}_p\), we prove Lemma A.2 in the multidimensional case. The components of the vectors \(\varvec{Z}(i)\), \(i=1,\ldots ,n\), may be expressed as

$$\begin{aligned} \begin{array}{llll} &{}Z_1(i) = \beta _{11} + v_1(i), &{}&{} \quad i=1,\ldots , k^*_1, \\ &{}Z_1(i) = v_1(i), &{}&{} \quad i=k^*_1+1,\ldots , n, \\ &{}\quad \vdots &{}&{} \\ &{}Z_p(i) = \beta _{p1} + v_p(i), &{}&{} \quad i=1,\ldots , k^*_1, \\ &{}\quad \vdots &{}&{} \\ &{}Z_p(i) = \beta _{pp}+ v_p(i), &{}&{} \quad i=k^*_{p-1}+1,\ldots , k^*_p, \\ &{}Z_p(i) = v_p(i), &{}&{} \quad i=k^*_p+1,\ldots , n, \end{array} \end{aligned}$$

where the parameters \(\{\beta _{jl}, j=1,\ldots ,p,\,l=1,\ldots ,j\}\) are connected by \(p(p-1)/2\) linear constraints. Denote by \(RSS_w(k_1,\ldots ,k_p)\) the unrestricted minimum with respect to all \(\beta _{11},\ldots ,\beta _{pp}\) of the sum of squares

$$\begin{aligned} \sum _{i=1}^{k_1} \big (Z_1(i)-\beta _{11}\big )^2+\cdots +\sum _{i=1}^{k_1} \big (Z_p(i)-\beta _{p1}\big )^2+\cdots + \sum _{i=k_{p-1}+1}^{k_p} \big (Z_p(i)-\beta _{pp}\big )^2 \end{aligned}$$
(A-16)

and denote by \(RSS(k_1,\ldots ,k_p)\) the restricted minimum of (A-16) under the condition that those \(p(p-1)/2\) linear constraints are fulfilled. We denote

$$\begin{aligned} \chi ^2_w(k_1,\ldots ,k_p)=\sum _{j=1}^p \sum _{i=1}^n Z_j^2(i) - RSS_w. \end{aligned}$$

For all \(k_1,\ldots ,k_p\)

$$\begin{aligned} \chi ^2(k_1,\ldots ,k_p)=\chi ^2_w(k_1,\ldots ,k_p)-d^2(k_1,\ldots ,k_p), \end{aligned}$$
(A-17)

where \(d^2(k_1,\ldots ,k_p)\) is non-negative. For all \(k_1,\ldots , k_p\) we may express the difference

$$\begin{aligned} \chi ^2(k_1^*,\ldots ,k_p^*)-\chi ^2(k_1,\ldots ,k_p)&=\chi _w^2(k_1^*,\ldots ,k_p^*)-\chi _w^2(k_1,\ldots ,k_p)\\&\quad -\big (d^2(k_1^*,\ldots ,k_p^*)-d^2(k_1,\ldots ,k_p)\big ). \end{aligned}$$

Denote by \(A_{3w}(k_1,\ldots ,k_p)\) the value of \(\chi ^2_w(k_1,\ldots ,k_p)\) computed for the sequences \(\{E Z_j(i), 1 \le i \le n\}\), \(j=1,\ldots ,p\), instead of the sequences \(\{Z_j(i),1 \le i \le n\}\), \(j=1,\ldots ,p\), i.e. with all \(v_j(i)=0\), \(1 \le j \le p\), \(1 \le i \le n\). Similarly, \(A_3(k_1,\ldots ,k_p)\) is the value of \(\chi ^2(k_1,\ldots ,k_p)\) with all \(v_j(i)=0\), \(1 \le j \le p\), \(1 \le i \le n\). In agreement with (A-17) we may write

$$\begin{aligned} A_3(k_1,\ldots ,k_p)=A_{3w}(k_1,\ldots ,k_p)-d_3^2(k_1,\ldots ,k_p), \end{aligned}$$

where \(d_3^2(k_1,\ldots ,k_p)\) is non-negative. For the model (14) the considered constraints are fulfilled, and therefore \(d_3^2(k_1^*,\ldots ,k_p^*)=0\), which yields

$$\begin{aligned} A_3(k_1^*,\ldots ,k_p^*)-A_3(k_1,\ldots ,k_p) \ge A_{3w}(k_1^*,\ldots ,k_p^*)-A_{3w}(k_1,\ldots ,k_p). \end{aligned}$$

It may be easily proved that

$$\begin{aligned} A_{3w}(k_1^*,\ldots ,k_p^*)-A_{3w}(k_1,\ldots ,k_p) \ge const\,\Delta _{wn}^2 \big (|k_1-k_1^*|+\cdots +|k_p-k_p^*|\big ) \end{aligned}$$

with \(\Delta _{wn}^2=\beta _{11}^2+(\beta _{22}-\beta _{21})^2+\cdots +(\beta _{pp}-\beta _{p,p-1})^2+\beta _{pp}^2.\) As we may find a constant C such that \(a_1^2+\cdots +a_p^2 \le C\,\Delta _{wn}^2\), we get the assertion of Lemma A.2:

$$\begin{aligned} A_3(k_1^*,\ldots ,k_p^*)-A_3(k_1,\ldots ,k_p) \ge C\,(a_1^2+\cdots +a_p^2) \left( |k_1-k_1^*| + \cdots + |k_p-k_p^*|\right) . \end{aligned}$$

The assertions of Lemmas A.1 and A.3–A.6 remain true, as

$$\begin{aligned} A_1(k_1,\ldots ,k_p)&=\left( t_{11}\widetilde{v}_1(k_1)\right) ^2+\cdots +\left( t_{p1}\widetilde{v}_1(k_1)+\cdots +t_{pp}\widetilde{v}_p(k_p)\right) ^2,\\ A_2(k_1,\ldots ,k_p)&=\left( t_{11}\widetilde{v}_1(k_1)\right) \left( t_{11}\widetilde{m}_1(k_1)\right) +\cdots +\\&\quad +\left( t_{p1}\widetilde{v}_1(k_1)+\cdots +t_{pp}\widetilde{v}_p(k_p)\right) \left( t_{p1}\widetilde{m}_1(k_1)+\cdots +t_{pp}\widetilde{m}_p(k_p)\right) . \end{aligned}$$

Finally, we prove Lemma A.7. The assertion of Theorem 3 is then an easy consequence.

For any \(C>0\) we consider \(k_1,\ldots ,k_p\) satisfying \(|k_1-k_1^*| \le C/\Delta _n^2, \ldots , |k_p-k_p^*| \le C/\Delta _n^2\). It is supposed that \(k_1^*=[n\tau _1^*],\ldots ,k_p^*=[n\tau _p^*]\) with \(\tau _1^*< \cdots < \tau _p^*\), and therefore \(k_1< k_2<\cdots < k_p\) for large enough n. As any \(k_j\) may be situated on either side of \(k_j^*\), there exist \(2^p\) mutual positions of \(k_1,\ldots ,k_p\) and \(k_1^*,\ldots , k_p^*\), e.g. \(k_1 \le k_1^*< k_2 \le k_2^*< \cdots < k_p \le k_p^*\) or \(k_1^*< k_1< k_2 \le k_2^* < \cdots < k_p \le k_p^*\), etc. In what follows we prove Lemma A.7 for \(k_1 \le k_1^*< k_2 \le k_2^*< \cdots < k_p \le k_p^*\) and \(|k_1-k_1^*| \le C/\Delta _n^2 \cap \cdots \cap |k_p-k_p^*| \le C/\Delta _n^2\). For all other positions one may proceed similarly.

Denote \(\varvec{a}=(a_1,\ldots ,a_p)^T\). Recall that \(\varvec{\Sigma }^{-1}=\varvec{R}\varvec{R}^T\) where \(\varvec{R}\) is an upper triangular matrix. Let \(\varvec{r}_j=(0,\ldots ,0,r_{jj},\ldots , r_{jp})\) be the j-th row of \(\varvec{R}\), \(\varvec{R}_1=\varvec{R}\) and \(\varvec{R}_j\), \(j=2,\ldots ,p\), be a matrix arising from the matrix \(\varvec{R}\) by replacing the first \(j-1\) rows by zero rows. We may express

$$\begin{aligned} \varvec{B}= & {} \big (k_1\varvec{R}_1\varvec{R}_1^T+(k_2-k_1)\varvec{R}_2\varvec{R}_2^T+\cdots + (k_p-k_{p-1})\varvec{R}_p\varvec{R}_p^T\big ),\\ \varvec{B}^*= & {} \big (k_1^*\varvec{R}_1\varvec{R}_1^T+(k_2^*-k_1^*)\varvec{R}_2\varvec{R}_2^T+\cdots + (k_p^*-k_{p-1}^*)\varvec{R}_p\varvec{R}_p^T\big ). \end{aligned}$$
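This telescoping representation can be verified directly, since for \(k_1 \le \cdots \le k_p\) the (j,l) entry of the right-hand side sums to \(\min (k_j,k_l)\,d_{jl}\). A numerical sketch (\(\varvec{\Sigma }\) and the \(k_j\) are our own choices; the upper triangular factor \(\varvec{R}\) with \(\varvec{\Sigma }^{-1}=\varvec{R}\varvec{R}^T\) is obtained by a flipped Cholesky factorization):

```python
import numpy as np

def upper_factor(D):
    """R upper triangular with D = R R^T (Cholesky after flipping rows
    and columns, since numpy factorizes D = L L^T with L lower)."""
    J = np.eye(len(D))[::-1]
    L = np.linalg.cholesky(J @ D @ J)
    return J @ L @ J

Sigma = np.array([[1.0, 0.4, 0.2, 0.1],
                  [0.4, 1.0, 0.3, 0.2],
                  [0.2, 0.3, 1.0, 0.3],
                  [0.1, 0.2, 0.3, 1.0]])
D = np.linalg.inv(Sigma)
R = upper_factor(D)                      # Sigma^{-1} = R R^T
ks = np.array([30, 50, 80, 120])         # k_1 <= ... <= k_p

total = np.zeros_like(D)
prev = 0
for j, kj in enumerate(ks):
    Rj = R.copy()
    Rj[:j, :] = 0.0                      # R_{j+1}: first j rows replaced by zeros
    total += (kj - prev) * (Rj @ Rj.T)   # k_1 R_1 R_1^T + (k_2 - k_1) R_2 R_2^T + ...
    prev = kj
# total now equals B with entries min(k_j, k_l) d_{jl}
```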

Then \(A_3(k_1,\ldots ,k_p)=\varvec{a}^T\varvec{F}^T\varvec{B}^{-1}\varvec{F}\varvec{a}\) with \(\varvec{F}=\varvec{B}+(k_2^*-k_1)\varvec{R}_2\varvec{R}_1^T+(k_1-k_1^*)\varvec{R}_2\varvec{R}_2^T+\cdots +(k_{p-1}-k_{p-1}^*)\varvec{R}_p\varvec{R}_p^T\), so that

$$\begin{aligned}&A_3(k_1,\ldots ,k_p)-A_3(k_1^*,\ldots ,k_p^*)=(k_1^*-k_1)\varvec{a}^T\big (-\varvec{R}_1\varvec{R}_1^T- \varvec{R}_2\varvec{R}_2^T+\varvec{R}_1\varvec{R}_2^T \\&\qquad + \varvec{R}_2\varvec{R}_1^T\big )\varvec{a}+ (k_2^*-k_2)\varvec{a}^T\big (-\varvec{R}_2\varvec{R}_2^T- \varvec{R}_3\varvec{R}_3^T+\varvec{R}_2\varvec{R}_3^T + \varvec{R}_3\varvec{R}_2^T\big )\varvec{a}+\cdots \\&\qquad +(k_p^*-k_p)\varvec{a}^T\big (-\varvec{R}_p\varvec{R}_p^T\big )\varvec{a}+o(1)\\&\quad =-(k_1^*-k_1)a_1^2d_{11}-\cdots -(k_p^*-k_p)a_p^2d_{pp}+o(1),\\&A_2(k_1,\ldots ,k_p)-A_2(k_1^*,\ldots ,k_p^*)= - \varvec{a}^T\big (\varvec{R}_1-\varvec{R}_2\big )\sum _{i=k_1+1}^{k_1^*} \varvec{v}(i)\\&\qquad -\varvec{a}^T\big (\varvec{R}_2-\varvec{R}_3\big )\sum _{i=k_2+1}^{k_2^*}\varvec{v}(i)-\cdots - \varvec{a}^T\big (\varvec{R}_{p-1}-\varvec{R}_p\big )\sum _{i=k_p+1}^{k_p^*}\varvec{v}(i)+o_P(1)\\&\quad =-a_1\varvec{r}_1 \sum _{i=k_1+1}^{k_1^*} \varvec{v}(i)-a_2\varvec{r}_2 \sum _{i=k_2+1}^{k_2^*} \varvec{v}(i)-\cdots - a_p\varvec{r}_p \sum _{i=k_p+1}^{k_p^*} \varvec{v}(i) + o_P(1)\\&\quad =-a_1\sqrt{d_{11}}\sum _{i=k_1+1}^{k_1^*} \frac{\varvec{r}_1\varvec{v}(i)}{\sqrt{d_{11}}}-\cdots - a_p\sqrt{d_{pp}}\sum _{i=k_p+1}^{k_p^*} \frac{\varvec{r}_p\varvec{v}(i)}{\sqrt{d_{pp}}} + o_P(1). \end{aligned}$$

Cite this article

Jarušková, D. Estimating non-simultaneous changes in the mean of vectors. Metrika 81, 721–743 (2018). https://doi.org/10.1007/s00184-018-0671-2
