Estimating multiple breaks in mean sequentially with fractionally integrated errors

  • Regular Article
  • Published in Statistical Papers

Abstract

This paper studies the estimation of multiple breaks in mean when the model errors are fractionally integrated processes of order d with \(-0.5<d<0.5\). For this problem with \(0<d<0.5\), Lavielle and Moulines (J Time Ser Anal 21(1):33–59, 2000) estimated the break fractions simultaneously by the least squares method. The computational complexity of this method is of order \(O(T^2)\), where T is the sample size, even when the dynamic programming algorithm (Bai and Perron in J Appl Econom 18(1):1–22, 2003) is employed. It is well known that the computational complexity of the sequential method is of order O(T) (Bai in Econom Theory 13(3):315–352, 1997), which is clearly more efficient than the simultaneous method in terms of computational cost. In this paper, we therefore revisit the estimation of multiple breaks in mean with fractionally integrated errors and examine the sequential method for this problem. We find that: (1) when the break magnitudes are fixed, the convergence rates of the estimators of the break fractions are all 1/T, invariant to the fractional differencing parameter d over the entire range \(d\in (-0.5, 0.5)\); (2) when the break magnitudes shrink to zero as \(T\rightarrow \infty \), both the convergence rates and the asymptotic distributions of the estimators of the break fractions depend on d for all \(0\le d<0.5\). The convergence rates of the estimators of the break fractions in the case \(-0.5<d<0\) are not optimal, owing to the absence of optimal Hájek-Rényi inequalities. Our theoretical results extend and improve those established in Bai (1997) and Kuan and Hsu (J Time Ser Anal 19(6):693–708, 1998), respectively. Monte Carlo simulations are conducted to examine the finite-sample performance of the estimators and support our theoretical findings.
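To make the computational-cost comparison concrete, the following is a minimal sketch of the sequential least-squares idea: each pass locates one break by a single O(T) scan over cumulative sums, and the sample is then split at the estimated break. This is only an illustration under simplifying assumptions, not the paper's exact procedure; the helper names `one_break`, `ssr` and `sequential_breaks` are introduced here.

```python
import numpy as np

def one_break(y):
    """Least-squares location of a single break in mean.

    With cumulative sums precomputed, scanning all T-1 candidate
    split points costs O(T) in total.
    """
    T = len(y)
    cs = np.cumsum(y)
    total = cs[-1]
    k = np.arange(1, T)  # the first segment contains k observations
    # Minimising the SSR is equivalent to maximising the
    # between-segment sum of squares:
    gain = cs[:-1] ** 2 / k + (total - cs[:-1]) ** 2 / (T - k)
    return int(k[np.argmax(gain)])

def ssr(seg):
    """Sum of squared residuals around the segment mean."""
    return float(np.sum((seg - seg.mean()) ** 2)) if len(seg) else 0.0

def sequential_breaks(y, m):
    """Estimate m breaks one at a time: repeatedly split the segment
    whose best internal break reduces the SSR the most."""
    segs, found = [(0, len(y))], []
    for _ in range(m):
        best_gain, best = -np.inf, None
        for a, b in segs:
            if b - a < 4:
                continue
            k = a + one_break(y[a:b])
            gain = ssr(y[a:b]) - ssr(y[a:k]) - ssr(y[k:b])
            if gain > best_gain:
                best_gain, best = gain, (a, k, b)
        a, k, b = best
        found.append(k)
        segs.remove((a, b))
        segs += [(a, k), (k, b)]
    return sorted(found)
```

The refinement step of Bai (1997), in which each break is reestimated after the others have been located, is omitted from this sketch.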


(Figures 1–9 appear in the full article.)


Notes

  1. The formulas of \(R_{iT}(k),i=1,2\) in Bai (1997) have more complicated expressions. However, after some careful algebra, one finds that the second terms in \(R_{1T}(k)\) and \(R_{2T}(k)\) in Bai (1997) are actually zero.

References

  • Bai J (1994) Least squares estimation of a shift in linear processes. J Time Ser Anal 15(5):453–472

  • Bai J (1997) Estimating multiple breaks one at a time. Econom Theory 13(3):315–352

  • Bai J, Perron P (1998) Estimating and testing linear models with multiple structural changes. Econometrica 66(1):47–78

  • Bai J, Perron P (2003) Computation and analysis of multiple structural change models. J Appl Econom 18(1):1–22

  • Betken A (2016) Testing for change-points in long-range dependent time series by means of a self-normalized Wilcoxon test. J Time Ser Anal 37:785–809

  • Betken A (2017) Change point estimation based on Wilcoxon tests in the presence of long-range dependence. Electron J Stat 11:3633–3672

  • Bhattacharya PK (1987) Maximum likelihood estimation of a change-point in the distribution of independent random variables: general multiparameter case. J Multivar Anal 23(2):183–208

  • Billingsley P (1999) Convergence of probability measures, 2nd edn. Wiley, New York

  • Chang SY, Perron P (2016) Inference on a structural break in trend with fractionally integrated errors. J Time Ser Anal 37:555–574

  • Ciuperca G (2014) Model selection by LASSO methods in a change-point model. Stat Pap 55(2):349–374

  • Csörgő M, Nasari M, Ould-Haye M (2017) Randomized pivots for means of short and long memory linear processes. Bernoulli 23(4A):2558–2586

  • Granger CWJ, Joyeux R (1980) An introduction to long-memory time series models and fractional differencing. J Time Ser Anal 1(1):15–29

  • Harvey DI, Leybourne SJ, Taylor AMR (2006) Modified tests for a change in persistence. J Econom 134(2):441–469

  • Harvey DI, Leybourne SJ, Taylor AMR (2012) Corrigendum. J Econom 168(2):407

  • Hinkley DV (1970) Inference about the change point in a sequence of random variables. Biometrika 57(1):1–17

  • Hosking JRM (1981) Fractional differencing. Biometrika 68:165–176

  • Hosking JRM (1984) Modelling persistence in hydrological time series using fractional differencing. Water Resour Res 20(12):1898–1908

  • Hsu Y, Kuan C (2008) Change-point estimation of nonstationary I(d) processes. Econ Lett 98(2):115–121

  • Hu S, Li X, Yang W, Wang X (2011) Maximal inequalities for some dependent sequences and their applications. J Korean Stat Soc 40:11–19

  • Iacone F, Leybourne SJ, Taylor AMR (2014) A fixed-\(b\) test for a break in level at an unknown time under fractional integration. J Time Ser Anal 35(1):40–54

  • Iacone F, Leybourne SJ, Taylor AMR (2016) Testing for a change in mean under fractional integration. J Time Ser Econ 9(1):8

  • Kejriwal M, Perron P, Zhou J (2013) Wald tests for detecting multiple structural changes in persistence. Econom Theory 29(2):289–323

  • Kim J, Pollard D (1990) Cube root asymptotics. Ann Stat 18(1):191–219

  • Kuan C, Hsu C (1998) Change-point estimation of fractionally integrated processes. J Time Ser Anal 19(6):693–708

  • Lavielle M, Moulines E (2000) Least-squares estimation of an unknown number of shifts in a time series. J Time Ser Anal 21(1):33–59

  • Lee S, Seo MH, Shin Y (2016) The lasso for high dimensional regression with a possible change point. J R Stat Soc Ser B 78(1):193–210

  • Leybourne S, Kim T, Smith V, Newbold P (2003) Tests for a change in persistence against the null of difference-stationarity. Econom J 6(2):291–311

  • Li Q, Wang L (2017) Robust change point detection method via adaptive LAD-LASSO. Stat Pap (in press)

  • Lin Z, Bai Z (2010) Probability inequalities. Springer, Berlin

  • Ling S, Li WK (2001) Asymptotic inference for nonstationary fractionally integrated autoregressive moving-average models. Econom Theory 4:738–764

  • Mandelbrot BB, Van Ness JW (1968) Fractional Brownian motions, fractional noises and applications. SIAM Rev 10(4):422–437

  • Martins LF, Rodrigues PMM (2014) Testing for persistence change in fractionally integrated models: an application to world inflation rates. Comput Stat Data Anal 76:502–522

  • McLeod AI, Hipel KW (1978) Preservation of the rescaled adjusted range. 1: A reassessment of the Hurst phenomenon. Water Resour Res 14:491–508

  • Nielsen MØ (2004) Efficient inference in multivariate fractionally integrated time series models. Econom J 7:63–97

  • Picard D (1985) Testing and estimating change points in time series. Adv Appl Probab 17(4):841–867

  • Qu Z (2008) Testing for structural change in regression quantiles. J Econom 146(1):170–184

  • Robinson PM (1994) Semiparametric analysis of long-memory time series. Ann Stat 22(1):515–539

  • Robinson PM (1995a) Log-periodogram regression of time series with long range dependence. Ann Stat 23(3):1048–1072

  • Robinson PM (1995b) Gaussian semiparametric estimation of long range dependence. Ann Stat 23(5):1630–1661

  • Shao X (2011) A simple test of changes in mean in the possible presence of long-range dependence. J Time Ser Anal 32(6):598–606

  • Sowell F (1990) The fractional unit root distribution. Econometrica 58(2):495–505

  • Wang L (2008) Change-in-mean problem for long memory time series models with applications. J Stat Comput Simul 78(7):653–668

  • Wenger K, Leschinski C, Sibbertsen P (2018a) A simple test on structural change in long-memory time series. Econ Lett 163:90–94

  • Wenger K, Leschinski C, Sibbertsen P (2018b) Change-in-mean tests in long-memory time series: a review of recent developments. AStA Adv Stat Anal (in press)


Acknowledgements

The study is partially supported by the National Natural Science Foundation of China (No. 11871425) and the Zhejiang Provincial Natural Science Foundation of China (No. LY19A010022 and No. LY17A010016).

Author information


Corresponding author

Correspondence to Tianxiao Pang.


Appendix


1.1 Appendix A

In this subsection, we provide the proofs of the results in Sect. 2. In the next lemma, we present some results on bounding the maximum of weighted partial sums of I(d) processes with \(-0.5<d<0.5\).

Lemma 6.1

Suppose \(\{{x}_{t}\}\) is an I(d) process, that is, (2.1) is satisfied, where \(-0.5<d<0.5\) and \(\{u_t\}\) is a sequence of i.i.d. random variables with mean zero, finite variance and \(E(|u_t|^{2+\delta })<\infty \) for some \(\delta >0\). Then, for large positive integers m,

$$\begin{aligned}&\max _{k\ge m}\frac{1}{k}|S_k|=O_p \left( \frac{1}{\sqrt{m}}\right) ,~~-0.5<d\le 0, \end{aligned}$$
(6.1)
$$\begin{aligned}&\max _{k\ge m}\frac{1}{k}|S_k|=O_p \left( \frac{1}{m^{0.5-d}}\right) ,~~0<d<0.5, \end{aligned}$$
(6.2)
$$\begin{aligned}&\max _{1\le k\le m}\frac{1}{\sqrt{k}}|S_k|=O_p (\sqrt{\log m}),~~-0.5<d\le 0, \end{aligned}$$
(6.3)
$$\begin{aligned}&\max _{1\le k\le m}\frac{1}{\sqrt{k}}|S_k|=O_p(m^d),~~0<d<0.5. \end{aligned}$$
(6.4)

Proof

These results are consequences of some Hájek-Rényi inequalities, where (6.1) and (6.3) with \(-0.5<d<0\) are consequences of Theorem 2.1 in Hu et al. (2011), (6.1) and (6.3) with \(d=0\) are consequences of classical Hájek-Rényi inequalities for i.i.d. random variables, see Lin and Bai (2010), and (6.2) and (6.4) are consequences of Theorem 1 and Lemma 2.2 in Lavielle and Moulines (2000). \(\square \)
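As a numerical sanity check on the rates in (6.1)–(6.2), one can simulate an I(d) series as a truncated MA(\(\infty\)) of i.i.d. shocks, with weights \(\psi_j=\Gamma(j+d)/(\Gamma(j+1)\Gamma(d))\) computed recursively, and track \(\max_{k\ge m}|S_k|/k\) as m grows. This is only an illustrative sketch under the assumptions of Lemma 6.1; the helper name `fi_noise` and the specific choices of d, T and m are ours.

```python
import numpy as np

def fi_noise(T, d, rng, burn=500):
    """Simulate an I(d) series via a truncated MA(infinity):
    x_t = sum_j psi_j u_{t-j}, with psi_0 = 1 and
    psi_j = psi_{j-1} * (j - 1 + d) / j."""
    n = T + burn
    u = rng.standard_normal(n)
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return np.convolve(u, psi)[:n][burn:]

rng = np.random.default_rng(1)
d, T = 0.3, 4000
S = np.cumsum(fi_noise(T, d, rng))
k = np.arange(1, T + 1)

def max_stat(m):
    # max_{k >= m} |S_k| / k, which (6.2) bounds by O_p(m^{-(0.5-d)})
    return np.abs(S[m - 1:] / k[m - 1:]).max()

print(max_stat(100), max_stat(1600))  # the statistic shrinks as m grows
```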

To prove the first two lemmas in Sect. 2, we need the following equations, which are taken from Bai (1997) (see Note 1):

$$\begin{aligned} U_T(k/T)= & {} \frac{k_1^0-k}{T}a_T^2(k)+\frac{k_2^0-k_1^0}{T}b_T^2(k)+\frac{T-k_2^0}{T}c_T^2(k)+\frac{1}{T}\sum _{t=1}^Tx_t^2\nonumber \\&+R_{1T}(k), \quad k\in [1,k_1^0], \end{aligned}$$
(6.5)
$$\begin{aligned} U_T(k/T)= & {} \frac{k_1^0}{T}d_T^2(k)+\frac{k-k_1^0}{T}e_T^2(k)+\frac{k_2^0-k}{T}f_T^2(k)\nonumber \\&\quad +\frac{T-k_2^0}{T}g_T^2(k)+\frac{1}{T}\sum _{t=1}^Tx_t^2 \nonumber \\&+R_{2T}(k), \quad k\in [k_1^0+1,k_2^0], \end{aligned}$$
(6.6)
$$\begin{aligned} U_T(k/T)= & {} \frac{k_1^0}{T}h_T^2(k)+\frac{k_2^0-k_1^0}{T}p_T^2(k)+\frac{k-k_2^0}{T}q_T^2(k)+\frac{1}{T}\sum _{t=1}^Tx_t^2\nonumber \\&+R_{3T}(k), \quad k \in [k_2^0+1,T], \end{aligned}$$
(6.7)

where

$$\begin{aligned}&{\left\{ \begin{array}{ll}a_T(k)=\frac{1}{T-k}[(T-k_1^0)(\mu _1-\mu _2)+(T-k_2^0)(\mu _2-\mu _3)],~~k\in [1,k_1^0]\\ b_T(k)=\frac{1}{T-k}[(k_1^0-k)(\mu _2-\mu _1)+(T-k_2^0)(\mu _2-\mu _3)],~~k\in [1,k_1^0] \\ c_T(k)=\frac{1}{T-k}[(k_1^0-k)(\mu _2-\mu _1)+(k_2^0-k)(\mu _3-\mu _2)],~~k\in [1,k_1^0] \\ d_T(k)=\frac{1}{k}[(k-k_1^0)(\mu _1-\mu _2)],~~ k\in [k_1^0+1,k_2^0]\\ e_T(k)=\frac{1}{k}[k_1^0(\mu _2-\mu _1)],~~ k\in [k_1^0+1,k_2^0]\\ f_T(k)=\frac{1}{T-k}[(T-k_2^0)(\mu _2-\mu _3)],~~ k\in [k_1^0+1,k_2^0]\\ g_T(k)=\frac{1}{T-k}[(k_2^0-k)(\mu _3-\mu _2)],~~ k\in [k_1^0+1,k_2^0] \\ h_T(k)=\frac{1}{k}[(k-k_1^0)(\mu _1-\mu _2)+(k-k_2^0)(\mu _2-\mu _3)],~~k \in [k_2^0+1,T] \\ p_T(k)=\frac{1}{k}[k_1^0(\mu _2-\mu _1)+(k-k_2^0)(\mu _2-\mu _3)],~~k \in [k_2^0+1,T] \\ q_T(k)=\frac{1}{k}[k_1^0(\mu _2-\mu _1)+k_2^0(\mu _3-\mu _2)],~~k \in [k_2^0+1,T] \end{array}\right. }, \end{aligned}$$
(6.8)
$$\begin{aligned} R_{1T}(k)= & {} \frac{1}{T}\left[ 2a_T(k)\sum _{t=k+1}^{k_1^0}x_t+2b_T(k)\sum _{t=k_1^0+1}^{k_2^0}x_t+2c_T(k)\sum _{t=k_2^0+1}^Tx_t\right] \nonumber \\&\quad -\frac{k}{T}(A_T(k))^2-\frac{T-k}{T}(A_T^*(k))^2,~~ k\in [1,k_1^0], \end{aligned}$$
(6.9)
$$\begin{aligned} R_{2T}(k)= & {} \frac{1}{T}\left[ 2d_T(k)\sum _{t=1}^{k_1^0}x_t+2e_T(k)\sum _{t=k_1^0+1}^{k}x_t \right. \nonumber \\&\quad \left. +2f_T(k)\sum _{t=k+1}^{k_2^0}x_t+2g_T(k)\sum _{t=k_2^0+1}^Tx_t\right] \nonumber \\&\quad -\frac{k}{T}(A_T(k))^2-\frac{T-k}{T}(A_T^*(k))^2,~~k\in [k_1^0+1,k_2^0], \end{aligned}$$
(6.10)
$$\begin{aligned} R_{3T}(k)= & {} \frac{1}{T}\left[ 2h_T(k)\sum _{t=1}^{k_1^0}x_t+2p_T(k)\sum _{t=k_1^0+1}^{k_2^0}x_t+2q_T(k)\sum _{t=k_2^0+1}^kx_t\right] \nonumber \\&\quad -\frac{k}{T}(A_T(k))^2-\frac{T-k}{T}(A_T^*(k))^2,~~ k \in [k_2^0+1,T] \end{aligned}$$
(6.11)

with

$$\begin{aligned} A_T(k)=\frac{1}{k}\sum _{t=1}^kx_t~~\mathrm{and}~~ A_T^*(k)=\frac{1}{T-k}\sum _{t=k+1}^Tx_t. \end{aligned}$$

In the next lemma, the orders of \(R_{iT}(k),~i=1,2,3\) are characterized.

Lemma 6.2

If \(x_t\thicksim I(d)\) with \(-0.5<d<0.5\), then

$$\begin{aligned} R_{iT}(k)=O_p(T^{-0.5+d})~~\text {uniformly in}~k,~ i=1,2,3. \end{aligned}$$

Proof

It is easy to see that the terms \(a_T(k), \ldots , q_T(k)\) in (6.8) are all uniformly bounded both in T and in k. Consider the term \(R_{1T}(k)\) first. From the property P3, it is true that \(\sum _{t=k_1^0+1}^{k_2^0}x_t=O_p(T^{0.5+d})\) and \(\sum _{t=k_2^0+1}^{T}x_t=O_p(T^{0.5+d})\), which yield

$$\begin{aligned} \frac{1}{T}\sum _{t=k_1^0+1}^{k_2^0}x_t=O_p(T^{-0.5+d}),~~\frac{1}{T}\sum _{t=k_2^0+1}^{T}x_t=O_p(T^{-0.5+d}). \end{aligned}$$
(6.12)

In addition, applying the functional central limit theorem to \(\sum _{t=k+1}^{k_1^0}x_t\) (with the time order reversed, so that \(k_1^0\) plays the role of time 1), one has

$$\begin{aligned}\frac{1}{T}\sup _{1\le k\le k_1^0}\sum _{t=k+1}^{k_1^0}x_t=O_p(T^{-0.5+d}).\end{aligned}$$

For the remaining terms in \(R_{1T}(k)\), applying Lemma 6.1 leads to

$$\begin{aligned} \sup _{1\le k\le k_1^0}\left| \frac{k}{T}(A_T(k))^2\right|= & {} \frac{1}{T}\left( \sup _{1\le k\le k_1^0}\frac{1}{\sqrt{k}}\left| \sum _{t=1}^k x_t\right| \right) ^2\nonumber \\= & {} \left\{ \begin{array}{ll} O_p(\frac{\log T}{T}),~~&{}-0.5<d\le 0, \\ O_p(T^{-1+2d}),~~&{}0<d<0.5, \end{array} \right. \end{aligned}$$
(6.13)

and applying the functional central limit theorem to \(\sum _{t=k+1}^T x_t\) (with the time order reversed, so that T plays the role of time 1) yields

$$\begin{aligned} \sup _{1\le k\le k_1^0}\left| \frac{T-k}{T}(A_T^{*}(k))^2\right|= & {} \frac{1}{T}\left( \sup _{1\le k\le k_1^0}\frac{1}{\sqrt{T-k}}\left| \sum _{t=k+1}^T x_t\right| \right) ^2\nonumber \\\le & {} \frac{1}{T(T-k_1^0)}\left( \sup _{1\le k\le T-1}\left| \sum _{t=k+1}^T x_t\right| \right) ^2\nonumber \\= & {} O_p(T^{-1+2d}). \end{aligned}$$
(6.14)

Combining (6.12)–(6.14) leads to

$$\begin{aligned} \sup _{1\le k\le k_1^0}|R_{1T}(k)|=O_p(T^{-0.5+d}). \end{aligned}$$

Similarly, it can be proved that

$$\begin{aligned} \sup _{k_1^0+1\le k\le k_2^0}|R_{2T}(k)|=O_p(T^{-0.5+d})~~\mathrm{and}~~\sup _{k_2^0+1\le k\le T}|R_{3T}(k)|=O_p(T^{-0.5+d}). \end{aligned}$$

The proof is complete. \(\square \)

Proof of Lemma 2.1

Since the non-stochastic terms \(a_T(k), \ldots , q_T(k)\) are all uniformly bounded, \(\frac{1}{T}\sum _{t=1}^Tx_t^2\) converges to its expectation in probability by the property P5, and \(R_{iT}(k)\) converges uniformly in probability to zero by Lemma 6.2, the uniform limit of \(U_T(\tau )\) follows easily. \(\square \)

Proof of Lemma 2.2

\(R_{iT}(k), i=1,2,3\) are the only stochastic terms of \(U_T(k/T)-\frac{1}{T}\sum _{t=1}^{T}x_t^2\), and they are all of order \(O_p(T^{-0.5+d})\) uniformly in k by Lemma 6.2. Moreover, it follows from the property P3 that, after some algebra, \(ER_{iT}(k),~i=1,2,3\) are of order smaller than \(T^{-0.5+d}\) uniformly in k. These arguments lead to Lemma 2.2. \(\square \)

The next four lemmas are useful for proving Lemma 2.3.

Lemma 6.3

If \(x_t\thicksim I(d)\) with \(-0.5<d<0.5\), then for any positive integer i and for any positive integer \(j>i\), one has

$$\begin{aligned} \left| E\left[ \frac{1}{j-i}\left( \sum _{t=1}^ix_t\right) \left( \sum _{s=i+1}^jx_s\right) \right] \right| \le \left\{ \begin{array}{ll} O(1),~~&{} d<0,\\ 0,~~ &{} d=0, \\ O(i^{2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$

Proof

When \(d=0\), it is trivial that \(|E[\frac{1}{j-i}(\sum _{t=1}^ix_t)(\sum _{s=i+1}^jx_s)]|=0\). When \(d\ne 0\), since \(\gamma _x(h)=E(x_tx_{t+h}) =O(h^{2d-1})\) according to the property P2, it is true that

$$\begin{aligned} \left| E\left[ \frac{1}{j-i}\left( \sum _{t=1}^ix_t\right) \left( \sum _{s=i+1}^jx_s\right) \right] \right|= & {} \left| \frac{1}{j-i}\sum _{t=1}^i\sum _{s=i+1}^j\gamma _x(s-t)\right| \\\le & {} C\left| \frac{1}{j-i}\sum _{t=1}^i\sum _{s=i+1}^j(i+1-t)^{2d-1}\right| \\= & {} C\left| \frac{j-i}{j-i}\sum _{t=1}^i(i+1-t)^{2d-1}\right| \\= & {} \left\{ \begin{array}{ll} O(1),~~&{} d<0,\\ O(i^{2d}),~~ &{} d>0, \end{array} \right. \end{aligned}$$

as desired. \(\square \)

In addition, applying the property P3, the following result is also true,

$$\begin{aligned} E\left[ \frac{1}{j-i}\left( \sum _{t=i+1}^jx_t\right) ^2\right] =O((j-i)^{2d}). \end{aligned}$$
(6.15)
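The bound \(\gamma_x(h)=O(h^{2d-1})\) from the property P2, which drives Lemma 6.3 and (6.15), can be checked numerically in the ARFIMA(0,d,0) case, whose autocovariances satisfy the standard recursion \(\gamma_x(h)=\gamma_x(h-1)(h-1+d)/(h-d)\) with \(\gamma_x(0)=\Gamma(1-2d)/\Gamma(1-d)^2\) for unit innovation variance (Hosking 1981). A short sketch (the choice of d and of the maximal lag is ours):

```python
import numpy as np
from math import gamma as G

d, H = 0.3, 2000
# Exact ARFIMA(0,d,0) autocovariances, unit innovation variance
g = [G(1 - 2 * d) / G(1 - d) ** 2]
for h in range(1, H + 1):
    g.append(g[-1] * (h - 1 + d) / (h - d))
g = np.array(g)

h = np.arange(1, H + 1)
ratio = g[1:] / h ** (2 * d - 1)
# gamma_x(h) / h^{2d-1} settles at Gamma(1-2d) / (Gamma(d) Gamma(1-d)),
# confirming gamma_x(h) = O(h^{2d-1})
limit = G(1 - 2 * d) / (G(d) * G(1 - d))
print(ratio[-1], limit)
```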

Lemma 6.4

In Model (2.2), under Assumptions A1 and A2, we have for \(i=1,2,3\),

$$\begin{aligned} T|ER_{iT}(k)-ER_{iT}(k_1^0)|=\left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} d\le 0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$

Proof

We only prove the result for \(i=1\) since the proofs for \(i=2,3\) are similar. Write

$$\begin{aligned} T|ER_{1T}(k)-ER_{1T}(k_1^0)|&=\left| E\left[ k(A_T(k))^2-k_1^0(A_T(k_1^0))^2\right] \right. \nonumber \\&\quad \left. -E\left[ (T-k)(A_T^*(k))^2-(T-k_1^0)(A_T^*(k_1^0))^2\right] \right| \nonumber \\&\le \left| E\left[ k(A_T(k))^2-k_1^0(A_T(k_1^0))^2\right] \right| \nonumber \\&\quad +\left| E\left[ (T-k)(A_T^*(k))^2-(T-k_1^0)(A_T^*(k_1^0))^2\right] \right| . \end{aligned}$$
(6.16)

Note that, when \(k<k_1^0\), we have

$$\begin{aligned}&k(A_T(k))^2-k_1^0(A_T(k_1^0))^2 \nonumber \\&\quad =(k_1^0-k)\left\{ \frac{1}{k_1^0k}\left( \sum _{t=1}^kx_t\right) ^2- \frac{2}{(k_1^0-k)k_1^0}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right. \nonumber \\&\qquad \left. -\frac{1}{k_1^0(k_1^0-k)}\left( \sum _{t=k+1}^{k_1^0}x_t\right) ^2\right\} . \end{aligned}$$
(6.17)

Thus, applying Lemma 6.3 and (6.15) we have

$$\begin{aligned}&\left| E\left[ k(A_T(k))^2-k_1^0(A_T(k_1^0))^2\right] \right| \nonumber \\&\quad =\left\{ \begin{array}{ll} (k_1^0-k)\left| O(\frac{k^{2d}}{T})-O(\frac{1}{T})-O(\frac{(k_1^0-k)^{2d}}{T})\right| ,~~ &{} d<0\\ (k_1^0-k)\left| O(\frac{k^{2d}}{T})-O(\frac{(k_1^0-k)^{2d}}{T})\right| ,~~ &{} d=0\\ (k_1^0-k)\left| O(\frac{k^{2d}}{T})-O(\frac{k^{2d}}{T})-O(\frac{(k_1^0-k)^{2d}}{T})\right| ,~~ &{} d>0 \end{array} \right. \nonumber \\&\quad \le \left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} d<0,\\ |k_1^0-k|\cdot O(T^{-1}),~~ &{} d=0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$
(6.18)

When \(k\ge k_1^0\), we have

$$\begin{aligned}&k(A_T(k))^2-k_1^0(A_T(k_1^0))^2\\&\quad =(k-k_1^0)\left\{ -\frac{1}{k_1^0k}\left( \sum _{t=1}^{k_1^0}x_t\right) ^2+ \frac{2}{(k-k_1^0)k}\left( \sum _{t=1}^{k_1^0} x_t\right) \left( \sum _{t=k_1^0+1}^{k}x_t\right) \right. \nonumber \\&\quad \qquad \left. +\frac{1}{k(k-k_1^0)}\left( \sum _{t=k_1^0+1}^{k}x_t\right) ^2\right\} . \end{aligned}$$

Thus, applying Lemma 6.3 and (6.15) again we have

$$\begin{aligned}&\left| E\left[ k(A_T(k))^2-k_1^0(A_T(k_1^0))^2\right] \right| \nonumber \\&\quad =\left\{ \begin{array}{ll} (k-k_1^0)\left| -O(\frac{T^{2d}}{T})+O(\frac{1}{T})+O(\frac{(k-k_1^0)^{2d}}{T})\right| ,~~ &{} d<0\\ (k-k_1^0)\left| -O(\frac{T^{2d}}{T})+O(\frac{(k-k_1^0)^{2d}}{T})\right| ,~~ &{} d=0\\ (k-k_1^0)\left| -O(\frac{T^{2d}}{T})+O(\frac{T^{2d}}{T})+O(\frac{(k-k_1^0)^{2d}}{T})\right| ,~~ &{} d>0 \end{array} \right. \nonumber \\&\quad \le \left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} d<0,\\ |k_1^0-k|\cdot O(T^{-1}),~~ &{} d=0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$
(6.19)

Combining (6.18) and (6.19) together yields

$$\begin{aligned} \left| E\left[ k(A_T(k))^2-k_1^0(A_T(k_1^0))^2\right] \right| =\left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} d\le 0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$
(6.20)

Similarly, one has

$$\begin{aligned} \left| E\left[ (T-k)(A_T^*(k))^2-(T-k_1^0)(A_T^*(k_1^0))^2\right] \right| =\left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} d\le 0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$
(6.21)

Combining (6.16), (6.20) and (6.21), one immediately has

$$\begin{aligned} T|ER_{1T}(k)-ER_{1T}(k_1^0)|=\left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} d\le 0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} d>0. \end{array} \right. \end{aligned}$$

The proof is complete. \(\square \)

Lemma 6.5

In Model (2.2), under Assumptions A1 and A2, there exists a positive constant \(M<\infty \) such that

$$\begin{aligned} ES_T(k)-ES_T(k_1^0)\ge & {} T[ER_{1T}(k)-ER_{1T}(k_1^0)]\\\ge & {} \left\{ \begin{array}{ll} -M|k_1^0-k|/T,~~ &{} d\le 0,\\ -M|k_1^0-k|/T^{1-2d},~~ &{} d>0, \end{array} \right. \\ ES_T(k)-ES_T(k_2^0)\ge & {} T[ER_{3T}(k)-ER_{3T}(k_2^0)]\\\ge & {} \left\{ \begin{array}{ll} -M|k_2^0-k|/T,~~ &{} d\le 0,\\ -M|k_2^0-k|/T^{1-2d},~~ &{} d>0. \end{array} \right. \end{aligned}$$

Proof

The proof is similar to that of Lemma 13 in Bai (1997), where Lemma 6.4 above is used. The details are therefore omitted. \(\square \)

Lemma 6.6

In Model (2.2), under Assumptions A1–A3, there exists a constant \(C>0\) such that for all large T,

$$\begin{aligned} ES_T(k)-ES_T(k_1^0)\ge C|k-k_1^0|~~\text {for}~k\le k_1^0. \end{aligned}$$

Proof

The proof is similar to that of Lemma 14 in Bai (1997), where Lemma 6.5 and the following observation are used: \(M|k_1^0-k|/T^{1-2d}=o(|k_1^0-k|)\) for any \(-0.5<d<0.5\). \(\square \)

Proof of Lemma 2.3

For \(k\le k_1^0\), Lemma 2.3 is implied straightforwardly by Lemma 6.6. For \(k>k_1^0\), we follow the proof of Lemma 3 in Bai (1997), but some details are different. When \(k\in [k_1^0+1, k_2^0]\), it has been proved in Bai (1997) that

$$\begin{aligned} ES_T(k)-ES_T(k_1^0)\ge & {} (k-k_1^0)\frac{k_2^0}{k}C^{*}-(k-k_1^0)O(T^{-1})\\&+T\left[ ER_{2T}(k)-ER_{2T}(k_1^0)\right] , \end{aligned}$$

where \(C^{*}\) is a positive constant and its definition can be found in Bai (1997). Note that \(k_2^0/k\ge 1\) for \(k\in [k_1^0+1, k_2^0]\), and

$$\begin{aligned} T\left[ ER_{2T}(k)-ER_{2T}(k_1^0)\right] =o(|k-k_1^0|)~~\mathrm{for~any}~-0.5<d<0.5 \end{aligned}$$

by Lemma 6.4. Thus, for all large T,

$$\begin{aligned} ES_T(k)-ES_T(k_1^0)\ge (k-k_1^0)C^{*}/2. \end{aligned}$$
(6.22)

When \(k\in [k_2^0+1, T]\), from Lemma 6.5 and (6.22) with \(k=k_2^0\), it is true that for all large T,

$$\begin{aligned}&ES_T(k)-ES_T(k_1^0)\nonumber \\&\quad =ES_T(k)-ES_T(k_2^0)+ES_T(k_2^0)-ES_T(k_1^0)\nonumber \\&\quad \ge ES_T(k_2^0)-ES_T(k_1^0)-\left\{ \begin{array}{ll} M|T-k_2^0|/T,~~ &{} -0.5<d\le 0\\ M|T-k_2^0|/T^{1-2d},~~ &{} 0<d<0.5. \end{array} \right. \end{aligned}$$

Note that \(M|T-k_2^0|/T=O(1)\), \(M|T-k_2^0|/T^{1-2d}=O(T^{2d})\), and, by (6.22), \(ES_T(k_2^0)-ES_T(k_1^0)\ge (k_2^0-k_1^0)C^{*}/2\), which tends to infinity at rate T. Hence, we have

$$\begin{aligned} ES_T(k)-ES_T(k_1^0)\ge & {} \left( ES_T(k_2^0)-ES_T(k_1^0)\right) (1-o(1))\nonumber \\\ge & {} \left( (k-k_1^0)\frac{k_2^0-k_1^0}{T-k_1^0}\frac{ES_T(k_2^0)-ES_T(k_1^0)}{k_2^0-k_1^0}\right) (1-o(1))\nonumber \\\ge & {} (k-k_1^0)\frac{\tau _2^0-\tau _1^0}{1-\tau _1^0}\frac{C^{*}}{4}. \end{aligned}$$

The proof is complete. \(\square \)

Proof of Lemma 2.4

For all \(k\in [1,T]\) and large T, we have

$$\begin{aligned} \begin{aligned} S_T(k)-S_T(k_1^0)&= \left[ S_T(k)-\sum _{t=1}^{T}x_t^2\right] -\left[ ES_T(k)-\sum _{t=1}^{T}Ex_t^2\right] \\&\quad -\left\{ \left[ S_T(k_1^0)-\sum _{t=1}^{T}x_t^2\right] -\left[ ES_T(k_1^0)-\sum _{t=1}^{T}Ex_t^2\right] \right\} +ES_T(k)-ES_T(k_1^0) \\&\ge -2\sup _{1\le j\le T}\left| \left[ S_T(j)-\sum _{t=1}^Tx_t^2\right] -\left[ ES_T(j)-\sum _{t=1}^{T}Ex_t^2\right] \right| +ES_T(k)-ES_T(k_1^0) \\&\ge -2\sup _{1\le j\le T}\left| \left[ S_T(j)-\sum _{t=1}^{T}x_t^2\right] -\left[ ES_T(j)-\sum _{t=1}^{T}Ex_t^2\right] \right| +C|k-k_1^0|, \end{aligned} \end{aligned}$$

where the last inequality is implied by Lemma 2.3. The preceding inequality still holds if \(k={\hat{k}}_1\). Since \(S_T({\hat{k}}_1)-S_T(k_1^0)\le 0\), we then have

$$\begin{aligned} |{\hat{k}}_1-k_1^0|\le \frac{2}{C}\sup _{1\le j\le T}\left| \left\{ S_T(j)-\sum _{t=1}^{T}x_t^2\right\} -\left\{ ES_T(j)-\sum _{t=1}^{T}Ex_t^2\right\} \right| , \end{aligned}$$

that is

$$\begin{aligned} |{\hat{\tau }}_1-\tau _1^0|\le O_p\left( \frac{1}{T}\right) +\frac{2}{C}\sup _{1\le j\le T}\left| \left\{ U_T(j/T)-\frac{1}{T}\sum _{t=1}^{T}x_t^2\right\} -\left\{ EU_T(j/T)-\frac{1}{T}\sum _{t=1}^{T}Ex_t^2\right\} \right| . \end{aligned}$$

The proof is then finished by applying Lemma 2.2. \(\square \)

Proof of Lemma 2.5

One can complete the proof by following the lines in the proof of Lemma 4 in Bai (1997), where Lemmas 2.3 and 6.4 and the property P3 are used. \(\square \)

Proof of Theorem 2.1

The proof is easy by using Lemmas 2.4 and 2.5. The reader can refer to the proof of Proposition 2 in Bai (1997) for more details. \(\square \)

Proof of Lemma 2.6

First, from (6.5), it is easy to see that

$$\begin{aligned}&\left| \left[ U_T(k/T)-\frac{1}{T}\sum _{t=1}^Tx_t^2\right] -\left[ EU_T(k/T)-\frac{1}{T}\sum _{t=1}^T Ex_t^2\right] \right| \\&\quad =\left\{ \begin{array}{ll} |R_{1T}(k)-ER_{1T}(k)|,~~ &{} k\in [1,k_1^0]\\ |R_{2T}(k)-ER_{2T}(k)|,~~ &{} k\in [k_1^0+1,k_2^0]\\ |R_{3T}(k)-ER_{3T}(k)|,~~ &{} k\in [k_2^0+1, T] \end{array} \right. \le \left\{ \begin{array}{ll} |R_{1T}(k)|+|ER_{1T}(k)|,~~ &{} k\in [1,k_1^0],\\ |R_{2T}(k)|+|ER_{2T}(k)|,~~ &{} k\in [k_1^0+1,k_2^0],\\ |R_{3T}(k)|+|ER_{3T}(k)|,~~ &{} k\in [k_2^0+1, T]. \end{array} \right. \end{aligned}$$

Note that the terms \(a_T(k),\ldots ,q_T(k)\) in (6.8) are now all of order \(O(v_T)\). Then, by following the lines in the proof of Lemma 6.2 and noticing Assumption A4, it is easy to obtain

$$\begin{aligned} \sup _{k}|R_{iT}(k)|=O_p(v_T T^{-0.5+d}),~~i=1,2,3. \end{aligned}$$
(6.23)

In addition, by the property P3,

$$\begin{aligned} |ER_{iT}(k)|\le O\left( \frac{k^{2d}}{T}\right) +O\left( \frac{(T-k)^{2d}}{T}\right) ,~~i=1,2,3, \end{aligned}$$

which further implies

$$\begin{aligned} \sup _{k}|ER_{iT}(k)|\le \left\{ \begin{array}{ll} O(T^{-1}),~~ &{} -0.5<d\le 0,\\ O(T^{-1+2d}),~~ &{} 0<d<0.5, \end{array} \right. ~~~~i=1,2,3. \end{aligned}$$

This, together with Assumption A4, implies

$$\begin{aligned} \sup _{k}|ER_{iT}(k)|=o(v_T T^{-0.5+d}),~~i=1,2,3. \end{aligned}$$
(6.24)

Combining (6.23) and (6.24) leads to the desired result. \(\square \)

Proof of Lemma 2.7

We only prove the result for the case of \(k\le k_1^0\) since the case of \(k>k_1^0\) can be handled similarly. It follows from the proof of Lemma 13 in Bai (1997) that

$$\begin{aligned} ES_T(k)-ES_T(k_1^0)= & {} \frac{k_1^0-k}{(1-k/T)(1-k_1^0/T)}[(1-k_1^0/T)(\mu _{1T}-\mu _{2T})\nonumber \\&\quad +(1-k_2^0/T)(\mu _{2T}-\mu _{3T})]^2+T[ER_{1T}(k)-ER_{1T}(k_1^0)].\nonumber \\ \end{aligned}$$
(6.25)

Since \(\mu _{iT}={\tilde{\mu }}_i v_T\), it is easy to see that the first term on the right-hand side of (6.25) is at least as large as \(C(k_1^0-k)v_T^2\), where C is a positive constant depending on \(\tau _i^0~(i=1,2)\) and \({\tilde{\mu }}_j~(j=1,2,3)\). For the second term on the right-hand side of (6.25), applying the same arguments as those used in the proof of Lemma 6.4, it can be proved that

$$\begin{aligned} T|ER_{1T}(k)-ER_{1T}(k_1^0)|=\left\{ \begin{array}{ll} |k_1^0-k|\cdot O(T^{-1}),~~ &{} -0.5<d\le 0,\\ |k_1^0-k|\cdot O(T^{-1+2d}),~~ &{} 0<d<0.5. \end{array} \right. \end{aligned}$$

In view of Assumption A4 again, one has

$$\begin{aligned} T|ER_{1T}(k)-ER_{1T}(k_1^0)|=o(|k_1^0-k|v_T^2). \end{aligned}$$

Then, the proof is complete. \(\square \)

Proof of Lemma 2.8

The proof of Lemma 2.8 is similar to that of Proposition 1 in Bai (1997), where Lemmas 2.6 and 2.7 are used. \(\square \)

Proof of Lemma 2.9

The proof is similar to that of Lemma 9 in Bai (1997). The key of this proof is to verify for any given \(\eta >0\) and \(\varepsilon >0\), there exists a large positive constant \(M<\infty \) such that for all large T,

$$\begin{aligned} P\left( \sup _{k\in D_{T,M}^*}\frac{T|R_{1T}(k)-R_{1T}(k_1^0)|}{v_T^2|k-k_1^0|}>\eta \right) <\varepsilon . \end{aligned}$$

Note that when \(k\in D_{T,M}^*\), we have either \(T\eta<k<k_1^0-Mv_T^{-2}\) or \(k_1^0+Mv_T^{-2}<k<T\tau _2^0(1-\eta )\) when \(-0.5<d<0\), and either \(T\eta<k<k_1^0-Mv_T^{-2/(1-2d)}\) or \(k_1^0+Mv_T^{-2/(1-2d)}<k<T\tau _2^0(1-\eta )\) when \(0\le d<0.5\). To save space, we only consider the cases of \(k<k_1^0-Mv_T^{-2}\) when \(-0.5<d<0\) and \(k<k_1^0-Mv_T^{-2/(1-2d)}\) when \(0\le d<0.5\).

Write

$$\begin{aligned}&T\left[ R_{1T}(k)-R_{1T}(k_1^0)\right] \nonumber \\&\quad =2\Big (a_T(k)\sum _{t=k+1}^{k_1^0}x_t\Big ) +2\left( (b_T(k)-b_T(k_1^0))\sum _{t=k_1^0+1}^{k_2^0}x_t\right) +2\left( (c_T(k)-c_T(k_1^0))\sum _{t=k_2^0+1}^{T}x_t\right) \nonumber \\&\quad \quad +\left[ k_1^0\left( A_T(k_1^0)\right) ^2-k\left( A_T(k)\right) ^2\right] +\left[ (T-k_1^0)\left( A_T^*(k_1^0)\right) ^2-(T-k)\left( A_T^*(k)\right) ^2\right] . \end{aligned}$$
(6.26)

Thus, it suffices to prove that, when M and T are large enough, every term on the right-hand side of (6.26), divided by \(v_T^2(k_1^0-k)\), is arbitrarily small in probability, uniformly in \(T\eta \le k<k_1^0-Mv_T^{-2}\) when \(-0.5<d<0\) and in \(T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}\) when \(0\le d<0.5\). Note that, when \(d=0\), \(T\eta \le k<k_1^0-Mv_T^{-2}\) coincides with \(T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}\).

Consider the term \(a_T(k)\sum _{t=k+1}^{k_1^0}x_t/(v_T^2(k_1^0-k))\). First, note that \(|a_T(k)|\le L v_T\) for some positive constant L, uniformly in \(k\in [1,T]\). Applying (6.1) and (6.2) in Lemma 6.1 (with the time order reversed, so that \(k_1^0\) plays the role of time 1), it is true that

$$\begin{aligned}&\sup _{T\eta \le k<k_1^0-Mv_T^{-2}}a_T(k)\sum _{t=k+1}^{k_1^0}x_t/(v_T^2(k_1^0-k))\nonumber \\&\quad =O_p\left( \frac{1}{v_T\sqrt{Mv_T^{-2}}}\right) =O_p(M^{-0.5})=o_p(1)~~\mathrm{as}~M\rightarrow \infty ,~-0.5<d\le 0, \end{aligned}$$

and

$$\begin{aligned}&\sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}a_T(k)\sum _{t=k+1}^{k_1^0}x_t/(v_T^2(k_1^0-k))\nonumber \\&\quad =O_p\left( \frac{1}{v_T(Mv_T^{-2/(1-2d)})^{0.5-d}}\right) \nonumber \\&\quad =O_p(M^{-0.5+d})=o_p(1)~~\mathrm{as}~M\rightarrow \infty ,~0<d<0.5. \end{aligned}$$

Consider the term \((b_T(k)-b_T(k_1^0))\sum _{t=k_1^0+1}^{k_2^0}x_t/(v_T^2(k_1^0-k))\). Similar to (A.26) in Bai (1997), it is easy to establish that

$$\begin{aligned} |b_T(k)-b_T(k_1^0)|\le \left| \frac{k_1^0-k}{T-k}\right| v_T C~~\mathrm{and}~~|c_T(k)-c_T(k_1^0)|\le \left| \frac{k_1^0-k}{T-k}\right| v_T C \end{aligned}$$

for some \(0<C<\infty \). Hence, using the property P3, we have

$$\begin{aligned}&\sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\left| \frac{(b_T(k)-b_T(k_1^0))\sum _{t=k_1^0+1}^{k_2^0}x_t}{v_T^2(k_1^0-k)}\right| \nonumber \\&\quad \le \sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\left| \frac{(b_T(k)-b_T(k_1^0))}{v_T^2(k_1^0-k)}\right| \cdot \left| \sum _{t=k_1^0+1}^{k_2^0}x_t\right| \nonumber \\&\quad =O_p\left( \frac{1}{Tv_T}\cdot T^{0.5+d}\right) =o_p(1),~~-0.5<d\le 0 \end{aligned}$$

by Assumption A4, and similarly,

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{(b_T(k)-b_T(k_1^0))\sum _{t=k_1^0+1}^{k_2^0}x_t}{v_T^2(k_1^0-k)}\right| =o_p(1),~~0<d<0.5. \end{aligned}$$

Applying the above arguments, it can analogously be proved that

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\left| \frac{(c_T(k)-c_T(k_1^0))\sum _{t=k_2^0+1}^{T}x_t}{v_T^2(k_1^0-k)}\right| =o_p(1),~~-0.5<d\le 0 \end{aligned}$$

and

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{(c_T(k)-c_T(k_1^0))\sum _{t=k_2^0+1}^{T}x_t}{v_T^2(k_1^0-k)}\right| =o_p(1),~~0<d<0.5. \end{aligned}$$

Consider the term \(\left[ k_1^0\left( A_T(k_1^0)\right) ^2-k\left( A_T(k)\right) ^2\right] /(v_T^2(k_1^0-k))\). From (6.17), we have

$$\begin{aligned}&\frac{k_1^0\left( A_T(k_1^0)\right) ^2-k\left( A_T(k)\right) ^2}{v_T^2(k_1^0-k)}\nonumber \\&\quad =-\frac{1}{k_1^0kv_T^2}\left( \sum _{t=1}^kx_t\right) ^2+ \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \nonumber \\&\quad +\frac{1}{k_1^0(k_1^0-k)v_T^2}\left( \sum _{t=k+1}^{k_1^0}x_t\right) ^2. \end{aligned}$$

By Lemma 6.1 and Assumption A4, it is true that: (1)

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\frac{1}{k_1^0kv_T^2}\left( \sum _{t=1}^kx_t\right) ^2\le & {} \frac{1}{k_1^0T\eta v_T^2}\sup _{1\le k<k_1^0}\left( \sum _{t=1}^kx_t\right) ^2\nonumber \\= & {} O_p\left( \frac{T^{1+2d}}{T^2v_T^2}\right) =o_p(1),~~-0.5<d\le 0\nonumber \\ \end{aligned}$$
(6.27)

by the functional central limit theorem and Assumption A4, and similarly,

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}\frac{1}{k_1^0kv_T^2}\left( \sum _{t=1}^kx_t\right) ^2=O_p\left( \frac{T^{1+2d}}{T^2v_T^2}\right) =o_p(1),~~0<d<0.5;\nonumber \\ \end{aligned}$$
(6.28)

(2)

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\frac{1}{k_1^0(k_1^0-k)v_T^2}\left( \sum _{t=k+1}^{k_1^0}x_t\right) ^2\le & {} \frac{1}{k_1^0 v_T^2}\sup _{1\le k<k_1^0}\left( \frac{1}{\sqrt{k_1^0-k}}\sum _{t=k+1}^{k_1^0}x_t\right) ^2\nonumber \\= & {} O_p\left( \frac{\log T}{Tv_T^2}\right) =o_p(1),~~-0.5<d\le 0\nonumber \\ \end{aligned}$$
(6.29)

by Lemma 6.1 and Assumption A4, and similarly,

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}\frac{1}{k_1^0(k_1^0-k)v_T^2}\left( \sum _{t=k+1}^{k_1^0}x_t\right) ^2= & {} O_p\left( \frac{T^{2d}}{Tv_T^2}\right) \nonumber \\= & {} o_p(1),~~0<d<0.5;\quad \qquad \end{aligned}$$
(6.30)

(3) note that

$$\begin{aligned}&\sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\left| \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\quad \le \sup _{T\eta \le k\le k_1^0/2}\left| \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\qquad +\sup _{k_1^0/2<k<k_1^0-Mv_T^{-2}}\left| \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\quad \le \sup _{T\eta \le k\le k_1^0/2}\left| \frac{2}{k_1^0v_T^2}\left( \frac{1}{\sqrt{k}}\sum _{t=1}^kx_t\right) \left( \frac{1}{\sqrt{k_1^0-k}}\sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\qquad +\sup _{k_1^0/2<k<k_1^0-Mv_T^{-2}}\left| \frac{2}{\sqrt{k_1^0}v_T^2}\left( \frac{1}{\sqrt{k_1^0}}\sum _{t=1}^kx_t\right) \left( \frac{1}{k_1^0-k}\sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\quad \le \frac{1}{k_1^0 v_T^2}\sup _{T\eta \le k\le k_1^0/2}\left( \frac{1}{\sqrt{k}}\sum _{t=1}^{k}x_t\right) ^2+\frac{1}{k_1^0 v_T^2}\sup _{T\eta \le k<k_1^0/2}\left( \frac{1}{\sqrt{k_1^0-k}}\sum _{t=k+1}^{k_1^0}x_t\right) ^2\nonumber \\&\qquad +\sup _{k_1^0/2<k<k_1^0-Mv_T^{-2}}\left| \frac{2}{\sqrt{k_1^0}v_T^2}\left( \frac{1}{\sqrt{k_1^0}}\sum _{t=1}^kx_t\right) \left( \frac{1}{k_1^0-k}\sum _{t=k+1}^{k_1^0}x_t\right) \right| ,~~-0.5<d\le 0, \end{aligned}$$

where the Cauchy–Schwarz inequality is used in the last step. Moreover, in view of Lemma 6.1 and the functional central limit theorem, it is true that

$$\begin{aligned}&\sup _{k_1^0/2<k<k_1^0-Mv_T^{-2}}\left| \frac{2}{\sqrt{k_1^0}v_T^2}\left( \frac{1}{\sqrt{k_1^0}}\sum _{t=1}^kx_t\right) \left( \frac{1}{k_1^0-k}\sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\quad =O_p\left( \frac{T^{d}}{\sqrt{T}v_T^2}\frac{1}{\sqrt{Mv_T^{-2}}}\right) \nonumber \\&\quad =O_p\left( \frac{1}{T^{0.5-d}v_T\sqrt{M}}\right) =o_p(1),~~-0.5<d\le 0 \end{aligned}$$

by Assumption A4. This, together with (6.27) and (6.29), implies that

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2}}\left| \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right| =o_p(1),~~-0.5<d\le 0. \end{aligned}$$

Similarly, it is true that

$$\begin{aligned}&\sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\quad \le \frac{1}{k_1^0 v_T^2}\sup _{T\eta \le k\le k_1^0/2}\left( \frac{1}{\sqrt{k}}\sum _{t=1}^{k}x_t\right) ^2+\frac{1}{k_1^0 v_T^2}\sup _{T\eta \le k<k_1^0/2}\left( \frac{1}{\sqrt{k_1^0-k}}\sum _{t=k+1}^{k_1^0}x_t\right) ^2\nonumber \\&\quad \quad +\sup _{k_1^0/2<k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{2}{\sqrt{k_1^0}v_T^2}\left( \frac{1}{\sqrt{k_1^0}}\sum _{t=1}^kx_t\right) \left( \frac{1}{k_1^0-k}\sum _{t=k+1}^{k_1^0}x_t\right) \right| ,~~0<d<0.5 \end{aligned}$$

and

$$\begin{aligned}&\sup _{k_1^0/2<k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{2}{\sqrt{k_1^0}v_T^2}\left( \frac{1}{\sqrt{k_1^0}}\sum _{t=1}^kx_t\right) \left( \frac{1}{k_1^0-k}\sum _{t=k+1}^{k_1^0}x_t\right) \right| \nonumber \\&\quad =O_p\left( \frac{T^{d}}{\sqrt{T}v_T^2}\frac{1}{(Mv_T^{-2/(1-2d)})^{0.5-d}}\right) =O_p\left( \frac{1}{T^{0.5-d}v_TM^{0.5-d}}\right) =o_p(1),~~0<d<0.5. \end{aligned}$$

This, together with (6.28) and (6.30), implies that

$$\begin{aligned} \sup _{T\eta \le k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{2}{(k_1^0-k)k_1^0v_T^2}\left( \sum _{t=1}^kx_t\right) \left( \sum _{t=k+1}^{k_1^0}x_t\right) \right| =o_p(1),~~0<d<0.5. \end{aligned}$$

Combining the above results, it is true that

$$\begin{aligned} \sup _{1\le k<k_1^0-Mv_T^{-2}}\left| \frac{k_1^0\left( A_T(k_1^0)\right) ^2-k\left( A_T(k)\right) ^2}{v_T^2(k_1^0-k)}\right| =o_p(1),~~-0.5<d\le 0 \end{aligned}$$

and

$$\begin{aligned} \sup _{1\le k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{k_1^0\left( A_T(k_1^0)\right) ^2-k\left( A_T(k)\right) ^2}{v_T^2(k_1^0-k)}\right| =o_p(1),~~0<d<0.5. \end{aligned}$$

Similarly, it can be proved that

$$\begin{aligned} \sup _{1\le k<k_1^0-Mv_T^{-2}}\left| \frac{(T-k_1^0)\left( A_T^*(k_1^0)\right) ^2-(T-k)\left( A_T^*(k)\right) ^2}{v_T^2(k_1^0-k)}\right| =o_p(1),~~-0.5<d\le 0 \end{aligned}$$

and

$$\begin{aligned} \sup _{1\le k<k_1^0-Mv_T^{-2/(1-2d)}}\left| \frac{(T-k_1^0)\left( A_T^*(k_1^0)\right) ^2-(T-k)\left( A_T^*(k)\right) ^2}{v_T^2(k_1^0-k)}\right| =o_p(1),~~0<d<0.5. \end{aligned}$$

The proof is complete. \(\square \)

Proof of Theorem 2.2

The proof is similar to that of Theorem 2.1 by using Lemma 2.9 instead of Lemma 2.5. Hence the details are omitted. \(\square \)

Proof of Theorem 2.3

We study the process

$$\begin{aligned} \Lambda _T(s)=v_T^{4d/(1-2d)}\left\{ S_T(k_1^0+\lfloor sv_T^{-2/(1-2d)}\rfloor )-S_T(k_1^0)\right\} ,~~0\le d<0.5, \end{aligned}$$

where \(s\in [-M,M]\) for an arbitrary large constant \(0<M<\infty \). Let \(l=\lfloor sv_T^{-2/(1-2d)}\rfloor \),

$$\begin{aligned} \Delta _T(l)=v_T^{4d/(1-2d)}\left\{ S_T(k_1^0+l)-S_T(k_1^0)\right\} , \end{aligned}$$

and

$$\begin{aligned} {\hat{l}}=\underset{l}{\mathop {\arg \min }}\Delta _T(l). \end{aligned}$$

Clearly, \(P({\hat{l}}={\hat{k}}-k_1^0)\ge 1-\varepsilon \) for any small \(\varepsilon >0\) and all large \(T=T(\varepsilon )\). Hence, we only need to study \(\Delta _T(l)\) to acquire the asymptotic distribution of \({\hat{\tau }}_1\).
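For intuition, the objective being perturbed here is just the split-sample sum of squared residuals \(S_T(k)\), minimized over candidate break dates. Below is a minimal numerical sketch (our own illustrative code, not the paper's; function names are hypothetical) of the least-squares break-date estimator \({\hat{k}}=\arg \min _k S_T(k)\) for a single break in mean, using i.i.d. normal errors, i.e. the case \(d=0\):

```python
import numpy as np

def S_T(y, k):
    """Sum of squared residuals when the sample is split after observation k."""
    left, right = y[:k], y[k:]
    return ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()

def estimate_break(y, trim=0.05):
    """Least-squares break-date estimator: k_hat = argmin_k S_T(k),
    searching over a trimmed range of candidate dates."""
    T = len(y)
    lo, hi = max(int(trim * T), 1), int((1 - trim) * T)
    return min(range(lo, hi), key=lambda k: S_T(y, k))

# one break of size 2 at k0 = 200 in T = 500 observations
rng = np.random.default_rng(0)
T, k0 = 500, 200
y = np.concatenate([rng.normal(0.0, 1.0, k0), rng.normal(2.0, 1.0, T - k0)])
k_hat = estimate_break(y)
```

With a fixed break magnitude, the estimated date typically lands within a few observations of the true date, consistent with the 1/T rate for the break fraction stated in the paper.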

First, we consider the case \(0\le l\le Mv_T^{-2/(1-2d)}\). We introduce some new notation. Denote

$$\begin{aligned} {\hat{\mu }}_1^*= & {} \frac{1}{k_1^0+l}\sum _{t=1}^{k_1^0+l}y_t, \quad {\hat{\mu }}_2^*=\frac{1}{T-k_1^0-l}\sum _{t=k_1^0+l+1}^{T}y_t,\\ {\hat{\mu }}_1= & {} \frac{1}{k_1^0}\sum _{t=1}^{k_1^0}y_t, \quad {\hat{\mu }}_2=\frac{1}{T-k_1^0}\sum _{t=k_1^0+1}^{T}y_t. \end{aligned}$$

It is not difficult to show, by the property P3 and Assumption A4, that

$$\begin{aligned} \left\{ \begin{array}{ll} {\hat{\mu }}_1^*-\mu _{1T}=O_p(T^{-0.5+d}), \\ {\hat{\mu }}_2-\mu _{2T}-\frac{1-\tau _2^0}{1-\tau _1^0}(\mu _{3T}-\mu _{2T})=O_p(T^{-0.5+d}). \end{array} \right. \end{aligned}$$
(6.31)

Then, we can write

$$\begin{aligned}&v_T^{4d/(1-2d)}S_T(k_1^0+l)\nonumber \\&\quad = v_T^{4d/(1-2d)}\left\{ \sum _{t=1}^{k_1^0}(y_t-{\hat{\mu }}_1^*)^2 +\sum _{t=k_1^0+1}^{k_1^0+l}(y_t-{\hat{\mu }}_1^*)^2 +\sum _{t=k_1^0+l+1}^{T}(y_t-{\hat{\mu }}_2^*)^2\right\} \nonumber \\ \end{aligned}$$
(6.32)

and

$$\begin{aligned}&v_T^{4d/(1-2d)}S_T(k_1^0)\nonumber \\&\quad =v_T^{4d/(1-2d)}\left\{ \sum _{t=1}^{k_1^0}(y_t-{\hat{\mu }}_1)^2 +\sum _{t=k_1^0+1}^{k_1^0+l}(y_t-{\hat{\mu }}_2)^2 +\sum _{t=k_1^0+l+1}^{T}(y_t-{\hat{\mu }}_2)^2\right\} .\nonumber \\ \end{aligned}$$
(6.33)

The differences between the first terms and between the third terms on the right-hand sides of (6.32) and (6.33) are

$$\begin{aligned} v_T^{4d/(1-2d)}\left\{ \sum _{t=1}^{k_1^0}(y_t-{\hat{\mu }}_1^*)^2-\sum _{t=1}^{k_1^0}(y_t-{\hat{\mu }}_1)^2\right\} =v_T^{4d/(1-2d)}k_1^0({\hat{\mu }}_1^*-{\hat{\mu }}_1)^2\qquad \end{aligned}$$
(6.34)

and

$$\begin{aligned}&v_T^{4d/(1-2d)}\left\{ \sum _{t=k_1^0+l+1}^{T}(y_t-{\hat{\mu }}_2^*)^2-\sum _{t=k_1^0+l+1}^{T}(y_t-{\hat{\mu }}_2)^2\right\} \nonumber \\&\quad =-v_T^{4d/(1-2d)}(T-k_1^0-l)({\hat{\mu }}_2^*-{\hat{\mu }}_2)^2,\qquad \end{aligned}$$
(6.35)

respectively. Noting that \(l=o(T)\) by Assumption A4, we have

$$\begin{aligned}&{\hat{\mu }}_1^*-{\hat{\mu }}_1\nonumber \\&\quad =\frac{-l}{k_1^0(k_1^0+l)}\sum _{t=1}^{k_1^0}x_t+\frac{1}{k_1^0+l}\sum _{t=k_1^0+1}^{k_1^0+l}x_t +\frac{lv_T}{k_1^0+l}({\tilde{\mu }}_2-{\tilde{\mu }}_1)\nonumber \\&\quad =-O_p\left( \frac{l}{T^{1.5-d}}\right) +O_p\left( \frac{l^{0.5+d}}{T}\right) +O_p\left( \frac{lv_T}{T}\right) \nonumber \\&\quad =O_p\left( \frac{1}{Tv_T^{(1+2d)/(1-2d)}}\right) . \end{aligned}$$

Similarly, we have

$$\begin{aligned} {\hat{\mu }}_2^*-{\hat{\mu }}_2=O_p\left( \frac{1}{Tv_T^{(1+2d)/(1-2d)}}\right) .\qquad \end{aligned}$$

Hence, it follows from (6.34) and (6.35) that

$$\begin{aligned} v_T^{4d/(1-2d)}\left\{ \sum _{t=1}^{k_1^0}(y_t-{\hat{\mu }}_1^*)^2-\sum _{t=1}^{k_1^0}(y_t-{\hat{\mu }}_1)^2\right\} =O_p\left( \frac{1}{Tv_T^{2/(1-2d)}}\right) =o_p(1)\nonumber \\ \end{aligned}$$
(6.36)

and

$$\begin{aligned} v_T^{4d/(1-2d)}\left\{ \sum _{t=k_1^0+l+1}^{T}(y_t-{\hat{\mu }}_2^*)^2-\sum _{t=k_1^0+l+1}^{T}(y_t-{\hat{\mu }}_2)^2\right\} =O_p\left( \frac{1}{Tv_T^{2/(1-2d)}}\right) =o_p(1)\nonumber \\ \end{aligned}$$
(6.37)

by Assumption A4.

Now, consider the difference between the second terms on the right-hand sides of (6.32) and (6.33). For \(t\in [k_1^0+1,k_1^0+l]\), \(y_t=\mu _{2T}+x_t\). Then,

$$\begin{aligned}&\sum _{t=k_1^0+1}^{k_1^0+l}(y_t-{\hat{\mu }}_1^*)^2-\sum _{t=k_1^0+1}^{k_1^0+l}(y_t-{\hat{\mu }}_2)^2\nonumber \\&\quad =2[\mu _{2T}-{\hat{\mu }}_1^*-(\mu _{2T}-{\hat{\mu }}_2)]\sum _{t=k_1^0+1}^{k_1^0+l}x_t+l[(\mu _{2T}-{\hat{\mu }}_1^*)^2-(\mu _{2T}-{\hat{\mu }}_2)^2]. \end{aligned}$$

From (6.31), it is not difficult to obtain

$$\begin{aligned} \mu _{2T}-{\hat{\mu }}_1^*-(\mu _{2T}-{\hat{\mu }}_2)= & {} \mu _{2T}-\mu _{1T}+\mu _{1T}-{\hat{\mu }}_1^*-(\mu _{2T}-{\hat{\mu }}_2)\nonumber \\= & {} v_T({\tilde{\mu }}_2-{\tilde{\mu }}_1)(1+\lambda _1)+O_p(T^{-0.5+d})\nonumber \\= & {} v_T({\tilde{\mu }}_2-{\tilde{\mu }}_1)(1+\lambda _1)(1+o_p(1)) \end{aligned}$$

and

$$\begin{aligned} (\mu _{2T}-{\hat{\mu }}_1^*)^2-(\mu _{2T}-{\hat{\mu }}_2)^2= & {} v_T^2({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1-\lambda _1^2)+O_p(v_T T^{-0.5+d})+O_p(T^{-1+2d})\nonumber \\= & {} v_T^2({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1-\lambda _1^2)(1+o_p(1)), \end{aligned}$$

where

$$\begin{aligned} \lambda _1=\frac{1-\tau _2^0}{1-\tau _1^0}\left( \frac{{\tilde{\mu }}_3-{\tilde{\mu }}_2}{{\tilde{\mu }}_2-{\tilde{\mu }}_1}\right) . \end{aligned}$$

Thus, based on the above arguments, we have

$$\begin{aligned}&v_T^{4d/(1-2d)}\left\{ \sum _{t=k_1^0+1}^{k_1^0+l}(y_t-{\hat{\mu }}_1^*)^2-\sum _{t=k_1^0+1}^{k_1^0+l}(y_t-{\hat{\mu }}_2)^2\right\} \nonumber \\&\quad =2({\tilde{\mu }}_2-{\tilde{\mu }}_1)(1+\lambda _1)v_T^{(1+2d)/(1-2d)} \sum _{t=k_1^0+1}^{k_1^0+l}x_t \cdot (1+o_p(1))\nonumber \\&\quad \quad +v_T^{2/(1-2d)} l ({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1-\lambda _1^2)\cdot (1+o_p(1))\nonumber \\&\quad \Rightarrow 2\kappa ({\tilde{\mu }}_2-{\tilde{\mu }}_1)(1+\lambda _1)B_d^{(1)}(s)+s({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1-\lambda _1^2) \end{aligned}$$

by the fact

$$\begin{aligned} \sum _{t=k_1^0+1}^{k_1^0+l}x_t=(1-B)^{-d}\sum _{t=k_1^0+1}^{k_1^0+l}u_t{\mathop {=}\limits ^{d}}(1-B)^{-d}\sum _{t=1}^{l}u_t=\sum _{t=1}^{l}x_t \end{aligned}$$

and the functional central limit theorem (see Assumption A1), where \(B_d^{(1)}(\cdot )\) is a two-sided fractional Brownian motion. This, together with (6.36) and (6.37), yields

$$\begin{aligned} \Lambda _T(s)\Rightarrow 2\kappa ({\tilde{\mu }}_2-{\tilde{\mu }}_1)(1+\lambda _1)B_d^{(1)}(s)+s({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1-\lambda _1^2), ~~ s>0. \end{aligned}$$

Similarly, it can be proved that

$$\begin{aligned} \Lambda _T(s)\Rightarrow 2\kappa ({\tilde{\mu }}_2-{\tilde{\mu }}_1)(1+\lambda _1)B_d^{(1)}(s)-s({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1+\lambda _1)^2, ~~ s<0. \end{aligned}$$

Define

$$\begin{aligned} \Gamma _1(s,\lambda )={\left\{ \begin{array}{ll}2\kappa ({\tilde{\mu }}_2-{\tilde{\mu }}_1)B_d^{(1)}(s)-s({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1+\lambda ), \qquad &{}\text {if }s<0,\\ 0, \qquad &{}\text {if }s=0,\\ 2\kappa ({\tilde{\mu }}_2-{\tilde{\mu }}_1)B_d^{(1)}(s)+s({\tilde{\mu }}_2-{\tilde{\mu }}_1)^2(1-\lambda ), \qquad &{}\text {if }s>0.\end{array}\right. } \end{aligned}$$

Then, we have

$$\begin{aligned} \Lambda _T(s)\Rightarrow (1+\lambda _1)\Gamma _1(s,\lambda _1). \end{aligned}$$

This implies that

$$\begin{aligned} \begin{aligned} Tv_T^{2/(1-2d)}({\hat{\tau }}_1-\tau _1^0)&{\mathop {\rightarrow }\limits ^{d}} \underset{s}{\mathop {\arg \min }}(1+\lambda _1)\Gamma _1(s,\lambda _1) \\&{\mathop {=}\limits ^{d}} \underset{s}{\mathop {\arg \min }}\Gamma _1(s,\lambda _1) \end{aligned} \end{aligned}$$

by applying the continuous mapping theorem for argmax/argmin functionals (Kim and Pollard 1990) and the fact that \(1+\lambda _1>0\) (see page 347 in Bai 1997).

The proof is complete. \(\square \)
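The limit law in Theorem 2.3 is the argmin of a drifted two-sided fractional Brownian motion. One rough way to visualize a draw from it is to discretize \(\Gamma _1(s,\lambda )\) on a grid and minimize. The sketch below (our own, with hypothetical parameter names) covers only \(d=0\), where \(B_d^{(1)}\) is a standard two-sided Brownian motion, and assumes \(|\lambda |<1\) so both drift branches slope upward away from the origin:

```python
import numpy as np

def argmin_gamma1(kappa, delta, lam, M=50.0, n=20001, rng=None):
    """One draw of argmin_s Gamma_1(s, lam) for d = 0, discretized on [-M, M].
    delta plays the role of (mu2_tilde - mu1_tilde)."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.linspace(-M, M, n)
    ds = s[1] - s[0]
    mid = n // 2                      # index of s = 0, where B(0) = 0
    B = np.zeros(n)
    # two independent Brownian paths, growing away from the origin
    B[mid + 1:] = np.cumsum(rng.normal(0.0, np.sqrt(ds), n - mid - 1))
    B[:mid] = np.cumsum(rng.normal(0.0, np.sqrt(ds), mid))[::-1]
    drift = np.where(s < 0, -s * delta**2 * (1 + lam), s * delta**2 * (1 - lam))
    G = 2.0 * kappa * delta * B + drift
    return s[np.argmin(G)]

draw = argmin_gamma1(kappa=1.0, delta=1.0, lam=0.0, rng=np.random.default_rng(0))
```

Repeating this many times approximates the asymptotic distribution of \(Tv_T^{2/(1-2d)}({\hat{\tau }}_1-\tau _1^0)\); as the drift (break magnitude) grows relative to \(\kappa \), the draws concentrate near zero.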

Proof of Theorem 2.4

The proof is similar to that of Theorem 2.3; hence the details are omitted to save space. \(\square \)

1.2 Appendix B

In this subsection, we provide the proofs of the results in Sect. 3.

Proof of Theorem 3.1

The proof of Theorem 3.1 is routine and very similar to that of Theorem 2.1, but the details are long and tedious. Hence, the proof is omitted to save space. \(\square \)

Proof of Theorem 3.2

First, under the case of shrinking breaks (that is, Assumption A4 is satisfied), it can be proved that for any \(\varepsilon >0\), there exists a positive constant \(M<\infty \) such that for all large T,

$$\begin{aligned} P(T|{\hat{\tau }}_i-\tau _i^0|>Mv_T^{-2/(1-2d)})<\varepsilon ,~~0\le d<0.5. \end{aligned}$$

Then, we study the process

$$\begin{aligned} \Lambda _T^{\prime }(s)=v_T^{4d/(1-2d)}\left\{ S_T(k_i^0+\lfloor sv_T^{-2/(1-2d)}\rfloor )-S_T(k_i^0)\right\} ,~~0\le d<0.5, \end{aligned}$$

where \(s\in [-M,M]\). Let \(l=\lfloor sv_T^{-2/(1-2d)}\rfloor \),

$$\begin{aligned} \Delta _T^{\prime }(l)=v_T^{4d/(1-2d)}\left\{ S_T(k_i^0+l)-S_T(k_i^0)\right\} ,~~{\hat{l}}=\underset{l}{\mathop {\arg \min }}\Delta _T^{\prime }(l). \end{aligned}$$

As before, we only need to study \(\Delta _T^{\prime }(l)\) to acquire the asymptotic distribution of \({\hat{\tau }}_i\).

First, we consider the case \(0\le l\le Mv_T^{-2/(1-2d)}\). Denote

$$\begin{aligned} {\hat{\mu }}_{i1}^*= & {} \frac{1}{k_i^0+l}\sum _{t=1}^{k_i^0+l}y_t, \quad {\hat{\mu }}_{i2}^*=\frac{1}{T-k_i^0-l}\sum _{t=k_i^0+l+1}^{T}y_t,\\ {\hat{\mu }}_{i1}= & {} \frac{1}{k_i^0}\sum _{t=1}^{k_i^0}y_t, \quad {\hat{\mu }}_{i2}=\frac{1}{T-k_i^0}\sum _{t=k_i^0+1}^{T}y_t. \end{aligned}$$

It is straightforward to obtain, by the property P3 and Assumption A4, that

$$\begin{aligned} \mu _{iT}-{\hat{\mu }}_{i1}^*= & {} \mu _{iT}-\frac{1}{k_i^0+l}\sum _{t=1}^{k_i^0+l}x_t\nonumber \\&-\frac{1}{k_i^0+l}\big [k_1^0\mu _{1T}+(k_2^0-k_1^0)\mu _{2T}+\cdots +(k_i^0-k_{i-1}^0)\mu _{iT}+l\mu _{i+1,T}\big ]\nonumber \\= & {} \frac{v_T}{\tau _i^0}\sum _{j=1}^{i-1}\tau _j^0({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_{j})+O_p(T^{-0.5+d})+O(lv_T/T)\nonumber \\= & {} \frac{v_T}{\tau _i^0}\sum _{j=1}^{i-1}\tau _j^0({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_{j})+O_p(T^{-0.5+d}), \end{aligned}$$
(6.38)

and

$$\begin{aligned}&\mu _{i+1,T}-{\hat{\mu }}_{i2}\nonumber \\&\quad =\mu _{i+1,T}-\frac{1}{T-k_i^0}\sum _{t=k_i^0+1}^Tx_t\nonumber \\&\quad -\frac{1}{T-k_i^0}\big [(k_{i+1}^0-k_i^0)\mu _{i+1,T}+\cdots +(T-k_m^0)\mu _{m+1,T}\big ]\nonumber \\&\quad =-\frac{v_T}{1-\tau _i^0}\sum _{j=i+1}^m(1-\tau _j^0)({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_j)+O_p(T^{-0.5+d}). \end{aligned}$$
(6.39)

It is easy to see that \(\Delta _T^{\prime }(l)\) can be written as follows:

$$\begin{aligned} \Delta _T^{\prime }(l)= & {} v_T^{4d/(1-2d)}\left\{ \sum _{t=1}^{k_i^0}\big [(y_t-{\hat{\mu }}_{i1}^*)^2-(y_t-{\hat{\mu }}_{i1})^2\big ]\right. \nonumber \\&+\sum _{t=k_i^0+1}^{k_i^0+l}\big [(y_t-{\hat{\mu }}_{i1}^*)^2-(y_t-{\hat{\mu }}_{i2})^2\big ]\nonumber \\&+\left. \sum _{t=k_i^0+l+1}^{T}\big [(y_t-{\hat{\mu }}_{i2}^*)^2-(y_t-{\hat{\mu }}_{i2})^2\big ]\right\} . \end{aligned}$$

Similar to the proof of Theorem 2.3, it can be proved that the second term on the right-hand side of the above equation is the leading term. For this leading term, when \(t\in [k_i^0+1,k_i^0+l]\), \(y_t=\mu _{i+1,T}+x_t\), and

$$\begin{aligned}&\sum _{t=k_i^0+1}^{k_i^0+l}\big [(y_t-{\hat{\mu }}_{i1}^*)^2-(y_t-{\hat{\mu }}_{i2})^2\big ]\nonumber \\&\quad =2[\mu _{i+1,T}-{\hat{\mu }}_{i1}^*-(\mu _{i+1,T}-{\hat{\mu }}_{i2})]\sum _{t=k_i^0+1}^{k_i^0+l}x_t\nonumber \\&\qquad +\,l[(\mu _{i+1,T}-{\hat{\mu }}_{i1}^*)^2-(\mu _{i+1,T}-{\hat{\mu }}_{i2})^2]. \end{aligned}$$

From (6.38) and (6.39), we have

$$\begin{aligned}&[\mu _{i+1,T}-{\hat{\mu }}_{i1}^*-(\mu _{i+1,T}-{\hat{\mu }}_{i2})]\nonumber \\&\quad =\mu _{i+1,T}-\mu _{iT}+\mu _{iT}-{\hat{\mu }}_{i1}^*-(\mu _{i+1,T}-{\hat{\mu }}_{i2})\nonumber \\&\quad =v_T\left[ ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)+\frac{1}{\tau _i^0}\sum _{j=1}^{i-1}\tau _j^0({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_{j})\right. \nonumber \\&\quad \quad \left. +\frac{1}{1-\tau _i^0}\sum _{j=i+1}^m(1-\tau _j^0)({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_j)\right] +O_p(T^{-0.5+d}), \end{aligned}$$

and

$$\begin{aligned}&(\mu _{i+1,T}-{\hat{\mu }}_{i1}^*)^2-(\mu _{i+1,T}-{\hat{\mu }}_{i2})^2\nonumber \\&\quad =[\mu _{i+1,T}-\mu _{iT}+\mu _{iT}-{\hat{\mu }}_{i1}^*]^2-(\mu _{i+1,T}-{\hat{\mu }}_{i2})^2\nonumber \\&\quad =v_T^2\left\{ \left[ ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)+\frac{1}{\tau _i^0}\sum _{j=1}^{i-1}\tau _j^0({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_{j})\right] ^2\right. \nonumber \\&\quad \quad \left. -\left[ \frac{1}{1-\tau _i^0}\sum _{j=i+1}^m(1-\tau _j^0)({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_j)\right] ^2\right\} \nonumber \\&\quad +O_p(v_TT^{-0.5+d})+O_p(T^{-1+2d}). \end{aligned}$$

Denote

$$\begin{aligned} \rho _i= & {} \frac{1}{\tau _i^0({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)}\sum _{j=1}^{i-1}\tau _j^0({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_{j}),\\ \omega _i= & {} \frac{1}{(1-\tau _i^0)({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)}\sum _{j=i+1}^m(1-\tau _j^0)({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_j). \end{aligned}$$

Then, from the functional central limit theorem and the above arguments, we have

$$\begin{aligned}&v_T^{4d/(1-2d)}\left\{ \sum _{t=k_i^0+1}^{k_i^0+l}\big [(y_t-{\hat{\mu }}_{i1}^*)^2-(y_t-{\hat{\mu }}_{i2})^2\big ]\right\} \nonumber \\&\quad =2({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)(1+\rho _i+\omega _i)v_T^{(1+2d)/(1-2d)} \sum _{t=k_i^0+1}^{k_i^0+l}x_t \cdot (1+o_p(1))\nonumber \\&\quad \quad +\, v_T^{2/(1-2d)} l ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)^2[(1+\rho _i)^2-\omega _i^2]\cdot (1+o_p(1))\nonumber \\&\quad \Rightarrow 2\kappa ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)(1+\rho _i+\omega _i)B_d^{(i)}(s)+s({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)^2[(1+\rho _i)^2-\omega _i^2], \end{aligned}$$

where \(B_d^{(i)}(\cdot )\) is a two-sided fractional Brownian motion. Hence,

$$\begin{aligned} \Lambda _T^{\prime }(s)\Rightarrow & {} 2\kappa ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)(1+\rho _i+\omega _i)B_d^{(i)}(s)\\&+s({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)^2[(1+\rho _i)^2-\omega _i^2],~~s>0. \end{aligned}$$

It can be proved similarly that

$$\begin{aligned} \Lambda _T^{\prime }(s)\Rightarrow & {} 2\kappa ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)(1+\rho _i+\omega _i)B_d^{(i)}(s)\\&-s({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)^2[(1+\omega _i)^2-\rho _i^2],~~s<0. \end{aligned}$$

Define

$$\begin{aligned} \Gamma _i(s,\lambda )={\left\{ \begin{array}{ll}2\kappa ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)B_d^{(i)}(s)-s({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)^2(1+\lambda ), \qquad &{}\text {if s<0},\\ 0, \qquad &{}\text {if }s=0,\\ 2\kappa ({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)B_d^{(i)}(s)+s({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)^2(1-\lambda ), \qquad &{}\text {if }s>0.\end{array}\right. } \end{aligned}$$

Then, we have

$$\begin{aligned} \Lambda _T^{\prime }(s)\Rightarrow (1+\omega _i+\rho _i)\Gamma _i(s,\lambda _i) \end{aligned}$$

with

$$\begin{aligned} \lambda _i=\omega _i-\rho _i=\frac{1}{({\tilde{\mu }}_{i+1}-{\tilde{\mu }}_i)} \left[ \frac{1}{(1-\tau _i^0)}\sum _{j=i+1}^m(1-\tau _j^0)({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_j)-\frac{1}{\tau _i^0}\sum _{j=1}^{i-1}\tau _j^0({\tilde{\mu }}_{j+1}-{\tilde{\mu }}_{j})\right] . \end{aligned}$$

This implies that

$$\begin{aligned} \begin{aligned} Tv_T^{2/(1-2d)}({\hat{\tau }}_i-\tau _i^0) {\mathop {\rightarrow }\limits ^{d}} \underset{s}{\mathop {\arg \min }}(1+\omega _i+\rho _i)\Gamma _i(s,\lambda _i) {\mathop {=}\limits ^{d}} \underset{s}{\mathop {\arg \min }}\Gamma _i(s,\lambda _i) \end{aligned} \end{aligned}$$

by applying the continuous mapping theorem for argmax/argmin functionals (Kim and Pollard 1990) and the fact that \(1+\omega _i+\rho _i>0\), which can be derived from Assumption A8.

The proof is complete. \(\square \)
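Throughout the proofs, \(x_t=(1-B)^{-d}u_t\) is a fractionally integrated process. For completeness, here is a minimal sketch of simulating such a series via a truncated MA(\(\infty \)) expansion, a standard device for Monte Carlo work of the kind reported in the paper (the function name and truncation length are our own choices):

```python
import numpy as np

def frac_integrate(u, d, n_lags=1000):
    """Approximate x_t = (1-B)^{-d} u_t by truncating the MA(inf) expansion.
    The coefficients satisfy psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.empty(n_lags + 1)
    psi[0] = 1.0
    for j in range(1, n_lags + 1):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    # x_t = sum_{j>=0} psi_j u_{t-j}, truncated at n_lags past innovations
    return np.convolve(u, psi)[:len(u)]

rng = np.random.default_rng(0)
u = rng.standard_normal(300)   # i.i.d. innovations u_t
x = frac_integrate(u, d=0.3)   # long-memory errors, 0 < d < 0.5
```

For \(d=0\) the recursion gives \(\psi _j=0\) for \(j\ge 1\), so the output reduces to the white-noise input, matching the short-memory boundary case.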


Cite this article

Xi, D., Pang, T. Estimating multiple breaks in mean sequentially with fractionally integrated errors. Stat Papers 62, 451–494 (2021). https://doi.org/10.1007/s00362-019-01104-z
