Estimating change points in nonparametric time series regression models

In this paper we consider a regression model that allows for time series covariates as well as heteroscedasticity, with a regression function that is modelled nonparametrically. We assume that the regression function changes at some unknown time $\lfloor ns_0\rfloor$, $s_0\in(0,1)$, and our aim is to estimate the (rescaled) change point $s_0$. The considered estimator is based on a Kolmogorov-Smirnov functional of the marked empirical process of residuals. We show consistency of the estimator and prove a rate of convergence of $O_P(n^{-1})$, which in this case is clearly optimal as there are only $n$ points in the sequence. Additionally, we investigate the case of lagged dependent covariates, that is, autoregression models with a change in the nonparametric (auto-)regression function, and give a consistency result. The method of proof also allows for different kinds of functionals, so that Cramér-von Mises type estimators can be treated similarly. The approach extends the existing literature by allowing for nonparametric models, time series data as well as heteroscedasticity. Finite sample simulations indicate the good performance of our estimator in regression as well as autoregression models, and a real data example shows its applicability in practice.

A large part of the change point literature is concerned with testing for a structural break when the time of such a break, the change point, is unknown; see for instance Kirch and Kamgaing (2012) and the references mentioned therein. This paper, however, is concerned with the estimation of the change point when assuming its existence.
The simplest set of models can be described as
$$Y_t = \mu_1 I\{t \le \lfloor ns_0\rfloor\} + \mu_2 I\{t > \lfloor ns_0\rfloor\} + \varepsilon_t, \quad t = 1,\dots,n,$$
where $s_0 \in (0, 1)$ is the (rescaled) change point, $\mu_1$ and $\mu_2$ are the signal before and after the break, respectively, and $(\varepsilon_t)_t$ are stationary and centred errors. These models are often referred to as AMOC models (at most one change). The problem naturally moved from the standard case with independent errors (see Ferger and Stute (1992) among others) to the time series context: Bai (1994) and Antoch et al. (1997) allow for linear processes, and Hušková and Kirch (2008) more generally for dependent errors.
Additional information on the form of the signal can be expressed through a process of covariates $(X_t)_t$, resulting in linear regression models with a change in the regression parameter, such as
$$Y_t = \beta_1^{\top} X_t I\{t \le \lfloor ns_0\rfloor\} + \beta_2^{\top} X_t I\{t > \lfloor ns_0\rfloor\} + \varepsilon_t, \quad t = 1,\dots,n,$$
where $\beta_1$ and $\beta_2$ are the regression coefficients before and after the break, respectively. Bai (1997), Horváth et al. (1997) and Aue et al. (2012), among others, consider the estimation of a change point in (multiple) linear regression models making use of least squares estimation. Setting $X_t = Y_{t-1}$ in the linear regression model from above, one obtains autoregressive models with one change in the autoregressive parameter. The estimation of the parameters and the unknown change point in AR(1) models was considered, for instance, by Chong (2001), Pang et al. (2014) and Pang and Zhang (2015).
Our aim is to propose an estimator for the change point $s_0$ in a nonparametric version of the regression model from above, namely
$$Y_t = m^{(1)}(X_t) I\{t \le \lfloor ns_0\rfloor\} + m^{(2)}(X_t) I\{t > \lfloor ns_0\rfloor\} + \varepsilon_t, \quad t = 1,\dots,n,$$
for some nonparametric regression functions $m^{(1)}$, $m^{(2)}$ (before and after the break), and in addition to investigate the autoregressive case where $X_t = Y_{t-1}$. While the investigation of points of discontinuity in (nonparametric) regression functions has been studied to some extent (see for instance Döring and Jensen (2015) for an overview), much less research has been devoted to change point analysis in nonparametric models as the one above, where the change occurs in time. Delgado and Hidalgo (2000) propose estimators for the location and size of structural breaks in a nonparametric regression model, imposing scalar breaks in time or in the values taken by some regressors, as in threshold models. Their rates of convergence and limiting distributions depend on a bandwidth chosen for the kernel estimation. Chen et al. (2005) estimate the time of a scalar change in the conditional variance function in nonparametric heteroscedastic regression models using a hybrid procedure that combines least squares and nonparametric methods.
The paper at hand extends the existing literature, on the one hand by allowing for nonparametric heteroscedastic regression models with a general change in the unknown regression function, where both errors and covariates are allowed to be time series, and on the other hand by investigating the autoregressive case. The achieved rate of convergence of $O_P(n^{-1})$ for the proposed estimator is optimal, as described in Hariz et al. (2007).
The remainder of the paper is organized as follows. The model and the considered estimator are introduced in Sect. 2. Section 3 contains the regularity assumptions as well as the asymptotic results for the proposed estimator. Section 4 is concerned with the special case of lagged dependent covariates, that is the autoregressive case. In Sect. 5 we describe a simulation study and discuss a real data example, whereas Sect. 6 concludes the paper. Proofs of the main results as well as auxiliary lemmata can be found in the Appendix.

The model and estimator
We consider the nonparametric regression model
$$Y_t = m_t(X_t) + U_t, \quad t = 1,\dots,n. \qquad (2.1)$$
The unobservable innovations are assumed to fulfill $E[U_t \mid \mathcal{F}_t] = 0$ almost surely for the sigma-field $\mathcal{F}_t = \sigma(U_{j-1}, X_j : j \le t)$. We assume there exists a change point in the regression function such that
$$m_t = m^{(1)} I\{t \le \lfloor ns_0\rfloor\} + m^{(2)} I\{t > \lfloor ns_0\rfloor\}, \qquad (2.2)$$
where $\lfloor ns_0\rfloor$ with $s_0 \in (0, 1)$ is the unknown time the change occurs. Note that we keep the above notation for simplicity; however, the considered process is in fact a triangular array $\{(Y_{n,t}, X_{n,t}) : 1 \le t \le n, n \in \mathbb{N}\}$ and will be treated appropriately.
Assuming $(Y_1, X_1),\dots,(Y_n, X_n)$ have been observed, the aim is to estimate $s_0$. The idea is to base the estimator on the sequential marked empirical process of residuals, namely
$$\hat T_n(s, z) := \frac{1}{n} \sum_{t=1}^{\lfloor ns\rfloor} \big(Y_t - \hat m_n(X_t)\big)\, \omega_n(X_t)\, I\{X_t \le z\}$$
for $s \in [0,1]$ and $z \in \mathbb{R}^d$, where $x \le y$ is short for $x_j \le y_j$ for all $j = 1,\dots,d$, $\omega_n(\cdot) = I\{\cdot \in J_n\}$ is from assumption (V) below, and $\hat m_n$ is the Nadaraya-Watson estimator, that is
$$\hat m_n(x) := \frac{\sum_{t=1}^{n} K\big(\frac{x - X_t}{h_n}\big) Y_t}{\sum_{t=1}^{n} K\big(\frac{x - X_t}{h_n}\big)},$$
with kernel function $K$ and bandwidth $h_n$ as considered in the assumptions below.
Then we estimate $s_0$ by
$$\hat s_n := \min\Big\{ s \in [0,1] : \sup_{z \in \mathbb{R}^d} |\hat T_n(s, z)| = \sup_{u \in [0,1]} \sup_{z \in \mathbb{R}^d} |\hat T_n(u, z)| \Big\}.$$
Note that $\hat s_n = \lfloor n\hat s_n\rfloor / n$, i.e. the estimator takes values on the grid $\{t/n : t = 0,\dots,n\}$.
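To make the construction concrete, the following is a minimal sketch (not the authors' implementation) of the estimator: it computes Nadaraya-Watson residuals, evaluates $\hat T_n(s,z)$ with the supremum over $z$ restricted to the observed marks $z = X_k$, and returns the smallest maximizer of the Kolmogorov-Smirnov functional. The Gaussian kernel, the quantile-based trimming for $\omega_n$ and the bandwidth are illustrative choices, not those prescribed by the assumptions below.

```python
import numpy as np

def nw_estimator(x, X, Y, h):
    """Nadaraya-Watson estimate of the regression function at points x
    (Gaussian kernel used for simplicity)."""
    K = np.exp(-0.5 * ((x[:, None] - X[None, :]) / h) ** 2)
    return K @ Y / K.sum(axis=1)

def change_point_estimate(X, Y, h):
    """Kolmogorov-Smirnov type change point estimator hat_s_n (sketch)."""
    n = len(Y)
    U_hat = Y - nw_estimator(X, X, Y, h)                   # residuals
    w = (X >= np.quantile(X, 0.05)) & (X <= np.quantile(X, 0.95))  # omega_n
    marks = X[None, :] <= X[:, None]                       # I{X_t <= X_k}
    # T[k, t] = (1/n) * sum_{i <= t+1} U_hat_i * w_i * I{X_i <= X_k}
    T = np.cumsum(U_hat[None, :] * w[None, :] * marks, axis=1) / n
    ks = np.abs(T).max(axis=0)                             # sup over marks z
    return (np.argmax(ks) + 1) / n                         # smallest maximizer

# toy example: change at s0 = 0.5 with heteroscedastic errors
rng = np.random.default_rng(0)
n, s0 = 500, 0.5
X = rng.normal(size=n)
after = np.arange(1, n + 1) > n * s0
Y = np.where(after, np.sin(X) + X, np.sin(X)) + np.sqrt(1 + 0.5 * X**2) * rng.normal(size=n)
print(change_point_estimate(X, Y, h=n ** (-1 / 5)))        # close to 0.5
```

For one dimensional covariates, restricting the supremum to the observed marks loses nothing, since $z \mapsto \hat T_n(s,z)$ is a step function with jumps only at the sample points.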

Remark
The advantage of using marked residuals in comparison to using the classical CUSUM $\hat T_n(s, \infty)$ to estimate the change point is that in the first case the estimator is consistent for all changes of the form (2.2), whereas there are several examples in which the use of $\hat T_n(s, \infty)$ leads to a non-consistent estimator. To this end see the remark below the proof of Theorem 3.1 and compare to Mohr and Neumeyer (2019).
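As a simple illustration (not taken from the paper): write $\Delta := m^{(2)} - m^{(1)}$ for the change. If the covariates are such that $E[\Delta(X_t)\,\omega_n(X_t)] = 0$, e.g. $\Delta(x) = x$ with centred covariates, then the classical CUSUM $\hat T_n(s,\infty) = \frac{1}{n}\sum_{t \le \lfloor ns\rfloor} (Y_t - \hat m_n(X_t))\,\omega_n(X_t)$ carries no drift from the change, so an estimator based on it cannot locate $s_0$; the marked process $\hat T_n(s,z)$, in contrast, still detects the change through marks $z$ with $E[\Delta(X_t)\,\omega_n(X_t) I\{X_t \le z\}] \ne 0$.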
Remark
Instead of the Kolmogorov-Smirnov functional one can consider a Cramér-von Mises type functional $\int_{\mathbb{R}^d} |\hat T_n(s,z)|^2 \nu(z)\,dz$ for some weight function $\nu$, compare Mohr and Neumeyer (2019). Assuming strict stationarity of the covariates and the existence of a density $f$ such that $X_t \sim f$ for all $t$, as in (IX.1) below, the Cramér-von Mises approach from above with $\nu \equiv f$ leads to an alternative estimator for $s_0$, namely
$$\tilde s_n := \min\Big\{ s \in [0,1] : \int_{\mathbb{R}^d} |\hat T_n(s, z)|^2 f(z)\,dz = \sup_{u \in [0,1]} \int_{\mathbb{R}^d} |\hat T_n(u, z)|^2 f(z)\,dz \Big\}.$$
However, to obtain a feasible estimator one needs to replace the integral $\int_{\mathbb{R}^d} |\hat T_n(s, z)|^2 f(z)\,dz$ by its empirical counterpart $\frac{1}{n}\sum_{k=1}^{n} |\hat T_n(s, X_k)|^2$ in practice, as $f$ is not known.
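Continuing the sketch from Sect. 2 (and reusing `nw_estimator`, `numpy as np` and the array conventions from there), the feasible Cramér-von Mises variant only changes the functional: the supremum over the marks is replaced by an empirical average.

```python
def change_point_estimate_cvm(X, Y, h):
    """Feasible Cramér-von Mises variant of the sketch above."""
    n = len(Y)
    U_hat = Y - nw_estimator(X, X, Y, h)      # nw_estimator as in Sect. 2
    w = (X >= np.quantile(X, 0.05)) & (X <= np.quantile(X, 0.95))
    marks = X[None, :] <= X[:, None]
    T = np.cumsum(U_hat[None, :] * w[None, :] * marks, axis=1) / n
    cvm = (T ** 2).mean(axis=0)               # (1/n) sum_k |T_n(s, X_k)|^2
    return (np.argmax(cvm) + 1) / n           # smallest maximizer
```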

Asymptotic results
In this section we derive asymptotic properties for $\hat s_n$. To this end we introduce the following assumptions.

Remark
The assumptions on the error terms and the mixing assumptions in particular allow for conditional heteroscedasticity. Assumptions (I), (II) and (III) are a trade-off between the existence of moments and the rate of decay of the mixing coefficients. Assumptions (III), (IV), (VII) and the first part of (VIII) are reproduced from Kristensen (2012). Together with (V) and (VI), they are used to obtain uniform rates of convergence for $\hat m_n$, stated in Lemma A.1 in the Appendix. In (IX.1) we assume stationarity of the covariates for the whole observation period, while in the case of (IX.2) we assume stationarity before and right after the change occurs. Nevertheless, both assumptions rule out general autoregressive effects such as $X_t = Y_{t-1}$. We will address this issue separately in Sect. 4.
where $r_n = n$.
The proofs of the theorems can be found in Appendix A.2. We state both theorems separately since we need Theorem 3.1 to prove Theorem 3.2.
Remark
To obtain the rates of convergence we make use of the fact that $\hat s_n$ can be expressed using the sup norm on $l^\infty(\mathbb{R}^d)$, i.e.
$$\hat s_n = \min\Big\{ s \in [0,1] : \|\hat T_n(s, \cdot)\|_{\infty} = \sup_{u \in [0,1]} \|\hat T_n(u, \cdot)\|_{\infty} \Big\},$$
where $l^\infty(\mathbb{R}^d)$ is the space of all uniformly bounded real valued functions on $\mathbb{R}^d$. Note that similarly $\tilde s_n$ can be expressed using the $L_2(P)$ norm when $(X_t)_t$ is strictly stationary with marginal distribution $P$, namely
$$\tilde s_n = \min\Big\{ s \in [0,1] : \|\hat T_n(s, \cdot)\|_{L_2(P)} = \sup_{u \in [0,1]} \|\hat T_n(u, \cdot)\|_{L_2(P)} \Big\};$$
corresponding results for $\tilde s_n$ as in Theorems 3.1 and 3.2 can be proven in a similar manner.

The autoregressive case
In this section we consider the case where the exogenous variables include finitely many lagged values of the endogenous variable; we will refer to this model as the autoregressive case. We focus on one dimensional covariates; however, the results do not depend on the dimension and can also be formulated for higher order autoregression models. Consider the nonparametric autoregression
$$Y_t = m_t(Y_{t-1}) + U_t, \quad t = 1,\dots,n, \qquad (4.1)$$
with unobservable innovations $U_t$ and one change in the regression function occurring at some unknown time $\lfloor ns_0\rfloor$ as in (2.2). Furthermore assume the following.
Remark
Note that (IX.3) requires on the one hand strict stationarity up to the time of change $\lfloor ns_0\rfloor$. On the other hand, the time series needs to reach its (new) stationary distribution fast enough after the change. This is a generalization of (IX.2), where we assumed stationarity both before and right after the change point, which cannot be fulfilled in model (4.1). A necessary condition then is that there exists a stationary solution of equation (4.1) under both $m^{(1)}(\cdot)$ and $m^{(2)}(\cdot)$ as regression functions.
Then assumption (IX.3) is fulfilled. Note to this end that $X_t := Y_{t-1} \sim N(0, 1/(1-a^2))$ for $t \le \lfloor ns_0\rfloor$, and the distribution after the change point can be given explicitly as well. In general, verifying assumption (IX.3) for model (4.1) means comparing the distribution of a stochastic process that is not yet in balance with its stationary distribution. A well known technique to deal with this task is coupling, see e.g. Franke et al. (2002).
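For illustration, a minimal simulation of this example (assuming a linear AR(1) change $m^{(1)}(x) = a_1 x$, $m^{(2)}(x) = a_2 x$ with standard normal innovations; the concrete values of $a_1$, $a_2$ are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, s0, a1, a2 = 2000, 0.5, 0.5, -0.5
Y = np.zeros(n + 1)
Y[0] = rng.normal() / np.sqrt(1 - a1**2)      # start in the stationary law
for t in range(1, n + 1):
    a = a1 if t <= n * s0 else a2
    Y[t] = a * Y[t - 1] + rng.normal()
X, Y_resp = Y[:-1], Y[1:]                     # lagged covariate X_t = Y_{t-1}
# before the change: X_t ~ N(0, 1/(1 - a1^2))
print(X[: int(n * s0)].var(), 1 / (1 - a1**2))
# the estimator from the sketch in Sect. 2 applies unchanged:
# change_point_estimate(X, Y_resp, h=n ** (-1 / 5))
```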
Under (IX.3) we get the following consistency result for our change point estimator in the autoregressive case. The proof can be found in Appendix A.2.
Remark
Another possibility to handle the autoregressive case would be to model the change in a different way, namely as $Y_t = Y_t^{(1)} I\{t \le \lfloor ns_0\rfloor\} + Y_t^{(2)} I\{t > \lfloor ns_0\rfloor\}$ for stationary processes $(Y_t^{(1)})_t$ and $(Y_t^{(2)})_t$, see e.g. Kirch et al. (2015). In this case assumption (IX.2) is fulfilled and thus Theorems 3.1 and 3.2 apply.

Simulations
To investigate the finite sample performance of our estimator, we generate data from two different basic models, (IID) and (TS), where the covariates are standard normally distributed, just as the errors. For both models we generate data both for the homoscedastic case $\sigma \equiv 1$ as well as for the heteroscedastic case $\sigma(x) = \sqrt{1 + 0.5x^2}$. The results for both are very similar in all situations, thus we only present the results for the heteroscedastic case. To model the change in the regression function we use three different scenarios, (C1)-(C3), where we let $s_0$ range from 0.1 to 0.9. In Fig. 1 the results for 1000 replications and sample sizes $n = 100, 500, 1000$ are shown, where we plot $s_0$ against the estimated mean squared error of our estimator $\hat s_n$. The kernel for $\hat m_n$ is chosen as the Epanechnikov kernel of order four and the bandwidth is determined by a cross-validation method. It can be seen that our estimator performs quite well even for the smallest sample size $n = 100$ when $s_0$ is 0.5 or close to it, whereas for a change point that lies closer to the boundaries of the observation interval a larger sample size is needed to get satisfying results. This is due to the fact that if $s_0 = 0.1$ or $s_0 = 0.9$ there are only 10 observations before or after the change point, respectively, for $n = 100$, and thus the estimates of $m^{(1)}$ or $m^{(2)}$, respectively, are poor. Moreover, an asymmetry in the results is striking. This stems from the CUSUM type statistic that our estimator is based on. For $s_0 = 0.1$, for example, the sum consists of only $0.1n$ summands, and thus the estimation of, e.g., $E[U_t]$ is worse than for $s_0 = 0.9$, where the estimation is based on $0.9n$ summands. The effect of decreasing performance of the estimator the closer $s_0$ gets to the boundaries is typical for change point estimators based on CUSUM statistics and can be counteracted by the use of appropriate weights, see e.g. Ferger (2005).
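The Monte Carlo design can be sketched as follows (a sketch using `change_point_estimate` from Sect. 2; the regression functions `m1`, `m2` are hypothetical stand-ins, not the paper's scenarios (C1)-(C3), and the plug-in bandwidth replaces the cross-validation used for Fig. 1):

```python
import numpy as np

def mse_curve(n, reps=1000, rng=np.random.default_rng(2)):
    """Estimated MSE of hat_s_n over a grid of true change points
    in the heteroscedastic model sigma(x) = sqrt(1 + 0.5 x^2)."""
    m1, m2 = np.sin, lambda x: np.sin(x) + x        # illustrative change
    h = n ** (-1 / 5)                               # stand-in for cross-validation
    grid = np.round(np.arange(0.1, 0.91, 0.1), 2)
    mse = []
    for s0 in grid:
        sq_errs = []
        for _ in range(reps):
            X = rng.normal(size=n)
            after = np.arange(1, n + 1) > n * s0
            Y = np.where(after, m2(X), m1(X)) + np.sqrt(1 + 0.5 * X**2) * rng.normal(size=n)
            sq_errs.append((change_point_estimate(X, Y, h) - s0) ** 2)
        mse.append(np.mean(sq_errs))
    return grid, np.array(mse)
```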
To stress our estimator a little further, we simulate the scenario that there is also a change in the variance function $\sigma$ at a different time point than the change in the regression function $m$. In this situation the estimator should still be able to detect $s_0$, the change point in the regression function. The results are shown in Fig. 2 for model (IID) and model (TS) with change point scenario (C1), where $\sigma_t(x) = \sqrt{1 + 0.1x^2}$ for $t \le 0.4n$ and $\sigma_t(x) = \sqrt{1 + 0.8x^2}$ for $t > 0.4n$. They confirm the good performance of our estimator even in this more difficult situation.
As discussed in Sect. 4, our estimator can also be applied to the autoregressive case. To investigate the finite sample performance in this situation we generate data according to the autoregressive model (4.1) with $X_t = Y_{t-1}$. For $\sigma \equiv 1$ and change point scenarios (C1) as well as (C2), assumption (IX.3) is fulfilled, see the example in Sect. 4. Simulation results for these cases are shown in Fig. 3, where the setting is the same as described above. They look very similar to the results of the regression models from above. As stated in the remark in Sect. 2, it is also possible to base the estimator on a Cramér-von Mises type functional of the marked empirical process of residuals. The simulation results for this type of estimator are very similar to those presented here for the Kolmogorov-Smirnov type estimator $\hat s_n$ and are omitted for the sake of brevity.

Data example
Finally, we consider a real data example. The data at hand contain 36 measurements of the annual flow volume of the small Czech river Ráztoka, recorded between 1954 and 1989, as well as the annual rainfall during that time. The data were considered by Hušková and Antoch (2003) to investigate the effect of controlled deforestation on the capability for water retention of the soil. To this end it is of interest if and when the relationship between rainfall and flow volume changes. We set $X_t$ as the annual rainfall and $Y_t$ as the annual flow volume. Mohr and Neumeyer (2019) applied their Kolmogorov-Smirnov test to this data set, which clearly rejects the null of no change in the conditional mean function, indicating the existence of a change in the relationship between rainfall and flow volume. Using $\hat s_n$ to estimate the unknown time of change suggests a change in 1979. Note that this is consistent with the literature: as was pointed out by Hušková and Antoch (2003), large scale deforestation had started around that time. Figure 5 shows on the left-hand side the scatterplot of $X_t$ against $Y_t$, using dots for the observations after the estimated change and crosses for the observations before the estimated change. On the right-hand side the figure shows the cumulative sum $n^{1/2} \sup_{z \in \mathbb{R}} |\hat T_n(\cdot, z)|$ as well as the critical value of the test used in Mohr and Neumeyer (2019) (red horizontal line) and the estimated change (green vertical line). Note that $\tilde s_n$ leads to the same result.
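Schematically, the analysis reduces to a few lines (purely illustrative; the file name and column names are hypothetical placeholders, and `change_point_estimate` is the sketch from Sect. 2, not the implementation used for Fig. 5):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("raztoka.csv")            # hypothetical file, years 1954-1989
X = df["rainfall"].to_numpy()              # annual rainfall
Y = df["flow"].to_numpy()                  # annual flow volume
n = len(Y)                                 # n = 36
s_hat = change_point_estimate(X, Y, h=np.std(X) * n ** (-1 / 5))
print(1954 + int(round(s_hat * n)) - 1)    # calendar year of the estimated change
```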

Concluding remarks
In this paper we consider nonparametric regression models with a change in the unknown regression function, allowing for time series data as well as conditional heteroscedasticity. We propose an estimator for the rescaled change point that is based on the sequential marked empirical process of residuals and show consistency as well as a rate of convergence of $O_P(n^{-1})$. In an autoregressive setting we additionally give a consistency result for the proposed estimator. If more than one change occurs, the proposed estimator is in some situations not consistent for any one of the changes. For detecting multiple changes we refer the reader to alternative procedures such as the MOSUM procedure proposed by Eichinger and Kirch (2018) or the wild binary segmentation procedure by Fryzlewicz (2014) (see also Fryzlewicz (2019)).
Investigating the asymptotic distribution of the proposed estimator is a natural next step. Certainly, it is of great interest, as it can be used to construct confidence intervals. However, this subject goes beyond the scope of the paper at hand and is postponed to future research.
Acknowledgements Open Access funding provided by Projekt DEAL. Financial support by the DFG (Research Unit FOR 1735 Structural Inference in Statistics: Adaptation and Efficiency) is gratefully acknowledged.
Proof of Lemma A.1
The proof is similar to the proof of Lemma 2.2 in Mohr (2018). The key tool is an application of Theorem 1 in Kristensen (2009). Details are omitted for the sake of brevity.

Proof of Lemma A.2
The proof follows along similar lines as the proof of Lemma A.3 in Mohr (2018). Throughout the proof the values of the constants $C$ and $\tilde C$ may vary from line to line, but they are always positive, finite and independent of $n$.
where (A.1) is of the desired rate in probability, since the corresponding bound holds for all $i$ with $C_U < \infty$ from assumption (I).
Considering the term (A.2), we define the function class $\mathcal{F}_n$ to rewrite the assertion accordingly. Now we cover $[0,1]$ by finitely many intervals and $\mathcal{F}_n$ by finitely many brackets to replace the supremum by a maximum. Let therefore $0 = s_1 < \cdots < s_{K_n} = 1$ partition the interval $[0,1]$ into $K_n$ subintervals of appropriate length, and choose points $z_{j_i}$, $j_i = 1,\dots,N_i$, that give a partition of $\mathbb{R}$ for all $i = 1,\dots,d$. Then for all $z \in \mathbb{R}^d$ there exists a $j \in \times_{i=1}^{d}\{1,\dots,N_i\}$ such that $z_{j-1} < z \le z_j$, where $j - 1 := (j_1 - 1,\dots,j_d - 1)$. Thus every element $\varphi$ of $\mathcal{F}_n$ lies in one of the brackets $[\varphi^l_j, \varphi^u_j]$, i.e. $\varphi^l_j \le \varphi \le \varphi^u_j$ for all $(u, x)$. We say a bracket $[\varphi^l_j, \varphi^u_j]$ is of size $\epsilon_n$ if $\int(\varphi^u_j - \varphi^l_j)\,dP \le \epsilon_n$. The total number of brackets of size $\epsilon_n$ needed to cover $\mathcal{F}_n$ is denoted by $J_n := N_{[\,]}(\epsilon_n, \mathcal{F}_n, \|\cdot\|_{L_1(P)})$ and is of order $J_n = O(\epsilon_n^{-d})$, which follows analogously to, but more easily than, the proof of Lemma A.7 in Mohr (2018).
For all $\varphi \in \mathcal{F}_n$ there exists a $j$ with $\varphi^l_j \le \varphi \le \varphi^u_j$, and the supremum over $\mathcal{F}_n$ can therefore be bounded by a maximum over the brackets for all $s \in [0,1]$, where $\kappa_n \epsilon_n = O(\kappa_n)$ if we choose $\epsilon_n$ constant. Thus it remains to show the corresponding maximal bound. We will only consider the first summand in more detail since the rest works analogously. To prove that (A.3) is stochastically of the desired rate, we apply a Bernstein type inequality for $\alpha$-mixing processes, see Liebscher (1996), Theorem 2.1. Following his notation, the required moment bound holds by assumption (I). Thus Liebscher's theorem can be applied with $N = \kappa_n^{1-2/q}$. This means that, for some constants $C_1, C_2, \tilde C$, the stated probability bound holds, where the second to last inequality follows from the fact that $\exp(-x) < x^{-k} k!$ for all $k \in \mathbb{N}$ and $x \in \mathbb{R}^+$, and the last inequality holds by assumption (III), which implies $\bar\alpha > (q+2)/(q-2)$. This completes the proof.

Proof of Lemma A.3
First we distinguish between the cases $L + \kappa_n s \le ns_0$ and $L + \kappa_n s > ns_0$, in which the term can be written in two analogous forms. We will only examine the case $L + \kappa_n s \le ns_0$ in detail, since the other case works analogously.
The remainder of the proof is similar to the proof of Lemma A.2. With $g(X_i)$ defined accordingly for all $i$ and some $C_m < \infty$ from assumption (II), we can rewrite our assertion in terms of a function class $\mathcal{F}_n$. To replace the supremum over $\varphi$ by a maximum we cover $\mathcal{F}_n$ by finitely many brackets $[\varphi^l_j, \varphi^u_j]$, where $j$ and $z_j$ are defined as in the proof of Lemma A.2. The total number of brackets $J_n := N_{[\,]}(\epsilon_n, \mathcal{F}_n, \|\cdot\|_{L_1(P)})$ needed to cover $\mathcal{F}_n$ is again of order $J_n = O(\epsilon_n^{-d})$, which follows analogously to, but more easily than, the proof of Lemma A.7 in Mohr (2018). Now we proceed completely analogously to the proof of Lemma A.2, replacing the supremum over $s$ by a maximum as well and applying Liebscher's theorem. Since the arguments are the same as in the aforementioned proof, we omit this part for the sake of brevity.

Proof of Lemma A.4
It holds
by the Markov inequality. Further, by Lemma A.1 with assumption (VIII), the required bound holds, and thus, for sufficiently large $n$, the assertion follows for all $\kappa_n \le n$. This completes the proof.

A.2 Proof of main results
We will prove Theorem 3.1 under assumption (IX.1) and simply comment on the parts that change under (IX.2).
Finally we investigate $E_{n,1}$. To do this we define the shells
$$S_{n,l} := \Big\{ t \in [0,1] : 2^l < r_n \Big| t - \frac{\lfloor ns_0\rfloor}{n} \Big| \le 2^{l+1} \Big\}$$
and choose $L_n = L_n(\eta)$ such that $2^{L_n} < r_n \eta \le 2^{L_n+1}$ for some $\eta \le \frac{1}{2}$. Then the shell-wise bound decays like $(2^{-\zeta})^l$ for some constant $\tilde C < \infty$ by Lemmata A.2, A.3 and A.4 with $\kappa_n = 2^{l+1} n / r_n$, with $q$ from assumption (I), $r$ from assumption (II) and $\zeta > 0$ from assumption (VIII). Now choosing $r_n = n$ and letting $n$, and thus $L_n$, tend to infinity, and then $M$ to infinity, the assertion of Theorem 3.2 follows.