Abstract
The likelihood ratio test for a change in the mean-reverting parameter of a first order autoregressive model with stationary Gaussian noise is considered. The test statistic converges in distribution to the Gumbel extreme value distribution under the null hypothesis of no change-point for a large class of covariance structures including long-memory processes as the fractional Gaussian noise.
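The model under study — a first-order autoregression driven by stationary Gaussian noise, with fractional Gaussian noise as the leading long-memory example — can be sketched in a short simulation. The following is a minimal illustration of the null model (not the authors' code; the function name `fgn` and parameter values are ours). It synthesizes fractional Gaussian noise with Hurst index \(H\) exactly, via a Cholesky factor of its covariance matrix, and feeds it into an AR(1) recursion with constant parameter \(\vartheta\):

```python
import numpy as np

def fgn(n, H, rng):
    """Exact sample of fractional Gaussian noise of length n with Hurst
    index H, via a Cholesky factor of its covariance (O(n^3); fine for
    moderate n)."""
    k = np.arange(n)
    # autocovariance: gamma(k) = (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}) / 2
    gamma = 0.5 * ((k + 1.0)**(2*H) - 2.0 * k**(2.0*H)
                   + np.abs(k - 1.0)**(2*H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(0)
n, H, theta = 1000, 0.7, 0.5
xi = fgn(n, H, rng)            # long-memory stationary Gaussian noise
X = np.zeros(n)
for k in range(1, n):          # AR(1) under H0: no change in theta
    X[k] = theta * X[k - 1] + xi[k]
```

For \(H = 0.7\) the noise has positive lag-one autocorrelation \(\gamma(1)/\gamma(0) = (2^{2H} - 2)/2 \approx 0.32\), which a sample autocorrelation of `xi` will reflect.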
References
Asmussen S, Albrecher H (2010) Ruin probabilities, 2nd edn. World Scientific, Singapore
Beran J (1994) Statistics for long-memory processes. Chapman & Hall, London
Billingsley P (1999) Convergence of probability measures, 2nd edn. Wiley, New York
Brouste A, Cai C, Kleptsyna M (2014) Asymptotic properties of the MLE for the autoregressive process coefficients under stationary Gaussian noises. Math Methods Stat 23(2):103–115
Brouste A, Kleptsyna M (2012) Kalman type filter under stationary noises. Syst Control Lett 61:1229–1234
Chong T (2001) Structural change in AR(1) models. Econom Theory 17:87–155
Davis R, Huang D, Yao Y (1995) Testing for a change in the parameter values and order of an autoregressive model. Ann Stat 23(1):282–304
Timmer DH, Pignatiello J (1998) The development and evaluation of CUSUM-based control charts for an AR(1) process. IIE Trans 30:525–534
Timmer DH, Pignatiello J (2003) Change point estimates for the parameters of an AR(1) process. Qual Reliab Eng Int 19:355–369
Duflo M (1997) Random iterative models. Applications of Mathematics. Springer, New York
Durbin J (1960) The fitting of time series models. Rev Inst Int Stat 28:233–243
Eberlein E (1986) On strong invariance principles under dependence assumptions. Ann Probab 14:260–270
Gatheral J, Jaisson T, Rosenbaum M (2018) Volatility is rough. Quant Finance 18(6):933–949
Horváth L (1993) The maximum likelihood method for testing changes in the parameters of normal observations. Ann Stat 21:671–680
Hosking J (1981) Fractional differencing. Biometrika 68(1):165–176
Istas J, Lang G (1997) Quadratic variations and estimation of the local Hölder index of a Gaussian process. Annales de l’I.H.P. Sect B 33(4):407–436
Kuelbs J, Philipp W (1980) Almost sure invariance principles for partial sums of mixing B-valued random variables. Ann Probab 8:1003–1036
Liptser R, Shiryaev A (2001) Statistics of random processes. Springer, New York
Robinson P (1995) Log-periodogram regression of time series with long-range dependence. Ann Stat 23(3):1048–1072
Soltane M (2018) Asymptotic efficiency in autoregressive processes driven by stationary Gaussian noise (preprint). arXiv:1810.08805
Stout W (1970) The Hartman–Wintner law of the iterated logarithm for martingales. Ann Math Stat 41:2158–2160
Acknowledgements
We would like to thank the anonymous referee for the valuable comments that improved the original manuscript. This research benefited from the support of the Chair Risques Emergents ou Atypiques en Assurance, under the aegis of the Fondation du Risque, a joint initiative of Le Mans University, Ecole Polytechnique and the MMA company, a member of the Covea group. This work was also supported by the research project PANORisk of the Région Pays de la Loire (France).
A Technical Lemmas
1.1 A.1 Lemmas for Proposition 3
As before, we assume \(\beta _n=O(n^{-\alpha })\) with \(\alpha >1/2\). From now on, we further assume that \(1/2< \alpha <1.\) This is not a restriction: if \(\alpha \geqslant 1\), then in particular \(\beta _n = O\left( n^{-\lambda _1} \right) \) for some \(1/2< \lambda _1 <1,\) and the following estimates remain valid with \(\alpha \) replaced by \(\lambda _1\). Now let us define
Then
Since \(\Phi _n^{(12)} = \Phi _n^{(21)}\), we have
Lemma 1
Under \(H_0\), we have that
Proof
Set
Note that
Then
with
It follows that
By Brouste et al. (2014), there exists a positive constant \(c_1\) such that \(\sup _{n\ge 1} \left\| \prod _{j=1}^n A_{n-j}^{\vartheta } \right\| < c_1\), where \(\Vert \cdot \Vert \) denotes the operator norm. Note that \(\Vert V_k \Vert \le c_2 \beta _{k-1}\). Therefore, \(\Vert \Phi _n - \Psi _n \Vert \le c_3 \sum _{k=1}^n \beta _k = O(n^{1-\alpha })\).
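The order \(O(n^{1-\alpha})\) of the last sum follows from a standard integral comparison: since \(\vert \beta _k \vert \le c\, k^{-\alpha }\) with \(1/2< \alpha <1\),

```latex
\sum_{k=1}^{n} \vert \beta_k \vert
\;\le\; c \sum_{k=1}^{n} k^{-\alpha}
\;\le\; c \left( 1 + \int_{1}^{n} x^{-\alpha}\,dx \right)
\;=\; c \left( 1 + \frac{n^{1-\alpha}-1}{1-\alpha} \right)
\;=\; O\!\left( n^{1-\alpha} \right).
```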
By (34), we have that
and hence \(\Phi _n^{(12)} = O(n^{1 - 2 \alpha })\). Similarly, \(\Phi _n^{(22)} = \Phi _{n-1}^{(22)} + O(n^{1 - 3 \alpha })\) and \(\Phi _n^{(22)} = O(n^{2 - 3 \alpha })\). Now we have from (34) and (35) that
which completes the proof of this Lemma. \(\square \)
Lemma 2
We have
Proof
Note that
By Lemma 1, \(a_{k-1}^{*} \Phi _{k-1} a_{k-1} - \sigma _{k-1}^2 {\text{ E }}(\gamma _{k-1}^2) = O(k^{1-3\alpha })\). Therefore,
The Lemma follows from the fact that
Lemma 3
There exists a constant \(K > 0\) such that \({\text{ E }}(|F_n|^4) \le K\) for every n.
Proof
As in the proof of Lemma 2, \({\text{ E }}\left[ (a_k^{*} \zeta _k)^2 \right] = \sigma _k^2 {\text{ E }}(\gamma _k^2) + O(k^{1-3\alpha })\) is bounded. If \((\varepsilon _n)_{n \ge 1}\) are i.i.d. Gaussian random variables, then
is bounded. Therefore,
is also bounded. The general case follows easily. \(\square \)
Lemma 4
We have that
uniformly in m where \(\delta >0\).
Proof
Let
Working component by component, one easily obtains from the recursive equation \(T_{k+1,m}={A}_k^{\vartheta } T_{k,m}\) that
where we used the notation
By Brouste et al. (2014), there exists a positive constant K such that \(\sup _{n\ge 1} \left\| \prod _{j=1}^n A_{n-j}^{\vartheta } \right\| < K\). Hence,
and so,
Since \(a_k^*T_{k,m}= \left( T_{k,m}^{(11)}+\beta _k T_{k,m}^{(21)};T_{k,m}^{(12)}+\beta _k T_{k,m}^{(22)}\right) \) the desired result is proved. \(\square \)
Lemma 5
We have that
uniformly in m where \(\mathcal {F}_n\) is defined by \(\mathcal {F}_n = \sigma (F_1,\ldots ,F_n)\).
Proof
By the orthogonality of martingale increments, we have that
Denote \(\Theta _k = {\text{ E }}\left[ \zeta _k \zeta _k^{*} \,|\, \mathcal {F}_m \right] \) for \(m +1 \le k \le m+n\). Then
and
It is easy to see that \({\text{ E }}\left[ \Vert \Theta _m - {\text{ E }}(\Theta _m) \Vert \right] \) is bounded. By Lemma 4,
The proof is finished. \(\square \)
Remark 6
Since we work in the Gaussian setting, it is possible to show, in the new probability space (the one given by Lemma 3), that equation (26) holds. More precisely, Lemma 7 holds in the new probability space and we have
where W(k) is a standard Brownian motion.
1.2 A.2 Lemmas for Proposition 5.2
Lemma 6
We consider a random vector \(B_n \in \mathbb {R}^d\) such that for all \(n \geqslant 1,\)
where the covariance matrix satisfies \(\Vert \Sigma _n \Vert = O(1).\) Then,
Proof
For any \(\varepsilon > 0,\) we have
where \(\Vert \mu _n \Vert ^2 \sim \chi ^2(d)\), which, in turn, implies
for sufficiently large n, where \(c_1(d)\) and \(c_{2}(d)\) are two positive constants independent of x and n. By the hypothesis on \(\Vert \Sigma _n \Vert \) and the inequality \( 8 \ln n \leqslant (\ln n)^2 \varepsilon ^2 \Vert \Sigma _n \Vert ^{-1}\), valid for n large enough, we obtain
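A plausible rendering of the resulting bound (the display is missing from this version; our reconstruction, with constants as in the tail estimate above): once \(8 \ln n \leqslant \varepsilon ^2 (\ln n)^2 \Vert \Sigma _n \Vert ^{-1}\),

```latex
\mathbf{P}\left( \Vert B_n \Vert \geqslant \varepsilon \ln n \right)
\;\leqslant\; c_1(d)\, \exp\!\left( -\tfrac{1}{2}\, \varepsilon^2 (\ln n)^2 \Vert \Sigma_n \Vert^{-1} \right)
\;\leqslant\; c_1(d)\, e^{-4 \ln n}
\;=\; \frac{c_1(d)}{n^{4}},
```

so the series \(\sum _n \mathbf{P}\left( \Vert B_n \Vert \geqslant \varepsilon \ln n \right)\) converges.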
It remains to apply the Borel–Cantelli lemma to obtain the desired result. \(\square \)
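As a numerical sanity check (ours, in the special case \(\Sigma _n = I_d\), so \(\Vert \Sigma _n \Vert = 1\)), the ratio \(\Vert B_n \Vert / \ln n\) indeed stays bounded, in line with the conclusion of Lemma 6:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 2, 5000
# B_n ~ N(0, I_d), drawn independently for each n >= 2
B = rng.standard_normal((N - 1, d))
n = np.arange(2, N + 1)
ratios = np.linalg.norm(B, axis=1) / np.log(n)
print(ratios.max())   # bounded, as the lemma predicts
```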
Lemma 7
Under \(H_0\), we have that
for any \(\lambda > 0\).
Proof
Let
where \(T_0\) is a degenerate Gaussian random vector whose second component is zero and
We obtain
By Brouste et al. (2014), there exists a positive constant K such that \(\sup _{n\ge 1} \left\| \prod _{j=1}^n A_{n-j}^{\vartheta } \right\| < K\). By (44),
Applying Lemma 6, it is easy to see that \(T_n = O(\ln n) \ a.s.\) By (45), there exists \(C := C(\omega ) > 0\) such that \(\big \vert \zeta _n^{(2)}\big \vert \le K C \ln n \sum _{k=1}^{n-1} \vert \beta _k\vert \). This, together with the assumption \(\beta _n = O(n^{-\alpha })\), implies that
for any \(\lambda > 0\).
It remains to prove the bound for \(\zeta _n^{(1)}\). Since
and
we have via a Taylor expansion
By (46), we obtain \(\vartheta \beta _{n-1}^2 \sigma _{n-1} \gamma _{n-1} + \vartheta \beta _{n-1} \zeta _{n-1}^{(2)} = O(n^{1-2\alpha + \lambda }) \ a.s.\) for any \(\lambda > 0\). Consequently, there exists \(C := C(\omega ) > 0\) such that
which proves this lemma. \(\square \)
Let us recall the martingale \(M_k=M_k(0)=\sum \limits _{n=2}^k\frac{a_{n-1}^*\zeta _{n-1}}{\sigma _n} \varepsilon _n\) and its quadratic variation \(\langle M\rangle _k=\sum \limits _{n=2}^k \left( \frac{a_{n-1}^*\zeta _{n-1}}{\sigma _n}\right) ^2\). The following lemmas prepare the ground for the proof.
Lemma 8
For the sequence \(U_k=\sum _{n=1}^{k-1}Z_n\), where \(Z_n\), \(n=1,2,\ldots ,k-1\), are i.i.d. centered Gaussian random variables with variance \(\mathcal {I}(\vartheta )=\frac{1}{1-\vartheta ^2}\), we have
as \(k\rightarrow \infty \) a.s. for some \(\lambda >0\).
Proof
The difference is equal to
From equation (30), we know that \(M_k-U_k=O(k^{1/2-\kappa })\) as \(k\rightarrow \infty \); on the other hand, by the law of the iterated logarithm, \(U_k/\mathcal {I}(\vartheta )=O((k\ln \ln k)^{1/2})\), which completes the proof. \(\square \)
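The law of the iterated logarithm invoked here can be eyeballed numerically (our sketch, with \(\vartheta = 0.5\), so \(\mathcal {I}(\vartheta ) = 4/3\)): the partial sums \(U_k\) stay within a constant multiple of the LIL envelope \(\sqrt{2\,\mathcal {I}(\vartheta )\, k \ln \ln k}\).

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.5
var = 1.0 / (1.0 - theta**2)        # I(theta) = 1/(1 - theta^2)
N = 100_000
U = np.cumsum(rng.normal(0.0, np.sqrt(var), N))
k = np.arange(100, N + 1)           # start where ln ln k is well away from 0
envelope = np.sqrt(2.0 * var * k * np.log(np.log(k)))
ratio = np.abs(U[99:]) / envelope
print(ratio.max())   # stays O(1); the LIL limsup equals 1 a.s.
```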
Lemma 9
We have
Proof
From Lemma 8, it suffices to show that
By the law of the iterated logarithm (see Stout 1970), we have \(\frac{1}{k}\sum \limits _{n=2}^k \gamma _{n-1}^2-\mathcal {I}(\vartheta )=O((\ln \ln k/k)^{1/2}) \ a.s.\), and from Lemma 6, \(\gamma _n = O\left( \ln n\right) \ a.s.\) Now we have from Lemma 7 and Remark 6 that
Therefore,
for some \(\delta _2>0\) and hence the previous convergence to 0 as \(k\rightarrow \infty , \ a.s.\) is satisfied. \(\square \)
Lemma 10
For any small fixed number \(\epsilon '>0\) as \(n\rightarrow \infty \),
Proof
From the proof of Theorem 1,
Then the facts that \(\max \limits _{\epsilon 'N\le k\le N} \frac{M_k^2}{\langle M\rangle _k}=O_{\mathbf {P}}(1)\) and \(\max \limits _{1<k\le \epsilon 'N}\frac{M_k^2}{\langle M\rangle _k}{\mathop {\longrightarrow }\limits ^{\mathbf {P}}}\infty \) complete the proof. \(\square \)
Lemma 11
For any \(0< \varepsilon < 1/2\) the random variables \( \max \limits _{1\le k \le \varepsilon N} \frac{M_k^2}{\langle M \rangle _k}\) and \(\max \limits _{(1-\varepsilon )N \le k \le N} \frac{(M_N - M_k)^2}{\langle M \rangle _N - \langle M \rangle _k}\) are asymptotically independent.
Proof
We have from Lemma 3 and Remark 6 that:
and
The Lemma follows from the fact that the random variables W(t) and \(W(1)-W(t)\) are independent. \(\square \)
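For completeness, the independence invoked at the end is the elementary property of Brownian increments: for \(0< t <1\), the variables \(W(t)\) and \(W(1)-W(t)\) are jointly Gaussian with

```latex
\operatorname{Cov}\bigl( W(t),\, W(1)-W(t) \bigr)
= \mathbf{E}\bigl[ W(t)W(1) \bigr] - \mathbf{E}\bigl[ W(t)^2 \bigr]
= t - t = 0,
```

and zero covariance between jointly Gaussian variables implies independence.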
Brouste, A., Cai, C., Soltane, M. et al. Testing for the change of the mean-reverting parameter of an autoregressive model with stationary Gaussian noise. Stat Inference Stoch Process 23, 301–318 (2020). https://doi.org/10.1007/s11203-020-09217-1