
Testing for the change of the mean-reverting parameter of an autoregressive model with stationary Gaussian noise

Statistical Inference for Stochastic Processes

Abstract

The likelihood ratio test for a change in the mean-reverting parameter of a first-order autoregressive model with stationary Gaussian noise is considered. The test statistic converges in distribution to the Gumbel extreme value distribution under the null hypothesis of no change-point, for a large class of covariance structures including long-memory processes such as fractional Gaussian noise.


References

  • Asmussen S, Albrecher H (2010) Ruin probabilities. Chapman & Hall, London

  • Beran J (1994) Statistics for long-memory processes. Chapman & Hall, London

  • Billingsley P (1999) Convergence of probability measures, 2nd edn. Wiley Series in Probability and Statistics, Wiley, New York

  • Brouste A, Cai C, Kleptsyna M (2014) Asymptotic properties of the MLE for the autoregressive process coefficients under stationary Gaussian noises. Math Methods Stat 23(2):103–115

  • Brouste A, Kleptsyna M (2012) Kalman type filter under stationary noises. Syst Control Lett 61:1229–1234

  • Chong T (2001) Structural change in AR(1) models. Econom Theory 17:87–155

  • Davis R, Huang D, Yao Y (1995) Testing for a change in the parameter values and order of an autoregressive model. Ann Stat 23(1):282–304

  • Douglas H, Timmer D, Pignatiello J (1998) The development and evaluation of CUSUM-based control charts for an AR(1) process. IIE Trans Qual Reliab 30:525–534

  • Douglas H, Timmer D, Pignatiello J (2003) Change point estimates for the parameters of an AR(1) process. Qual Reliab Eng Int 19:355–369

  • Duflo M (1997) Random iterative models. Applications of Mathematics. Springer, New York

  • Durbin J (1960) The fitting of time series models. Rev Inst Int Stat 28:233–243

  • Eberlein E (1986) On strong invariance principles under dependence assumptions. Ann Probab 14:260–270

  • Gatheral J, Jaisson T, Rosenbaum M (2018) Volatility is rough. Quant Finance 18(6):933–949

  • Horváth L (1993) The maximum likelihood method for testing changes in the parameters of normal observations. Ann Stat 21:671–680

  • Hosking J (1981) Fractional differencing. Biometrika 68(1):165–176

  • Istas J, Lang G (1997) Quadratic variations and estimation of the local Hölder index of a Gaussian process. Ann Inst H Poincaré Probab Stat 33(4):407–436

  • Kuelbs J, Philipp W (1980) Almost sure invariance principles for partial sums of mixing B-valued random variables. Ann Probab 8:1003–1036

  • Liptser R, Shiryaev A (2001) Statistics of random processes. Springer, New York

  • Robinson P (1995) Log-periodogram regression of time series with long-range dependence. Ann Stat 23(3):1048–1072

  • Soltane M (2018) Asymptotic efficiency in autoregressive processes driven by stationary Gaussian noise (preprint). arXiv:1810.08805

  • Stout W (1970) The Hartman–Wintner law of the iterated logarithm for martingales. Ann Math Stat 41:2158–2160


Acknowledgements

We would like to thank the anonymous referee for the valuable comments that improved the original manuscript. This research benefited from the support of the Chair Risques Emergents ou Atypiques en Assurance, under the aegis of the Fondation du Risque, a joint initiative of Le Mans University, Ecole Polytechnique and the MMA company, member of the Covea group. This work is also supported by the research project PANORisk and the Région Pays de la Loire (France).

Author information


Corresponding author

Correspondence to Alexandre Brouste.


A Technical Lemmas

A.1 Lemmas for Proposition 3

As before, we assume \(\beta _n=O(n^{-\alpha })\) with \(\alpha >1/2\). From now on, we assume that \(1/2< \alpha <1\). This is not a restriction: if \(\alpha \geqslant 1\), then in particular \(\beta _n = O\left( n^{-\lambda _1} \right) \) for some \(1/2< \lambda _1 <1\), and the estimates below remain valid with \(\alpha \) replaced by \(\lambda _1\). Now let us define

$$\begin{aligned} \Phi _n = \left( \Phi _n^{(ij)} \right) _{i,j=1,2} = {\text{ E }}\left( \zeta _n \zeta _n^{*} \right) . \end{aligned}$$

Then

$$\begin{aligned} \Phi _n = A_{n-1}^{\vartheta } \Phi _{n-1} \left( A_{n-1}^{\vartheta } \right) ^{*} + \begin{pmatrix} \sigma _n^2 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \quad \Phi _0 = \begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix}. \end{aligned}$$
(33)

Since \(\Phi _n^{(12)} = \Phi _n^{(21)}\), we have

$$\begin{aligned} \begin{aligned} \Phi _n^{(11)}&= \vartheta ^2 \Phi _{n-1}^{(11)} + 2 \vartheta ^2 \beta _{n-1} \Phi _{n-1}^{(12)} + \vartheta ^2 \beta _{n-1}^2 \Phi _{n-1}^{(22)} + \sigma _n^2, \\ \Phi _n^{(12)}&= \vartheta (1+\beta _{n-1}^2) \Phi _{n-1}^{(12)} + \vartheta \beta _{n-1} \Phi _{n-1}^{(11)} + \vartheta \beta _{n-1} \Phi _{n-1}^{(22)}, \\ \Phi _n^{(22)}&= \Phi _{n-1}^{(22)} + \beta _{n-1}^2 \Phi _{n-1}^{(11)} + 2 \beta _{n-1} \Phi _{n-1}^{(12)}. \end{aligned} \end{aligned}$$
(34)

Lemma 1

Under \(H_0\), we have that

$$\begin{aligned} \Phi _n^{(11)} - \sigma _n^2 {\text{ E }}(\gamma _n^2) = O(n^{1 - 3\alpha }), \quad \Phi _n^{(12)} = O(n^{1 - 2 \alpha }), \quad \Phi _n^{(22)} = O(n^{2 - 3 \alpha }). \end{aligned}$$

Proof

Set

$$\begin{aligned} \Psi _n = \begin{pmatrix} \sigma _n^2 {\text{ E }}(\gamma _n^2) &{} 0 \\ 0 &{} 0 \end{pmatrix} \end{aligned}$$

Note that

$$\begin{aligned} {\text{ E }}(\gamma _n^2) = \vartheta ^2 {\text{ E }}(\gamma _{n-1}^2) + 1. \end{aligned}$$
(35)
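Solving this linear recursion explicitly makes the limit transparent. Assuming \(\gamma _0 = 0\) (an assumption consistent with \(\Phi _0 = 0\), made here only for illustration),

$$\begin{aligned} {\text{ E }}(\gamma _n^2) = \sum _{j=0}^{n-1} \vartheta ^{2j} = \frac{1-\vartheta ^{2n}}{1-\vartheta ^2} \longrightarrow \frac{1}{1-\vartheta ^2} = \mathcal {I}(\vartheta ) \end{aligned}$$

at a geometric rate, since \(|\vartheta | < 1\).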

Then

$$\begin{aligned} \Phi _n - \Psi _n = A_{n-1}^{\vartheta } \left( \Phi _{n-1} - \Psi _{n-1} \right) \left( A_{n-1}^{\vartheta } \right) ^{*} + V_n \end{aligned}$$

with

$$\begin{aligned} V_n = \begin{pmatrix} \vartheta ^2 \beta _{n-1}^2 \sigma _{n-1}^2 {\text{ E }}(\gamma _{n-1}^2) &{} \vartheta \beta _{n-1} \sigma _{n-1}^2 {\text{ E }}(\gamma _{n-1}^2) \\ \vartheta \beta _{n-1} \sigma _{n-1}^2 {\text{ E }}(\gamma _{n-1}^2) &{} \beta _{n-1}^2 \sigma _{n-1}^2 {\text{ E }}(\gamma _{n-1}^2) \end{pmatrix}. \end{aligned}$$

It follows that

$$\begin{aligned} \Phi _n - \Psi _n = \sum _{k=1}^{n-1} A_{n-1}^{\vartheta } \cdots A_{n-k}^{\vartheta } V_{n-k} \left( A_{n-k}^{\vartheta } \right) ^{*} \cdots \left( A_{n-1}^{\vartheta } \right) ^{*} + V_n. \end{aligned}$$

By Brouste et al. (2014), there exists a positive constant \(c_1\) such that \(\sup _{n\ge 1} \left\| \prod _{j=1}^n A_{n-j}^{\vartheta } \right\| < c_1\), where \(\Vert \cdot \Vert \) denotes the operator norm. Note that \(\Vert V_k \Vert \le c_2 \beta _{k-1}\). Therefore, \(\Vert \Phi _n - \Psi _n \Vert \le c_3 \sum _{k=1}^n \beta _k = O(n^{1-\alpha })\).

By (34), we have that

$$\begin{aligned} \Phi _n^{(12)} = \vartheta (1+\beta _{n-1}^2) \Phi _{n-1}^{(12)} + O(n^{1-2\alpha }), \end{aligned}$$

and hence \(\Phi _n^{(12)} = O(n^{1 - 2 \alpha })\). Similarly, \(\Phi _n^{(22)} = \Phi _{n-1}^{(22)} + O(n^{1 - 3 \alpha })\) and \(\Phi _n^{(22)} = O(n^{2 - 3 \alpha })\). Now we have from (34) and (35) that

$$\begin{aligned} \Phi _n^{(11)} - \sigma _n^2 {\text{ E }}(\gamma _n^2) = \vartheta ^2 \left( \Phi _{n-1}^{(11)} - \sigma _{n-1}^2 {\text{ E }}(\gamma _{n-1}^2) \right) + O(n^{1 - 3 \alpha }), \end{aligned}$$

which completes the proof of this Lemma. \(\square \)
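To see the orders in Lemma 1 concretely, the following minimal numerical sketch iterates the exact recursions (33)–(35). The concrete choices \(\sigma _n \equiv 1\), \(\beta _n = n^{-\alpha }\) with \(\beta _0 = 1\), \(\vartheta = 0.5\), \(\alpha = 0.6\), and the form of \(A_n^{\vartheta }\) (read off from the componentwise recursion (34)) are illustrative assumptions of the sketch, not prescriptions from the paper:

```python
import numpy as np

# Illustrative iteration of recursions (33)-(35); sigma_n = 1, beta_n = n^{-alpha},
# beta_0 = 1, theta = 0.5 and alpha = 0.6 are assumptions made for this sketch.
theta, alpha = 0.5, 0.6
N = 50_000

Phi = np.zeros((2, 2))   # Phi_0 = 0, cf. (33)
Eg2 = 0.0                # E(gamma_0^2) = 0, cf. (35)
for n in range(1, N + 1):
    beta = 1.0 if n == 1 else (n - 1.0) ** (-alpha)     # beta_{n-1}
    A = np.array([[theta, theta * beta], [beta, 1.0]])  # reproduces (34)
    Phi = A @ Phi @ A.T
    Phi[0, 0] += 1.0     # + diag(sigma_n^2, 0) with sigma_n = 1
    Eg2 = theta**2 * Eg2 + 1.0

# Lemma 1 predicts these rescaled quantities remain bounded in n:
print((Phi[0, 0] - Eg2) * N ** (3 * alpha - 1))  # Phi^{(11)} - sigma^2 E(gamma^2)
print(Phi[0, 1] * N ** (2 * alpha - 1))          # Phi^{(12)}
print(Phi[1, 1] * N ** (3 * alpha - 2))          # Phi^{(22)}
```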

Lemma 2

We have

$$\begin{aligned} \frac{1}{n} {\text{ E }}\left[ M_n(m)^2 \right] - \mathcal {I}(\vartheta ) = O(n^{1-3\alpha }). \end{aligned}$$
(36)

Proof

Note that

$$\begin{aligned} {\text{ E }}\left[ M_n(m)^2 \right] = \sum _{k=m+1}^{m+n} \frac{{\text{ E }}\left[ (a_{k-1}^{*} \zeta _{k-1})^2 \right] }{\sigma _k^2} =\sum _{k=m+1}^{m+n} \frac{a_{k-1}^{*} \Phi _{k-1} a_{k-1}}{\sigma _k^2}. \end{aligned}$$

By Lemma 1, \(a_{k-1}^{*} \Phi _{k-1} a_{k-1} - \sigma _{k-1}^2 {\text{ E }}(\gamma _{k-1}^2) = O(k^{1-3\alpha })\). Therefore,

$$\begin{aligned} \left| \frac{1}{n} \sum _{k=m+1}^{m+n} \frac{a_{k-1}^* \Phi _{k-1} a_{k-1} - \sigma _{k-1}^2 {\text{ E }}(\gamma _{k-1}^2)}{\sigma _k^2} \right| \le \frac{c_{1}}{n} \sum _{k=m+1}^{m+n} k^{1-3\alpha } =O(n^{1-3\alpha }). \end{aligned}$$

The Lemma follows from the fact that

$$\begin{aligned} \sum _{k=m+1}^{m+n} \frac{\sigma _{k-1}^2 {\text{ E }}(\gamma _{k-1}^2)}{\sigma _k^2} - n \mathcal {I}(\vartheta ) = O(1). \end{aligned}$$

\(\square \)

Lemma 3

There exists a constant \(K > 0\) such that \({\text{ E }}(|F_n|^4) \le K\) for every n.

Proof

As in the proof of Lemma 2, \({\text{ E }}\left[ (a_k^{*} \zeta _k)^2 \right] = \sigma _k^2 {\text{ E }}(\gamma _k^2) + O(k^{1-3\alpha })\) is bounded. If \((\varepsilon _n)_{n \ge 1}\) are i.i.d. Gaussian random variables, then

$$\begin{aligned} {\text{ E }}\left[ (a_n^{*} \zeta _n)^4 \right] = 3 \left\{ {\text{ E }}\left[ (a_n^{*} \zeta _n)^2 \right] \right\} ^2 \end{aligned}$$

is bounded. Therefore,

$$\begin{aligned} 3 \sum _{k=1}^{n-1} \left[ a_n^{*} A_{n-1}^{\vartheta } \cdots A_k^{\vartheta } b \right] ^4 \sigma _k^4 + 6 \sum _{1\le k < l \le n-1 } \left[ a_n^{*} A_{n-1}^{\vartheta } \cdots A_k^{\vartheta } b \right] ^2 \left[ a_n^{*} A_{n-1}^{\vartheta } \cdots A_l^{\vartheta } b \right] ^2 \sigma _k^2 \sigma _l^2 \end{aligned}$$

is also bounded. The general case follows easily. \(\square \)
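The Gaussian fourth-moment identity invoked above is easy to check numerically; the weights below are arbitrary illustrative values:

```python
import numpy as np

# Monte Carlo check (illustration): for X = sum_k c_k eps_k with eps_k i.i.d.
# N(0,1), Gaussianity of X gives E[X^4] = 3 (E[X^2])^2.
rng = np.random.default_rng(2)
c = np.array([0.9, -0.4, 0.25, 0.1])   # arbitrary weights assumed for the sketch
X = rng.standard_normal((500_000, c.size)) @ c
print((X**4).mean(), 3 * (X**2).mean() ** 2)   # the two values should agree
```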

Lemma 4

We have that

$$\begin{aligned} \left\| a_k^{*} A_{k-1}^{\vartheta } \cdots A_m^{\vartheta } \right\| = O(k^{-\delta }) \end{aligned}$$
(37)

uniformly in m, for some \(\delta >0\).

Proof

Let

$$\begin{aligned} T_{k,m} = \prod _{i=1}^{k-m} {A}_{k-i}^{\vartheta }. \end{aligned}$$

Working component by component, one easily obtains from the recursive equation \(T_{k+1,m}={A}_k^{\vartheta } T_{k,m}\) that

$$\begin{aligned} \left\{ \begin{array}{ll} T_{k+1,m}^{(11)} = \vartheta T_{k,m}^{(11)}+\beta _k \vartheta T_{k,m}^{(21)} \\ T_{k+1,m}^{(12)} = \vartheta T_{k,m}^{(12)} + \beta _k\vartheta T_{k,m}^{(22)} \\ T_{k+1,m}^{(21)} = \beta _k T_{k,m}^{(11)}+T_{k,m}^{(21)} \\ T_{k+1,m}^{(22)} = \beta _k T_{k,m}^{(12)}+T_{k,m}^{(22)} \end{array} \right. \end{aligned}$$

where we used the notation

$$\begin{aligned} T_{k,m} = \begin{pmatrix} T_{k,m}^{(11)} &{} T_{k,m}^{(12)} \\ T_{k,m}^{(21)} &{} T_{k,m}^{(22)} \end{pmatrix}. \end{aligned}$$

By Brouste et al. (2014), there exists a positive constant K such that \(\sup _{n\ge 1} \left\| \prod _{j=1}^n A_{n-j}^{\vartheta } \right\| < K\). Hence,

$$\begin{aligned} \left\{ \begin{array}{ll} \vert T_{k+1,m}^{(11)} \vert \leqslant \vert \vartheta T_{k,m}^{(11)} \vert +\vert \beta _k \vartheta T_{k,m}^{(21)}\vert \\ \vert T_{k+1,m}^{(12)} \vert \leqslant \vert \vartheta T_{k,m}^{(12)} \vert + \vert \beta _k \vartheta T_{k,m}^{(22)} \vert \\ \vert T_{k+1,m}^{(21)} \vert \leqslant \vert \beta _k \vert K+ \vert T_{k,m}^{(21)} \vert \\ \vert T_{k+1,m}^{(22)} \vert \leqslant \vert \beta _k \vert K+ \vert T_{k,m}^{(22)} \vert \end{array} \right. \end{aligned}$$

and so,

$$\begin{aligned} \left\{ \begin{array}{ll} T_{k+1,m}^{(11)} = O\left( k^{-\alpha }\right) \\ T_{k+1,m}^{(12)} = O\left( k^{-\alpha }\right) \\ T_{k+1,m}^{(21)} = O\left( k^{1-\alpha }\right) \\ T_{k+1,m}^{(22)} = O\left( k^{1-\alpha }\right) \end{array} \right. \end{aligned}$$

Since \(a_k^*T_{k,m}= \left( T_{k,m}^{(11)}+\beta _k T_{k,m}^{(21)};T_{k,m}^{(12)}+\beta _k T_{k,m}^{(22)}\right) \), both components are \(O\left( k^{1-2\alpha }\right) \), and the desired result follows with \(\delta = 2\alpha - 1 > 0\). \(\square \)

Lemma 5

We have that

$$\begin{aligned} {\text{ E }}\left\{ \left| {\text{ E }}\left[ M_n(m)^2 \,|\, \mathcal {F}_m \right] - {\text{ E }}\left[ M_n(m)^2 \right] \right| \right\} = O(n^{1-2\delta }) \end{aligned}$$
(38)

uniformly in m, where \(\mathcal {F}_n\) is defined by \(\mathcal {F}_n = \sigma (F_1,\ldots ,F_n)\).

Proof

By the orthogonality of the martingale increments, we have that

$$\begin{aligned}&{\text{ E }}\left[ M_n(m)^2 \,|\, \mathcal {F}_m \right] ={\text{ E }}\left[ \left( M_{m+n} - M_m \right) ^2 \,|\, \mathcal {F}_m \right] =\sum _{k=m+1}^{m+n} {\text{ E }}\left[ F_k^2 \,|\, \mathcal {F}_m \right] \\ =&\sum _{k=m+1}^{m+n} \sigma _k^{-2} a_{k-1}^{*} {\text{ E }}\left[ \zeta _{k-1} \zeta _{k-1}^{*} \,|\, \mathcal {F}_m \right] a_{k-1}. \end{aligned}$$

Denote \(\Theta _k = {\text{ E }}\left[ \zeta _k \zeta _k^{*} \,|\, \mathcal {F}_m \right] \) for \(m +1 \le k \le m+n\). Then

$$\begin{aligned} \Theta _k = A_{k-1}^{\vartheta } \Theta _{k-1} \left( A_{k-1}^{\vartheta } \right) ^{*} + \begin{pmatrix} \sigma _k^2 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \end{aligned}$$

and

$$\begin{aligned} \Theta _k - {\text{ E }}(\Theta _k) = A_{k-1}^{\vartheta } \cdots A_m^{\vartheta } \left( \Theta _m - {\text{ E }}[\Theta _m] \right) \left( A_m^{\vartheta } \right) ^{*} \cdots \left( A_{k-1}^{\vartheta } \right) ^{*}. \end{aligned}$$
(39)

It is easy to see that \({\text{ E }}\left[ \Vert \Theta _m - {\text{ E }}(\Theta _m) \Vert \right] \) is bounded. By Lemma 4,

$$\begin{aligned} {\text{ E }}\left\{ \left| {\text{ E }}\left[ M_n(m)^2 \,|\, \mathcal {F}_m \right] - {\text{ E }}\left[ M_n(m)^2 \right] \right| \right\} \le c_1 \sum _{k=m+1}^{m+n} \left\| a_k^{*} A_{k-1}^{\vartheta } \cdots A_m^{\vartheta } \right\| ^2 \le c_2 n^{1-2\delta }. \end{aligned}$$

The proof is finished. \(\square \)

Remark 6

Since we work in the Gaussian setting, it is possible to show that equation (26) holds in the new probability space (the one given by Lemma 3). More precisely, Lemma 7 holds in the new probability space and we have

$$\begin{aligned} \frac{M_k}{\langle M \rangle _k^{1/2}} - \frac{W(k)}{k^{1/2}}= O\left( k^{-\kappa }\right) \ a.s., \end{aligned}$$

where \(W\) is a standard Brownian motion.

A.2 Lemmas for Proposition 5.2

Lemma 6

We consider random vectors \(B_n \in \mathbb {R}^d\) such that, for all \(n \geqslant 1,\)

$$\begin{aligned} B_n \sim \mathcal {N}(0,\Sigma _n), \end{aligned}$$

where the covariance matrix satisfies \(\Vert \Sigma _n \Vert = O(1).\) Then,

$$\begin{aligned} \left\| \frac{B_n}{\ln n} \right\| = o\left( 1\right) \ a.s. \end{aligned}$$
(40)

Proof

For any \(\varepsilon > 0,\) we have

$$\begin{aligned} \mathbf {P}\left( \left\| \frac{B_n}{\ln n} \right\|> \varepsilon \right)&= \mathbf {P}\left( \Vert B_n \Vert ^2> (\ln n)^2 \varepsilon ^2 \right) \\&= \mathbf {P}\left( \langle \Sigma _n \mu _n, \mu _n \rangle> (\ln n)^2 \varepsilon ^2 \right) \leqslant \mathbf {P}\left( \Vert \mu _n \Vert ^2 > \frac{(\ln n)^2 \varepsilon ^2}{\Vert \Sigma _n \Vert }\right) , \end{aligned}$$

where we write \(B_n = \Sigma _n^{1/2} \mu _n\) with \(\mu _n \sim \mathcal {N}(0, I_d)\), so that \(\Vert \mu _n \Vert ^2 \sim \chi ^2(d)\). This, in turn, implies

$$\begin{aligned} \mathbf {P}(\Vert \mu _n \Vert ^2 > (\ln n)^2 \varepsilon ^2 \Vert \Sigma _n \Vert ^{-1})= & {} \int _{(\ln n)^2 \varepsilon ^2 \Vert \Sigma _n \Vert ^{-1}}^{\infty } c_1(d)x^{\frac{d}{2}-1} \exp \left( -\frac{x}{2}\right) \, \mathrm {d}x \\\leqslant & {} \int _{(\ln n)^2 \varepsilon ^2 \Vert \Sigma _n \Vert ^{-1}}^{\infty } c_{2}(d) \exp \left( -\frac{x}{4}\right) \, \mathrm {d}x \end{aligned}$$

for sufficiently large n, where \(c_1(d)\) and \(c_{2}(d)\) are two positive constants independent of x and n. By the hypothesis on \(\Vert \Sigma _n \Vert \) and the inequality \( 8 \ln n \leqslant (\ln n)^2 \varepsilon ^2 \Vert \Sigma _n \Vert ^{-1}\), valid for n large enough, we obtain

$$\begin{aligned} \int _{(\ln n)^2\Vert \Sigma _n \Vert ^{-1} \varepsilon ^2}^{\infty } c_{2}(d)\exp \left( -\frac{x}{4}\right) \, \mathrm {d}x \leqslant 4c_{2}(d)n^{-2}=O\left( n^{-2}\right) . \end{aligned}$$

It remains to apply the Borel–Cantelli lemma to obtain the desired result. \(\square \)
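Lemma 6 only constrains the marginal laws of the \(B_n\). As a quick illustration, the sketch below takes them independent with one fixed covariance matrix (both choices are assumptions of the sketch, not of the lemma) and watches \(\Vert B_n\Vert /\ln n\) die out along a path:

```python
import numpy as np

# Simulated path check of Lemma 6 (illustration): B_n ~ N(0, Sigma) with a
# fixed Sigma, taken independent across n for simplicity.
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])   # ||Sigma_n|| = O(1) trivially
L = np.linalg.cholesky(Sigma)

ns = np.arange(2, 1_000_000)
B = L @ rng.standard_normal((2, ns.size))    # columns are the B_n
ratio = np.linalg.norm(B, axis=0) / np.log(ns)

for cut in (10**2, 10**4, 10**5):
    print(cut, ratio[ns >= cut].max())       # tail suprema shrink towards 0
```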

Lemma 7

Under \(H_0\), we have that

$$\begin{aligned} \zeta _n^{(1)} - \sigma _n \gamma _n = O(n^{1-2\alpha + \lambda }) \ a.s., \quad \zeta _n^{(2)} = O(n^{1-\alpha + \lambda }) \ a.s. \end{aligned}$$
(41)

for any \(\lambda > 0\).

Proof

Let

$$\begin{aligned} T_n= \begin{pmatrix} \vartheta &{} 0 \\ 0 &{} 1 \end{pmatrix}T_{n-1} + b \sigma _n \varepsilon _n , \end{aligned}$$
(42)

where \(T_0\) is a degenerate Gaussian random vector whose second component is zero, and set

$$\begin{aligned} \Delta _n=\zeta _n-T_n. \end{aligned}$$
(43)

We obtain

$$\begin{aligned} \Delta _n = A_{n-1}^{\vartheta } \Delta _{n-1} - \begin{pmatrix} 0 \\ \beta _{n-1}T_{n-1}^{(1)} \end{pmatrix}. \end{aligned}$$
(44)

By Brouste et al. (2014), there exists a positive constant K such that \(\sup _{n\ge 1} \left\| \prod _{j=1}^n A_{n-j}^{\vartheta } \right\| < K\). Since the second component of \(T_n\) is zero for every n, (43) gives

$$\begin{aligned} \zeta _n^{(2)} = \Delta _n^{(2)}. \end{aligned}$$
(45)

Applying Lemma 6, it is easy to see that \(T_n = O(\ln n) \ a.s.\) Iterating (44) and using (45), there exists \(C := C(\omega ) > 0\) such that \(\big \vert \zeta _n^{(2)}\big \vert \le K C \ln n \sum _{k=1}^{n-1} \vert \beta _k\vert \). This, together with the assumption \(\beta _n = O(n^{-\alpha })\), implies that

$$\begin{aligned} |\zeta _n^{(2)}| = O(n^{1-\alpha + \lambda }) \ a.s. \end{aligned}$$
(46)

for any \(\lambda > 0\).

It remains to prove the bound for \(\zeta _n^{(1)}\). Since

$$\begin{aligned} \sigma _n \gamma _n = \vartheta \sqrt{1- \beta _{n-1}^2} \, \sigma _{n-1} \gamma _{n-1} + \sigma _n \varepsilon _n \end{aligned}$$

and

$$\begin{aligned} \zeta _n^{(1)} = \vartheta \zeta _{n-1}^{(1)} + \vartheta \beta _{n-1} \zeta _{n-1}^{(2)} + \sigma _n \varepsilon _n, \end{aligned}$$

we have, via the Taylor expansion \(\sqrt{1-\beta _{n-1}^2} = 1 + O\left( \beta _{n-1}^2\right) \),

$$\begin{aligned} \zeta _n^{(1)} - \sigma _n \gamma _n = \vartheta \left( \zeta _{n-1}^{(1)} - \sigma _{n-1} \gamma _{n-1} \right) + \vartheta O\left( \beta _{n-1}^2\right) \sigma _{n-1} \gamma _{n-1} + \vartheta \beta _{n-1} \zeta _{n-1}^{(2)}. \end{aligned}$$
(47)

By (46) and Lemma 6, we obtain \(\vartheta \beta _{n-1}^2 \sigma _{n-1} \gamma _{n-1} + \vartheta \beta _{n-1} \zeta _{n-1}^{(2)} = O(n^{1-2\alpha + \lambda }) \ a.s.\) for any \(\lambda > 0\). Consequently, there exists \(C := C(\omega ) > 0\) such that

$$\begin{aligned} \left| \zeta _n^{(1)} - \sigma _n \gamma _n \right| \le C \sum _{k=1}^{n-1} \vartheta ^{n-k} k^{1-2\alpha + \lambda } = O(n^{1-2\alpha + \lambda }) \ a.s. , \end{aligned}$$
(48)

which proves this lemma. \(\square \)
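The last step uses a standard fact about geometrically weighted sums: for \(|\vartheta | < 1\), \(\sum _{k=1}^{n-1} \vartheta ^{n-k} k^{p} = O(n^{p})\). A quick numerical sanity check, with illustrative values \(\vartheta = 0.5\) and \(p = 1-2\alpha +\lambda = -0.15\) (both assumptions of the sketch):

```python
import numpy as np

# Sanity check (illustration) of sum_{k<n} theta^{n-k} k^p = O(n^p): the ratio
# below stabilises near theta/(1-theta) = 1 as n grows.
theta, p = 0.5, -0.15
for n in (10**2, 10**3, 10**4, 10**5):
    k = np.arange(1, n, dtype=float)
    print(n, np.sum(theta ** (n - k) * k**p) / n**p)
```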

Let us recall the martingale \(M_k=M_k(0)=\sum \limits _{n=2}^k\frac{a_{n-1}^*\zeta _{n-1}}{\sigma _n} \varepsilon _n\) and its quadratic variation \(\langle M\rangle _k=\sum \limits _{n=2}^k \left( \frac{a_{n-1}^*\zeta _{n-1}}{\sigma _n}\right) ^2\). The following lemmas prepare the proof.

Lemma 8

For the sequence \(U_k=\sum _{n=1}^{k-1}Z_n\), where \(Z_n,\, n=1,2,\ldots ,k-1,\) are i.i.d. centered Gaussian random variables with variance \(\mathcal {I}(\vartheta )=\frac{1}{1-\vartheta ^2}\), we have

$$\begin{aligned} \frac{M_k^2}{\mathcal {I}(\vartheta )}-\frac{U_k^2}{\mathcal {I}(\vartheta )}=O(k^{1-\lambda ^4}) \ a.s. \end{aligned}$$

as \(k\rightarrow \infty \), for some \(\lambda >0\).

Proof

The difference is equal to

$$\begin{aligned} \frac{M_k^2}{\mathcal {I}(\vartheta )}-\frac{U_k^2}{\mathcal {I}(\vartheta )}= \left( M_k+U_k \right) \left( M_k-U_k \right) / \mathcal {I}(\vartheta ). \end{aligned}$$

From equation (30) we know that \(M_k-U_k=O(k^{1/2-\kappa })\) as \(k\rightarrow \infty \); on the other hand, by the law of the iterated logarithm, \(U_k/\mathcal {I}(\vartheta )=O((k\ln \ln k)^{1/2}) \ a.s.\), which completes the proof. \(\square \)

Lemma 9

We have

$$\begin{aligned} \frac{M_k^2}{\langle M\rangle _k}-\frac{U_k^2}{k \mathcal {I}(\vartheta )} \rightarrow 0,\, k\rightarrow \infty ,\, a.s. \end{aligned}$$

Proof

From Lemma 8, it suffices to show that

$$\begin{aligned} \frac{M_k^2}{\langle M\rangle _k}-\frac{M_k^2}{k\mathcal {I}(\vartheta )} =\frac{1}{\mathcal {I}(\vartheta )} \left( \frac{M_k}{k^{1/2}} \right) ^2 \left( \mathcal {I}(\vartheta )-\frac{\langle M\rangle _k}{k}\right) \frac{k}{\langle M\rangle _k}\rightarrow 0,\, k\rightarrow \infty , \ a.s. \end{aligned}$$

By the law of the iterated logarithm (see Stout 1970) we have \(\frac{1}{k}\sum \limits _{n=2}^k \gamma _{n-1}^2-\mathcal {I}(\vartheta )=O((\ln \ln k/k)^{1/2}) \ a.s.\), and from Lemma 6, \(\gamma _n = O\left( \ln n\right) \ a.s.\) Now we have from Lemma 7 and Remark 6 that

$$\begin{aligned}&\left| \frac{1}{k} \sum _{n=2}^k \left( a_n^{*} \zeta _{n} - \sigma _n \gamma _{n} + \sigma _n \gamma _{n} \right) ^2 - \mathcal {I}\left( \vartheta \right) \right| \\&\quad = \left| \frac{1}{k}\sum _{n=2}^k \left( a_n^{*} \zeta _{n} - \sigma _n \gamma _{n}\right) ^2 + \frac{2}{k}\sum _{n=2}^k \left( a_n^{*} \zeta _{n} - \sigma _n \gamma _{n}\right) \sigma _n\gamma _n + \frac{1}{k} \sum _{n=2}^k \sigma _n^2\gamma _n^2 - \mathcal {I}\left( \vartheta \right) \right| \\&\quad = O\left( k^{-\delta _2}\right) \ a.s. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\langle {M} \rangle _k}{k} - \mathcal {I}\left( \vartheta \right) = O\left( k^{-\delta _2}\right) \ a.s. \end{aligned}$$
(49)

for some \(\delta _2>0\), and hence the convergence above to 0 as \(k\rightarrow \infty \) holds a.s. \(\square \)

Lemma 10

For any small fixed number \(\epsilon '>0\), as \(N\rightarrow \infty \),

$$\begin{aligned} \mathbf {P}\left[ \max _{1<k\le N} \frac{M_k^2}{\langle M\rangle _k}=\max _{1<k\le \epsilon ' N}\frac{M_k^2}{\langle M\rangle _k} \right] \rightarrow 1. \end{aligned}$$

Proof

From the proof of Theorem 1,

$$\begin{aligned} \max _{\epsilon 'N\le k\le N}\frac{M_k^2}{\langle M\rangle _k}{\mathop {\longrightarrow }\limits ^{d}}\max _{t\in [\epsilon ',1]}\frac{W^2(t)}{t}. \end{aligned}$$

Then the facts that \(\max \limits _{\epsilon 'N\le k\le N} \frac{M_k^2}{\langle M\rangle _k}=O_{\mathbf {P}}(1)\) and \(\max \limits _{1<k\le \epsilon 'N}\frac{M_k^2}{\langle M\rangle _k}{\mathop {\longrightarrow }\limits ^{\mathbf {P}}}\infty \) complete the proof. \(\square \)
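The divergence of the early maximum is slow, of law-of-the-iterated-logarithm order \(2\ln \ln m\). The sketch below illustrates it with a standard Gaussian random walk \(S_k\) used as a stand-in for \(M_k\) (exact only when \(\langle M\rangle _k \approx k\mathcal {I}(\vartheta )\), cf. (49)); the stand-in, sample sizes and replication count are assumptions of the sketch:

```python
import numpy as np

# Illustration of max_{1<k<=m} S_k^2/k -> infinity (in probability) at
# log-log speed, with S_k a standard Gaussian random walk.
rng = np.random.default_rng(1)
for m in (10**2, 10**4, 10**6):
    maxima = []
    for _ in range(30):
        S = np.cumsum(rng.standard_normal(m))
        k = np.arange(1, m + 1)
        maxima.append((S**2 / k)[1:].max())   # max over 1 < k <= m
    print(m, np.median(maxima))               # grows like ~ 2 ln ln m
```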

Lemma 11

For any \(0< \varepsilon < 1/2\) the random variables \( \max \limits _{1\le k \le \varepsilon N} \frac{M_k^2}{\langle M \rangle _k}\) and \(\max \limits _{(1-\varepsilon )N \le k \le N} \frac{(M_N - M_k)^2}{\langle M \rangle _N - \langle M \rangle _k}\) are asymptotically independent.

Proof

We have from Lemma 3 and Remark 6 that:

$$\begin{aligned} {\max \limits _{1\le k \le \varepsilon N} \frac{M_k^2}{\langle M \rangle _k}}\xrightarrow [N\rightarrow \infty ]{d}{\max \limits _{0\le t \le \varepsilon } \frac{W(t)^2}{t}} \end{aligned}$$

and

$$\begin{aligned} {\max \limits _{(1-\varepsilon )N \le k \le N} \frac{(M_N - M_k)^2}{\langle M \rangle _N - \langle M \rangle _k}}\xrightarrow [N\rightarrow \infty ]{d}{\max \limits _{1-\varepsilon \le t \le 1} \frac{(W(1)-W(t))^2}{1-t}}. \end{aligned}$$

The Lemma follows from the fact that, for \(t \leqslant \varepsilon \) and \(s \geqslant 1-\varepsilon \), the random variables \(W(t)\) and \(W(1)-W(s)\) are independent, being increments of W over disjoint intervals. \(\square \)


Cite this article

Brouste, A., Cai, C., Soltane, M. et al. Testing for the change of the mean-reverting parameter of an autoregressive model with stationary Gaussian noise. Stat Inference Stoch Process 23, 301–318 (2020). https://doi.org/10.1007/s11203-020-09217-1

