Sequential fixed accuracy estimation for nonstationary autoregressive processes



For an autoregressive process of order p, the paper proposes new sequential estimates for the unknown parameters based on the least squares (LS) method. The sequential estimates use p stopping rules for collecting the data and presume a special modification of the sample Fisher information matrix in the LS estimates. In the case of Gaussian disturbances, the proposed estimates have a non-asymptotic normal joint distribution for any values of the unknown autoregressive parameters. It is shown that in the i.i.d. case with unspecified error distributions, the new estimates are uniformly asymptotically normal for unstable autoregressive processes under a general condition on the parameters. Examples of unstable autoregressive models satisfying this condition are considered.



  1. Ahtola, J., Tiao, G.C. (1987). Distributions of least squares estimators of autoregressive parameters for a process with complex roots on the unit circle. Journal of Time Series Analysis, 8(1), 1–14.

  2. Anderson, T. W. (1959). On asymptotic distributions of estimates of parameters of stochastic difference equations. The Annals of Mathematical Statistics, 30, 676–687.

  3. Anderson, T. W. (1971). The statistical analysis of time series. New York-London-Sydney: Wiley.

  4. Borisov, V. Z., Konev, V. V. (1977). On sequential parameters estimation in discrete time processes. Automation and Remote Control, 6, 58–64.

  5. Box, G., Jenkins, G., Reinsel, G. (2008). Time series analysis: Forecasting and control. Hoboken, N.J.: John Wiley.

  6. Brockwell, P. J., Davis, R. A. (1991). Time series: Theory and methods (2nd ed.). New York: Springer.

  7. Chan, N. H., Wei, C. Z. (1988). Limiting distributions of least squares estimates of unstable autoregressive processes. Annals of Statistics, 16(1), 367–401.

  8. Dickey, D. A., Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366, Part 1), 427–431.

  9. Galtchouk, L., Konev, V. (2001). On sequential estimation of parameters in semimartingale regression models with continuous time parameter. Annals of Statistics, 29(5), 1508–1536.

  10. Galtchouk, L., Konev, V. (2006). Sequential estimation of the parameters in unstable AR(2). Sequential Analysis, 25(1), 25–43.

  11. Greenwood, P. E., Shiryaev, A. N. (1992). Asymptotic minimaxity of a sequential estimator for a first order autoregressive model. Stochastics: An International Journal of Probability and Stochastic Processes, 38(1), 49–65.

  12. Henze, N., Zirkler, B. (1990). A class of invariant consistent tests for multivariate normality. Communications in Statistics - Theory and Methods, 19(10), 3595–3617.

  13. Konev, V. V., Lai, T. L. (1995). Estimators with prescribed precision in stochastic regression models. Sequential Analysis, 14(3), 179–192.

  14. Konev, V., Le Breton, A. (2000). Guaranteed parameter estimation in a first-order autoregressive process with infinite variance. Sequential Analysis, 19(1–2), 25–43.

  15. Konev, V. V., Pergamenshchikov, S. M. (1981). The sequential plans of parameters identification in dynamic systems. Automation and Remote Control, 7, 84–92.

  16. Konev, V., Pergamenshchikov, S. (1997). On guaranteed estimation of the mean of an autoregressive process. Annals of Statistics, 25(5), 2127–2163.

  17. Konev, V. V. (2016). On one property of martingales with conditionally Gaussian increments and its application in the theory of nonasymptotic inference. Doklady Mathematics, 94, 676–680.

  18. Lai, T. L., Siegmund, D. (1983). Fixed accuracy estimation of an autoregressive parameter. Annals of Statistics, 11(2), 478–485.

  19. Lai, T. L., Wei, C. Z. (1983). Asymptotic properties of general autoregressive models and strong consistency of least-squares estimates of their parameters. Journal of Multivariate Analysis, 13(1), 1–23.

  20. Lai, T. L., Wei, C. Z. (1985). Asymptotic properties of multivariate weighted sums with applications to stochastic regression in linear dynamic systems. Multivariate analysis VI (Pittsburgh, Pa., 1983) (pp. 375–393). Amsterdam: North-Holland.

  21. Lee, S. (1994). Sequential estimation for the parameters of a stationary autoregressive model. Sequential Analysis, 13(4), 301–317.

  22. Liptser, R. S., Shiryaev, A. N. (2001). Statistics of random processes. II. Applications (2nd ed.). Berlin: Springer.

  23. Mann, H. B., Wald, A. (1943). On the statistical treatment of linear stochastic difference equations. Econometrica, 11, 173–220.

  24. Mardia, K. V. (1974). Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. Sankhya B: The Indian Journal of Statistics, 36(2), 115–128.

  25. Mikulski, P. W., Monsour, M. J. (1991). Optimality of the maximum likelihood estimator in first-order autoregressive processes. Journal of Time Series Analysis, 12(3), 237–253.

  26. Monsour, M. J., Mikulski, P. W. (1998). On limiting distributions in explosive autoregressive processes. Statistics & Probability Letters, 37(2), 141–147.

  27. Novikov, A. A. (1972). Sequential estimation of the parameters of processes of diffusion type. (Russian). Matematicheskie Zametki, 12, 627–638.

  28. Pergamenshchikov, S. M. (1992). Asymptotic properties of sequential design for estimating the parameter of a first-order autoregression. Theory of Probability & Its Applications, 36, 36–49.

  29. Rao, M. M. (1978). Asymptotic distribution of an estimator of the boundary parameter of an unstable process. Annals of Statistics, 6(1), 185–190.

  30. Royston, J. P. (1983). Some techniques for assessing multivariate normality based on the Shapiro–Wilk W. Applied Statistics, 32, 121–133.

  31. Shiryaev, A. N., Spokoiny, V. G. (2000). Statistical experiments and decisions. Asymptotic theory. Advanced Series on Statistical Science & Applied Probability, 8. World Scientific Publishing Co., Inc., River Edge, NJ.

  32. Sriram, T. N., Iaci, R. (2014). Editor’s special invited paper: Sequential estimation for time series models. Sequential Analysis, 33(2), 136–157.

  33. White, J. S. (1958). The limiting distribution of the serial correlation coefficient in the explosive case. Annals of Mathematical Statistics, 29, 1188–1197.



The authors are grateful to the Associate Editor and to an anonymous referee for their helpful comments which improved this paper.

Author information

Correspondence to Victor Konev.

Additional information

This study was supported by the Ministry of Education and Science of the Russian Federation, goszadanie no. 2.3208.2017/4.6, and in part by RFBR under Grant 16-01-00121.


Additional probabilistic result for square integrable martingales with conditionally Gaussian increments

In order to prove Theorem 1, we first establish the following probabilistic result for square integrable martingales with conditionally Gaussian increments.

Theorem 3

Let \((\varOmega ,\mathcal {F},P)\) be a probability space with a filtration \({(\mathcal {F}_k)}_{k\ge 0}\). Let \({(M_k^{(l)},\mathcal {F}_k)}_{k\ge 0}\), \(l=\overline{1,p}\), be a family of p square integrable martingales with the quadratic characteristics \(\{<M^{(l)}>_n\}_{n\ge 1}\) such that

  1. (a)

    \(P(<M^{(l)}>_{\infty }=+\infty )=1,~l=\overline{1,p};\)

  2. (b)

    \(\mathrm {Law}(\varDelta M_k^{(l)}|\mathcal {F}_{k-1})=\mathcal {N}(0,\sigma _l^2(k-1)),~k=1,2,\ldots ,~l=\overline{1,p},\) i.e., the \(\mathcal {F}_{k-1}\)-conditional distribution of \(\varDelta M_k^{(l)}=M_k^{(l)}-M_{k-1}^{(l)}\) is Gaussian with parameters 0 and \(\sigma _l^2(k-1)=\mathsf {E}((\varDelta M_k^{(l)})^2|\mathcal {F}_{k-1}).\)

    For every \(h>0\), define the sequence of stopping times

    $$\begin{aligned} \tau _l=\tau _l(h)=\inf \left\{ n>\tau _{l-1}:~~\sum \limits _{k=\tau _{l-1}(h)+1}^{n}\sigma _l^2(k-1)\ge h\right\} ,~l=\overline{1,p} \end{aligned}$$

where \(\tau _0=\tau _0(h)=0\), \(\inf \{\emptyset \}=+\infty \), and the sequence of random variables

$$\begin{aligned} m_l(h)=\frac{1}{\sqrt{h}}\sum \limits _{k=\tau _{l-1}+1}^{\tau _l}\sqrt{\beta _k(h,l)}\varDelta M_k^{(l)},~l=1,\ldots ,p, \end{aligned}$$


where

$$\begin{aligned} \beta _k(h,l)=\left\{ \begin{aligned}&1&if~\tau _{l-1}(h)<k<\tau _l(h),\\&\alpha _l(h)&if~k=\tau _l(h); \end{aligned}\right. \end{aligned}$$

and \(\alpha _l(h)\) are correcting factors, \(0<\alpha _l(h)\le 1\), determined by the equation

$$\begin{aligned} \sum \limits _{k=\tau _{l-1}+1}^{\tau _l-1}\sigma _l^2(k-1)+\alpha _l(h)\sigma _l^2(\tau _l(h)-1)=h. \end{aligned}$$

Then, for any \(h>0\), the vector \(m(h)=(m_1(h),\ldots ,m_p(h))'\) has a p-dimensional standard normal distribution, that is, \(m(h)\sim \mathcal {N}(0,I_p).\)
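The construction of Theorem 3 can be checked numerically. The sketch below (Python, illustrative only: the conditional variances \(\sigma _l^2(k-1)\) are driven by a toy \(\mathcal {F}_{k-1}\)-measurable mechanism rather than the AR(p) model, and the function name is ours) builds the stopping times, the correcting factors \(\alpha _l(h)\), and the vector m(h), and confirms by Monte Carlo that its coordinates have zero mean and unit variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_m(h, p=2):
    """One draw of the vector m(h) from Theorem 3 (illustrative sketch).

    The conditional variances sigma_l^2(k-1) come from a toy
    F_{k-1}-measurable mechanism, not from the AR(p) model of the paper."""
    m = np.zeros(p)
    s2 = 1.0                           # sigma_l^2(0), known before step 1
    for l in range(p):
        acc = 0.0                      # accumulated sum of sigma_l^2(k-1)
        weighted = 0.0                 # running sum of sqrt(beta_k) * Delta M_k
        while True:
            if acc + s2 >= h:          # the stopping time tau_l is reached
                alpha = (h - acc) / s2                       # correcting factor in (0, 1]
                weighted += np.sqrt(alpha * s2) * rng.standard_normal()
                break
            acc += s2
            weighted += np.sqrt(s2) * rng.standard_normal()  # beta_k = 1 before tau_l
            s2 = 0.5 + rng.uniform()   # next conditional variance, F_k-measurable
        m[l] = weighted / np.sqrt(h)
        s2 = 0.5 + rng.uniform()
    return m

draws = np.array([sample_m(h=25.0) for _ in range(4000)])
print(draws.mean(axis=0), draws.var(axis=0))   # each coordinate close to mean 0, variance 1
```

Because the correcting factor makes the accumulated conditional variance equal h exactly, each coordinate is N(0, 1) for every h, not only asymptotically.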

Proof of Theorem 3. We will show that the characteristic function of the random vector \(m(h)=(m_1(h),\ldots ,m_p(h))'\) has the form

$$\begin{aligned} \varphi (u)=\varphi (u_1,\ldots ,u_p)=\mathsf {E}\exp \left( i\sum \limits _{l=1}^p m_l(h)u_l\right) =e^{-\frac{1}{2}\sum \limits _{l=1}^p u_l^2} \end{aligned}$$

for any \(u=(u_1,\ldots ,u_p)\in R^p.\)

Let \(\mathcal {F}_{\tau _i}\) denote the \(\sigma \)-algebra of events prior to the stopping time \(\tau _i\), that is,

$$\begin{aligned} \mathcal {F}_{\tau _i}=\left\{ A\in \mathcal {F}_{\infty }:~~A\cap (\tau _i\le k)\in \mathcal {F}_k \text { for every }k\ge 0 \right\} \end{aligned}$$

where \(\mathcal {F}_{\infty }=\sigma \left( \cup _{k\ge 0}\mathcal {F}_k\right) .\) The family of \(\sigma -\)algebras \({\{\mathcal {F}_{\tau _i}\}}_{0\le i \le p}\) is non-decreasing, that is

$$\begin{aligned} \mathcal {F}_{\tau _0}\subset \mathcal {F}_{\tau _1}\subset \ldots \subset \mathcal {F}_{\tau _p}. \end{aligned}$$


Using the representation

$$\begin{aligned} \varphi (u)=\mathsf {E}\left\{ \exp \left( i\sum \limits _{l=1}^{p-1}m_l(h) u_l\right) \mathsf {E}\left( e^{i m_p(h) u_p}\Big |\mathcal {F}_{\tau _{p-1}}\right) \right\} , \end{aligned}$$

one has to verify that

$$\begin{aligned} \mathsf {E}\left\{ e^{i m_1(h) u_1}\right\}= & {} e^{-\frac{u_1^2}{2}}, \end{aligned}$$
$$\begin{aligned} \mathsf {E}\left[ e^{i m_l(h) u_l}\Big |\mathcal {F}_{\tau _{l-1}}\right]= & {} e^{-\frac{u_l^2}{2}},~l=\overline{2,p}. \end{aligned}$$

We introduce the sequence of truncated stopping times \(\overline{\tau }_1(h,N)=\tau _1(h)\wedge N,~N=1,2,\ldots \) and denote

$$\begin{aligned} \xi _N(h)=\frac{1}{\sqrt{h}}\sum \limits _{k=1}^{\overline{\tau }_1(h,N)}\sqrt{\beta _k(h,1)}\varDelta M_k^{(1)}. \end{aligned}$$

Noting that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\xi _N(h)=m_1(h)~~a.s., \end{aligned}$$

one has

$$\begin{aligned} \mathsf {E} e^{i m_1(h) u_1}=\lim \limits _{N\rightarrow \infty }\mathsf {E}e^{i u_1 \xi _N(h)}. \end{aligned}$$

Further we use the equation

$$\begin{aligned} \mathsf {E}e^{i u_1 \xi _N(h)}=e^{-\frac{u_1^2}{2}}\mathsf {E}e^{\xi _N^{(1)}(h,u_1)}+R_N, \end{aligned}$$


where

$$\begin{aligned} R_N= & {} \mathsf {E}e^{\xi _N^{(1)}(h,u_1)}\left( e^{-\xi _N^{(2)}(h,u_1)}-e^{-\frac{u_1^2}{2}}\right) .\nonumber \\ \xi _N^{(1)}(h,u_1)= & {} \sum \limits _{k=1}^{N}\left[ \frac{i u_1 \sqrt{\beta _k(h,1)}\varDelta M_k^{(1)}}{\sqrt{h}}\chi _{\{k\le \tau _1(h)\}}+\frac{u_1^2 \beta _k(h,1)\sigma ^2_1{(k-1)}}{2h}\chi _{\{k\le \tau _1(h)\}}\right] ,\nonumber \\ \xi _N^{(2)}(h,u_1)= & {} \frac{u_1^2}{2h}\sum \limits _{k=1}^{N}\beta _k(h,1)\sigma ^2_{1}(k-1)\chi _{\{k\le \tau _1(h)\}}. \end{aligned}$$

By the definition of stopping time \(\tau _1(h)\) in (38), one obtains

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\xi _N^{(2)}(h,u_1)=\frac{u_1^2}{2}. \end{aligned}$$

Applying the theorem on dominated convergence yields

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }R_N=0. \end{aligned}$$

From here and (37), we come to (35).

Now we check (36). We have

$$\begin{aligned} \mathsf {E}\Big [e^{i m_l(h) u_l}\Big |\mathcal {F}_{\tau _{l-1}}\Big ]= & {} \lim \limits _{n\rightarrow \infty }\mathsf {E}\Big [e^{i m_l(h) u_l}\Big |\mathcal {F}_{\tau _{l-1}\wedge n}\Big ],\nonumber \\ \mathsf {E}\Big [e^{i m_l(h) u_l}\Big |\mathcal {F}_{\tau _{l-1}\wedge n}\Big ]= & {} \sum \limits _{t=0}^{n}\mathsf {E}\Big [e^{i m_l(h) u_l}\Big |\mathcal {F}_{t}\Big ]\chi _{\{\tau _{l-1}=t\}}\nonumber \\= & {} \sum \limits _{t=0}^{n}\mathsf {E}\Big [e^{\xi (h,l,t)}\Big |\mathcal {F}_t\Big ]\chi _{\{\tau _{l-1}=t\}}; \end{aligned}$$


where

$$\begin{aligned} \xi (h,l,t)=\frac{i}{\sqrt{h}}\sum \limits _{k=t+1}^{\tau _l}\sqrt{\beta _k(h,l)}\varDelta M_k^{(l)} u_l. \end{aligned}$$

Introducing truncated stopping times

$$\begin{aligned} \tau _l \wedge N,~~N=t+1,t+2,\ldots , \end{aligned}$$

and the sequence of random variables

$$\begin{aligned} \xi _N(h,l,t)=\frac{i}{\sqrt{h}}\sum \limits _{k=t+1}^{\tau _l\wedge N}\sqrt{\beta _k(h,l)}\varDelta M_k^{(l)} u_l,~~N=t+1,\ldots \end{aligned}$$

and taking into account that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\xi _N(h,l,t)=\xi (h,l,t)~a.s., \end{aligned}$$

we get

$$\begin{aligned} \mathsf {E}\Big [e^{\xi (h,l,t)}\Big |\mathcal {F}_t\Big ]=\lim \limits _{N\rightarrow \infty }\mathsf {E}\Big [e^{\xi _N(h,l,t)}\Big |\mathcal {F}_t\Big ]= e^{-\frac{u_l^2}{2}}. \end{aligned}$$

Further we represent \(\xi _N(h,l,t)\) as

$$\begin{aligned} \xi _N(h,l,t)=\xi _N^{(1)}(h,l,t)-\xi _N^{(2)}(h,l,t), \end{aligned}$$


where

$$\begin{aligned} \xi _N^{(1)}(h,l,t)= & {} \sum \limits _{k=t+1}^{N}\left[ \frac{i u_l \sqrt{\beta _k(h,l)}}{\sqrt{h}}\varDelta M_k^{(l)}\chi _{\{k\le \tau _l\}}+\frac{u_l^2\beta _k(h,l)}{2h}\sigma ^2_{l}(k-1)\chi _{\{k\le \tau _l\}}\right] ,\\ \xi _N^{(2)}(h,l,t)= & {} \frac{u_l^2}{2h}\sum \limits _{k=t+1}^{N}\beta _k(h,l)\sigma ^2_{l}(k-1)\chi _{\{k\le \tau _l(h)\}}. \end{aligned}$$


Then

$$\begin{aligned} \mathsf {E}\Big [e^{\xi _N(h,l,t)}\Big |\mathcal {F}_t\Big ]=e^{-\frac{u_l^2}{2}}\mathsf {E}\left[ e^{\xi _N^{(1)}(h,l,t)}\Big |\mathcal {F}_t\right] +R_N, \end{aligned}$$


where

$$\begin{aligned} R_N=\mathsf {E}\left\{ e^{\xi _N^{(1)}(h,l,t)}\left[ e^{-\xi _N^{(2)}(h,l,t)}-e^{-\frac{u^2_l}{2}}\right] \Big |\mathcal {F}_t\right\} . \end{aligned}$$

Noting that

$$\begin{aligned} \mathsf {E}\left[ e^{\xi _N^{(1)}(h,l,t)}\Big |\mathcal {F}_t\right] =\mathsf {E}\left[ e^{\xi _{N-1}^{(1)}(h,l,t)}\Big |\mathcal {F}_t\right] =\ldots =1, \end{aligned}$$

and letting \(N\rightarrow \infty \) in (41), one gets

$$\begin{aligned} \mathsf {E}\left[ e^{\xi (h,l,t)}\Big |\mathcal {F}_t\right] =e^{-\frac{u^2_l}{2}}. \end{aligned}$$

In view of (39)

$$\begin{aligned} \mathsf {E}\left[ e^{i m_l(h) u_l}|\mathcal {F}_{\tau _{l-1}\wedge n}\right] =e^{-\frac{u_l^2}{2}}\chi _{\{\tau _{l-1}\le n\}}. \end{aligned}$$

Letting \(n\rightarrow \infty \), we arrive at the desired result (36). Theorem 3 is proven. \(\square \)

Proofs of main theorems

Proof of Theorem 1

Substituting \(x_k\) from (1) in (10) yields

$$\begin{aligned} v_i(h)= & {} \sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i,k}(h)}{\langle X_{k-1}(X_{k-1}'\theta +\varepsilon _k)\rangle }_{i}\\= & {} \sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i,k}(h)}{\langle X_{k-1}X_{k-1}'\theta \rangle }_{i}+\sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i,k}(h)}{\langle X_{k-1}\rangle }_i\varepsilon _k\\= & {} \left\langle \sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i,k}(h)}X_{k-1}X_{k-1}'\theta \right\rangle _i+\sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i,k}(h)}x_{k-i}\varepsilon _k\\= & {} \langle \widehat{G}_{i,\tau _i(h)}\theta \rangle _i+\eta _i(h)=\langle G_p(h)\theta \rangle _i+\eta _i(h), \end{aligned}$$

where \(\eta _i(h)=\sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i,k}(h)}x_{k-i}\varepsilon _k\).


Hence, in vector form,

$$\begin{aligned} v(h)=G_p(h)\theta +\eta (h), \end{aligned}$$

where \(\eta (h)=(\eta _1(h),\ldots ,\eta _p(h))'.\)

Combining (11) and (43), one obtains

$$\begin{aligned} G_p(h)(\theta ^*(h)-\theta )=\eta (h). \end{aligned}$$

It remains to show that the random vector \(\eta (h)/\sqrt{h}\) has a p-dimensional standard normal distribution. To this end, we apply Theorem 3. First we introduce the natural filtration \((\mathcal {F}_k)_{k\ge 0}\) of process (1) defined as

$$\begin{aligned} \begin{aligned}&\mathcal {F}_0=\sigma (x_0,\ldots ,x_{1-p}),\\&\mathcal {F}_k=\sigma (x_0,\ldots ,x_{1-p};\varepsilon _1,\ldots ,\varepsilon _k), k\ge 1. \end{aligned} \end{aligned}$$

Note that the random variables \({\left\{ \tau _l(h)\right\} }_{1\le l\le p}\), defined in (7), are stopping times with respect to this filtration for every \(h>0\).

Further we need the following p stochastic processes \({\{M_t^{(l)}\}}_{t\ge 0}\) defined as

$$\begin{aligned} M_0^{(l)}= & {} 0,\\ M_t^{(l)}= & {} \sum \limits _{k=1}^t x_{k-l}\varepsilon _k,~~l=1,\ldots ,p. \end{aligned}$$

These processes are martingales with respect to filtration (44) and, under the assumptions of Theorem 1, they satisfy both conditions (a), (b) of Theorem 3. Moreover, the stopping times (34) employed in Theorem 3 reduce, in the case of AR(p) model, to those given by (7) because \(\sigma ^2_l(k-1)=x_{k-l}^2\).
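For illustration, here is a minimal sketch (Python; the function name, the parameter values, and the seed are our choices) of how the stopping rules (7) act on a simulated AR(2) path: the l-th rule accumulates \(x_{k-l}^2\) from \(\tau _{l-1}(h)+1\) onward until the threshold h is crossed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) path x_k = theta_1 x_{k-1} + theta_2 x_{k-2} + eps_k
# with zero presample values (theta is an illustrative choice).
theta = np.array([0.4, 0.3])
n, p = 5000, 2
x = np.zeros(n + p)                        # x[p + k - 1] stores x_k for k >= 1
for k in range(p, n + p):
    x[k] = theta @ x[k - p:k][::-1] + rng.standard_normal()

def stopping_times(x, p, h):
    """Stopping times tau_1 < ... < tau_p of rule (7): the l-th rule
    accumulates x_{k-l}^2 from tau_{l-1}+1 until the total crosses h."""
    taus, prev = [], 0
    for l in range(1, p + 1):
        acc, k = 0.0, prev
        while acc < h:
            k += 1
            acc += x[p + k - l - 1] ** 2   # the term x_{k-l}^2
        taus.append(k)
        prev = k
    return taus

print(stopping_times(x, p, h=50.0))        # tau_1 < tau_2, both modest for this h
```

Each rule observes a different lagged regressor, which is exactly what makes the sample information matrix in the sequential LS estimate tractable.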

Applying Theorem 3 to the vector \(\eta (h)\Big /\sqrt{h}=\frac{1}{\sqrt{h}}G_p(h)(\theta ^*(h)-\theta )\) with the coordinates

$$\begin{aligned} \frac{\eta _i(h)}{\sqrt{h}}=\frac{1}{\sqrt{h}}\sum \limits _{k=\tau _{i-1}(h)+1}^{\tau _i(h)}\sqrt{\beta _{i k}(h)}x_{k-i}\varepsilon _k, \end{aligned}$$

one comes to the desired result. This completes the proof of Theorem 1. \(\square \)
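In the scalar case p = 1 this construction can be simulated directly. The sketch below (Python, illustrative; the function name and parameter values are ours) runs a unit-root AR(1), applies the stopping rule with the correcting factor \(\alpha \), and checks by Monte Carlo that the normalized deviation has zero mean and unit variance, as Theorem 1 asserts exactly for Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(3)

def corrected_dev(theta, h):
    """Normalized sequential LS deviation for AR(1) with the last-term
    correcting factor alpha; by Theorem 1 (case p = 1) it is exactly
    N(0, 1) for Gaussian noise, for every h > 0 and every theta."""
    x, acc, num = 1.0, 0.0, 0.0        # x_0 = 1 starts the accumulation
    while True:
        s2 = x * x                     # conditional variance x_{k-1}^2
        eps = rng.standard_normal()
        if acc + s2 >= h:              # stopping time tau_1(h) reached
            alpha = (h - acc) / s2     # correcting factor in (0, 1]
            return (num + np.sqrt(alpha) * x * eps) / np.sqrt(h)
        acc += s2
        num += x * eps
        x = theta * x + eps

# Unit root theta = 1, where the ordinary (non-sequential) LS estimate
# is known NOT to be asymptotically normal.
devs = np.array([corrected_dev(theta=1.0, h=200.0) for _ in range(3000)])
print(devs.mean(), devs.var())         # close to 0 and 1
```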

Proof of Theorem 2

In view of the equation for the standardized deviation

$$\begin{aligned} \frac{G_p(h)}{\sqrt{h}}(\theta ^*(h)-\theta )=\frac{\eta (h)}{\sqrt{h}}, \end{aligned}$$

one has to study the asymptotic distribution of the vector \(\eta (h)/\sqrt{h}\) with coordinates (45).

We will show that, for every vector \(\lambda =(\lambda _1,\ldots ,\lambda _p)'\) with \(\lambda _1^2+\ldots +\lambda _p^2=1,\;\lambda _j\ne 0,\) the linear combination \(\lambda '\eta (h)/\sqrt{h}\) satisfies the limiting relation

$$\begin{aligned} \lim _{h\rightarrow \infty }\sup _{\theta \in K}\sup _{-\infty<t<\infty }\left| P_{\theta }\left( \frac{\lambda '\eta (h)}{\sqrt{h}}\le t\right) -\varPhi (t)\right| =0. \end{aligned}$$

Let \(\{y_j\}_{j\ge 1}\) and \(\{z_j\}_{j\ge 1}\) be two sequences of random variables defined as

$$\begin{aligned} y_{j}= & {} \left\{ \begin{aligned}&\lambda _1 \sqrt{\beta _{1,j+1}(h)}x_{j}&if~&0\le j<\tau _1(h);\\&\ldots \\&\lambda _{p-1}\sqrt{\beta _{p-1,j+1}(h)}x_{j-p+2}&if~&\tau _{p-2}(h)\le j<\tau _{p-1}(h);\\&\lambda _p x_{j-p+1}&if~&j\ge \tau _{p-1}(h); \end{aligned}\right. \end{aligned}$$
$$\begin{aligned} z_{j}= & {} \left\{ \begin{aligned}&y_j&if~&0\le j <\tau _p(h)-1;\\&\lambda _p \sqrt{\alpha _p(h)} x_{\tau _p-p}&if~&j=\tau _{p}(h)-1. \end{aligned}\right. \end{aligned}$$

Then \(\lambda '\eta (h)/\sqrt{h}\) can be written as

$$\begin{aligned} \frac{\lambda '\eta (h)}{\sqrt{h}}=\frac{1}{\sqrt{h}}\sum \limits _{j=1}^{\tau _p(h)}z_{j-1}\varepsilon _j=Y(h)+\zeta (h) \end{aligned}$$


where

$$\begin{aligned} Y(h)= & {} \frac{1}{\sqrt{h}}\sum \limits _{j=1}^{\tau _p(h)}y_{j-1}\varepsilon _j, \end{aligned}$$
$$\begin{aligned} \zeta (h)= & {} \frac{1}{\sqrt{h}}\left( \sqrt{\alpha _p(h)}-1\right) y_{\tau _p(h)-1}\varepsilon _{\tau _p(h)}. \end{aligned}$$

Further we establish the following results.

Proposition 6

Under the conditions of Theorem 2, for any set \(K\subset [\varLambda _p]\) satisfying (12),

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\sup _{\theta \in K}\sup _{-\infty<t<\infty }\left| P_{\theta }(Y(h)\le t)-\varPhi (t)\right| =0 \end{aligned}$$

where Y(h) is given by (50).

Lemma 3

Let \(\zeta (h)\) be defined by (51). Then for any \(\varDelta >0\) and any set \(K\subset [\varLambda _p]\) satisfying (12)

$$\begin{aligned} \lim _{h\rightarrow \infty }\sup _{\theta \in K}P_{\theta }\left( |\zeta (h)|>\varDelta \right) =0. \end{aligned}$$

Combining these results with (49), one comes to (14). This completes the proof of Theorem 2. \(\square \)
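Theorem 2 can also be probed numerically for non-Gaussian errors. In the sketch below (Python, illustrative; the function name and the noise choice are ours) the corrected sequential statistic is computed for a unit-root AR(1) driven by centered uniform noise. The correcting factor forces the accumulated conditional variance to equal h exactly, so the statistic has mean 0 and variance 1 for every h, while normality itself is only asymptotic here.

```python
import numpy as np

rng = np.random.default_rng(5)

def corrected_dev_uniform(h):
    """Corrected sequential statistic for a unit-root AR(1) driven by
    centered uniform noise (mean 0, variance 1); Theorem 2 gives only
    asymptotic normality in this non-Gaussian case."""
    x, acc, num = 1.0, 0.0, 0.0
    while True:
        s2 = x * x
        eps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0))   # Var(eps) = 1
        if acc + s2 >= h:
            alpha = (h - acc) / s2                       # correcting factor
            return (num + np.sqrt(alpha) * x * eps) / np.sqrt(h)
        acc += s2
        num += x * eps
        x = x + eps                                      # theta = 1

devs = np.array([corrected_dev_uniform(200.0) for _ in range(3000)])
# Mean 0 and variance 1 hold exactly for every h by the martingale
# construction; Gaussianity of the law appears only as h -> infinity.
print(devs.mean(), devs.var())
```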

The proofs of Proposition 6 and Lemma 3 are rather tedious and are given below in this section.

Some technical results

Some properties of unstable AR(p) model

Here we establish some technical results for the unstable AR(p) process and for the sequence of random variables (47), which are used below. Equation (1) can be written in the form

$$\begin{aligned} X_k=A X_{k-1}+\xi _k,\;k=1,2,\ldots \end{aligned}$$

where \(X_k=(x_k,x_{k-1},\ldots ,x_{k-p+1})'\), \(\xi _k=(\varepsilon _k,0,\ldots ,0)'\); \(A=A(\theta )\) is given by (13).

Lemma 4

Let the process \((X_k)_{k\ge 0}\) satisfy (54) with \(\theta \in [\varLambda _p]\). Then for any \(n\ge 1\)

$$\begin{aligned} \sum \limits _{k=0}^n \Vert X_k\Vert ^2\ge c_p\sum \limits _{k=1}^n \varepsilon _k^2 \end{aligned}$$

where \([\varLambda _p]\) is the closure of region (3),

$$\begin{aligned} c_p= & {} \inf _{\theta \in [\varLambda _p]}\left( (1+\Vert A\Vert ^2)^{1/2}-\Vert A\Vert \right) ^2,\\ \Vert A\Vert ^2= & {} \mathrm {tr}\, A A'. \end{aligned}$$

Proof of Lemma 4. Using Equation (54), one gets

$$\begin{aligned} \sum \limits _{k=1}^n\Vert X_k\Vert ^2=\sum \limits _{k=1}^n X_{k-1}' A' A X_{k-1}+\sum \limits _{k=1}^n \xi _k'A X_{k-1}+\sum \limits _{k=1}^n X_{k-1}' A' \xi _k +\sum \limits _{k=1}^n\Vert \xi _k\Vert ^2. \end{aligned}$$

From here it follows that

$$\begin{aligned} \sum \limits _{k=1}^n \Vert \xi _k\Vert ^2\le & {} \sum \limits _{k=0}^n \Vert X_k\Vert ^2+2\Vert A\Vert \sum \limits _{k=1}^n\Vert X_{k-1}\Vert \Vert \xi _k\Vert \le \sum \limits _{k=0}^n\Vert X_k\Vert ^2 \\&+2\Vert A\Vert \left( \sum \limits _{k=0}^n \Vert X_k\Vert ^2\right) ^{1/2} \left( \sum \limits _{k=1}^n\Vert \xi _k\Vert ^2\right) ^{1/2}\\= & {} \left( \sqrt{\sum \limits _{k=0}^n \Vert X_k\Vert ^2}+\Vert A\Vert \sqrt{\sum \limits _{k=1}^n\Vert \xi _k\Vert ^2}\right) ^2-\Vert A\Vert ^2\sum \limits _{k=1}^n\Vert \xi _k\Vert ^2. \end{aligned}$$

This implies that

$$\begin{aligned} (1+\Vert A\Vert ^2)\sum \limits _{k=1}^n\Vert \xi _k\Vert ^2\le \left( \sqrt{\sum \limits _{k=0}^n \Vert X_k\Vert ^2}+\Vert A\Vert \sqrt{\sum \limits _{k=1}^n\Vert \xi _k\Vert ^2}\right) ^2. \end{aligned}$$


Hence

$$\begin{aligned} \sqrt{\sum \limits _{k=0}^n\Vert X_k\Vert ^2}\ge \left[ \left( 1+\Vert A\Vert ^2\right) ^{1/2}-\Vert A\Vert \right] \left( \sum \limits _{k=1}^n\Vert \xi _k\Vert ^2\right) ^{1/2} \end{aligned}$$

and noting that \(\sum \nolimits _{k=1}^n\Vert \xi _k\Vert ^2=\sum \nolimits _{k=1}^n\varepsilon _k^2\) one comes to (55). Hence Lemma 4. \(\square \)
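Lemma 4 is a pathwise inequality, so it can be verified numerically for any fixed \(\theta \). The sketch below (Python, illustrative; the particular unit-root \(\theta \) and the seed are our choices) checks it along a simulated AR(2) path, using the constant computed for that \(\theta \), which is at least as large as the infimum \(c_p\).

```python
import numpy as np

rng = np.random.default_rng(2)

p, n = 2, 2000
theta = np.array([1.0, 0.0])            # unit-root AR(2); theta in the closure of Lambda_p
A = np.array([[theta[0], theta[1]],     # companion matrix A(theta) as in (13)
              [1.0,      0.0     ]])
normA = np.sqrt(np.trace(A @ A.T))      # ||A|| with ||A||^2 = tr(A A')
c = (np.sqrt(1.0 + normA**2) - normA) ** 2   # constant for this theta (>= c_p)

eps = rng.standard_normal(n + 1)
eps[0] = 0.0                            # eps_0 is not used in the recursion
X = np.zeros((n + 1, p))                # X_0 = 0
for k in range(1, n + 1):
    X[k] = A @ X[k - 1]
    X[k, 0] += eps[k]                   # xi_k = (eps_k, 0)'

lhs = np.cumsum((X ** 2).sum(axis=1))   # sum_{k=0}^n ||X_k||^2
rhs = c * np.cumsum(eps ** 2)           # c * sum_{k=1}^n eps_k^2
print(np.all(lhs >= rhs))               # True: the bound holds pathwise
```

Since the proof uses only the Cauchy–Schwarz inequality and algebra, the bound holds for every realization of the noise, not merely in expectation.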

Lemma 5

Let \((\varepsilon _n)_{n\ge 1}\) in AR(p) model (1) be a sequence of i.i.d. random variables with \(\mathsf {E} \varepsilon _n=0\) and \(\mathsf {E} \varepsilon _n^2=1\) and K be a compact subset of \([\varLambda _p]\) satisfying (12). Then for any \(\delta >0\) and natural number r

$$\begin{aligned} \lim _{m\rightarrow \infty }\sup _{\theta \in K}P_{\theta }\left( \Vert X_{n+r}\Vert ^2\ge \delta \sum \limits _{k=1}^n\Vert X_{k-1}\Vert ^2\text { for some }n\ge m\right) =0. \end{aligned}$$

Proof of Lemma 5. Applying Equation (54) repeatedly yields, for any \(l\ge 1\),

$$\begin{aligned} X_{n+r}=A^{l+r} X_{n-l}+\sum \limits _{i=0}^{l+r-1}A^i\xi _{n+r-i}. \end{aligned}$$

Further, for each \(s=1,2,\ldots \), we define a number \(l_n^{(s)}\) such that

$$\begin{aligned} l_n^{(s)}\in \{l:\;\Vert X_{n-l}\Vert =\min _{1\le j\le s}\Vert X_{n-j}\Vert ,\;1\le l\le s\}. \end{aligned}$$

Substituting \(l_n^{(s)}\) for l in (56), one has

$$\begin{aligned} X_{n+r} = A^{l_n^{(s)}+r}X_{n-l_n^{(s)}}+\sum \limits _{i=0}^{l_n^{(s)}+r-1}A^i \xi _{n+r-i}. \end{aligned}$$

Using the elementary inequalities and taking into account (12), we obtain

$$\begin{aligned} \Vert X_{n+r}\Vert ^2\le & {} 2\Vert A^{l_n^{(s)}+r}\Vert ^2 \Vert X_{n-l_n^{(s)}}\Vert ^2+2\left( \sum \limits _{i=0}^{l_n^{(s)}+r-1}\Vert A^i\Vert \Vert \xi _{n+r-i}\Vert \right) ^2\\\le & {} 2\Vert A^{l_n^{(s)}+r}\Vert ^2 \Vert X_{n-l_n^{(s)}}\Vert ^2+2\sum \limits _{i=0}^{l_n^{(s)}+r-1}\Vert A^i\Vert ^2\sum \limits _{i=0}^{l_n^{(s)}+r-1}\Vert \xi _{n+r-i}\Vert ^2\\\le & {} 2\kappa ^2_p\Vert X_{n-l_n^{(s)}}\Vert ^2+2 (r+s) \kappa ^2_p\sum \limits _{i=0}^{s+r-1}\varepsilon ^2_{n-i}. \end{aligned}$$

From here and Lemma 4, it follows that

$$\begin{aligned} \frac{\Vert X_{n+r}\Vert ^2}{\sum \limits _{k=0}^n\Vert X_k\Vert ^2}\le & {} \frac{2\kappa _p^2\Vert X_{n-l_n^{(s)}}\Vert ^2}{\sum \limits _{k=0}^n\Vert X_k\Vert ^2}+ \frac{2 (r+s) \kappa _p^2\sum \limits _{i=0}^{s+r-1}\varepsilon _{n-i}^2}{c_p\sum \limits _{k=1}^n \varepsilon _k^2}\\\le & {} \frac{2\kappa _p^2}{s}+\frac{2 (r+s) \kappa _p^2\sum \limits _{i=0}^{s+r-1}\varepsilon _{n-i}^2}{c_p\sum \limits _{k=1}^n \varepsilon _k^2},\;s>r+1. \end{aligned}$$

Using this estimate and the strong law of large numbers completes the proof of Lemma 5. \(\square \)

Lemma 6

Let \((y_j)_{j\ge 0}\) be defined by (47) and \(\lambda =(\lambda _1,\ldots ,\lambda _p)'\), \(\lambda _i\ne 0\), \(\lambda _1^2+\ldots +\lambda _p^2=1.\) Then for any \(\delta >0\) and any set \(K\subset [\varLambda _p]\) satisfying (12)

$$\begin{aligned} \lim \limits _{m\rightarrow \infty }\sup _{\theta \in K} P_{\theta }\left( y^2_n\ge \delta \sum \limits _{i=0}^{n-1}y_i^2\text { for some }n\ge m\right) =0. \end{aligned}$$

Proof. Using (47), we represent \(y_{i-1}^2\) as

$$\begin{aligned} y_{i-1}^2=\sum \limits _{k=1}^{p-1}\lambda _k^2 x_{i-k}^2 \chi _{(\tau _{k-1}+1\le i<\tau _k)}+\sum \limits _{k=1}^{p-1}\lambda _k^2 \alpha _k(h) x_{i-k}^2 \chi _{(i=\tau _k)}+\lambda _p^2 x_{i-p}^2 \chi _{(i\ge \tau _{p-1}+1)}. \end{aligned}$$

From here, one has

$$\begin{aligned} \sum \limits _{i=1}^n y_{i-1}^2\ge \lambda _{*}^2\left( \sum \limits _{k=1}^{p-1}\sum \limits _{i=1}^n x_{i-k}^2\chi _{(\tau _{k-1}+1\le i<\tau _k)}+\sum \limits _{i=1}^n x_{i-p}^2\chi _{(i\ge \tau _{p-1}+1)}\right) \end{aligned}$$

where \(\lambda _{*}^2=\min (\lambda _1^2,\ldots ,\lambda _p^2)\). Further we note that

$$\begin{aligned}&\sum \limits _{i=1}^n x_{i-k}^2\chi _{(\tau _{k-1}+1\le i<\tau _k)}= \sum \limits _{j=0}^{n-k} x_{j}^2\chi _{(\tau _{k-1}-k+1\le j<\tau _k-k)},\\&\sum \limits _{i=1}^n x_{i-p}^2\chi _{(i\ge \tau _{p-1}+1)}=\sum \limits _{j=0}^{n-p} x_{j}^2\chi _{(j\ge \tau _{p-1}-p+1)}. \end{aligned}$$


Therefore,

$$\begin{aligned} \sum \limits _{i=1}^n y_{i-1}^2\ge & {} \lambda ^2_* \left( \sum \limits _{k=1}^{p-1}\sum \limits _{j=0}^{n-k} x_{j}^2\chi _{(\tau _{k-1}-k+1\le j<\tau _k-k)}+\sum \limits _{j=0}^{n-p} x_{j}^2\chi _{(j\ge \tau _{p-1}-p+1)}\right) \nonumber \\\ge & {} \lambda _{*}^2 \sum \limits _{j=0}^{n-p}x_j^2\left( \chi _{(0\le j<\tau _{p-1}-p+1)}+\chi _{(j\ge \tau _{p-1}-p+1)}\right) =\lambda _{*}^2 \sum \limits _{j=0}^{n-p}x_j^2. \end{aligned}$$

In view of the identity

$$\begin{aligned} \sum \limits _{k=0}^n\Vert X_k\Vert ^2=\sum \limits _{i=0}^{p-1}\sum \limits _{k=i}^n x_{k-i}^2=p \sum \limits _{l=0}^n x_l^2-\sum \limits _{i=0}^{p-1}\sum \limits _{l=n-i+1}^n x_l^2, \end{aligned}$$

we get

$$\begin{aligned} \sum \limits _{l=0}^n x_l^2\ge \frac{1}{p}\sum \limits _{k=0}^n \Vert X_k\Vert ^2. \end{aligned}$$

Combining (57) and (58), one obtains

$$\begin{aligned}&P_{\theta }\left( y_n^2\ge \delta \sum \limits _{i=0}^{n-1} y_{i}^2\text { for some } n\ge m\right) \\&\quad \le P_{\theta }\left( \max _{1\le k\le p} x_{n-k+1}^2\ge \delta \lambda _*^2\sum \limits _{j=0}^{n-p}x_j^2\text { for some } n \ge m\right) \\&\quad \le P_{\theta }\left( \Vert X_n\Vert ^2\ge \delta \cdot \lambda _*^2\cdot \frac{1}{p}\sum \limits _{k=0}^{n-p}\Vert X_k\Vert ^2\text { for some }n\ge m\right) . \end{aligned}$$

It remains to apply Lemma 5 to arrive at the desired result. Hence Lemma 6. \(\square \)

Proof of Proposition 6

First we note that the sequence \((y_j)\) defined by (47) is adapted to the filtration \((\mathcal {F}_j)\) in (44). In order to show (52), we will use the following probabilistic result for martingales from the paper by Lai and Siegmund (1983).

Lemma 7

(Lai and Siegmund (1983), Proposition 2.1) Let \(\{x_n\}_{n\ge 0}\) and \(\{\varepsilon _n\}_{n\ge 1}\) be sequences of random variables adapted to the increasing sequence of \(\sigma \)-algebras \((\mathcal {F}_n)_{n\ge 0}.\) Let \(\{P_{\theta },\theta \in \varTheta \}\) be a family of probability measures such that under every \(P_\theta \)

  1. A1:

    \(\{\varepsilon _n\}\) are i.i.d. with \(\mathsf {E} \varepsilon _1=0\), \(\mathsf {E} \varepsilon _1^2=1\);

  2. A2:

    \(\sup \nolimits _{\theta }\mathsf {E}_{\theta }\left[ \varepsilon _1^2\chi _{(|\varepsilon _1|>a)}\right] \rightarrow 0\) as \(a\rightarrow \infty \);

  3. A3:

    \(\varepsilon _n\) is independent of \(\mathcal {F}_{n-1}\) for each \(n\ge 1\);

  4. A4:

    \(P_{\theta }\left( \sum \nolimits _{i=0}^{\infty }x_{i}^2=\infty \right) =1\);

  5. A5:

    \(\sup \nolimits _{\theta }P_{\theta }(x^2_n>a)\rightarrow 0\) as \(a\rightarrow \infty \) for each \(n\ge 0\);

  6. A6:

    \(\lim \nolimits _{m\rightarrow \infty }\sup \nolimits _{\theta }P_{\theta }\left( x_n^2\ge \delta \sum \nolimits _{i=0}^{n-1}x_i^2\text { for some }n\ge m\right) =0\) for each \(\delta >0\).

For \(h>0\) let \(T(h)=\inf \left\{ n:~\sum \nolimits _{i=1}^n x_{i-1}^2\ge h\right\} \), \(\inf \{\emptyset \}=+\infty \). Then uniformly in \(\theta \in \varTheta \) and \(-\infty<t<\infty \)

$$\begin{aligned} P_{\theta }\left\{ \frac{1}{\sqrt{h}}\sum \limits _{i=1}^{T(h)}x_{i-1}\varepsilon _i\le t\right\} \rightarrow \varPhi (t)\text { as }h\rightarrow \infty , \end{aligned}$$

where \(\varPhi \) is the standard normal distribution function.

We apply Lemma 7 to the sequence \((y_j)_{j\ge 1}\) which, in view of Lemma 6, satisfies all the conditions A1–A6 for any parametric set K subject to the restriction (12).

It is easy to check that, for the sequence (47), the stopping time

$$\begin{aligned} T(h)=\inf \{n\ge 1:\;\sum \limits _{i=1}^n y_{i-1}^2\ge h\} \end{aligned}$$

coincides with \(\tau _p(h)\) defined in (7).

Therefore, Y(h) in (50) can be written as

$$\begin{aligned} Y(h)=\frac{1}{\sqrt{h}}\sum \limits _{j=1}^{T(h)}y_{j-1}\varepsilon _j \end{aligned}$$

and, by virtue of Lemma 7, one comes to (52). Hence Proposition 6. \(\square \)

Remark 3

In spite of the fact that the sequence \((y_j)\) in (47) depends on the parameter h, the proof of Lemma 7 proceeds along the lines of that of Proposition 2.1 in Lai and Siegmund (1983) and is omitted.

Proof of Lemma 3

Taking into account (60), one gets the following estimate

$$\begin{aligned} P_{\theta }\left( |\zeta (h)|>\varDelta \right)\le & {} P_{\theta } \left( \left| \frac{y_{T(h)-1}\varepsilon _{T(h)}}{\sqrt{h}}\right|>\varDelta \right) \le P_{\theta }\left( |\varepsilon _{T(h)}|>L\right) \\&+P_{\theta }\left( |y_{T(h)-1}|>\varDelta \sqrt{h}/L\right) \\\le & {} \frac{1}{L^2}\mathsf {E}_{\theta }\varepsilon _{T(h)}^2+P_{\theta }\left( \sum \limits _{k=1}^m y_{k-1}^2\ge h\right) +\alpha _{\theta }(m), \end{aligned}$$

where L is a positive constant,

$$\begin{aligned} \alpha _{\theta }(m)=P_{\theta }\left( y^2_{n-1}\ge \frac{\varDelta ^2}{L^2}\sum \limits _{k=1}^{n-1}y_{k-1}^2\text { for some }n\ge m\right) . \end{aligned}$$

Since \((T(h)=m)\in \mathcal {F}_{m-1},\;m=1,2,\ldots ,\) we compute

$$\begin{aligned} \mathsf {E}_{\theta }\varepsilon _{T(h)}^2= & {} \sum \limits _{m\ge 1}\mathsf {E}_{\theta }\varepsilon _m^2\chi _{(T(h)=m)}=\mathsf {E}_{\theta }\sum \limits _{m\ge 1}\mathsf {E}_{\theta }\left( \varepsilon _m^2\chi _{(T(h)=m)}|\mathcal {F}_{m-1}\right) \\= & {} \mathsf {E}_{\theta }\sum \limits _{m\ge 1}\chi _{(T(h)=m)}\mathsf {E}(\varepsilon _m^2|\mathcal {F}_{m-1})=1. \end{aligned}$$
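The identity \(\mathsf {E}_{\theta }\varepsilon _{T(h)}^2=1\) rests only on \((T(h)=m)\in \mathcal {F}_{m-1}\) and is easy to confirm by simulation. In the sketch below (Python, illustrative; a unit-root AR(1) and the function name are our choices) the stopping decision is made before the new noise term enters the dynamics, mirroring the measurability argument.

```python
import numpy as np

rng = np.random.default_rng(4)

def eps_at_stop(h):
    """Return eps_{T(h)}^2 for a unit-root AR(1), where T(h) is the first
    n with sum_{k=1}^n x_{k-1}^2 >= h; the stopping decision depends only
    on F_{m-1}, so eps_{T(h)} is an unbiased draw of the noise."""
    x, acc = 1.0, 0.0
    while True:
        acc += x * x                   # add x_{n-1}^2 to the accumulated sum
        eps = rng.standard_normal()    # eps_n, independent of F_{n-1}
        if acc >= h:
            return eps * eps
        x = x + eps                    # theta = 1

vals = np.array([eps_at_stop(50.0) for _ in range(20000)])
print(vals.mean())                     # approximately 1
```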


$$\begin{aligned} \sup \limits _{\theta \in K}P_{\theta }\left( |\zeta (h)|>\varDelta \right) \le \frac{1}{L^2}+\sup \limits _{\theta \in K}P_{\theta }\left( \sum \limits _{k=1}^m y_{k-1}^2\ge h\right) +\sup \limits _{\theta \in K}\alpha _{\theta }(m). \end{aligned}$$

Further, we note that

$$\begin{aligned} P_{\theta }\left( \sum \limits _{k=1}^m y_{k-1}^2\ge h\right) \le P_{\theta }\left( \sum \limits _{k=1}^m \Vert X_k\Vert ^2\ge h\right) \end{aligned}$$

and that, by the Cauchy–Schwarz inequality, for each \(\theta \in \varLambda _p\)

$$\begin{aligned} \Vert X_k\Vert ^2\le \left( \sum \limits _{j=1}^k\Vert A^{k-j}\Vert \cdot |\varepsilon _j|\right) ^2\le \sum \limits _{j=1}^k\Vert A^{k-j}\Vert ^2\sum \limits _{j=1}^k\varepsilon _j^2\le \kappa _p^2 k \sum \limits _{j=1}^k \varepsilon _j^2. \end{aligned}$$

Therefore,

$$\begin{aligned} P_{\theta }\left( \sum \limits _{k=1}^m y_{k-1}^2\ge h\right) \le P\left( \kappa _p^2\sum \limits _{k=1}^m k \sum \limits _{j=1}^k\varepsilon _j^2\ge h\right) . \end{aligned}$$

It follows that

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\sup \limits _{\theta \in K}P_{\theta }\left( \sum \limits _{k=1}^m y_{k-1}^2\ge h\right) =0. \end{aligned}$$

Now letting \(h\rightarrow \infty \), \(m\rightarrow \infty \) and \(L\rightarrow \infty \) in (47) and taking into account (61) and Lemma 6, we arrive at (53). This completes the proof of Lemma 3. \(\square \)

Proof of Proposition 5

In order to apply Theorem 2, one has to check condition (12).

Since for each \(\theta \in \varLambda _p\)

$$\begin{aligned} \sup _{n\ge 1}\Vert A^n(\theta )\Vert ^2\le \sum \limits _{j\ge 0}\Vert A^j(\theta )\Vert ^2, \end{aligned}$$

it suffices to show that for any compact set \(K\subset \varLambda _p\)

$$\begin{aligned} \sup _{\theta \in K}\sum \limits _{j\ge 0}\Vert A^j\Vert ^2<\infty . \end{aligned}$$

We set

$$\begin{aligned} \gamma (\theta )=\sum \limits _{j\ge 0}A^j(\theta )(A'(\theta ))^j. \end{aligned}$$

This matrix series satisfies the equation

$$\begin{aligned} \gamma (\theta )-A(\theta )\gamma (\theta )A'(\theta )=I_p \end{aligned}$$

where \(I_p\) is the \(p\times p\) identity matrix. This equation can be rewritten as

$$\begin{aligned} (I_{p^2}-A(\theta )\otimes A(\theta ))\text {vec }\gamma (\theta )=\text {vec}(I_p) \end{aligned}$$

where the vector operation \(\text {vec}(\cdot )\) is defined by

$$\begin{aligned} \text {vec}(v)=(v_{1,1},\ldots ,v_{p,1},\ldots ,v_{1,p},\ldots ,v_{p,p})' \end{aligned}$$

and \(U\otimes V=(U_{ij}\cdot V_{kl})_{1\le i,j,k,l\le p}\) is the Kronecker product of the \(p\times p\) matrices \(U=\Vert U_{ij}\Vert \) and \(V=\Vert V_{ij}\Vert \). The matrix \(I_{p^2}-A(\theta )\otimes A(\theta )\) is invertible because the eigenvalues of \(A(\theta )\otimes A(\theta )\) are the pairwise products of the eigenvalues of \(A(\theta )\) and hence are less than one in modulus for any \(\theta \in \varLambda _p\). Therefore

$$\begin{aligned} \text {vec }\gamma (\theta )=(I_{p^2}-A(\theta )\otimes A(\theta ))^{-1}\text {vec}(I_p). \end{aligned}$$
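As a quick numerical check (a sketch, not the authors' code), the vec/Kronecker identity above can be used to compute \(\gamma (\theta )\) for a concrete stable companion matrix; the AR(2) coefficients (0.5, -0.3) below are illustrative assumptions:

```python
import numpy as np

def stationary_gram(A):
    """Solve gamma - A gamma A' = I_p via the vec/Kronecker identity
    (I_{p^2} - A (x) A) vec(gamma) = vec(I_p), using column-major vec."""
    p = A.shape[0]
    lhs = np.eye(p * p) - np.kron(A, A)
    vec_gamma = np.linalg.solve(lhs, np.eye(p).flatten(order="F"))
    return vec_gamma.reshape((p, p), order="F")

# Companion matrix of a stable AR(2) with hypothetical coefficients (0.5, -0.3);
# its eigenvalues lie inside the unit circle, so the series for gamma converges.
A = np.array([[0.5, -0.3],
              [1.0,  0.0]])
gamma = stationary_gram(A)

# Cross-check against the defining series sum_{j>=0} A^j (A')^j
series = sum(np.linalg.matrix_power(A, j) @ np.linalg.matrix_power(A.T, j)
             for j in range(200))
```

The column-major `vec` matches the column-stacking convention under which \(\text {vec}(AXB)=(B'\otimes A)\,\text {vec}(X)\), which is what reduces the matrix equation to the linear system solved above.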

Now we prove (29) and (30). It is well known (Anderson 1971) that the sample Fisher information matrix (5) has the property

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{G_n}{n}=F\quad \text { a.s.}, \end{aligned}$$

for each \(\theta \in \varLambda _p\), where F is a positive definite matrix satisfying the equation

$$\begin{aligned} F-A F A'=\Vert \delta _{1,i}\delta _{1,j}\Vert . \end{aligned}$$

By making use of the definitions of stopping times (7), we obtain

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\frac{\tau _i(h)-\tau _{i-1}(h)}{h}=\frac{1}{\langle F\rangle _{11}}. \end{aligned}$$

Further we have

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\frac{\hat{G}_{i,\tau _i(h)}}{\tau _i(h)-\tau _{i-1}(h)}=F. \end{aligned}$$

Combining the last two limits, one gets

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\frac{G_p(h)}{h}=\lim \limits _{h\rightarrow \infty }\frac{\hat{G}_{i,\tau _i(h)}}{h}=\frac{F}{\langle F\rangle _{11}}\quad \text { a.s.} \end{aligned}$$

and

$$\begin{aligned} \lim \limits _{h\rightarrow \infty }\frac{\tau _p(h)}{h}=\frac{p}{\langle F\rangle _{11}}. \end{aligned}$$

This completes the proof of Proposition 5. \(\square \)
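The limit \(\tau _p(h)/h\rightarrow p/\langle F\rangle _{11}\) can be sanity-checked by simulation. The sketch below takes p = 1, where F reduces to the stationary second moment \(1/(1-\theta ^2)\) of a stable AR(1); the coefficient, threshold and seed are hypothetical choices for illustration only:

```python
import numpy as np

# For a stable AR(1) y_n = theta * y_{n-1} + eps_n, F = 1/(1 - theta^2), a
# single stopping rule tau(h) = inf{n : sum_{i<=n} y_{i-1}^2 >= h} applies,
# and tau(h)/h should approach 1/F as h grows.
rng = np.random.default_rng(1)
theta, h = 0.5, 5000.0
F = 1.0 / (1.0 - theta**2)          # stationary second moment, here 4/3

y, acc, n = 0.0, 0.0, 0
while acc < h:                      # stop at the first n with sum y_{i-1}^2 >= h
    n += 1
    acc += y * y                    # step n adds y_{n-1}^2
    y = theta * y + rng.standard_normal()

ratio = n / h                       # should be close to 1/F = 0.75 for large h
```

The deviation of `ratio` from 1/F is of a smaller order than h, in line with the almost sure convergence stated above.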

Konev, V., Nazarenko, B. (2020). Sequential fixed accuracy estimation for nonstationary autoregressive processes. Annals of the Institute of Statistical Mathematics, 72, 235–264. https://doi.org/10.1007/s10463-018-0689-2



  • Unstable autoregressive process
  • Non-asymptotic distribution of estimates
  • Sequential least squares method
  • Uniform asymptotic normality of estimates