Confidence Estimation of Autoregressive Parameters Based on Noisy Data

Abstract

We consider the problem of estimating the parameters of an autoregressive process from observations corrupted by additive noise. A sequential method is developed for constructing a fixed-size confidence domain with a given confidence factor for the vector of unknown parameters from a finite sample. Formulas are obtained for the duration of the procedure that guarantees the required accuracy of the parameter estimates in the case of Gaussian noise. The confidence estimates are constructed using a special sequential modification of the classical Yule–Walker estimates, which makes it possible to control the confidence factor for small and moderate sample sizes. The results of numerical simulation of the proposed estimates are presented and compared with the Yule–Walker estimates using confidence estimation of the spectral density as an example.

REFERENCES

  1. Ljung, L. and Söderström, T., Theory and Practice of Recursive Identification, Cambridge, MA: MIT Press, 1986.

  2. Anderson, T.W., The Statistical Analysis of Time Series, New York: Wiley, 1971. Translated under the title: Statisticheskii analiz vremennykh ryadov, Moscow: Mir, 1976.

  3. Brockwell, P.J. and Davis, R.A., Time Series: Theory and Methods, New York: Springer Sci.+Business Media, 1991.

  4. Vasil’ev, V.A., Dobrovidov, A.V., and Koshkin, G.M., Neparametricheskoe otsenivanie funktsionalov ot raspredelenii statsionarnykh posledovatel’nostei (Nonparametric Estimation of Functionals of Distributions of Stationary Sequences), Moscow: Nauka, 2004.

  5. Kashkovskii, D.V. and Konev, V.V., Successive identification of the random-parameter linear dynamic systems, Autom. Remote Control, 2008, vol. 69, no. 8, pp. 1344–1356.

  6. Konev, V.V. and Pergamenshchikov, S.M., Robust model selection for a semimartingale continuous time regression from discrete data, Stochastic Process. Their Appl., 2015, vol. 125, no. 1, pp. 294–326.

  7. Emel’yanova, T.V. and Konev, V.V., On sequential estimation of the parameters of continuous-time trigonometric regression, Autom. Remote Control, 2016, vol. 77, no. 6, pp. 992–1008.

  8. Seber, G.A.F., Linear Regression Analysis, New York: John Wiley and Sons, 1977. Translated under the title: Lineinyi regressionnyi analiz, Moscow: Mir, 1980.

  9. Novikov, A.A., Sequential estimation of parameters of diffusion processes, Teoriya Veroyatn. Ee Primen., 1971, vol. 16, no. 2, pp. 394–396.

  10. Liptser, R.Sh. and Shiryaev, A.N., Statistika sluchainykh protsessov (Statistics of Random Processes), Moscow: Nauka, 1974.

  11. Lai, T.L. and Siegmund, D., Fixed accuracy estimation of an autoregressive parameter, Ann. Stat., 1983, vol. 11, pp. 478–485.

  12. Galtchouk, L. and Konev, V., On asymptotic normality of sequential LS-estimate for unstable autoregressive process AR(2), J. Multivariate Anal., 2010, vol. 101, no. 10, pp. 2616–2636.

  13. Borisov, V.Z. and Konev, V.V., On sequential parameter estimation in discrete-time processes, Autom. Remote Control, 1977, vol. 38, no. 10, pp. 1475–1480.

  14. Vorobeichikov, S.E. and Konev, V.V., On sequential identification of stochastic systems, Izv. Akad. Nauk SSSR. Tekhn. Kibern., 1980, no. 4, pp. 91–98.

  15. Konev, V.V. and Pergamenshchikov, S.M., Sequential plans of parameter identification in dynamic systems, Autom. Remote Control, 1981, vol. 42, no. 7 (Part 1), pp. 917–924.

  16. Vasil’ev, V.A. and Konev, V.V., Sequential estimation of parameters of dynamic systems under incomplete observation, Izv. Akad. Nauk SSSR. Tekhn. Kibern., 1982, no. 6, pp. 145–154.

  17. Xia, Y. and Zheng, W.X., Novel parameter estimation of autoregressive signals in the presence of noise, Automatica, 2015, vol. 62, pp. 98–105.

  18. Kulikova, M.V., Maximum likelihood estimation of linear stochastic systems in the class of sequential square-root orthogonal filtering methods, Autom. Remote Control, 2011, vol. 72, no. 4, pp. 766–786.

  19. Diversi, R., Guidorzi, R., and Soverini, U., Identification of autoregressive models in the presence of additive noise, Int. J. Adapt. Control Signal Process., 2008, vol. 22, no. 5, pp. 465–481.

  20. Labarre, D., Grivel, E., Berthoumieu, Y., Todini, E., and Najim, M., Consistent estimation of autoregressive parameters from noisy observations based on two interacting Kalman filters, Signal Process., 2006, vol. 86, no. 10, pp. 2863–2876.

  21. Zheng, W.X., Fast identification of autoregressive signals from noisy observations, IEEE Trans. Circuits Syst. II: Express Briefs, 2005, vol. 52, no. 1, pp. 43–48.

  22. Pagano, M., Estimation of models of autoregressive signal plus white noise, Ann. Stat., 1974, vol. 2, no. 1, pp. 99–108.

  23. Konev, V.V., On one property of martingales with conditionally Gaussian increments and its application in the theory of nonasymptotic inference, Dokl. Math., 2016, vol. 471, no. 5, pp. 523–527.

  24. Konev, V. and Nazarenko, B., Sequential fixed accuracy estimation for nonstationary autoregressive processes, Ann. Inst. Stat. Math., 2020, vol. 72, no. 1, pp. 235–264.

  25. Vorobeichikov, S.E. and Konev, V.V., On sequential confidence estimation of parameters of stochastic dynamical systems with conditionally Gaussian noises, Autom. Remote Control, 2017, vol. 78, no. 10, pp. 1803–1818.

  26. Kurzhanskii, A.B. and Furasov, V.D., Identification of bilinear systems. Guaranteed pseudoellipsoidal estimates, Autom. Remote Control, 2000, vol. 61, no. 1, pp. 38–49.

  27. Konev, V.V. and Pergamenshchikov, S.M., General model selection estimation of a periodic regression with a Gaussian noise, Ann. Inst. Stat. Math., 2010, vol. 62, no. 6, pp. 1083–1111.

  28. Shiryaev, A.N., Statisticheskii posledovatel’nyi analiz (Statistical Sequential Analysis), Moscow: Nauka, 1976.

  29. Tartakovsky, A., Nikiforov, I., and Basseville, M., Sequential Analysis: Hypothesis Testing and Changepoint Detection, Chapman & Hall/CRC Press, 2015.

ACKNOWLEDGMENTS

The authors are grateful to the anonymous referees for their constructive comments.

Funding

This work was supported by the Russian Science Foundation, project no. 17-11-01049.

Author information

Correspondence to V. V. Konev or A. V. Pupkov.

Additional information

Translated by V. Potapchouck

APPENDIX

Here we state Theorems A.1 and A.2 on the properties of stopped martingales with conditionally Gaussian increments, taken from the papers [23] and [24]; these theorems were used in Secs. 2 and 3 to select the weight coefficients (2.15) and (3.9), (3.10) in the sequential Yule–Walker estimates (2.14) and (3.11) and to determine the duration of the procedure. We also prove some technical results.

Theorem A.1.

Let \(\left (M_k, \mathcal {F}_k \right )_{k\geqslant 0}\) be a square integrable martingale [23] such that

  1. (a)

    Its quadratic characteristic satisfies the condition

    $$ \mathsf {P}_\theta \left (\langle M\rangle _\infty =+\infty \right )=1. $$
  2. (b)

    \(\mathrm {Law}\thinspace (\varDelta M_k|\mathcal {F}_{k-1})=\mathcal {N}(0,\sigma ^2_{k-1}) \), \( k=1, 2, \dots \); i.e., the \( \mathcal {F}_{k-1}\) -conditional distribution of \(\varDelta M_k=M_k-M_{k-1} \) is Gaussian with parameters \(0\) and \(\sigma ^2_{k-1}=\mathbf {E}\left ((\varDelta M_k)^2|\mathcal {F}_{k-1} \right )\).

For each \(h>0 \) , we define the stopping time

$$ \tau =\tau (h)=\inf \left \{n>0:\sum _{k=1}^{n}\sigma _{k-1}^2\geqslant h \right \}, \quad \inf \{\varnothing \}=\infty , $$
(A.1)
and the random variable
$$ m(h)=\frac {1}{\sqrt {h}}\sum _{k=1}^{\tau (h)}\sqrt {\beta _k(h)}\varDelta M_k, \quad \beta _k(h)=\left \{ \begin {aligned} &1&&\text { if }1\leqslant k<\tau (h)\\ &\alpha (h)&&\text { if }k=\tau (h); \end {aligned} \right . $$
here \( \alpha (h)\) is a multiplier determined from the equation
$$ \sum _{k=1}^{\tau (h)-1}\sigma _{k-1}^2+\alpha (h)\sigma _{\tau (h)-1}^2=h.$$

Then for each \(h>0 \) the variable \( m(h)\) is standard Gaussian.
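
The mechanism of Theorem A.1 can be checked numerically. The sketch below is our own illustration, not part of the paper: it assumes a martingale \(M_n=\sum_{k\leqslant n}y_{k-1}\varepsilon_k\) driven by a stable AR(1) sequence \(y_k\) with unit Gaussian noise, computes \(\tau(h)\), the correction factor \(\alpha(h)\), and \(m(h)\), and verifies empirically that \(m(h)\) is standard Gaussian for a fixed \(h\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_m(h, theta=0.5, n_max=1_000_000):
    """One draw of m(h) from Theorem A.1 for the (assumed) martingale
    M_n = sum_{k<=n} y_{k-1} eps_k, where y is a Gaussian AR(1) sequence;
    here sigma_{k-1}^2 = E((Delta M_k)^2 | F_{k-1}) = y_{k-1}^2."""
    y, total, m = 0.0, 0.0, 0.0
    for _ in range(n_max):
        sigma2 = y * y                     # conditional variance of the next increment
        eps = rng.standard_normal()
        if total + sigma2 >= h:            # the stopping time tau(h) is reached
            alpha = (h - total) / sigma2   # correction factor alpha(h)
            return (m + np.sqrt(alpha) * y * eps) / np.sqrt(h)
        total += sigma2
        m += y * eps                       # beta_k(h) = 1 for k < tau(h)
        y = theta * y + rng.standard_normal()
    raise RuntimeError("h not reached; increase n_max")

draws = np.array([sample_m(h=100.0) for _ in range(2000)])
print(draws.mean(), draws.var())           # approximately 0 and 1 for any h > 0
```

Without the correction factor \(\alpha(h)\) the last increment would overshoot the threshold \(h\) and the normalized sum would have variance exceeding one; the weight \(\sqrt{\alpha(h)}\) trims the overshoot exactly.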

Proof of Lemma 1. Let us verify that \(\left (\zeta _1(n),\mathcal {F}_n^{(1)}\right )_{n\geqslant 3} \) is a martingale. The measurability of \(\zeta _1(n) \) with respect to \(\mathcal {F}_n^{(1)} \) follows from definitions (2.10) and (2.11). Let us show that \( \mathbf {E}\left (\zeta _1(n+1)|\mathcal {F}_n^{(1)}\right )=\zeta _1(n) \). Consider, for example, the case of even \(n \). Let \(n=2l \); then

$$ \zeta _1(2l+1)=\sum _{k=3}^{2l+1}\chi _{\{k\in T_1(2l+1)\}}y_{k-2}\xi _k.$$
Hence
$$ \begin {aligned} \mathbf {E}\left (\zeta _1(2l+1)|\mathcal {F}_{2l-1}^{(1)}\right )&=\mathbf {E}\left (\thinspace \sum _{k=3}^{2l+1}\chi _{\{k\in T_1(2l+1)\}}y_{k-2}\xi _{k}|\mathcal {F}_{2l-1}^{(1)} \right )\\ &=\sum _{k=3}^{2l}\chi _{\{k\in T_1(2l+1)\}}y_{k-2}\xi _k+\chi _{\{(2l+1)\in T_1(2l+1)\}}y_{2l-1}\mathbf {E}\left (\xi _{2l+1}|\mathcal {F}_{2l-1}^{(1)}\right )=\zeta _1(2l). \end {aligned} $$

The martingale property of \(\zeta _2(n)\) can be verified in a similar way.

This completes the proof of Lemma 1. \(\quad \blacksquare \)

Proof of Lemma 2. Let us show that the desired result follows from Theorem A.1. We introduce the random processes

$$ \begin {aligned} M_n^{(1)}&=\sum _{k=3}^{n}\chi _{\{k\in T_1(n)\}}y_{k-2}\tilde {\xi }_k, \\ M_n^{(2)}&=\sum _{k=3}^{n}\chi _{\{k\in T_2(n)\}}y_{k-2}\tilde {\xi }_k, \quad n\geqslant 3. \end {aligned} $$
(A.2)

The processes \(\left (M_n^{(1)}, \mathcal {F}_n^{(1)} \right )_{n\geqslant 3}\) and \(\left (M_n^{(2)}, \mathcal {F}_n^{(2)} \right )_{n\geqslant 3}\) are martingales with

$$ \begin {aligned} \mathbf {E}_\theta \left (\left (\varDelta M_n^{(1)} \right )^2|\mathcal {F}_{n-1}^{(1)} \right )&=y_{n-2}^2\mathbf {E}_\theta \tilde {\xi }_n^2\chi _{\{n\in T_1(n)\}},&\\ \mathbf {E}_\theta \left (\left (\varDelta M_n^{(2)} \right )^2|\mathcal {F}_{n-1}^{(2)} \right )&=y_{n-2}^2\mathbf {E}_\theta \tilde {\xi }_n^2\chi _{\{n\in T_2(n)\}};&\mathbf {E}\tilde {\xi }_n^2=\Delta ^2+\sigma ^2. \end {aligned}$$
(A.3)

Let us verify these properties for \(M_n^{(1)} \). By the definition of \(T_1(n) \) and \(T_2(n) \) in (2.9), we obtain

$$ M_n^{(1)}=\sum _{k=3}^{n-1}\chi _{\{k\in T_1(n)\}}y_{k-2}\tilde {\xi }_k+\chi _{\{n\in T_1(n)\}}y_{n-2}\tilde {\xi }_n=M_{n-1}^{(1)}+\chi _{\{n\in T_1(n)\}}y_{n-2}\tilde {\xi }_n. $$
Hence \(\varDelta M_n^{(1)}=\chi _{\{n\in T_1(n)\}}y_{n-2}\tilde {\xi }_n\). Therefore,
$$ \mathbf {E}_\theta \left (\left (\varDelta M_n^{(1)} \right )^2|\mathcal {F}_{n-1}^{(1)} \right )=\chi _{\{n\in T_1(n)\}}\mathbf {E}_\theta \left (y_{n-2}^2\tilde {\xi }_n^2| \mathcal {F}_{n-1}^{(1)}\right ).$$

Note that if \(n\in T_1(n)\), then \(\mathcal {F}_{n-1}^{(1)}=\mathcal {F}_{n-2}^{(1)}\). Indeed, if, for example, \(n \) is odd, then \(n-1 \) is even; consequently, \(m_1(n)=n \) and \(m_1(n-1)=m_1(n-2) \), whence \(\mathcal {F}_{n-1}^{(1)}=\mathcal {F}_{n-2}^{(1)}\) by definition. Since \(\tilde {\xi }_n\) is independent of \(\mathcal {F}_{n-2}^{(1)} \), we have

$$ \mathbf {E}_\theta \left (\left (\varDelta M_n^{(1)} \right )^2|\mathcal {F}_{n-1}^{(1)} \right )=y_{n-2}^2\mathbf {E}_\theta \left (\tilde {\xi }_n^2\chi _{\{n\in T_1(n)\}} \right ).$$
Further, note that the martingales \(\left (M_n^{(1)},\mathcal {F}_n^{(1)} \right )\) and \(\left (M_n^{(2)},\mathcal {F}_n^{(2)} \right )\) defined by relations (A.2) and their quadratic characteristics (A.3) satisfy the assumptions of Theorem A.1. In this case, the stopping time \(\tau _1(h) \) in (2.12) coincides with (A.1). By Theorem A.1, the random variables \(\tilde {\zeta }_1(h)/\sqrt {h} \) and \(\tilde {\zeta }_2(h)/\sqrt {h} \) are standard Gaussian. This completes the proof of Lemma 2. \(\quad \blacksquare \)

Proof of Proposition 2. The model (2.1), (2.2) is written in vector form as

$$ \begin {aligned} X_k&=AX_{k-1}+\nu _k, \\ Y_k&=X_k+\zeta _k, \end {aligned}$$
(A.4)
where \(X_k=(x_k, \dots , x_{k-p+1})^{\mathrm {T}}\), \(\nu _k=(\varepsilon _{k}, 0, \dots , 0)^{\mathrm {T}}\), \(\zeta _k=(\eta _k, \dots , \eta _{k-p+1})^{\mathrm {T}}\), and
$$ A=\begin {pmatrix} \theta _1 &\dots &\theta _p\\ I_{p-1} & &0 \end {pmatrix}. $$
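
For concreteness, here is a minimal simulation sketch of the state-space form (A.4); the coefficient values and noise scales below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def companion(theta):
    """Companion matrix A in (A.4) for AR(p) coefficients theta_1, ..., theta_p."""
    p = len(theta)
    A = np.zeros((p, p))
    A[0, :] = theta                   # first row: theta_1 ... theta_p
    A[1:, :p - 1] = np.eye(p - 1)     # subdiagonal block I_{p-1}
    return A

def simulate_noisy_ar(theta, n, sigma=1.0, delta=0.5, seed=0):
    """X_k = A X_{k-1} + nu_k with nu_k = (eps_k, 0, ..., 0)^T;
    observations y_k = x_k + eta_k, eta_k ~ N(0, delta^2)."""
    rng = np.random.default_rng(seed)
    A = companion(theta)
    X = np.zeros(len(theta))
    x = np.empty(n)
    for k in range(n):
        X = A @ X
        X[0] += sigma * rng.standard_normal()
        x[k] = X[0]
    return x, x + delta * rng.standard_normal(n)

x, y = simulate_noisy_ar(theta=[0.4, 0.3], n=1000)  # latent x_k and noisy y_k
```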

To analyze \(\tau (h)\), we need the asymptotic behavior of the sum

$$ C_n^{(i)}=\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}Y_{k-p-1}Y^{\mathrm {T}}_{k-p-1}.$$
Substituting \(Y_k \) from (A.4), we have the decomposition
$$ \begin {gathered} C_n^{(i)}=U_n^{(i)}+V_n^{(i)}+R_n^{(i)};\\ \begin {aligned} U_n^{(i)}&=\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}X_{k-p-1}X^{\mathrm {T}}_{k-p-1}, \\ V_n^{(i)}&=\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}\zeta _{k-p-1}\zeta ^{\mathrm {T}}_{k-p-1},\\ R_n^{(i)}&=\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}\zeta _{k-p-1}X^{\mathrm {T}}_{k-p-1}+\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}X_{k-p-1}\zeta _{k-p-1}^{\mathrm {T}}. \end {aligned} \end {gathered} $$
(A.5)

Considering (3.1), we obtain

$$ U_n^{(i)}=\sum _{j=1}^{d_i(n)}Z_jZ_j^{\mathrm {T}}, \quad Z_j=X_{(p+1)j+i-2}.$$
(A.6)

The sequence \(\{Z_j\}\) satisfies the vector autoregression equation

$$ Z_j=A^{p+1}Z_{j-1}+w_j, \quad w_j=\sum _{s=1}^{p+1}A^{p+1-s}\nu _{(p+1)(j-1)+i-2+s}$$
with
$$ \mathbf {E} w_j=0,\quad \mathbf {E} w_jw_j^{\mathrm {T}}=\sum _{l=0}^{p}A^lB(A^{\mathrm {T}})^l=\tilde {B},\quad B=\lVert \sigma ^2\delta _{1,i}\delta _{1,j} \rVert . $$

Since the process \(Z_j\) is stable, one has (see, e.g., [2])

$$ \begin {gathered} \lim _{n\rightarrow \infty }\frac {1}{n}\sum _{j=1}^{n}Z_jZ_j^{\mathrm {T}}=\tilde {F}\enspace \text {a.s.}; \\ \tilde {F}=\sum \limits _{j\geqslant 0}A^{(p+1)j}\tilde {B}(A^{\mathrm {T}})^{(p+1)j}=\sum \limits _{j\geqslant 0}A^jB(A^{\mathrm {T}})^j=:F. \end {gathered}$$
(A.7)
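
For a stable \(A\), the series \(F=\sum_{j\geqslant 0}A^jB(A^{\mathrm{T}})^j\) is the unique solution of the discrete Lyapunov equation \(F=AFA^{\mathrm{T}}+B\), so it can be computed exactly rather than by truncating the sum. A sketch (the AR(2) coefficients are an illustrative assumption):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

theta = np.array([0.4, 0.3])            # illustrative stable AR(2) coefficients
p, sigma2 = len(theta), 1.0
A = np.zeros((p, p)); A[0] = theta; A[1:, :p - 1] = np.eye(p - 1)
B = np.zeros((p, p)); B[0, 0] = sigma2  # B = ||sigma^2 delta_{1i} delta_{1j}||

F = solve_discrete_lyapunov(A, B)       # solves F = A F A^T + B, as in (A.7)
print(F[0, 0])                          # <F>_{11}: stationary variance of x_k
```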

From (A.6) and (A.7), in view of (3.1), we find that

$$ \lim \limits _{n\rightarrow \infty }\frac {U_n^{(i)}}{n}=\lim \limits _{n\rightarrow \infty }\frac {U_n^{(i)}}{d_i(n)}\frac {d_i(n)}{n}=\frac {F}{p+1}\enspace \text {a.s.}$$
(A.8)

Further, we directly verify that

$$ \lim \limits _{n\rightarrow \infty }\frac {V_n^{(i)}}{n}=\frac {\Delta ^2}{p+1}I_p, \quad \lim \limits _{n\rightarrow \infty }\frac {R_n^{(i)}}{n}=0\enspace \text {a.s.}$$
Based on this and (A.5) and (A.8), we obtain
$$ {\lim \limits _{n\rightarrow \infty }n^{-1}{C_n^{(i)}}=\left (p+1\right )^{-1}\left (F+\Delta ^2I_p\right )}. $$
Since
$$ \sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}y_{k-p-l}^2=\langle C_n^{(i)}\rangle _{{ll}}, $$
we have
$$ \lim \limits _{n\rightarrow \infty }\frac {1}{n}\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}y_{k-p-l}^2=\frac {1}{p+1}\big (\langle F\rangle _{{11}}+\Delta ^2 \big ), \quad l={1,\ldots ,p}. $$
(A.9)

Now let us find the asymptotics of the stopping times \(\tau _l^{(i)}(h) \). By the definition of \(\tau _1^{(i)}(h) \) in (3.7), we have

$$ \sum _{k=2p+1}^{\tau _1^{(i)}(h)-1}\chi _{\left \{k\in T_i\left (\tau _1^{(i)}\right )\right \}}y_{k-p-1}^2<\frac {h}{\Delta ^2+\sigma ^2}\leqslant \sum _{k=2p+1}^{\tau _1^{(i)}(h)}\chi _{\left \{k\in T_i\left (\tau _1^{(i)}\right )\right \}}y_{k-p-1}^2.$$
Hence, using (A.9), we obtain
$$ \lim \limits _{h\rightarrow \infty }\frac {h}{(\Delta ^2+\sigma ^2)\tau _1^{(i)}(h)}=\frac {1}{p+1}\big (\langle F\rangle _{{11}}+\Delta ^2\big ); $$
i.e.,
$$ \lim \limits _{h\rightarrow \infty }\frac {\tau _1^{(i)}(h)}{h}=\frac {p+1}{(\Delta ^2+\sigma ^2)\big (\langle F\rangle _{11}+\Delta ^2\big )}.$$

In a similar way, we find that

$$ \lim \limits _{h\rightarrow \infty }\frac {\tau _l^{(i)}(h)}{h}=\frac {l(p+1)}{(\Delta ^2+\sigma ^2)\big ( \langle F\rangle _{11}+\Delta ^2\big )}, \quad 2\leqslant l\leqslant p. $$
(A.10)
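
To make (A.10) concrete, consider the scalar case \(p=1\) with illustrative values not taken from the paper. Then \(\langle F\rangle _{11}=\sigma ^2/(1-\theta _1^2)\), and (A.10) becomes
$$ \lim \limits _{h\rightarrow \infty }\frac {\tau _1^{(i)}(h)}{h}=\frac {2}{(\Delta ^2+\sigma ^2)\left (\dfrac {\sigma ^2}{1-\theta _1^2}+\Delta ^2 \right )}; $$
for \(\theta _1=0.5\), \(\sigma ^2=1\), and \(\Delta ^2=1/4\) this limit equals \(2/\bigl (\tfrac {5}{4}\cdot \tfrac {19}{12}\bigr )=96/95\approx 1.01\), so each stopping time grows essentially linearly in the threshold \(h\).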

Taking into account (3.8), we obtain relation (3.15) for the duration of the sequential procedure. Now let us consider the asymptotic behavior of the matrix \(G(h) \) defined in (3.13). For the stable process (2.1), one has the property [2]

$$ \lim \limits _{n\rightarrow \infty }\frac {1}{n}\sum _{k=2p+1}^{n}Y_{k-p-1}Y_{k-1}^{\mathrm {T}}=F(A^{\mathrm {T}})^p. $$
By analogy with (A.8), it can be established that the matrix
$$ D^{(i)}(n)=\sum _{k=2p+1}^{n}\chi _{\{k\in T_i(n)\}}Y_{k-p-1}Y_{k-1}^{\mathrm {T}} $$
satisfies the limit relation
$$ \lim \limits _{n\rightarrow \infty }\frac {1}{n}D^{(i)}(n)=\frac {1}{p+1}F(A^{\mathrm {T}})^p. $$
(A.11)

Further, substituting the coefficients (3.10) into the entries of the matrix (3.13) and taking into account the fact that \(y_{k-p-l}=\langle Y_{k-p-1}\rangle _l\) and \(y_{k-s}=\langle Y_{k-1}\rangle _s\), we obtain

$$ \big \langle G(h) \big \rangle _{l, s}=\sum _{i=1}^{p+1}\sum _{k=\tau _{l-1}^{(i)}(h)+1}^{\tau _{l}^{(i)}(h)}\chi _{\{k\in T_i(k)\}}\sqrt {\beta _{l,k}^{(i)}}\langle Y_{k-p-1}Y_{k-1}^{\mathrm {T}}\rangle _{l, s}.$$
(A.12)

Taking into account the definition of the coefficients (3.9), note that, except for the single term corresponding to the time \(\tau _l^{(i)}(h)\), the inner sum coincides with the sum

$$ \sum _{k=\tau _{l-1}^{(i)}(h)+1}^{\tau _{l}^{(i)}(h)} \chi _{\{k\in T_i(k)\}}\langle Y_{k-p-1}Y_{k-1}^{\mathrm {T}}\rangle _{l, s} = \left \langle D^{(i)}\left (\tau _l^{(i)}(h)\right ) \right \rangle _{l, s} - \left \langle D^{(i)}\left (\tau _{l-1}^{(i)}(h)\right ) \right \rangle _{l, s} .$$
Based on this, it follows from (A.10) and (A.11) that
$$ \begin {aligned} &\lim _{h\rightarrow \infty }\frac {1}{h}\sum _{k=\tau _{l-1}^{(i)}(h)+1}^{\tau _{l}^{(i)}(h)}\chi _{\{k\in T_i(k)\}}\langle Y_{k-p-1}Y_{k-1}^{\mathrm {T}}\rangle _{l, s}\\ &\qquad {}=\lim _{h\rightarrow \infty }\left (\frac {\tau _{l}^{(i)}(h)}{h}\frac {\left \langle D^{(i)}\left (\tau _l^{(i)}(h)\right ) \right \rangle _{l, s}}{\tau _l^{(i)}(h)}-\frac {\tau _{l-1}^{(i)}(h)}{h}\frac {\left \langle D^{(i)}\left (\tau _{l-1}^{(i)}(h)\right ) \right \rangle _{l, s}}{\tau _{l-1}^{(i)}(h)}\right )\!=\!\frac {\big \langle F(A^{\mathrm {T}})^p\big \rangle _{l,s}}{\left (\Delta ^2+\sigma ^2 \right )\big (\langle F \rangle _{{11}}+\Delta ^2 \big )}. \end {aligned}$$
Using this relation in (A.12), we obtain
$$ \lim _{h\rightarrow \infty }\frac {G(h)}{h}=\frac {(p+1)F(A^{\mathrm {T}})^p}{\big (\Delta ^2+\sigma ^2)(\langle F\rangle _{{11}}+\Delta ^2\big )}.$$

The proof of Proposition 2 is complete. \(\quad \blacksquare \)

Theorem A.2.

Suppose that we are given [24]

  1. 1.

    A probability space \((\Omega , \mathcal {F},\mathsf {P})\) with a filtration \( (\mathcal {F}_k)_{k\geqslant 0}\).

  2. 2.

    A family \((M_k^{(l)}, \mathcal {F}_k )_{k\geqslant 0}\), \( l={1,\ldots ,p}\), of square integrable martingales with quadratic characteristics \(\{\langle M^{(l)} \rangle _{n} \}_{n\geqslant 1}\), \( l={1,\ldots ,p}\), such that

    1. (a)

      \(\mathsf {P}(\langle M^{(l)}\rangle _\infty =+\infty )=1 \), \( l={1,\ldots ,p}\).

    2. (b)

      \(\mathrm {Law}(\varDelta M_k^{(l)}|\mathcal {F}_{k-1})=\mathcal {N}(0,\sigma _l^2(k-1)) \), \( k=1, 2, \dots \), \( l={1,\ldots ,p}\); i.e., the \( \mathcal {F}_{k-1}\) -conditional distribution of the increment \(\varDelta M_k^{(l)}=M_k^{(l)}-M_{k-1}^{(l)}\) is Gaussian with parameters \(0 \) and \( \sigma _l^2(k-1)=\mathbf {E}\big ((\varDelta M_k^{(l)} )^2|\mathcal {F}_{k-1}\big )\).

For each \(h>0 \) , we define the stopping time

$$ \begin {gathered} \tau _l=\tau _l(h)=\inf \left \{n>\tau _{l-1}(h):\sum _{k=\tau _{l-1}+1}^{n}\sigma _l^2({k-1})\geqslant h \right \}, \\ l={1,\ldots ,p},\quad \tau _0=\tau _0(h)=0, \quad \inf \{\varnothing \}=+\infty , \end {gathered} $$
and the random variables
$$ \begin {gathered} m_l(h)=\frac {1}{\sqrt {h}}\sum _{k=\tau _{l-1}+1}^{\tau _l}\sqrt {\beta _k(h,l)}\varDelta M_k^{(l)}, \quad l={1,\ldots ,p};\\ \beta _k(h,l)=\left \{ \begin {aligned} &1&&\text { if }\tau _{l-1}(h)< k<\tau _l(h)\\ &\alpha _l(h)&&\text { if }k=\tau _l(h); \end {aligned} \right . \end {gathered} $$
here the \( \alpha _l(h)\) , \( l={1,\ldots ,p}\) , are the correction factors determined from the equations
$$ \sum _{k=\tau _{l-1}+1}^{\tau _l-1}\sigma _l^2{(k-1)}+\alpha _l(h)\sigma _l^2\big ({\tau _l(h)-1}\big )=h.$$
Then for each \(h>0\) the random vector \(m(h)=(m_1(h), \dots ,m_p(h))^{\mathrm {T}}\) has the standard normal distribution; i.e., \(m(h)\sim \mathcal {N}(0, I_p) \) , where \(I_p \) is the \(p\times p \) identity matrix.
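
As a sketch of how the stopping rule of Theorem A.2 operates, the fragment below (our own illustration; it assumes the conditional variances \(\sigma_l^2(k-1)\) are supplied as arrays) computes the successive stopping times \(\tau_1<\dots<\tau_p\) and the correction factors \(\alpha_l(h)\).

```python
import numpy as np

def stopping_times(sigma2, h):
    """Successive stopping times and correction factors of Theorem A.2.
    sigma2[l][k-1] holds sigma_l^2(k-1), the conditional variance of the k-th
    increment of the l-th martingale. Returns (taus, alphas), or None if some
    threshold h is never reached within the available sample."""
    taus, alphas = [], []
    prev_tau = 0                            # tau_0(h) = 0
    for var_l in sigma2:
        acc = 0.0
        for k in range(prev_tau, len(var_l)):
            acc += var_l[k]
            if acc >= h:                    # accumulated variance first reaches h
                alphas.append(1.0 - (acc - h) / var_l[k])  # alpha_l(h)
                prev_tau = k + 1            # this is tau_l(h)
                taus.append(prev_tau)
                break
        else:
            return None
    return taus, alphas

# Toy check: p = 2 martingales with constant unit conditional variances.
print(stopping_times([np.ones(100), np.ones(100)], h=7.5))  # ([8, 16], [0.5, 0.5])
```

By construction, the variance accumulated over each block, with the last term weighted by \(\alpha_l(h)\), equals \(h\) exactly, which is what makes each coordinate \(m_l(h)\) exactly standard Gaussian.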

Proof of Lemma 3. Write (3.4) in the form

$$ M_l^{(i)}(n)=\sum _{k=2p+1}^{n-1}\chi _{\{k\in T_i(n)\}}y_{k-p-l}\tilde {\xi }_k+\chi _{\{n\in T_i(n)\}}y_{n-p-l}\tilde {\xi }_n=M_{l}^{(i)}(n-1)+\chi _{\{n\in T_i(n)\}}y_{n-p-l}\tilde {\xi }_n; $$
i.e.,
$$ \varDelta M_l^{(i)}(n)=\chi _{\{n\in T_i(n)\}}y_{n-p-l}\tilde {\xi }_n.$$
(A.13)

Based on this, we have

$$ \sigma _{i,l}^2(n-1)=\mathbf {E}_\theta \left (\left (\varDelta M_l^{(i)}(n) \right )^2|\mathcal {F}_{n-1}^{(i)} \right )=\chi _{\{n\in T_i(n)\}}\mathbf {E}_\theta \left (y_{n-p-l}^2\tilde {\xi }_n^2|\mathcal {F}_{n-1}^{(i)} \right ),$$
(A.14)
where \(\mathcal {F}_{n-1}^{(i)} \) is the \(\sigma \)-algebra defined in (3.2). Further, note that if \(n\in T_i(n) \), then \(t_i(n)=n \) and \(t_i(n-1)=n-p-1 \). Therefore, \(\mathcal {F}_{n-1}^{(i)}=\mathcal {F}_{n-p-1}^{(i)}\), the random variable \(y_{n-p-l}^2\) is measurable with respect to \(\mathcal {F}_{n-p-1}^{(i)} \), and \(\tilde {\xi }_n \) is independent of \(\mathcal {F}_{n-p-1}^{(i)} \). By virtue of (3.6) and the Gaussian property of \(\tilde {\xi }_n \), the increment (A.13) has a conditionally Gaussian distribution given \(\mathcal {F}_{n-1}^{(i)}\), and in view of (A.14) we have \(\sigma _{i,l}^2(n-1)=\chi _{\{n\in T_i(n)\}}(\sigma ^2+\Delta ^2)y_{n-p-l}^2 \). The proof of Lemma 3 is complete. \(\quad \blacksquare \)

Proof of Theorem 2. Substituting \(y_k \) from (2.1) into (3.12) and considering (3.13), we obtain

$$ \begin {aligned} \vartheta _l(h)&=\sum _{k=2p+1}^{\tau (h)}\gamma _{l,k}y_{k-p-l}\left (\sum _{s=1}^{p}y_{k-s}\theta _s+\xi _{k} \right )\\ &=\sum _{s=1}^{p}\left (\sum _{k=2p+1}^{\tau (h)}\gamma _{l,k}y_{k-p-l}y_{k-s} \right )\theta _s+\sum _{k=2p+1}^{\tau (h)}\gamma _{l,k}y_{k-p-l}\xi _k=\big \langle G(h)\theta \big \rangle _l+\zeta _l(h), \end {aligned} $$
(A.15)
where
$$ \zeta _l(h)=\sum _{k=2p+1}^{\tau (h)}\gamma _{l,k}y_{k-p-l}\xi _k, \quad 1\leqslant l\leqslant p. $$
(A.16)

In vector form, system (A.15) is as follows:

$$ \vartheta (h)=G(h)\theta +\zeta (h); \quad \zeta (h)=\big (\zeta _1(h),\dots , \zeta _p(h)\big )^{\mathrm {T}}. $$
(A.17)

Substituting the weight coefficients (3.10) into (A.16), we obtain

$$ \zeta _l\left (h\right )=\sum _{i=1}^{p+1}\zeta _l^{(i)}\left (h\right ),\qquad \zeta _l^{(i)}\left (h\right )=\sum _{k=\tau _{l-1}^{(i)}(h)+1}^{\tau _{l}^{(i)}(h)} \thinspace \sqrt {\beta _{l,k}^{(i)}\left (h\right )}\thinspace \chi _{\{k\in T_i(k)\}} \thinspace y_{k-p-l}\thinspace \xi _k.$$
(A.18)

Having introduced the vectors \(\zeta ^{(i)}(h)=\left (\zeta _1^{(i)}(h), \dots ,\zeta _p^{(i)}(h) \right )^{\mathrm {T}}\), \(i={1,\ldots ,p+1}\), we have the expansion

$$ {\zeta (h)=\sum _{i=1}^{p+1}\zeta ^{(i)}(h)}.$$
Further, consider the vector
$$ m^{(i)}(h)=\left (m_1^{(i)}(h), \dots , m_p^{(i)}(h)\right )^{\mathrm {T}}:=\varkappa _\theta h^{-1/2}\zeta ^{(i)}(h). $$
(A.19)
Taking into account (3.5), we write the coordinates of this vector in the form
$$ m_l^{(i)}(h)=\frac {1}{\sqrt {h}}\sum _{k=\tau _{l-1}^{(i)}(h)+1}^{\tau _{l}^{(i)}(h)}\sqrt {\beta _{l,k}^{(i)}(h)}\varDelta M_l^{(i)}(k).$$
(A.20)
Using (A.17), (A.18), and (A.19), we obtain
$$ G(h)\left (\theta ^*(h)-\theta \right )=\frac {\sqrt {h}}{\varkappa _\theta }\sum _{i=1}^{p+1}m^{(i)}(h). $$
(A.21)
Theorem A.2 can be applied to the vectors (A.19), because the martingales (3.5) and the stopping times (3.7) in (A.20) satisfy all of its assumptions by virtue of Lemma 3. By Theorem A.2, each of the vectors (A.19) has the standard \(p \)-dimensional normal distribution \(\mathcal {N}(0, I_p) \). It follows that the squared norms \(\lVert m^{(i)}(h) \rVert ^2 \), \(i={1,\ldots ,p+1} \), of these vectors have the \(\chi ^2 \) distribution with \(p \) degrees of freedom; i.e., \(\mathsf {P}\left (\lVert m^{(i)}(h) \rVert >a \right )=1-\Phi _p(a)\), where \(\Phi _p(a)\) is the distribution function defined in (3.14). Let us construct a confidence domain for \(\theta \) with a given confidence level. From (A.21) we have the inequality
$$ \frac {\varkappa _\theta }{\sqrt {h}}\Big \| G(h)\big (\theta ^*(h)-\theta \big )\Big \|\leqslant \sum _{i=1}^{p+1}\big \| m^{(i)}(h)\big \|.$$
(A.22)
Using the estimate
$$ \begin {aligned} \mathsf {P}_\theta \left (\sum _{i=1}^{p+1}\big \| m^{(i)}(h)\big \|>\mu \right )&\leqslant \sum _{i=1}^{p+1}\mathsf {P}_\theta \left (\big \| m^{(i)}(h)\big \|>\frac {\mu }{p\!+\!1} \right )\\ &{}=(p\!+\!1)\mathsf {P}_\theta \left (\big \| m^{(1)}(h)\big \|>\frac {\mu }{p\!+\!1} \right )=(p\!+\!1)\left (1-\Phi _p\left (\frac {\mu }{p\!+\!1} \right ) \right ), \end {aligned} $$
(A.23)
we can find \(\mu \) based on a given \(\rho \) from the equation \((p+1)\left (1-\Phi _p\left ({\mu }/{(p+1)}\right )\right )=1-\rho \). The root \(\mu ^* \) of this equation is determined by formula (3.14). Using (A.22) and (A.23), we obtain
$$ \mathsf {P}_\theta \left (\frac {\varkappa _\theta }{\sqrt {h}}\Big \| G(h)\big (\theta ^*(h)-\theta \big )\Big \| \leqslant \mu ^* \right )\geqslant \rho .$$

The proof of Theorem 2 is complete. \(\quad \blacksquare \)
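
In practice, the threshold \(\mu^*\) is obtained by inverting the last equation. A one-line sketch, assuming that \(\Phi_p\) in (3.14) is the distribution function of the norm of a standard \(p\)-dimensional Gaussian vector (the chi distribution with \(p\) degrees of freedom), as the identity \(\mathsf{P}(\lVert m^{(i)}(h)\rVert>a)=1-\Phi_p(a)\) above suggests:

```python
from scipy.stats import chi

def mu_star(p, rho):
    """Root of (p + 1) * (1 - Phi_p(mu / (p + 1))) = 1 - rho, with Phi_p taken
    to be the chi(p) cdf, i.e. the cdf of the norm of an N(0, I_p) vector."""
    return (p + 1) * chi.ppf(1.0 - (1.0 - rho) / (p + 1), df=p)

print(mu_star(p=2, rho=0.95))  # threshold for a 95% confidence domain when p = 2
```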

Cite this article

Konev, V.V., Pupkov, A.V. Confidence Estimation of Autoregressive Parameters Based on Noisy Data. Autom Remote Control 82, 1030–1048 (2021). https://doi.org/10.1134/S0005117921060059
