
Efficient estimation methods for non-Gaussian regression models in continuous time


Abstract

In this paper, we develop an efficient nonparametric estimation theory for continuous time regression models with non-Gaussian Lévy noise in the case when the unknown functions belong to Sobolev ellipses. Using Pinsker's approach, we provide a sharp lower bound for the normalized asymptotic mean square accuracy. However, the main result obtained by Pinsker for the Gaussian white noise model does not hold without additional conditions on the ellipse coefficients. We find constructive sufficient conditions under which efficient estimation methods can be developed, and we show that these conditions hold for ellipse coefficients of exponential form, for which the sharp lower bound is calculated in explicit form. Finally, we apply this result to the problem of detecting the number of signals in multi-pass connection channels and obtain an almost parametric convergence rate that is natural for this case and significantly improves on the rate obtained for power-form coefficients.

References

  • Beltaief, S., Chernoyarov, O., Pergamenshchikov, S. M. (2020). Model selection for the robust efficient signal processing observed with small Lévy noise. Annals of the Institute of Statistical Mathematics, 72, 1205–1235.


  • Hodara, P., Krell, N., Löcherbach, E. (2018). Non-parametric estimation of the spiking rate in systems of interacting neurons. Statistical Inference for Stochastic Processes, 21, 81–111.


  • Ibragimov, I. A., Khasminskii, R. Z. (1981). Statistical estimation: Asymptotic theory. New York: Springer.


  • Kassam, S. A. (1988). Signal detection in non-Gaussian noise. New York: Springer.


  • Konev, V. V., Pergamenshchikov, S. M. (2009a). Nonparametric estimation in a semimartingale regression model. Part 1. Oracle Inequalities. Vestnik Tomskogo Gosudarstvennogo Universiteta. Matematika i Mekhanika, 3(7), 23–41.


  • Konev, V. V., Pergamenshchikov, S. M. (2009b). Nonparametric estimation in a semimartingale regression model. Part 2. Robust asymptotic efficiency. Vestnik Tomskogo Gosudarstvennogo Universiteta. Matematika i Mekhanika, 4(8), 31–45.


  • Konev, V. V., Pergamenshchikov, S. M. (2012). Efficient robust nonparametric estimation in a semimartingale regression model. Annales de l'Institut Henri Poincaré, 48(4), 1217–1244.


  • Konev, V. V., Pergamenshchikov, S. M. (2015). Robust model selection for a semimartingale continuous time regression from discrete data. Stochastic Processes and Their Applications, 125, 294–326.


  • Konev, V. V., Pergamenshchikov, S. M., Pchelintsev, E. A. (2014). Estimation of a regression with the pulse type noise from discrete data. Theory of Probability & Its Applications, 58(3), 442–457.


  • Kuks, I. A., Olman, V. (1971). A minimax linear estimator of regression coefficients. Izv. Akad. Nauk Eston. SSR, 20, 480–482.


  • Kutoyants, Yu. A. (1984). Parameter estimation for stochastic processes. Berlin: Heldermann Verlag.


  • Kutoyants, Yu. A. (1994). Identification of dynamical systems with small noise. Dordrecht: Kluwer Academic Publishers Group.


  • Le Cam, L. (1990). Asymptotic methods in statistical decision theory. Springer series in statistics. New York: Springer.


  • Lepski, O. V. (1990). On a problem of adaptive estimation in Gaussian white noise. Theory of Probability & Its Applications, 35, 459–470.


  • Lepski, O. V., Spokoiny, V. G. (1997). Optimal pointwise adaptive methods in nonparametric estimation. The Annals of Statistics, 25, 2512–2546.


  • Liptser, R. S., Shiryaev, A. N. (1989). Theory of martingales. New York: Springer.


  • Nemirovskii, A. (2000). Topics in non-parametric statistics. Lecture Notes in Mathematics, 1738, 85–277.


  • Pchelintsev, E. (2013). Improved estimation in a non-Gaussian parametric regression. Statistical Inference for Stochastic Processes, 16(1), 15–28.


  • Pchelintsev, E., Pergamenshchikov, S. (2018). Oracle inequalities for the stochastic differential equations. Statistical Inference for Stochastic Processes, 21(2), 469–483.


  • Pchelintsev, E., Pergamenshchikov, S. (2019). Adaptive model selection method for a conditionally Gaussian semimartingale regression in continuous time. Vestnik Tomskogo Gosudarstvennogo Universiteta. Matematika i Mekhanika, 58, 14–31.


  • Pchelintsev, E. A., Pchelintsev, V. A., Pergamenshchikov, S. M. (2018). Non asymptotic sharp oracle inequality for the improved model selection procedures for the adaptive nonparametric signal estimation problem. Communications—Scientific Letters of the University of Zilina, 20(1), 72–76.


  • Pchelintsev, E. A., Pchelintsev, V. A., Pergamenshchikov, S. M. (2019). Improved robust model selection methods for a Lévy nonparametric regression in continuous time. Journal of Nonparametric Statistics, 31(3), 612–628.


  • Pinsker, M. S. (1981). Optimal filtration of square integrable signals in Gaussian white noise. Problems of Information Transmission, 17, 120–133.


  • Tsybakov, A. B. (1998). Pointwise and sup-norm sharp adaptive estimation of functions on the Sobolev classes. The Annals of Statistics, 26, 2420–2469.


  • Tsybakov, A. B. (2009). Introduction to nonparametric estimation. New York: Springer.



Acknowledgements

The authors are grateful to the anonymous referees and to the AE for careful reading and for helpful comments. The results of Sect. 6 were obtained under the grant of the President of the Russian Federation no. MK-834.2020.9.

Author information


Corresponding author

Correspondence to Evgeny Pchelintsev.


This work was supported by RSF, Grant no. 20-61-47043.

Auxiliary results

1.1 Proof of Lemma 1

First, note that the mean square error (13) can be represented as

$$\begin{aligned} \mathbf{D}_{*,\varepsilon }=\inf _{\gamma } \sup _{S \in W^k_{\mathbf{r}}} {{\mathcal {R}}}(\widehat{S}_{\gamma },S)= \inf _{\gamma } \sup _{\theta \in \varTheta } \sum _{j=1}^{\infty } \mathbf{E}_{\theta } \left( \gamma _{j} \widehat{\theta }_{j}-\theta _{j} \right) ^2 \,. \end{aligned}$$

Recall that

$$\begin{aligned} \inf _{\gamma } \sup _{\theta \in \varTheta } \sum _{j=1}^{\infty } \mathbf{E}_{\theta } \left( \gamma _{j} \widehat{\theta }_{j}-\theta _{j} \right) ^2\geqslant \sup _{\theta \in \varTheta } \inf _{\gamma } \sum _{j=1}^{\infty } \mathbf{E}_{\theta } \left( \gamma _{j} \widehat{\theta }_{j}-\theta _{j} \right) ^2 \,. \end{aligned}$$

In view of (10) and \(\mathbf{E}_{\theta }\xi _{j}=0\), one has

$$\begin{aligned} \mathbf{E}_{\theta } \left( \gamma _{j} \widehat{\theta }_{j}-\theta _{j} \right) ^2= (1-\gamma _{j})^2\theta _{j}^2+\gamma _{j}^{2}\, \varepsilon ^2 \,, \end{aligned}$$
(72)

and

$$\begin{aligned} \inf _{\gamma _{j}}\mathbf{E}_{\theta } \left( \gamma _{j} \widehat{\theta }_{j}-\theta _{j} \right) ^2= \frac{\theta _{j}^2 \varepsilon ^2}{\theta _{j}^2+ \varepsilon ^2} \quad \text{ with } \quad \gamma _{j} = \frac{\theta _{j}^2}{\theta _{j}^2+ \varepsilon ^2} \,. \end{aligned}$$
(73)
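The minimization in (73) can be checked symbolically. The following minimal sketch (an illustration, not part of the proof; `theta` and `eps` stand for \(\theta _{j}\) and \(\varepsilon\)) recovers the oracle weight and the minimal risk from the quadratic risk (72):

```python
import sympy as sp

theta, eps, gamma = sp.symbols('theta eps gamma', positive=True)

# Quadratic risk (72) of the linear estimate gamma * hat(theta)_j
risk = (1 - gamma)**2 * theta**2 + gamma**2 * eps**2

# First-order condition in gamma gives the oracle weight from (73)
gamma_star = sp.solve(sp.diff(risk, gamma), gamma)[0]
print(sp.simplify(gamma_star))   # theta**2/(eps**2 + theta**2)

# The minimal risk coincides with theta^2 * eps^2 / (theta^2 + eps^2)
min_risk = sp.simplify(risk.subs(gamma, gamma_star))
print(sp.simplify(min_risk - theta**2 * eps**2 / (theta**2 + eps**2)))   # 0
```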

Furthermore, the risk in (72) is convex in \(\gamma _{j}\) and linear in \(\theta ^{2}_{j}\), which vary in a convex set; therefore, by the minimax theorem, the infimum and the supremum can be interchanged, and, in view of (73),

$$\begin{aligned} \mathbf{D}_{*,\varepsilon } \leqslant \sup _{\theta \in \varTheta } \sum _{j=1}^{\infty } \frac{\theta _{j}^2 \varepsilon ^2}{\theta _{j}^2+ \varepsilon ^2} \,, \end{aligned}$$

and, therefore,

$$\begin{aligned} \mathbf{D}_{*,\varepsilon } = \sup _{\theta \in \varTheta } \sum _{j=1}^{\infty } \frac{\theta _{j}^2 \varepsilon ^2}{\theta _{j}^2+ \varepsilon ^2} \,. \end{aligned}$$
Note that the weights \(\gamma _{j}\) from  (73) cannot be used in  (12), because they depend on the unknown parameters, so the estimate \(\widehat{S}_{\gamma }(t)\) cannot be computed. By the Lagrange multiplier method, we obtain that

$$\begin{aligned} \sup _{\theta \in \varTheta }\sum _{j=1}^{\infty } \frac{\varepsilon ^2\theta _{j}^2}{\theta _{j}^2+\varepsilon ^2}= \sum _{j=1}^{\infty } \frac{\varepsilon ^2(\theta ^{*}_{j})^2}{(\theta ^{*}_{j})^2+\varepsilon ^{2}} \quad \text{ and }\quad (\theta ^{*}_{j})^{2}= \varepsilon ^2 \left( \frac{\mu }{\sqrt{a_{j}}}-1 \right) _{+} \,, \end{aligned}$$

where \((x)_{+}=\max (0,x)\) and the Lagrange coefficient \(\mu\) is the solution of the following equation

$$\begin{aligned} f(\mu )= \varepsilon ^{-2}\mathbf{r}\quad \text{ with }\quad f(\mu )=\sum _{j=1}^{\infty } a_{j} \, \left( \frac{\mu }{\sqrt{a_{j}}}-1 \right) _{+} \,. \end{aligned}$$

If the condition \(\mathbf{A}_{1})\) holds, then \(f(\mu )\) is a continuous increasing function with \(f(0)=0\) and \(\lim _{\mu \rightarrow \infty }f(\mu )=\infty\); hence, this equation has a unique solution

$$\begin{aligned} \mu = \frac{\varepsilon ^{-2}\mathbf{r}+\sum _{j=1}^{\mathbf{m}}a_{j}}{\sum _{j=1}^{\mathbf{m}}\sqrt{a_{j}}} \,, \end{aligned}$$
(74)

where \(\mathbf{m}=N_{\mu ^{2}}\) is defined in (7). This implies that

$$\begin{aligned} \sqrt{a_{\mathbf{m}}}\leqslant \mu \quad \text{ and }\quad \sqrt{a_{\mathbf{m}+1}}> \mu \,. \end{aligned}$$

Setting \(g(n)=\sqrt{a_n}\sum _{j=1}^{n}\sqrt{a_{j}}-\sum _{j=1}^{n}a_{j}\) and using the definition (74), we obtain that

$$\begin{aligned} g(\mathbf{m})\leqslant \varepsilon ^{-2}\mathbf{r}\quad \text{ and }\quad \sqrt{a_{\mathbf{m}+1}}\sum _{j=1}^{\mathbf{m}}\sqrt{a_{j}}-\sum _{j=1}^{\mathbf{m}}a_{j}= g(\mathbf{m}+1)>\varepsilon ^{-2}\mathbf{r}\,. \end{aligned}$$

Therefore, in view of the definition (14), we find

$$\begin{aligned} \mathbf{m}=\max \left\{ n\geqslant 1 :g(n)\leqslant \varepsilon ^{-2}\mathbf{r}\right\} =n^* \,. \end{aligned}$$

Hence, Lemma 1. \(\Box\)
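To illustrate Lemma 1 numerically, the following sketch computes \(n^{*}\) from (14) and the multiplier \(\mu\) from (74), and verifies the bracketing \(\sqrt{a_{\mathbf{m}}}\leqslant \mu <\sqrt{a_{\mathbf{m}+1}}\) and the equation \(f(\mu )=\varepsilon ^{-2}\mathbf{r}\). The exponential coefficients \(a_{j}=e^{0.3 j}\) and the values of \(\varepsilon\) and \(\mathbf{r}\) are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

eps, r = 0.05, 1.0                            # illustrative noise level and radius
a = np.exp(0.3 * np.arange(1, 201))           # hypothetical coefficients a_j, j = 1..200

# g(n) = sqrt(a_n) * sum_{j<=n} sqrt(a_j) - sum_{j<=n} a_j and n* from (14)
g = np.sqrt(a) * np.cumsum(np.sqrt(a)) - np.cumsum(a)
n_star = int(np.max(np.nonzero(g <= r / eps**2)[0])) + 1

# Lagrange multiplier mu from (74) with m = n*
m = n_star
mu = (r / eps**2 + a[:m].sum()) / np.sqrt(a[:m]).sum()

# Bracketing of mu and the equation f(mu) = r / eps^2
assert np.sqrt(a[m - 1]) <= mu < np.sqrt(a[m])
f_mu = np.sum(a * np.clip(mu / np.sqrt(a) - 1.0, 0.0, None))
print(n_star, mu, f_mu, r / eps**2)           # f_mu matches r / eps^2
```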

1.2 Representation for the \(\sigma\)-field generated by \((\xi _{t})_{0\le t\le 1}\).

Lemma 3

Let \((\varphi _{k})_{k\ge 1}\) be an arbitrary orthonormal basis in \({{\mathcal {L}}}_{2}[0,1]\) with \(\varphi _{1}\equiv 1\). Then,

$$\begin{aligned} \sigma \{\xi _{t}\,,\,0\le t\le 1\}=\sigma \{\xi _{k}\,,\,k\ge 1\}\,, \end{aligned}$$

where \(\xi _{k}=\int ^{1}_{0}\,\varphi _{k}(t)\mathrm {d}\xi _{t}\).

Proof

Let \((\text{ Tr}_{j})_{j\ge 1}\) be the trigonometric basis in \({{\mathcal {L}}}_{2}[0,1]\). Taking into account that any trajectory of the process \(\xi\) belongs to \({{\mathcal {L}}}_{2}[0,1]\), we can represent it as

$$\begin{aligned} \xi _{t}=\sum ^{\infty }_{j=1}\,\tau _{j}\,\text{ Tr}_{j}(t) \quad \text{ and }\quad \tau _{j}=\int ^{1}_{0}\,\xi _{s}\,\text{ Tr}_{j}(s)\mathrm {d}s \,. \end{aligned}$$

Applying the Itô formula here, we obtain that

$$\begin{aligned} \tau _{j}=\xi _{1}\widetilde{\text{ Tr }}_{j}(1)- \int ^{1}_{0}\widetilde{\text{ Tr }}_{j}(s)\mathrm {d}\xi _{s} \quad \text{ and }\quad \widetilde{\text{ Tr }}_{j}(t)=\int ^{t}_{0}\,\text{ Tr}_{j}(s)\mathrm {d}s\,. \end{aligned}$$

Note now that the functions \(\widetilde{\text{ Tr }}_{j}\) can be represented as

$$\begin{aligned} \widetilde{\text{ Tr }}_{j}(s)=\sum ^{\infty }_{l=1}\,\mathbf{k}_{j,l}\varphi _{l}(s) \quad \text{ and }\quad \mathbf{k}_{j,l}=\int ^{1}_{0}\,\widetilde{\text{ Tr }}_{j}(u)\varphi _{l}(u)\mathrm {d}u \,. \end{aligned}$$

In view of \(\xi _{1}=\int ^{1}_{0}\varphi _{1}(s)\mathrm {d}\xi _{s}\), we can rewrite the coefficients \(\tau _{j}\) as

$$\begin{aligned} \tau _{j}=\xi _{1}\widetilde{\text{ Tr }}_{j}(1)- \sum ^{\infty }_{l=1}\,\mathbf{k}_{j,l}\,\xi _{l}\,. \end{aligned}$$

So, the coefficients \(\tau _{j}\) are measurable with respect to the \(\sigma\)-field \(\sigma \{\xi _{k}\,,\,k\ge 1\}\), and, therefore, the process \(\xi\) is measurable with respect to this \(\sigma\)-field as well, i.e. \(\sigma \{\xi _{t}\,,\,0\le t\le 1\}\subseteq \sigma \{\xi _{k}\,,\,k\ge 1\}\). The reverse inclusion is obvious. Hence, Lemma 3. \(\square\)
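The key integration-by-parts identity for \(\tau _{j}\) can be checked on a simulated path. A minimal sketch, assuming for illustration that \(\xi\) is a standard Brownian motion and taking a single trigonometric basis function:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t = np.linspace(0.0, 1.0, n + 1)
dxi = rng.normal(0.0, np.sqrt(1.0 / n), size=n)   # Brownian increments
xi = np.concatenate(([0.0], np.cumsum(dxi)))       # path of xi on [0, 1]

j = 3
Tr = np.sqrt(2.0) * np.cos(2.0 * np.pi * j * t)                  # Tr_j(t)
Tr_tilde = np.concatenate(([0.0], np.cumsum(Tr[:-1]) / n))       # int_0^t Tr_j(s) ds

tau_direct = np.sum(xi[:-1] * Tr[:-1]) / n                       # int_0^1 xi_s Tr_j(s) ds
tau_parts = xi[-1] * Tr_tilde[-1] - np.sum(Tr_tilde[:-1] * dxi)  # Ito/by-parts form
print(tau_direct, tau_parts)   # the two sums agree up to discretization error
```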

1.3 Conditional distribution tool

Lemma 4

Let \(\zeta\) and \(\xi\) be independent Gaussian random variables with the parameters \((0,\theta ^2)\) and \((\nu ,\sigma ^{2})\), respectively, and let \(\eta =\zeta +\xi\). Then,

$$\begin{aligned} \mathbf{E}(\zeta -\mathbf{E}(\zeta |\eta ))^2 = \frac{\theta ^2\sigma ^{2}}{\theta ^2+\sigma ^2}\,. \end{aligned}$$

Proof

Note that \(\eta\) is a Gaussian random variable with the parameters \((\nu ,\theta ^2+\sigma ^2)\). By the definition of the conditional expectation,

$$\begin{aligned} \mathbf{E}\,(\zeta |\eta =y)=\int _{{{\mathbb {R}}}} x\,\mathbf{p}_{\zeta |\eta }(x|y)\mathrm {d}x \end{aligned}$$

and \(\mathbf{p}_{\zeta |\eta }(x|y)\) is the corresponding conditional distribution density

$$\begin{aligned} \mathbf{p}_{\zeta |\eta }(x|y)=\frac{1}{\sqrt{2\pi \sigma ^2_{1}}} \exp \left( -\frac{1}{2\sigma ^2_{1}} (x-\mathbf{m}(y))^2 \right) \,, \end{aligned}$$

where

$$\begin{aligned} \sigma ^2_{1}=\frac{\theta ^2\sigma ^2}{\theta ^2+\sigma ^2} \quad \text{ and }\quad \mathbf{m}(y)=\frac{(y-\nu )\theta ^2}{\theta ^2+\sigma ^2}\,. \end{aligned}$$

This implies that

$$\begin{aligned} \mathbf{E}(\zeta |\eta )=\mathbf{m}(\eta ) \quad \text{ and }\quad \mathbf{E}(\zeta -\mathbf{E}(\zeta |\eta ))^2= \frac{\theta ^2\sigma ^2}{\theta ^2+\sigma ^2} \,. \end{aligned}$$

Hence Lemma 4. \(\square\)
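Lemma 4 is easy to confirm by simulation. A minimal Monte Carlo sketch with illustrative parameter values (our choice, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, nu, sigma, n = 2.0, 0.7, 1.5, 10**6

zeta = rng.normal(0.0, theta, n)       # zeta ~ N(0, theta^2)
xi = rng.normal(nu, sigma, n)          # xi ~ N(nu, sigma^2), independent of zeta
eta = zeta + xi

# E(zeta | eta) = m(eta) with m(y) = (y - nu) * theta^2 / (theta^2 + sigma^2)
m = (eta - nu) * theta**2 / (theta**2 + sigma**2)

print(np.mean((zeta - m)**2))                        # Monte Carlo estimate
print(theta**2 * sigma**2 / (theta**2 + sigma**2))   # closed form of Lemma 4
```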

Lemma 5

Let \(\zeta\) and \(\xi\) be two independent random variables such that \(\zeta\) is uniformly distributed on \((-\theta ,\theta )\) for some \(\theta >0\) and \(\xi\) is Gaussian with the parameters \((\nu ,\sigma ^{2})\), and let \(\eta =\zeta +\xi\). Then,

$$\begin{aligned} \lim _{L\rightarrow \infty }\,\sup _{\nu \in {{\mathbb {R}}}} \left| \frac{\mathbf{E}(\zeta -\mathbf{E}(\zeta |\eta ))^2}{\sigma ^2}- 1 \right| =0 \,, \end{aligned}$$

where \(L=\theta /\sigma\).

Proof

First, note that

$$\begin{aligned} \mathbf{E}(\zeta -\mathbf{E}(\zeta |\eta ))^2=\mathbf{E}(\xi -\mathbf{E}(\xi |\eta ))^2 =\sigma ^2-\mathbf{E}\, \mathbf{m}^2(\eta ) \,, \end{aligned}$$

where \(\mathbf{m}(\eta )=\mathbf{E}({\tilde{\xi }}|\eta )\) and \({\tilde{\xi }}=\xi -\nu\). It is clear that

$$\begin{aligned} \mathbf{m}(z)= \frac{\int _{-\infty }^{+\infty }\,x\mathbf{p}_{\eta |{\tilde{\xi }}}(z|x) \mathbf{p}_{{\tilde{\xi }}}(x)\mathrm {d}x}{\mathbf{p}_{\eta }(z)}= \frac{\int _{-\infty }^{+\infty } x \, \mathbf{1}_{\left\langle |z-x-\nu | \leqslant \theta \right\rangle } \mathbf{p}_{{{\tilde{\xi }}}}(x)\mathrm {d}x}{2\theta \mathbf{p}_{\eta }(z)} \,, \end{aligned}$$

where \(\mathbf{p}_{\eta |{\tilde{\xi }}}(z|x)\), \(\mathbf{p}_{{\tilde{\xi }}}\) and \(\mathbf{p}_{\eta }\) are the corresponding distribution densities. Since the random variables \(\zeta\) and \({\tilde{\xi }}\) are independent and \(\zeta\) is uniform on the interval \((-\theta ,\theta )\),

$$\begin{aligned} \mathbf{p}_{\eta }(z)= \int _{-\infty }^{+\infty } \mathbf{p}_{\zeta }(z-x)\mathbf{p}_{\xi }(x)\mathrm {d}x= \frac{1}{2\theta }\int _{-\infty }^{+\infty }\, \mathbf{1}_{\left\langle |z-x-\nu | \le \theta \right\rangle }\mathbf{p}_{{\tilde{\xi }}}(x) \mathrm {d}x \,. \end{aligned}$$

Here \(\mathbf{p}_{{\tilde{\xi }}}(x)=\sigma ^{-1}\phi (x/\sigma )\), where \(\phi\) is the (0, 1)-Gaussian density. Now, we have

$$\begin{aligned} \mathbf{m}(z)=\sigma \, \frac{\int _{-\infty }^{+\infty } y \mathbf{1}_{\varGamma }(y)\, \phi (y)\mathrm {d}y}{\int _{-\infty }^{+\infty } \mathbf{1}_{\varGamma }(y)\, \phi (y)\mathrm {d}y} \,, \end{aligned}$$

where

$$\begin{aligned} \varGamma = \left\{ y\,:\, \left| y-\frac{{\tilde{z}}}{\sigma }\right| \le L \right\} \quad \text{ and }\quad {\tilde{z}}=z-\nu \,. \end{aligned}$$

For \(|{\tilde{z}}|<(1-\epsilon )\theta\) with \(\epsilon =1/\sqrt{L}\), the indicator \(\mathbf{1}_{\varGamma }\rightarrow 1\) as \(L\rightarrow \infty\) and, therefore, \(\mathbf{m}(z)/\sigma \rightarrow 0\). Now let \(\rho _{L}=\mathbf{m}^2(\eta )/ \sigma ^2=(\mathbf{E}({\overline{\xi }}|\eta ))^{2}\), where \({\overline{\xi }}=\widetilde{\xi }/\sigma \sim {{\mathcal {N}}}(0,1)\). Then,

$$\begin{aligned} \mathbf{E}\rho _{L} = \mathbf{E}\rho _{L}\mathbf{1}_{\left\langle |\widetilde{\eta }| <(1-\epsilon )\theta \right\rangle }+ \mathbf{E}\rho _{L}\mathbf{1}_{\left\langle (1-\epsilon )\theta \leqslant |\widetilde{\eta }| \leqslant (1+\epsilon )\theta \right\rangle }+ \mathbf{E}\rho _{L}\mathbf{1}_{\left\langle |\widetilde{\eta }| >(1+\epsilon )\theta \right\rangle } \,, \end{aligned}$$

where \(\widetilde{\eta }=\eta -\nu\). By the Jensen inequality \(\rho _{L}^{2} \le \mathbf{E}\left( {\overline{\xi }}^{4}\, \mid \, \eta \right)\), and, therefore,

$$\begin{aligned} \mathbf{E}\rho _{L}^2 \le \mathbf{E}\mathbf{E}\left( {\overline{\xi }}^{4}\, \mid \, \eta \right) = \mathbf{E}\,{\overline{\xi }}^{4}=3\,, \end{aligned}$$

i.e. \((\rho _{L})_{L\ge 1}\) is uniformly integrable. Since \(\rho _{L}\mathbf{1}_{\left\langle |\widetilde{\eta }| <(1-\epsilon )\theta \right\rangle }\rightarrow 0\) almost surely as \(L\rightarrow \infty\), it follows that \(\mathbf{E}\rho _{L}\mathbf{1}_{\left\langle |\widetilde{\eta }| <(1-\epsilon )\theta \right\rangle }\rightarrow 0\) as \(L\rightarrow \infty\). Moreover, taking into account that \(\mathbf{p}_{\eta }(z)\le 1/(2\theta )\), we get

$$\begin{aligned} \mathbf{P}\left( (1-\epsilon )\theta \le |\widetilde{\eta }|\le (1+\epsilon )\theta \right) \leqslant 2\epsilon = \frac{2}{\sqrt{L}} \rightarrow 0 \quad \text{ as }\quad L \rightarrow \infty \,. \end{aligned}$$

Further, we have

$$\begin{aligned} \mathbf{P}\left( |\widetilde{\eta }|>(1+\epsilon )\theta \right) \le \mathbf{P}(|{\bar{\xi }}|>\sqrt{L}) \rightarrow 0 \quad \text{ as }\quad L \rightarrow \infty \,. \end{aligned}$$

Hence, Lemma 5. \(\square\)
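The limit in Lemma 5 can also be observed numerically. In the sketch below (illustrative values, our choice), the conditional mean is computed through the Gaussian integrals over \(\varGamma\), namely \(\mathbf{m}(z)=\sigma (\phi (c-L)-\phi (c+L))/(\varPhi (c+L)-\varPhi (c-L))\) with \(c={\tilde{z}}/\sigma\), and the normalized risk approaches 1 as \(L\) grows:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sigma, nu, n = 1.0, -3.0, 10**6

for L in [1.0, 10.0, 100.0]:
    theta = L * sigma
    zeta = rng.uniform(-theta, theta, n)   # zeta ~ U(-theta, theta)
    xi = rng.normal(nu, sigma, n)          # xi ~ N(nu, sigma^2)
    eta = zeta + xi
    c = (eta - nu) / sigma
    # m(eta) = E(xi - nu | eta): ratio of Gaussian integrals over Gamma
    m = sigma * (norm.pdf(c - L) - norm.pdf(c + L)) / (norm.cdf(c + L) - norm.cdf(c - L))
    # E(zeta - E(zeta|eta))^2 = E(xi - nu - m(eta))^2; the ratio tends to 1
    print(L, np.mean((xi - nu - m)**2) / sigma**2)
```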

Lemma 6

Let \(\zeta\) and \(\xi\) be two independent random variables, such that \(\mathbf{P}(\zeta =-\theta )=\mathbf{P}(\zeta =\theta )=1/2\) for some \(\theta >0\) and \(\xi\) is Gaussian with the parameters \((\nu ,\sigma ^{2})\), and let \(\eta =\zeta +\xi\). Then

$$\begin{aligned} \lim _{L\rightarrow 0}\, \sup _{\nu \in {{\mathbb {R}}}} \left| \frac{\mathbf{E}(\zeta -\mathbf{E}(\zeta |\eta ))^2}{\theta ^2} -1 \right| =0 \,, \end{aligned}$$
(75)

where \(L=\theta /\sigma\).

Proof

First, note that in this case

$$\begin{aligned} \mathbf{E}(\zeta |\eta )=\theta \rho _{L}(\widetilde{\eta }) \quad \text{ and }\quad \rho _{L}(x) = \frac{\phi \left( x-L \right) - \phi \left( x+L \right) }{\phi \left( x-L \right) + \phi \left( x+L \right) } \,, \end{aligned}$$

where \(\widetilde{\eta }=(\eta -\nu )/\sigma\) and \(\phi\) is the (0, 1)-Gaussian density. A direct calculation shows that \(\rho _{L}(x)=\tanh (Lx)\); hence, \(\vert \rho _{L}(x)\vert \le 1\) and

$$\begin{aligned} \lim _{L\rightarrow 0} \sup _{\vert x\vert \le M}\,\vert \rho _{L}(x)\vert =0 \quad \text{ for } \text{ any }\quad M>0\,. \end{aligned}$$

Therefore,

$$\begin{aligned} \left| \frac{\mathbf{E}(\zeta -\mathbf{E}(\zeta |\eta ))^2}{\theta ^2} - 1 \right| = \mathbf{E}\rho ^{2}_{L}(\widetilde{\eta }) \le \sup _{\vert x\vert \le M}\,\rho ^{2}_{L}(x) + \mathbf{P}(\vert \widetilde{\eta }\vert >M) \,. \end{aligned}$$

Taking into account here that \(\mathbf{E}\,\widetilde{\eta }^{2}\le 2L^{2}+2\) and letting first \(L\rightarrow 0\) and then \(M\rightarrow \infty\), we obtain (75). Hence, Lemma 6. \(\square\)
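Since \(\rho _{L}(x)=\tanh (Lx)\), the limit (75) is also easy to check by simulation. A minimal sketch with illustrative values (our choice, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, nu, n = 1.0, 5.0, 10**6

for L in [1.0, 0.3, 0.03]:
    theta = L * sigma
    zeta = theta * rng.choice([-1.0, 1.0], n)   # P(zeta = +/- theta) = 1/2
    eta = zeta + rng.normal(nu, sigma, n)
    x = (eta - nu) / sigma
    cond_mean = theta * np.tanh(L * x)          # E(zeta|eta) = theta * rho_L(x)
    # the normalized risk tends to 1 as L -> 0
    print(L, np.mean((zeta - cond_mean)**2) / theta**2)
```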

About this article


Cite this article

Pchelintsev, E., Pergamenshchikov, S. & Povzun, M. Efficient estimation methods for non-Gaussian regression models in continuous time. Ann Inst Stat Math 74, 113–142 (2022). https://doi.org/10.1007/s10463-021-00790-7

