
Robust adaptive efficient estimation for semi-Markov nonparametric regression models

Published in: Statistical Inference for Stochastic Processes

Abstract

We consider the nonparametric robust estimation problem for regression models in continuous time with semi-Markov noise. An adaptive model selection procedure is proposed. Under general moment conditions on the noise distribution, a sharp non-asymptotic oracle inequality for the robust risks is obtained, and the robust efficiency is shown. It turns out that for semi-Markov models the robust minimax convergence rate may be faster or slower than the classical one.

[Figures 1–4 appear in the published article.]


References

  • Barbu VS, Limnios N (2008) Semi-Markov chains and hidden semi-Markov models toward applications: their use in reliability and DNA analysis. Lecture Notes in Statistics, vol 191. Springer, New York

  • Barndorff-Nielsen OE, Shephard N (2001) Non-Gaussian Ornstein-Uhlenbeck-based models and some of their uses in financial mathematics. J R Stat Soc B 63:167–241


  • Bichteler K, Jacod J (1983) Calcul de Malliavin pour les diffusions avec sauts: existence d’une densité dans le cas unidimensionnel. Séminaire de probabilité, vol XVII, lecture notes in Math., 986, Springer, Berlin, pp 132–157

  • Cont R, Tankov P (2004) Financial modelling with jump processes. Chapman & Hall, London


  • Goldie CM (1991) Implicit renewal theory and tails of solutions of random equations. Ann Appl Probab 1(1):126–166


  • Höpfner R, Kutoyants YuA (2009) On LAN for parametrized continuous periodic signals in a time inhomogeneous diffusion. Stat Decis 27(4):309–326


  • Höpfner R, Kutoyants YuA (2010) Estimating discontinuous periodic signals in a time inhomogeneous diffusion. Stat Infer Stoch Process 13(3):193–230


  • Ibragimov IA, Khasminskii RZ (1981) Statistical estimation: asymptotic theory. Springer, Berlin


  • Jacod J, Shiryaev AN (2002) Limit theorems for stochastic processes, 2nd edn. Springer, Berlin


  • Konev VV, Pergamenshchikov SM (2003) Sequential estimation of the parameters in a trigonometric regression model with the Gaussian coloured noise. Stat Infer Stoch Process 6:215–235


  • Konev VV, Pergamenshchikov SM (2009) Nonparametric estimation in a semimartingale regression model. Part 1. Oracle inequalities. J Math Mech Tomsk State Univ 3:23–41


  • Konev VV, Pergamenshchikov SM (2009) Nonparametric estimation in a semimartingale regression model. Part 2. Robust asymptotic efficiency. J Math Mech Tomsk State Univ 4:31–45


  • Konev VV, Pergamenshchikov SM (2010) General model selection estimation of a periodic regression with a Gaussian noise. Ann Inst Stat Math 62:1083–1111


  • Konev VV, Pergamenshchikov SM (2012) Efficient robust nonparametric estimation in a semimartingale regression model. Ann Inst Henri Poincaré Probab Stat 48(4):1217–1244


  • Konev VV, Pergamenshchikov SM (2015) Robust model selection for a semimartingale continuous time regression from discrete data. Stoch Processes Their Appl 125:294–326


  • Limnios N, Oprisan G (2001) Semi-Markov processes and reliability. Birkhäuser, Boston


  • Liptser R Sh, Shiryaev AN (1986) Theory of martingales. Springer, Berlin


  • Marinelli C, Röckner M (2014) On maximal inequalities for purely discontinuous martingales in infinite dimensions. Séminaire de Probabilités, Lect. Notes Math., vol XLVI, pp 293–315

  • Mikosch T (2004) Non-life insurance mathematics. An introduction with stochastic processes. Springer, Berlin


  • Novikov AA (1975) On discontinuous martingales. Theory Probab Appl 20(1):11–26


  • Nussbaum M (1985) Spline smoothing in regression models and asymptotic efficiency in \({{ L}}_2\). Ann Statist 13:984–997


  • Pinsker MS (1981) Optimal filtration of square integrable signals in Gaussian white noise. Probl Transm Inf 17:120–133


  • Sveshnikov AG, Tikhonov AN (1978) The theory of functions of a complex variable. Translated from the Russian. Mir Publishers, Moscow


Acknowledgements

The last author is partially supported by RFBR Grant 16-01-00121, by the Ministry of Education and Science of the Russian Federation in the framework of the research Project No 2.3208.2017/4.6 and by the Russian Federal Professor program (Project No 1.472.2016/1.4, Ministry of Education and Science of the Russian Federation).

Corresponding author

Correspondence to Sergey Pergamenshchikov.

Additional information

This work was done under financial support of the RSF Grant Number 14-49-00079 (National Research University “MPEI” 14 Krasnokazarmennaya, 111250 Moscow, Russia) and by the project XterM - Feder, University of Rouen, France.

Appendix

A.1 Inequalities for purely discontinuous martingales

Now let us recall the Novikov inequalities, Novikov (1975), also referred to as the Bichteler–Jacod inequalities (see Bichteler and Jacod 1983; Marinelli and Röckner 2014).

Lemma A.1

Let \(\mu \) be a jump measure and \(\widetilde{\mu }\) its compensator on \({{\mathbb {R}}}_{*}={{\mathbb {R}}}\setminus \{0\}\). Then, for any \(T>0\), any predictable function \(h\) and any \(p\ge 2\),

$$\begin{aligned} \mathbf{E}_{Q}\sup _{0 \le t\le T} \left| \int _{[0,t]\times {{\mathbb {R}}}_{*}} h \;\mathrm {d}(\mu -\widetilde{\mu }) \right| ^{p} \le C^{*}_{p}\, \mathbf{E}_{Q}\,\check{J}_{p,T}(h)\,, \end{aligned}$$
(A.1)

where \(C^{*}_{p}\) is some positive constant and

$$\begin{aligned} \check{J}_{p,T}(h)= \left( \int _{[0,T]\times {{\mathbb {R}}}_{*}} h^2 \; \mathrm {d}\widetilde{\mu } \right) ^{p/2} + \int _{[0,T]\times {{\mathbb {R}}}_{*}} h^p \, \mathrm {d}\widetilde{\mu } \,. \end{aligned}$$
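
The bound (A.1) can be probed numerically. The sketch below is our own illustration, not part of the paper: it takes \(\mu \) to be the jump measure of a Poisson process with an assumed intensity, \(h\equiv 1\) and \(p=2\), so that the compensated integral is \(M_{t}=N_{t}-\lambda t\) and \(\check{J}_{2,T}(h)=2\lambda T\), and it checks by Monte Carlo that the ratio \(\mathbf{E}\sup _{t\le T}M^{2}_{t}/\check{J}_{2,T}(h)\) stays bounded (for \(p=2\), Doob's maximal inequality even gives the explicit constant).

```python
import random

# Monte Carlo sketch of the bound (A.1) for p = 2 (an illustration, not from
# the paper): mu is the jump measure of a Poisson process with intensity lam
# on [0, T] and h = 1, so M_t = N_t - lam*t and J_{2,T}(h) = lam*T + lam*T.
random.seed(42)
lam, T, n_paths = 2.0, 1.0, 4000

def sup_sq(lam, T):
    """sup_{0 <= t <= T} (N_t - lam*t)^2 along one simulated Poisson path."""
    jumps, t = [], random.expovariate(lam)
    while t <= T:
        jumps.append(t)
        t += random.expovariate(lam)
    best = 0.0  # value at t = 0 is 0
    for k, s in enumerate(jumps, start=1):
        # M is linear between jumps, so M^2 is maximal at interval endpoints
        best = max(best, (k - 1 - lam * s) ** 2, (k - lam * s) ** 2)
    return max(best, (len(jumps) - lam * T) ** 2)

mean_sup_sq = sum(sup_sq(lam, T) for _ in range(n_paths)) / n_paths
J = lam * T + lam * T   # (int h^2 d mu~)^{p/2} + int h^p d mu~  for p = 2
ratio = mean_sup_sq / J  # Doob's inequality bounds this by 2 for p = 2
```

Here the constant \(C^{*}_{2}\) is not computed; the simulation only illustrates that the left-hand side of (A.1) is controlled by \(\check{J}_{p,T}(h)\) up to a fixed constant.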

A.2 Property of the penalty term

Lemma A.2

For any \(n\ge \,1\) and \(\lambda \in \Lambda \),

$$\begin{aligned} P^{0}_n(\lambda ) \le \mathbf{E}_{Q} \text{ Err }_n(\lambda )+\frac{\mathbf{C}_{1,Q,n}}{n}, \end{aligned}$$

where the coefficient \(P^{0}_n(\lambda )\) was defined in (9.2).

Proof

By the definition of \(\text{ Err }_n(\lambda )\) one has

$$\begin{aligned} \mathbf{E}_{Q}\text{ Err }_n(\lambda )&= \mathbf{E}_{Q}\sum _{j=1}^{n} \left( (\lambda (j)-1) \theta _{j}+ \frac{\lambda (j)}{\sqrt{n}}\xi _{j,n} \right) ^2\\&= \sum _{j=1}^{n} (\lambda (j)-1)^{2} \theta ^{2}_{j}+ \frac{1}{n} \sum _{j=1}^{n} \lambda ^{2}(j)\mathbf{E}_{Q} \xi ^{2}_{j,n} \,. \end{aligned}$$

So, setting \(\lambda ^{2}=(\lambda ^{2}(j))_{1\le j\le n}\), we obtain that

$$\begin{aligned} \mathbf{E}_{Q}\text{ Err }_n(\lambda )\ge \frac{1}{n} \sum _{j=1}^{n} \lambda ^{2}(j)\mathbf{E}_{Q} \xi ^{2}_{j,n} = P^{0}_n(\lambda )+\mathbf{B}_{1,Q,n}(\lambda ^{2})\,. \end{aligned}$$

Now Proposition 7.1 implies the desired result, i.e.

$$\begin{aligned} \mathbf{E}_{Q}\, \text{ Err }_n(\lambda ) \ge \, P^{0}_n(\lambda )-\frac{\mathbf{C}_{1,Q,n}}{n}\,. \end{aligned}$$

Hence Lemma A.2 follows. \(\square \)

A.3 Properties of the Fourier transform

Theorem A.3

Cauchy (1825) Let U be a simply connected open subset of \({{\mathbb {C}}},\) let \(g : U \rightarrow {{\mathbb {C}}}\) be a holomorphic function, and let \(\gamma \) be a rectifiable path in U whose start point is equal to its end point. Then

$$\begin{aligned} \oint _{\gamma }\,g(z)\mathrm {d}z=0\,. \end{aligned}$$

Proposition A.4

Let \(g : {{\mathbb {C}}}\rightarrow {{\mathbb {C}}}\) be a holomorphic function in \(U=\left\{ z\in {{\mathbb {C}}}\,:\,-\beta _{1}< \text{ Im }\,z< \beta _{2}\right\} \) for some \(0<\beta _{1}\le \infty \) and \(0<\beta _{2}\le \infty \). Assume that, for any \(-\beta _{1}< t\le 0,\)

$$\begin{aligned} \int _{{{\mathbb {R}}}}\,\vert g(\theta +i t)\vert \,\mathrm {d}\theta <\infty \quad \text{ and }\quad \lim _{\vert \theta \vert \rightarrow \infty }\,g(\theta +i t)\,=0\,. \end{aligned}$$
(A.2)

Then, for any \(x\in {{\mathbb {R}}}\) and for any \(0<\beta <\beta _{1},\)

$$\begin{aligned} \int _{{{\mathbb {R}}}}\,e^{i\theta x} g(\theta )\,\mathrm {d}\theta =e^{-\beta x} \int _{{{\mathbb {R}}}}\,e^{i\theta x} g(\theta -i\beta )\,\mathrm {d}\theta . \end{aligned}$$
(A.3)

Proof

First note that the conditions (A.2) imply that

$$\begin{aligned} \int _{{{\mathbb {R}}}}\,e^{i\theta x} g(\theta )\,\mathrm {d}\theta = \lim _{N\rightarrow \infty }\, \int ^{N}_{-N}\,e^{i\theta x} g(\theta )\,\mathrm {d}\theta \,. \end{aligned}$$

We fix now \(0<\beta <\beta _{1}\) and we set for any \(N\ge 1\)

$$\begin{aligned} \gamma&=\{z\in {{\mathbb {C}}}:-N\le \text{ Re }z\le N\,,\,\text{ Im }z=0\} \cup \{z\in {{\mathbb {C}}}:-\beta \le \text{ Im }z\le 0\,,\,\text{ Re }z=N\}\\&\cup \{z\in {{\mathbb {C}}}:-N\le \text{ Re }z\le N\,,\,\text{ Im }z=-\beta \} \cup \{z\in {{\mathbb {C}}}:-\beta \le \text{ Im }z\le 0\,,\,\text{ Re }z=-N\}\,. \end{aligned}$$

Now, in view of the Cauchy theorem, we obtain that for any \(N\ge 1\)

$$\begin{aligned} \oint _{\gamma }\,e^{iz x}\,g(z)\mathrm {d}z&= \int ^{N}_{-N}\,e^{i\theta x} g(\theta )\,\mathrm {d}\theta + \int ^{-\beta }_{0}\,e^{i(N+i t)x} g(N+i t)\,\mathrm {d}t \nonumber \\&\quad + \int ^{-N}_{N} e^{i(-i\beta +\theta ) x} g(-i\beta +\theta )\mathrm {d}\theta \nonumber \\&\quad + \int ^{0}_{-\beta }e^{i(-N+i t)x} g(-N+i t)\mathrm {d}t =0\,. \end{aligned}$$
(A.4)

The conditions (A.2) provide that

$$\begin{aligned} \lim _{N\rightarrow \infty }\,\int ^{-\beta }_{0}\,e^{i(N+i t)x} g(N+i t)\,\mathrm {d}t= \lim _{N\rightarrow \infty }\,\int ^{0}_{-\beta }\,e^{i(-N+i t)x} g(-N+i t)\,\mathrm {d}t=0\,. \end{aligned}$$

Therefore, letting \(N\rightarrow \infty \) in (A.4) we obtain (A.3). Hence we get the desired result. \(\square \)

The following technical lemma is also needed in this paper.

Lemma A.5

Let \(-\infty \le a<b\le +\infty \) be fixed and let \(g : [a,b]\rightarrow {{\mathbb {R}}}\) be a function from \(\mathbf{L}_{1}[a,b]\). Then

$$\begin{aligned} \lim _{N\rightarrow \infty }\,\int ^{b}_{a}\,g(x)\,\sin (Nx)\mathrm {d}x=0 \quad \text{ and }\quad \lim _{N\rightarrow \infty }\,\int ^{b}_{a}\,g(x)\,\cos (Nx)\mathrm {d}x=0 \,. \end{aligned}$$
(A.5)

Proof

Let first \(-\infty< a<b< +\infty \). Assume that g is continuously differentiable, i.e. \(g\in \mathbf{C}^{1}[a,b]\). Then integrating by parts gives us

$$\begin{aligned} \int ^{b}_{a}\,g(x)\,\sin (Nx)\,\mathrm {d}x =\frac{1}{N}\left( g(b)\,\sin (Nb)\ - g(a)\,\sin (Na)\ - \int ^{b}_{a}\,g^{'}(x)\,\cos (Nx)\,\mathrm {d}x \right) \,. \end{aligned}$$

So, from this we obtain that

$$\begin{aligned} \left| \int ^{b}_{a}\,g(x)\,\sin (Nx)\,\mathrm {d}x \right| \le \frac{\vert g(b)\vert +\vert g(a)\vert +(b-a)\max _{a\le x\le b}\vert g^{'}(x)\vert }{N} \,. \end{aligned}$$

This implies the first limit in (A.5) in this case. The second one is obtained similarly. Let now \(g\) be an arbitrary absolutely integrable function on \([a,b]\), i.e. \(g\in \mathbf{L}_{1}[a,b]\). In this case there exists a sequence \(g_{n}\in \mathbf{C}^{1}[a,b]\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int ^{b}_{a}\vert g(x)-g_{n}(x)\vert \mathrm {d}x=0\,. \end{aligned}$$

Therefore, taking into account that for any \(n\ge 1\)

$$\begin{aligned} \lim _{N\rightarrow \infty }\,\int ^{b}_{a}\,g_{n}(x)\,\sin (Nx)\mathrm {d}x=0\,, \end{aligned}$$

we obtain that

$$\begin{aligned} \limsup _{N\rightarrow \infty }\,\left| \int ^{b}_{a}\,g(x)\,\sin (Nx)\mathrm {d}x\right| \le \int ^{b}_{a}\vert g(x)-g_{n}(x)\vert \mathrm {d}x\,. \end{aligned}$$

So, letting \(n\rightarrow \infty \) in this inequality we obtain the first limit in (A.5) and, similarly, the second one. Let now \(a=-\infty \) and \(b=+\infty \). In this case we obtain that, for any finite \(-\infty<a<b<+\infty \),

$$\begin{aligned} \left| \int ^{+\infty }_{-\infty }\,g(x)\,\sin (Nx)\mathrm {d}x \right| \le&\left| \int ^{b}_{a}\,g(x)\,\sin (Nx)\mathrm {d}x \right| + \int ^{+\infty }_{b}\,\vert g(x)\,\vert \mathrm {d}x\\&+ \int ^{a}_{-\infty }\,\vert g(x)\,\vert \mathrm {d}x \,. \end{aligned}$$

Using here the previous results we obtain that for any \(-\infty<a<b<+\infty \)

$$\begin{aligned} \limsup _{N\rightarrow \infty } \left| \int ^{+\infty }_{-\infty }\,g(x)\,\sin (Nx)\mathrm {d}x \right| \le \int ^{+\infty }_{b}\,\vert g(x)\,\vert \mathrm {d}x + \int ^{a}_{-\infty }\,\vert g(x)\,\vert \mathrm {d}x \,. \end{aligned}$$

Letting here \(b\rightarrow +\infty \) and \(a\rightarrow -\infty \) we obtain the first limit in (A.5). Similarly, we can obtain the second one. \(\square \)
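
The integration-by-parts bound used in the smooth case can be checked directly. A small numerical sketch, assuming the illustrative choice \(g(x)=x^{2}\) on \([0,1]\), for which the estimate above gives \(\vert \int ^{1}_{0}g(x)\sin (Nx)\mathrm {d}x\vert \le (\vert g(1)\vert +\vert g(0)\vert +\max \vert g^{\prime }\vert )/N=3/N\):

```python
import math

# Sketch of Lemma A.5 for the assumed choice g(x) = x^2 on [0, 1]; the
# integration-by-parts bound in the proof gives
# |int_0^1 g(x) sin(Nx) dx| <= (|g(1)| + |g(0)| + max|g'|)/N = 3/N.
def osc_integral(N, n=100000):
    """Trapezoidal approximation of int_0^1 x^2 sin(N x) dx."""
    h = 1.0 / n
    f = lambda u: u * u * math.sin(N * u)
    s = 0.5 * (f(0.0) + f(1.0)) + sum(f(k * h) for k in range(1, n))
    return s * h

vals = {N: osc_integral(N) for N in (10, 100, 1000)}
```

The computed values decay at the rate \(1/N\), in line with the lemma.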

Let us now study the inverse Fourier transform. To this end, we need the following local Dini condition.

(\(\mathbf{D}\)) Assume that, for some fixed \(x\in {{\mathbb {R}}},\) there exist the finite limits

$$\begin{aligned} g(x-)=\lim _{z\rightarrow x-}g(z) \quad \text{ and }\quad g(x+)=\lim _{z\rightarrow x+}g(z) \end{aligned}$$

and there exists \(\delta =\delta (x)>0\) for which

$$\begin{aligned} \int ^{\delta }_{0}\, \frac{ \vert \widetilde{g}(x,t) - \widetilde{g}_{0}(x) \vert }{t} \mathrm {d}t\,<\,\infty \,, \end{aligned}$$

where \(\widetilde{g}(x,t)=(g(x+t)+g(x-t))/2\) and \(\widetilde{g}_{0}(x)=(g(x+)+g(x-))/2\).

Remark 10.1

Note that the condition \((\mathbf{H}_{1})\) is the “uniform” version of the condition \((\mathbf{D})\).

Proposition A.6

Let \(g : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) be a function from \(\mathbf{L}_{1}({{\mathbb {R}}})\). If, for some \(x\in {{\mathbb {R}}},\) this function satisfies Condition \(\mathbf{D}\), then

$$\begin{aligned} \widetilde{g}_{0}(x)= \frac{g(x+)+g(x-)}{2} = \frac{1}{2\pi } \int _{{{\mathbb {R}}}}\,e^{-i\theta x} \widehat{g}(\theta )\,\mathrm {d}\theta \,, \end{aligned}$$
(A.6)

where \(\widehat{g}(\theta )=\int _{{{\mathbb {R}}}}\,e^{i\theta t}\,g(t)\,\mathrm {d}t\).

Proof

First, for any fixed \(N>0\) we set

$$\begin{aligned} J_{N}(x)=\frac{1}{2\pi }\, \int ^{N}_{-N}\,e^{-i\theta x} \widehat{g}(\theta )\,\mathrm {d}\theta =\frac{1}{\pi }\, \int _{{{\mathbb {R}}}}\,g(z)\, \int ^{N}_{0}\,\cos (\theta (z- x))\,\mathrm {d}\theta \mathrm {d}z\,. \end{aligned}$$

Computing the inner integral, we can rewrite \(J_{N}(x)\) as

$$\begin{aligned} J_{N}(x)= \frac{1}{\pi }\, \int _{{{\mathbb {R}}}}\,g(z)\, \frac{\sin (N(z- x))}{z- x} \, \mathrm {d}z = \frac{2}{\pi }\, \int ^{\infty }_{0}\, \widetilde{g}(x,t) \, \frac{\sin (N t)}{t} \, \mathrm {d}t\,. \end{aligned}$$

Taking into account that for any \(N>0\) the integral

$$\begin{aligned} \frac{2}{\pi }\, \int ^{\infty }_{0}\, \frac{\sin (N t)}{t} \, \mathrm {d}t =1 \end{aligned}$$
(A.7)

we obtain that

$$\begin{aligned} J_{N}(x)-\widetilde{g}_{0}(x) = \frac{2}{\pi }\, \int ^{\infty }_{0}\, \frac{(\widetilde{g}(x,t)-\widetilde{g}_{0}(x))\sin (N t)}{t} \, \mathrm {d}t \,. \end{aligned}$$

Now we represent the last integral as

$$\begin{aligned} \int ^{\infty }_{0}\, \frac{(\widetilde{g}(x,t)-\widetilde{g}_{0}(x))\sin (N t)}{t} \, \mathrm {d}t =I_{1,N} + I_{2,N} - \widetilde{g}_{0}(x) I_{3,N}\,, \end{aligned}$$

where \(I_{1,N}=\int ^{\delta }_{0}\,(\widetilde{g}(x,t)-\widetilde{g}_{0}(x))\,\sin (N t)\,t^{-1}\,\mathrm {d}t\),

$$\begin{aligned} I_{2,N}=\int ^{\infty }_{\delta }\, t^{-1}\,\widetilde{g}(x,t)\sin (N t)\mathrm {d}t \quad \text{ and } \quad I_{3,N}=\int ^{\infty }_{\delta }\,t^{-1}\,\sin (N t)\,\mathrm {d}t \,. \end{aligned}$$

Condition \((\mathbf{D})\) and Lemma A.5 imply that \(I_{1,N}\rightarrow 0\) as \(N\rightarrow \infty \). Now note that, since \(g\in \mathbf{L}_{1}({{\mathbb {R}}}),\) the function \(t^{-1}\,\widetilde{g}(x,t)\) is absolutely integrable on \([\delta ,\infty )\). Therefore, in view of Lemma A.5, \(I_{2,N}\rightarrow 0\) as \(N\rightarrow \infty \). As to the last integral, a change of variables gives

$$\begin{aligned} I_{3,N}=\int ^{\infty }_{\delta N}\, \frac{\sin t}{t}\,\mathrm {d}t \rightarrow 0 \quad \text{ as }\quad N\rightarrow \infty \,. \end{aligned}$$

Hence we have the desired result. \(\square \)

A.4 Properties of analytic functions

We need the following identity theorem (see, for example, Sveshnikov and Tikhonov 1978, Corollary 1, p. 78).

Theorem A.7

Let \(f(z)\not \equiv 0\) be analytic in a domain \(D\). Then in any closed bounded subdomain \(D'\subset D\) it has only a finite number of zeros.

Lemma A.8

Assume that the conditions \((\mathbf{H}_{3})\)–\((\mathbf{H}_{4})\) hold. Then there exists some \(0<\beta _{*}<\beta \) such that the function \(\check{G}\) defined in (5.9) is holomorphic in

$$\begin{aligned} U= \{\theta \in {{\mathbb {C}}}\,:\, \text{ Im }(\theta )\ge -\beta _{*}\} \end{aligned}$$

and for any \(x\ge -\beta _{*}\)

$$\begin{aligned} \sup _{\theta \in {{\mathbb {R}}}}\, \vert \check{G}(\theta +i x) \vert <\infty \,. \end{aligned}$$
(A.8)

Proof

Firstly, we recall that, thanks to the condition \((\mathbf{H}_{3})\), the function \(\widehat{g}(\theta )\) is holomorphic in \(D=\{\theta \in {{\mathbb {C}}}\,:\,\text{ Im }(\theta )>-\beta \}\). Therefore, the function \(\check{G}(\theta )\) is holomorphic at any \(\theta \) from \(D\cap \{\theta \in {{\mathbb {C}}}\,:\,\widehat{g}(\theta )\ne 1\}\). In this case its derivative can be calculated as

$$\begin{aligned} \check{G}^{\prime }(\theta )=\frac{\widehat{g}^{\prime }(\theta )}{(1-\widehat{g}(\theta ))^{2}} -\frac{1}{i{\check{\tau }}\theta ^{2}} \,, \end{aligned}$$

where \(\widehat{g}^{\prime }(\theta )\) is the derivative of the function \(\widehat{g}(\theta )\). Moreover, using the Taylor expansion of \(e^{z}\), the function \(\widehat{g}(\theta )\) and its derivative \(\widehat{g}^{\prime }(\theta )\) can be represented as

$$\begin{aligned} \widehat{g}(\theta )&= \mathbf{E}_{Q}e^{i \tau \theta }\, = 1+i {\check{\tau }} \theta -\frac{{\check{\tau }}_{1} \theta ^{2}}{2} +\theta ^3 A_{1}(\theta )\,,\nonumber \\ \widehat{g}^{\prime }(\theta )&= i\mathbf{E}_{Q}\tau \,e^{i \tau \theta }\, = i{\check{\tau }} -{\check{\tau }}_{1} \theta + \theta ^2 A_{2}(\theta )\,, \end{aligned}$$
(A.9)

where \({\check{\tau }}_{1}=\mathbf{E}_{Q}\tau ^{2}_{1}\) and the terms \(A_{1}(\theta )\) and \(A_{2}(\theta )\) are such that for any \(L>0\)

$$\begin{aligned} \sup _{\vert \theta \vert \le L}\vert A_{1}(\theta )\vert<\infty \quad \text{ and }\quad \sup _{\vert \theta \vert \le L}\vert A_{2}(\theta )\vert <\infty \,. \end{aligned}$$

Therefore, there exists \(0<\delta <1\) for which

$$\begin{aligned} \inf _{\vert \theta \vert \le \delta }\, \frac{\vert 1 - \widehat{g}(\theta ) \vert }{\vert \theta \vert } >0\,. \end{aligned}$$
(A.10)

So, the expansion (A.9) implies that the function \(\check{G}(\theta )\) is holomorphic at the point \(\theta =0\) and, for such \(\delta >0\),

$$\begin{aligned} \sup _{\vert \theta \vert \le \delta }\, \vert \check{G}(\theta ) \vert<\infty \quad \text{ and }\quad \sup _{\vert \theta \vert \le \delta }\, \vert \check{G}^{\prime }(\theta ) \vert <\infty \,. \end{aligned}$$

Moreover, note that, in view of Lemma A.5, for any \(0<r<\beta \)

$$\begin{aligned} \lim _{\vert \text{ Re }(\theta )\vert \rightarrow \infty ,\vert \text{ Im }(\theta )\vert \le r}\,\widehat{g}(\theta )\, =0\,. \end{aligned}$$
(A.11)

Taking into account that, thanks to (5.6), \(\vert \widehat{g}(\theta ) \vert < 1\) for any \(\theta \in {{\mathbb {R}}}\setminus \{ 0\}\), we obtain

$$\begin{aligned} \mathbf{g}_{\delta }= \sup _{\vert \theta \vert> \delta } \vert \widehat{g}(\theta ) \vert < 1 \quad \text{ for } \text{ any }\quad \delta >0 \,. \end{aligned}$$
(A.12)

Therefore, the function \(\check{G}(\theta )\) is holomorphic on the line \(\{z\in {{\mathbb {C}}}\,:\,\text{ Im }(z)=0\}\) and

$$\begin{aligned} \sup _{\theta \in {{\mathbb {R}}}}\, \vert \check{G}(\theta ) \vert <\infty \,. \end{aligned}$$
(A.13)

In the general case, note that, due to Lemma 5.1, the function \(1-\widehat{g}(\theta )\) has no zeros on the line \(\left\{ z\in {{\mathbb {C}}}\,:\,\text{ Im }(z)=-\beta _{1} \right\} \) for some \(0<\beta _{1}<\beta \). Moreover, in view of Theorem A.7, for any \(N>1\) the function \(1-\widehat{g}(\theta )\) can have only finitely many zeros in

$$\begin{aligned} \left\{ z\in {{\mathbb {C}}}\,:\,-\beta _{1}\le \text{ Im }(z)\le 0\,,\,\vert \text{ Re }(z)\vert \le N \right\} \subset D\,. \end{aligned}$$

The limiting equality (A.11) implies that there exists \(N>0\) such that \(1-\widehat{g}(\theta )\ne 0\) on the set

$$\begin{aligned} \left\{ z\in {{\mathbb {C}}}\,:\,-\beta _{1}\le \text{ Im }(z)\le 0\,,\,\vert \text{ Re }(z)\vert > N \right\} \,. \end{aligned}$$

So, the function \(1-\widehat{g}(\theta )\) can have only finitely many zeros in

$$\begin{aligned} \left\{ z\in {{\mathbb {C}}}\,:\,-\beta _{1}\le \text{ Im }(z)\le 0 \right\} \,. \end{aligned}$$

Moreover, taking into account the Taylor expansion (A.9), one can check that \(\theta =0\) is an isolated zero of \(1-\widehat{g}(\theta )\). Therefore, there exists some \(\beta _{*}>0\) for which the function \(1-\widehat{g}(\theta )\) has no zeros in \(\left\{ z\in {{\mathbb {C}}}\,:\,-\beta _{*}\le \text{ Im }(z)<0 \right\} \). Note also that \(1-\widehat{g}(\theta )\) has no zeros in \(\left\{ z\in {{\mathbb {C}}}\,:\,\text{ Im }(z)>0 \right\} \). Since we have already shown that \(\check{G}(\theta )\) is holomorphic on the line \(\{z\in {{\mathbb {C}}}\,:\,\text{ Im }(z)=0\}\), we obtain that it is holomorphic for any \(\theta \in {{\mathbb {C}}}\) with \(\text{ Im }(\theta )\ge -\beta _{*}\), and in view of (A.13) we obtain the upper bound (A.8). Hence Lemma A.8 follows. \(\square \)
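
The expansion (A.9) and the bounds (A.10) and (A.12) can be illustrated for a concrete interarrival law. The sketch below assumes \(\tau \sim \text{Exp}(1)\) (an illustrative choice, not fixed by the paper), so that \(\widehat{g}(\theta )=1/(1-i\theta )\), \({\check{\tau }}=1\) and \({\check{\tau }}_{1}=2\).

```python
import math

# Illustration of (A.9), (A.10) and (A.12) for an assumed exponential
# interarrival time tau ~ Exp(1): ghat(theta) = E exp(i*tau*theta)
# = 1/(1 - i*theta), tau_bar = E tau = 1 and tau_bar_1 = E tau^2 = 2.
ghat = lambda th: 1.0 / (1.0 - 1j * th)
tau_bar, tau_bar_1 = 1.0, 2.0

# (A.9): ghat(theta) = 1 + i*tau_bar*theta - tau_bar_1*theta^2/2 + O(theta^3)
th = 1e-3
taylor = 1.0 + 1j * tau_bar * th - tau_bar_1 * th ** 2 / 2.0
expansion_err = abs(ghat(th) - taylor)       # should be of order theta^3

# (A.10): |1 - ghat(theta)|/|theta| is bounded away from 0 for |theta| <= 1
low = min(abs(1.0 - ghat(k / 100.0)) / (k / 100.0) for k in range(1, 101))

# (A.12): g_delta = sup_{|theta| > delta} |ghat(theta)| < 1; here
# |ghat(theta)| = 1/sqrt(1 + theta^2) is even and decreasing in |theta|,
# so scanning theta >= delta = 1 suffices
g_delta = max(abs(ghat(1.0 + 0.1 * k)) for k in range(1000))
```

For this law \(\mathbf{g}_{\delta }=1/\sqrt{1+\delta ^{2}}\), so with \(\delta =1\) the supremum equals \(1/\sqrt{2}\), strictly below 1 as (A.12) asserts.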

A.5 Proof of (5.8)

First note that, for any \(\vert \theta \vert > \delta \), the inequality (A.12) implies

$$\begin{aligned} \vert G_{\epsilon }(\theta ) \vert \le \, \frac{1}{\vert 1-(1-\epsilon )\widehat{g}(\theta ) \vert } + \frac{\vert 1-i{\check{\tau }}\theta \vert }{\vert \epsilon -i{\check{\tau }}\theta \vert } \, \le \frac{1}{1-\mathbf{g}_{\delta }} +\frac{1}{{\check{\tau }} \delta } \,, \end{aligned}$$

i.e.

$$\begin{aligned} \sup _{0<\epsilon<1}\, \sup _{\vert \theta \vert > \delta } \vert G_{\epsilon }(\theta ) \vert <\infty \,. \end{aligned}$$

Moreover, from (A.9) we can obtain that

$$\begin{aligned} G_{\epsilon }(\theta ) =\frac{-\epsilon _{1}+\epsilon _{1}(1-i{\check{\tau }}\theta ) \widehat{g}(\theta )}{(1-\epsilon _{1}\widehat{g}(\theta ))(\epsilon -i{\check{\tau }}\theta )} =\frac{\epsilon _{1}{\check{\tau }}^2 \theta ^2+\epsilon _{1}\theta ^2 (1-i{\check{\tau }}\theta )A_{0}(\theta ) }{(1-\epsilon _{1}\widehat{g}(\theta ))(\epsilon -i{\check{\tau }}\theta )} \,, \end{aligned}$$

where \(\epsilon _{1}=1-\epsilon \) and \(A_{0}(\theta )=-{\check{\tau }}_{1}/2+\theta A_{1}(\theta )\). Note that \(G_{\epsilon }(0)=0\) for any \(\epsilon >0\). Let now \(0<\vert \theta \vert \le \delta \) for some \(\delta \) for which the inequality (A.10) holds. In this case for some \(0<C_{*}<\infty \)

$$\begin{aligned} \vert G_{\epsilon }(\theta ) \vert = \frac{\vert \epsilon _{1}{\check{\tau }}^2 \theta ^2+\epsilon _{1} \theta ^2(1-i{\check{\tau }}\theta ) A_{0}(\theta )\vert }{\vert 1-\epsilon _{1} \widehat{g}(\theta )\vert \sqrt{\epsilon ^{2}+{\check{\tau }}^{2}\theta ^{2}}} \le \, \frac{C_{*}\vert \theta \vert }{\vert 1-\epsilon _{1} \widehat{g}(\theta )\vert }\,. \end{aligned}$$

Moreover, taking into account here the lower bound (A.10) and that

$$\begin{aligned} \vert 1-\epsilon _{1} \widehat{g}(\theta )\vert =\sqrt{(1-\epsilon _{1} \text{ Re }(\widehat{g}(\theta )) )^2+\epsilon _{1}^2 ( \text{ Im }(\widehat{g}(\theta )))^2} \ge \epsilon _{1} \left| \text{ Im }(\widehat{g}(\theta )) \right| \,, \end{aligned}$$

we obtain that

$$\begin{aligned} \sup _{0<\epsilon<1}\, \sup _{\vert \theta \vert \le \delta } \vert G_{\epsilon }(\theta ) \vert < \infty \,. \end{aligned}$$

Hence the inequality (5.8). \(\square \)
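
The uniform bound (5.8) can also be explored numerically, using the representation of \(G_{\epsilon }(\theta )\) displayed above. The sketch below assumes a Gamma\((2,2)\) interarrival time (an illustrative choice with mean \({\check{\tau }}=1\)), so \(\widehat{g}(\theta )=(1-i\theta /2)^{-2}\); the exponential case would be uninformative here, since for it \(G_{\epsilon }\) vanishes identically.

```python
# Numerical sketch of the uniform bound (5.8), with G_eps written as the
# difference whose two terms appear in the bound at the start of this
# subsection. Assumed model: tau ~ Gamma(2, 2), so tau_bar = 1 and
# ghat(theta) = (1 - i*theta/2)^(-2).
tau_bar = 1.0
ghat = lambda th: (1.0 - 0.5j * th) ** (-2)

def G(eps, th):
    return (1.0 / (1.0 - (1.0 - eps) * ghat(th))
            - (1.0 - 1j * tau_bar * th) / (eps - 1j * tau_bar * th))

# log-spaced grid of |theta| in [1e-3, 1e2], both signs, eps spanning (0, 1)
thetas = [s * 10.0 ** (k / 10.0 - 3.0) for k in range(51) for s in (-1.0, 1.0)]
epsilons = [1e-4, 1e-3, 1e-2, 0.1, 0.5, 0.9, 0.999]
sup_G = max(abs(G(e, t)) for e in epsilons for t in thetas)
```

The grid supremum stays of order one uniformly in \(\epsilon \), in line with (5.8): near \(\theta =0\) the numerator is of order \(\theta ^{2}\), which compensates the two small factors in the denominator.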

A.6 Lower bound for the robust risks

In this section we state the lower bound obtained in Konev and Pergamenshchikov (2009b, Theorem 6.1) for the robust risks (1.3) in the “signal + white noise” model, i.e. the model (1.1) in which the noise process \((\xi _{t})_{0\le t\le n}\) is defined as \(\xi _{t}=\varsigma ^{*}w_{t}\) for \(0\le t\le n\). To study the efficiency properties we need a lower bound for the mean square accuracy over the set \(\Pi _{n}\) of all possible estimators \(\widehat{S}_{n}\).

Theorem A.9

Under Conditions (2.12)

$$\begin{aligned} \liminf _{n\rightarrow \infty }\, \left( \frac{n}{\varsigma ^{*}}\right) ^{2k/(2k+1)} \, \inf _{\widehat{S}_{n}\in \Pi _{n}}\, \sup _{S\in W^{k}_{\mathbf{r}}} \,\mathbf{E}_{Q_{0},S}\Vert \widehat{S}_{n}-S\Vert ^{2} \ge \mathbf{r}^{*}_{k}\,, \end{aligned}$$
(A.14)

where \(Q_{0}\) is the distribution of the noise process \((\varsigma ^{*}w_{t})_{0\le t\le n}\) and \(\mathbf{r}^{*}_{k}\) is given in (4.13).


About this article


Cite this article

Barbu, V.S., Beltaief, S. & Pergamenshchikov, S. Robust adaptive efficient estimation for semi-Markov nonparametric regression models. Stat Inference Stoch Process 22, 187–231 (2019). https://doi.org/10.1007/s11203-018-9186-8
