
Nonparametric estimation in a mixed-effect Ornstein–Uhlenbeck model

Published in: Metrika

Abstract

Two adaptive nonparametric procedures are proposed to estimate the density of the random effects in a mixed-effect Ornstein–Uhlenbeck model. First, a kernel estimator is introduced, with a bandwidth selected by the method recently developed by Goldenshluger and Lepski (Ann Stat 39:1608–1632, 2011). Then, we adapt an estimator from Comte et al. (Stoch Process Appl 7:2522–2551, 2013) to the framework of a small observation time interval. More precisely, we propose an estimator that uses deconvolution tools and depends on two tuning parameters chosen in a data-driven way. The selection of these two parameters is achieved through a two-dimensional penalized criterion. For both adaptive estimators, risk bounds are provided in terms of the integrated \(\mathbb {L}^2\)-error. The estimators are evaluated on simulations and show good results. Finally, these nonparametric estimators are applied to neuronal data and compared with previous parametric estimations.
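As a quick illustration of the statistical setting (a sketch, not the paper's code), the mixed-effects Ornstein–Uhlenbeck observations reduce, as in the proofs of Appendix 1, to noisy versions \(Z_{j,T}=\phi _j+\sigma W_j(T)/T\) of the random effects. The following simulation of this reduced scheme uses illustrative parameter values and a Gaussian choice for the unknown density f:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values (not from the paper): N trajectories observed
# on [0, T]; the random effects phi_j are drawn from a Gaussian f.
N, T, sigma = 500, 10.0, 0.3
phi = rng.normal(loc=1.0, scale=0.5, size=N)

# Sufficient statistics: Z_{j,T} = phi_j + sigma * W_j(T) / T,
# i.e. phi_j observed through Gaussian noise of variance sigma^2 / T.
Z = phi + sigma * rng.normal(size=N) / np.sqrt(T)

# The noise is small for large T: estimating f from the Z_{j,T} is
# then close to a direct density estimation problem; for small T it
# becomes a deconvolution problem, as studied in Sect. 4.
noise_sd = sigma / np.sqrt(T)
```

Both adaptive procedures of the paper take such a sample \((Z_{j,T})_j\) as input.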


Figs. 1–4 (images omitted)


Notes

  1. We stress that this poor estimation is not caused by the Gaussianity of the noise. Indeed, even though Fan (1991) proves the rates to be logarithmic in that case, the rates improve and can become polynomial when the density under estimation is of the same type as the noise (see Lacour 2006; Comte et al. 2006).

References

  • Birgé L, Massart P (1997) From model selection to adaptive estimation. Springer, New York

  • Birgé L, Massart P (1998) Minimum contrast estimators on sieves: exponential bounds and rates of convergence. Bernoulli 4:329–375

  • Bissantz N, Dümbgen L, Holzmann H, Munk A (2007) Nonparametric confidence bands in deconvolution density estimation. J R Stat Soc Ser B (Stat Methodol) 69:483–506

  • Briane M, Pagès G (2006) Théorie de l'intégration. Vuibert, Paris

  • Butucea C, Tsybakov A (2007) Sharp optimality in density deconvolution with dominating bias II. Teor Veroyatnost i Primenen 52:336–349

  • Carroll R, Hall P (1988) Optimal rates of convergence for deconvolving a density. J Am Stat Assoc 83:1184–1186

  • Chagny G (2013) Warped bases for conditional density estimation. Math Methods Stat 22:253–282

  • Comte F, Genon-Catalot V, Rozenholc Y (2007) Penalized nonparametric mean square estimation of the coefficients of diffusion processes. Bernoulli 13:514–543

  • Comte F, Genon-Catalot V, Samson A (2013) Nonparametric estimation for stochastic differential equation with random effects. Stoch Process Appl 7:2522–2551

  • Comte F, Johannes J (2012) Adaptive functional linear regression. Ann Stat 40:2765–2797

  • Comte F, Rozenholc Y, Taupin M-L (2006) Penalized contrast estimator for adaptive density deconvolution. Can J Stat 34:431–452

  • Comte F, Samson A (2012) Nonparametric estimation of random-effects densities in linear mixed-effects model. J Nonparametr Stat 24:951–975

  • Davidian M, Giltinan D (1995) Nonlinear models for repeated measurement data. CRC Press

  • Delattre M, Genon-Catalot V, Samson A (2015) Estimation of population parameters in stochastic differential equations with random effects in the diffusion coefficient. ESAIM Probab Stat 19:671–688

  • Delattre M, Genon-Catalot V, Samson A (2016) Mixtures of stochastic differential equations with random effects: application to data clustering. J Stat Plan Inference 173:109–124

  • Delattre M, Lavielle M (2013) Coupling the SAEM algorithm and the extended Kalman filter for maximum likelihood estimation in mixed-effects diffusion models. Stat Interface 6:519–532

  • Diggle P, Heagerty P, Liang K, Zeger S (2002) Analysis of longitudinal data. Oxford Statistical Science Series. Oxford University Press, Oxford

  • Dion C, Genon-Catalot V (2015) Bidimensional random effect estimation in mixed stochastic differential model. Stat Inference Stoch Process 18(3):1–28

  • Donnet S, Foulley J, Samson A (2010) Bayesian analysis of growth curves using mixed models defined by stochastic differential equations. Biometrics 66:733–741

  • Donnet S, Samson A (2008) Parametric inference for mixed models defined by stochastic differential equations. ESAIM Probab Stat 12:196–218

  • Donnet S, Samson A (2013) A review on estimation of stochastic differential equations for pharmacokinetic–pharmacodynamic models. Adv Drug Deliv Rev 65:929–939

  • Donnet S, Samson A (2014) Using PMCMC in EM algorithm for stochastic mixed models: theoretical and practical issues. J Soc Fr Stat 155:49–72

  • Fan J (1991) On the optimal rates of convergence for nonparametric deconvolution problems. Ann Stat 19:1257–1272

  • Genon-Catalot V, Jacod J (1993) On the estimation of the diffusion coefficient for multi-dimensional diffusion processes. Ann Inst Henri Poincaré B Probab Stat 29:119–151

  • Genon-Catalot V, Larédo C (2016) Estimation for stochastic differential equations with mixed effects. Statistics. doi:10.1080/02331888.2016.1141910

  • Goldenshluger A, Lepski O (2011) Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality. Ann Stat 39:1608–1632

  • Hoffmann M (1999) Adaptive estimation in diffusion processes. Stoch Process Appl 79:135–163

  • Klein T, Rio E (2005) Concentration around the mean for maxima of empirical processes. Ann Probab 33:1060–1077

  • Kutoyants Y (2004) Statistical inference for ergodic diffusion processes. Springer, London

  • Lacour C (2006) Rates of convergence for nonparametric deconvolution. C R Math Acad Sci Paris 342:877–882

  • Lacour C, Massart P (2016) Minimal penalty for Goldenshluger–Lepski method. Stoch Process Appl. doi:10.1016/j.spa.2016.04.015

  • Lansky P, Sanda P, He J (2006) The parameters of the stochastic leaky integrate-and-fire neuronal model. J Comput Neurosci 21:211–223

  • Picchini U, De Gaetano A, Ditlevsen S (2010) Stochastic differential mixed-effects models. Scand J Stat 37:67–90

  • Picchini U, Ditlevsen S (2011) Practical estimation of high dimensional stochastic differential mixed-effects models. Comput Stat Data Anal 55:1426–1444

  • Picchini U, Ditlevsen S, De Gaetano A, Lansky P (2008) Parameters of the diffusion leaky integrate-and-fire neuronal model for a slowly fluctuating signal. Neural Comput 20:2696–2714

  • Pinheiro J, Bates D (2000) Mixed-effects models in S and S-PLUS. Springer, New York

  • Talagrand M (1996) New concentration inequalities in product spaces. Invent Math 126:505–563

  • Yu Y, Xiong Y, Chan Y, He J (2004) Corticofugal gating of auditory information in the thalamus: an in vivo intracellular recording study. J Neurosci 24:3060–3069


Acknowledgments

The author would like to thank Fabienne Comte and Adeline Samson for very useful discussions and advice.

Author information

Correspondence to Charlotte Dion.

Appendices

Appendix 1: Proofs

1.1 Proof of Theorem 3.1

Given \(h \in {\mathcal {H}}_{N,T}\), we denote:

$$\begin{aligned} V(h)=\kappa _1\frac{\Vert K\Vert ^2_1\Vert K\Vert ^2}{Nh}+\kappa _2 \frac{\sigma ^4\Vert K\Vert ^2_1\Vert K''\Vert ^2}{T^2h^5}{=:}V_1(h)+V_2(h). \end{aligned}$$

Using the definition of A(h) and of \({\widehat{h}}\) we obtain

$$\begin{aligned} \Vert {\widehat{f}}_{{\widehat{h}}}-f\Vert ^2\le & {} 3\Vert {\widehat{f}}_{{\widehat{h}}} -{\widehat{f}}_{h,{\widehat{h}}}\Vert ^2+3\Vert {\widehat{f}}_{h,{\widehat{h}}} -{\widehat{f}}_{h}\Vert ^2+3\Vert {\widehat{f}}_{h}-f\Vert ^2\\\le & {} 3\left( A(h)+V({\widehat{h}}) \right) + 3\left( A({\widehat{h}})+V(h) \right) +3 \Vert {\widehat{f}}_h-f\Vert ^2\\\le & {} 6A(h)+6V(h)+3 \Vert {\widehat{f}}_h-f\Vert ^2. \end{aligned}$$

Thus,

$$\begin{aligned} \mathbb {E}[\Vert {\widehat{f}}_{{\widehat{h}}}-f\Vert ^2] \le 6\mathbb {E}[A(h)]+6V(h)+3 \mathbb {E}[ \Vert {\widehat{f}}_h-f\Vert ^2], \end{aligned}$$

hence, we only have to study the term \(\mathbb {E}[A(h)]\). We can decompose \(\Vert {\widehat{f}}_{h,h'}-{\widehat{f}}_{h'}\Vert ^2\) as follows:

$$\begin{aligned}&\Vert {\widehat{f}}_{h,h'}-{\widehat{f}}_{h'}\Vert ^2 \le 5\Vert {\widehat{f}}_{h,h'}-\mathbb {E}[{\widehat{f}}_{h,h'}]\Vert ^2+ 5\Vert \mathbb {E}[{\widehat{f}}_{h,h'}]-{f}_{h,h'}\Vert ^2+5\Vert {f}_{h,h'}\\&\quad -{f}_{h'}\Vert ^2+5\Vert {f}_{h'}-\mathbb {E}[{\widehat{f}}_{h'}]\Vert ^2+ 5\Vert \mathbb {E}[{\widehat{f}}_{h'}]-{\widehat{f}}_{h'}\Vert ^2 \end{aligned}$$

thus

$$\begin{aligned} A(h)\le 5(D_1+D_2+D_3+D_4+D_5) \end{aligned}$$

with:

$$\begin{aligned} D_1:= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \Vert {f}_{h,h'}-{f}_{h'}\Vert ^2,\\ D_2:= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \Vert {\widehat{f}}_{h'}-\mathbb {E}[{\widehat{f}}_{h'}]\Vert ^2-\frac{V_1(h')}{10}\right) _+,\\ ~~D_3:= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \Vert {\widehat{f}}_{h,h'}-\mathbb {E}[{\widehat{f}}_{h,h'}]\Vert ^2-\frac{V_1(h')}{10}\right) _+\\ D_4:= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \Vert \mathbb {E}[{\widehat{f}}_{h'}]-{f}_{h'}\Vert ^2-\frac{V_2(h')}{10}\right) _+,\\ ~~D_5:= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \Vert \mathbb {E}[{\widehat{f}}_{h,h'}]-{f}_{h,h'}\Vert ^2-\frac{V_2(h')}{10}\right) _+. \end{aligned}$$

According to Young’s inequality (see Theorem 9.1), we obtain

$$\begin{aligned} \Vert f_{h,h'}-f_{h'}\Vert ^2=\Vert K_{h'}{\star } (f_{h}-f)\Vert ^2\le \Vert K_{h'}\Vert _1^2 \Vert f_{h}-f\Vert ^2= \Vert K\Vert _1^2 \Vert f_{h}-f\Vert ^2 \end{aligned}$$

thus

$$\begin{aligned} D_1\le \Vert K\Vert _1^2 \Vert f_{h}-f\Vert ^2. \end{aligned}$$
(18)

Let us study the term \(D_2\). We denote \({\mathcal {B}}(1)=\{g\in \mathbb {L}^2(\mathbb {R}), \Vert g\Vert =1\}\). We define

$$\begin{aligned} \nu _{N,h}(g):= \langle g,{\widehat{f}}_{h}-\mathbb {E}[{\widehat{f}}_{h}]\rangle \end{aligned}$$

then \(|\nu _{N,h}(g)|\le \Vert g\Vert \Vert {\widehat{f}}_{h}-\mathbb {E}[{\widehat{f}}_{h}]\Vert \) thus, the estimator \({\widehat{f}}_{h}\) satisfies:

$$\begin{aligned} \Vert {\widehat{f}}_{h}-\mathbb {E}[{\widehat{f}}_{h}]\Vert ^2=\underset{g\in {\mathcal {B}}(1)}{\sup }(\nu _{N,h}(g))^2. \end{aligned}$$

We can also compute the scalar product which defines \(\nu _{N,h}\) and we obtain

$$\begin{aligned} \nu _{N,h}(g)=\frac{1}{N}\sum _{j=1}^N \left( g{\star } K_h^-(Z_{j,T})-\mathbb {E}[g{\star } K_h^-(Z_{j,T})] \right) \end{aligned}$$
(19)

with \(K_h^-(x):=K_h(-x)\). This finally leads to:

$$\begin{aligned} \mathbb {E}[D_2] \le \underset{h' \in {\mathcal {H}}_{N,T}}{\sum } \mathbb {E}\left[ \underset{g \in {\mathcal {B}}(1)}{\sup } (\nu _{N,h'}(g))^2-\frac{V_1(h')}{10} \right] _+. \end{aligned}$$

This bound together with Eq. (19) allows us to apply Talagrand's inequality (Theorem 9.2). We have to compute three quantities: \(M\), \(H^2\) and \(v\).

First:

$$\begin{aligned} \underset{g \in {\mathcal {B}}(1)}{\sup } \Vert g{\star } K_{h'}^-\Vert _{\infty }= & {} \underset{g \in {\mathcal {B}}(1)}{\sup }\underset{x\in \mathbb {R}}{\sup } \left| \int g(y)K_{h'}^-(x-y)dy\right| \nonumber \\= & {} \underset{g \in {\mathcal {B}}(1)}{\sup }\underset{x\in \mathbb {R}}{\sup } |\langle g,K_{h'}^-(.-x)\rangle | \nonumber \\\le & {} \underset{g \in {\mathcal {B}}(1)}{\sup }\Vert g\Vert \Vert K_{h'}\Vert =\frac{\Vert K\Vert }{\sqrt{h'}}:=M. \end{aligned}$$
(20)

Secondly, the bound of Proposition 3.1 gives

$$\begin{aligned} \mathbb {E}\left[ \underset{g \in {\mathcal {B}}(1)}{\sup } (\nu _{N,h}(g))^2\right] =\mathbb {E}\left[ \Vert {\widehat{f}}_{h}-\mathbb {E}[{\widehat{f}}_{h}]\Vert ^2\right] \le \frac{\Vert K\Vert ^2}{Nh}:=H^2. \end{aligned}$$
(21)

Thirdly:

$$\begin{aligned} \underset{g \in {\mathcal {B}}(1)}{\sup } \left( \mathrm {Var} ( g{\star } K_{h'}^-(Z_{1,T}) ) \right)\le & {} \underset{g \in {\mathcal {B}}(1)}{\sup } \mathbb {E}[ (g{\star } K_{h'}^-(Z_{1,T}))^2 ]\\\le & {} 2\underset{g \in {\mathcal {B}}(1)}{\sup } \mathbb {E}[ (g{\star } K_{h'}^-(\phi _1))^2 ]\\&+\,2\underset{g \in {\mathcal {B}}(1)}{\sup } \mathbb {E}[ (g{\star } (K_{h'}^-(Z_{1,T})-K_{h'}^-(\phi _1))^2 ]. \end{aligned}$$

Let us investigate the two terms separately. Young’s inequality gives:

$$\begin{aligned} \mathbb {E}\left[ (g{\star } K_{h'}^-(\phi _1))^2 \right]= & {} \int \left( g{\star } K_{h'}^-(x)\right) ^2 f(x)dx \le \Vert f\Vert \Vert g{\star } K_{h'}^-\Vert ^2_4\nonumber \\= & {} \frac{\Vert f\Vert \Vert K\Vert ^2_{4/3}}{\sqrt{h'}}:=v_1. \end{aligned}$$
(22)

Then, one can write: \(K_{h'}(x-Z_{1,T})-K_{h'}(x-\phi _1)=(\phi _1-Z_{1,T}) \int _0^1 (K_{h'})'(x-\phi _1+u(\phi _1-Z_{1,T}))du\), thus

$$\begin{aligned}&(g{\star } K_{h'}^-(Z_{1,T})-g{\star } K_{h'}(\phi _1))^2\\&\qquad =(\phi _1-Z_{1,T})^2 \left( \int g(x) \int _0^1 (K_{h'})'(x-\phi _1+u(\phi _1-Z_{1,T}))dudx \right) ^2\\&\qquad \le (\phi _1-Z_{1,T})^2 \int g^2(x)\left( \int _0^1 (K_{h'})'^2(x-\phi _1+u(\phi _1-Z_{1,T}))du \right) dx\\&\qquad \le (\phi _1-Z_{1,T})^2 \Vert g\Vert ^2 \int (K_{h'})'^2(y)dy= (\phi _1-Z_{1,T})^2\Vert (K_{h'})'\Vert ^2. \end{aligned}$$

With \(\mathbb {E}[(\phi _1-Z_{1,T})^2]=\frac{\sigma ^2}{T^2}\mathbb {E}[W_1(T)^2] =\frac{\sigma ^2}{T}\), the assumption \(T^{-1}\le h'^{5/2}\) leads to

$$\begin{aligned} \mathbb {E}\left[ (g{\star } K_{h'}^-(Z_{1,T})-g{\star } K_{h'}(\phi _1))^2\right] \le \frac{\Vert K'\Vert ^2 \sigma ^2}{h'^3T}\le \frac{\Vert K'\Vert ^2 \sigma ^2}{\sqrt{h'}}:=v_2. \end{aligned}$$
(23)

Finally \(v=v_1+v_2=A_0/\sqrt{h'}\) with \(A_0=\Vert f\Vert \Vert K\Vert ^2_{4/3}+\Vert K'\Vert ^2\sigma ^2\).

If \(\kappa _1 \Vert K\Vert _1^2 \ge 40\), with the assumption \(1/(Nh) \le 1\), Talagrand's inequality (under the assumptions of Theorem 3.1) gives

$$\begin{aligned} \mathbb {E}\left( \underset{g \in {\mathcal {B}}(1)}{\sup } (\nu _{N,h'}(g))^2-\frac{V_1(h')}{10}\right) _+\le & {} \frac{C_1}{N\sqrt{h'}} e^{-C_2/\sqrt{h'}}+C_3\frac{1}{h' N^2} e^{-C_4 \sqrt{N}}\\\le & {} \frac{C_5}{N} \sum _{h'\in {\mathcal {H}}_{N,T}} \frac{1}{\sqrt{h'}} e^{-C_6/\sqrt{h'}} \le \frac{C_5 S(C_6)}{N}. \end{aligned}$$

One can carry out the study of \(D_3\) as was done for \(D_2\), using the same steps and tools. However, having \(K_h {\star } K_{h'}\) instead of \(K_{h'}\) adds a factor \(\Vert K\Vert _1\) in \(M\) and a factor \(\Vert K\Vert ^2_1\) in \(H^2\) and \(v\).

Then, let us study the term \(D_4\). If \(\kappa _2 \ge 10/(3\Vert K\Vert _1^2)\), the bound (9) leads us to

$$\begin{aligned} D_4= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \Vert \mathbb {E}[{\widehat{f}}_{h'}]-{f}_{h'}\Vert ^2-\frac{V_2(h')}{10}\right) _+\\\le & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \frac{\Vert K''\Vert ^2\sigma ^4}{3h'^5T^2}-\frac{\kappa _2 \Vert K\Vert _1^2 \Vert K''\Vert ^2\sigma ^4}{10 T^2 h^{'5}}\right) _+=0 \end{aligned}$$

thus \(D_4=0\). Finally, similarly, if \( \kappa _2\ge 10/3\), we obtain

$$\begin{aligned} D_5= & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \Vert \mathbb {E}[{\widehat{f}}_{h,h'}]-{f}_{h,h'}\Vert ^2-\frac{V_2(h')}{10}\right) _+\\\le & {} \underset{h' \in {\mathcal {H}}_{N,T}}{\sup } \left( \frac{\Vert K''\Vert ^2 \Vert K\Vert _1^2\sigma ^4}{3h^5T^2}-\frac{\kappa _2 \Vert K\Vert _1^2 \Vert K''\Vert ^2\sigma ^4}{10 T^2 h^{'5}}\right) _+=0. \end{aligned}$$

Thus, finally, we have obtained:

$$\begin{aligned} \mathbb {E}[A(h)] \le 5 \left( \Vert K\Vert _1^2 \Vert f_h-f\Vert ^2 +\frac{c}{N} \right) \end{aligned}$$
(24)

with c a constant depending on \(\Vert f\Vert ,\Vert K\Vert _1,\Vert K\Vert ,\Vert K\Vert _{4/3}\). Finally we have shown that for all \(h \in {\mathcal {H}}_{N,T}\):

$$\begin{aligned} \mathbb {E}[\Vert {\widehat{f}}_{{\widehat{h}}}-f\Vert ^2]\le & {} 6\kappa _1 \frac{ \Vert K\Vert _1^2 \Vert K\Vert ^2}{Nh}+ 6\kappa _2 \frac{\Vert K\Vert _1^2 \Vert K''\Vert ^2 \sigma ^4}{T^2h^5}\\&+\,3 \left( 2\Vert f-f_h\Vert ^2+ \frac{ \Vert K\Vert ^2 }{Nh} + \frac{ 2\Vert K''\Vert ^2 \sigma ^4}{3T^2h^5} \right) \\&+ \,30\left( \Vert K\Vert ^2_1 \Vert f-f_h\Vert ^2+\frac{c}{N} \right) \\\le & {} \left( 6+ \frac{3}{\Vert K\Vert _1^2 \kappa _1} \right) V_1(h)+\left( 6+ \frac{9}{2\Vert K\Vert _1^2 \kappa _2} \right) V_2(h)\\&+ \,(30\Vert K\Vert _1^2+6) \Vert f_h-f\Vert ^2 +\frac{C}{N}\\\le & {} C_1 \underset{h\in {\mathcal {H}}_{N,T}}{\inf } \{\Vert f-f_h\Vert ^2+ V(h)\} +\frac{C_2}{N}. \end{aligned}$$

where \(C_1= \max (7, 30 \Vert K\Vert _1^2+6)\) and \(C_2\) depends on \(\Vert f\Vert ,\Vert K\Vert _1,\Vert K\Vert ,\Vert K\Vert _{4/3}\). \(\square \)
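A minimal numerical sketch of the selection rule analysed in this proof may be helpful. The Gaussian kernel, the bandwidth grid, the sample \(Z_j=\phi _j+\sigma W_j(T)/T\) and the constants \(\kappa _1=\kappa _2=1\) below are illustrative assumptions, not the paper's calibrated choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the Goldenshluger-Lepski bandwidth selection of Theorem 3.1
# with a Gaussian kernel K. All numerical values are illustrative.
N, T, sigma = 400, 100.0, 0.3
phi = rng.normal(1.0, 0.5, size=N)
Z = phi + sigma * rng.normal(size=N) / np.sqrt(T)

x = np.linspace(-2.0, 4.0, 301)
dx = x[1] - x[0]
H = [0.05 * 1.3**k for k in range(10)]   # bandwidth collection H_{N,T}

def f_hat(h):
    # kernel density estimator \hat f_h evaluated on the grid x
    return np.exp(-(x[:, None] - Z[None, :])**2 / (2*h*h)).sum(axis=1) \
        / (N * h * np.sqrt(2*np.pi))

def smooth(est, h):
    # \hat f_{h,h'} = K_h * \hat f_{h'}, approximated by discrete convolution
    K = np.exp(-x**2 / (2*h*h)) / (h * np.sqrt(2*np.pi))
    return np.convolve(est, K, mode="same") * dx

def V(h):
    # V(h) = kappa_1 ||K||_1^2 ||K||^2/(Nh) + kappa_2 sigma^4 ||K||_1^2 ||K''||^2/(T^2 h^5)
    # For the Gaussian kernel: ||K||_1 = 1, ||K||^2 = 1/(2 sqrt(pi)),
    # ||K''||^2 = 3/(8 sqrt(pi)).
    k1 = k2 = 1.0  # illustrative tuning constants
    return k1 / (2*np.sqrt(np.pi)*N*h) + k2 * 3*sigma**4 / (8*np.sqrt(np.pi)*T**2*h**5)

est = {h: f_hat(h) for h in H}

def A(h):
    # A(h) = max_{h'} ( || \hat f_{h,h'} - \hat f_{h'} ||^2 - V(h') )_+
    return max(max(np.sum((smooth(est[hp], h) - est[hp])**2) * dx - V(hp), 0.0)
               for hp in H)

h_sel = min(H, key=lambda h: A(h) + V(h))   # selected bandwidth \hat h
```

The criterion \(A(h)+V(h)\) trades the comparison term \(A(h)\), which mimics the bias, against the penalty \(V(h)=V_1(h)+V_2(h)\) appearing in the proof.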

1.2 Proof of Proposition 4.1

The bias term is \(\Vert f-\mathbb {E}[{\widetilde{f}}_{m,s}]\Vert ^2\). Let us compute \(\mathbb {E}[{\widetilde{f}}_{m,s}]\). As the \(Z_{j,\tau }\) are i.i.d. when \(\tau \) is fixed and due to the independence of \(\phi _1\) and \(W_1\), we obtain:

$$\begin{aligned} \mathbb {E}[{\widetilde{f}}_{m,s}(x)]= & {} \frac{1}{2\pi } \int _{-m}^{m} e^{-iux} \mathbb {E}\left[ e^{iuZ_{1,m^2/s^2}+u^2\sigma ^2s^2/(2m^2) }\right] du\\= & {} \frac{1}{2\pi } \int _{-m}^{m} e^{-iux} \mathbb {E}\left[ e^{iu\phi _1+ iu\sigma W_1(m^2/s^2)s^2/m^2 + u^2 \sigma ^2s^2/(2m^2)}\right] du\\= & {} \frac{1}{2\pi } \int _{-m}^{m} e^{-iux+u^2 \sigma ^2 s^2/(2m^2)} f^*(u) \mathbb {E}\left[ e^{iu\sigma W_1(m^2/s^2) s^2/m^2}\right] du\\= & {} \frac{1}{2\pi } \int _{-m}^{m} e^{-iux+u^2 \sigma ^2 s^2/(2m^2)} f^*(u) e^{-u^2\sigma ^2 s^2/(2m^2)} du\\= & {} \frac{1}{2\pi } \int _{-m}^{m} e^{-iux} f^*(u) du{=:} f_{m}(x). \end{aligned}$$

Therefore this gives \(\mathbb {E}[{\widetilde{f}}_{m,s}(x)]={f}_{m}(x)\), and \(\Vert f-\mathbb {E}[{\widetilde{f}}_{m,s}]\Vert ^2=\Vert f-{f}_{m}\Vert ^2=\frac{1}{2\pi } \int _{|u| \ge m}|f^*(u)|^2du\).

The variance term is:

$$\begin{aligned} \mathbb {E}\left[ \Vert {\widetilde{f}}_{m,s}-{f}_{m}\Vert ^2 \right]= & {} \frac{1}{2\pi } \mathbb {E}\left[ \int _{-m}^{m} \left| \frac{1}{N} \sum _{j=1}^{N} e^{iuZ_{j,m^2/s^2} } e^{\frac{u^2 \sigma ^2 s^2}{2m^2}}-f^*(u)\right| ^2du \right] \\= & {} \frac{1}{2\pi N} \int _{-m}^{m} e^{\frac{u^2 \sigma ^2 s^2}{m^2}} \mathrm {Var}\left( e^{iuZ_{1,m^2/s^2}} \right) du \\\le & {} \frac{1}{2\pi N} \int _{-m}^{m} e^{\frac{u^2 \sigma ^2 s^2}{m^2}} du = \frac{m}{\pi N} \int _{0}^{1} e^{s^2 \sigma ^2 v^2} dv. \end{aligned}$$

\(\Box \)
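To make the computations of this proof concrete, the following sketch (with assumed parameter values and a standard Gaussian choice for f, none of which come from the paper) implements the deconvolution estimator \(\widetilde{f}_{m,s}\) as it appears above and checks its \(\mathbb {L}^2\) accuracy numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the deconvolution estimator \tilde f_{m,s} from the proof
# of Proposition 4.1. The parameters and the choice f = N(0,1) are
# illustrative assumptions.
N, sigma, m, s = 5000, 0.5, 4.0, 1.0
tau = m**2 / s**2                                    # horizon T = m^2 / s^2
phi = rng.normal(size=N)
Z = phi + sigma * rng.normal(size=N) / np.sqrt(tau)  # Z_j = phi_j + sigma W_j(tau)/tau

# \tilde f_{m,s}(x) = (1/2pi) \int_{-m}^{m} e^{-iux}
#     [ (1/N) \sum_j e^{iuZ_j} ] e^{u^2 sigma^2 s^2/(2m^2)} du
u = np.linspace(-m, m, 401)
du = u[1] - u[0]
ecf = np.exp(1j * u[:, None] * Z[None, :]).mean(axis=1)   # empirical char. function
integrand = ecf * np.exp(u**2 * sigma**2 * s**2 / (2 * m**2))

x = np.linspace(-3.0, 3.0, 121)
f_tilde = (np.exp(-1j * np.outer(x, u)) @ integrand).real * du / (2 * np.pi)

# Compare with the true density f on the grid: the bias term is the
# tail of f^* beyond m, the variance term is of order m e^{sigma^2 s^2}/N.
f_true = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
l2_err = np.sqrt(np.sum((f_tilde - f_true)**2) * (x[1] - x[0]))
```

The exponential factor \(e^{u^2\sigma ^2s^2/(2m^2)}\) exactly cancels the characteristic function of the noise \(\sigma W_1(\tau )/\tau \), which is the unbiasedness identity \(\mathbb {E}[{\widetilde{f}}_{m,s}]=f_m\) proved above.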

1.3 Proof of Theorem 4.1

Let us study the term \(\Vert {\widetilde{f}}_{{\widetilde{m}},{\widetilde{s}}}-f\Vert ^2\). We decompose it into a sum of three terms; the definition (15) of \(({\widetilde{m}},{\widetilde{s}})\) then implies, for all \((m,s) \in {\mathcal {C}}\),

$$\begin{aligned} \Vert {\widetilde{f}}_{{\widetilde{m}},{\widetilde{s}}}-f\Vert ^2\le & {} 3\left( \Vert {\widetilde{f}}_{{\widetilde{m}},{\widetilde{s}}}-{\widetilde{f}}_{({\widetilde{m}},{\widetilde{s}})\wedge (m,s)}\Vert ^2+\Vert {\widetilde{f}}_{({\widetilde{m}},{\widetilde{s}})\wedge (m,s)}-{\widetilde{f}}_{m,s}\Vert ^2+\Vert {\widetilde{f}}_{m,s}-f\Vert ^2 \right) \nonumber \\\le & {} 3\left( \varGamma _{m,s}+\text {pen}({\widetilde{m}},{\widetilde{s}}) \right) + 3\left( \varGamma _{{\widetilde{m}},{\widetilde{s}}}+\text {pen}(m,s)\right) +3 \Vert {\widetilde{f}}_{m,s}-f\Vert ^2 \nonumber \\\le & {} 6\varGamma _{m,s}+6\text {pen}(m,s)+3 \Vert {\widetilde{f}}_{m,s}-f\Vert ^2 \end{aligned}$$
(25)

Now we study \({\varGamma }_{m,s}\). First:

$$\begin{aligned}&\Vert {\widetilde{f}}_{(m,s)\wedge (m',s')}-{\widetilde{f}}_{m',s'}\Vert ^2\\&\quad \le 3 \left( \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2+\Vert f_{m'}-f_{m\wedge m'}\Vert ^2 +\Vert f_{m \wedge m'}- {\widetilde{f}}_{(m',s')\wedge (m,s)}\Vert ^2 \right) . \end{aligned}$$

Thus:

$$\begin{aligned} \varGamma _{m,s}\le & {} \underset{(m',s')\in {\mathcal {C}}}{\max } \left( 3\Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2 + 3\Vert f_{m'}-f_{m \wedge m'}\Vert ^2 +3\Vert f_{m\wedge m'}\right. \\&\left. - \,{\widetilde{f}}_{(m',s')\wedge (m,s)}\Vert ^2 -\text {pen}({m',s'}) \right) _+\\\le & {} 3\underset{(m',s')\in {\mathcal {C}}}{\max } \left( \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2-\frac{1}{6}\text {pen}({m',s'})\right) _+\\&+ \,3\underset{(m',s')\in {\mathcal {C}}}{\max } \left( \Vert {\widetilde{f}}_{(m',s')\wedge (m,s)}-f_{m \wedge m'} \Vert ^2 -\frac{1}{6}\text {pen}({m',s'}) \right) _+\\&+\, 3 \underset{{m'}\in {\mathcal {M}}}{\max } \Vert f_{m'}-f_{m\wedge m'}\Vert ^2. \end{aligned}$$

The last maximum can be made explicit. If \(m'\le m\), then \(\Vert f_{m'}-f_{m \wedge m'}\Vert ^2=\Vert f_{m'}-f_{m'}\Vert ^2=0\). Otherwise,

$$\begin{aligned} \Vert f_{m'}-f_{m \wedge m'}\Vert ^2=\Vert f_{m'}-f_{m}\Vert ^2=\int _{m \le |u|\le m'} |f^*(u)|^2du \le \Vert f-f_{m}\Vert ^2. \end{aligned}$$

Finally:

$$\begin{aligned} \underset{{m'}\in {\mathcal {M}}}{\max } \Vert f_{m'}-f_{m\wedge m'}\Vert ^2 \le \Vert f-f_{m}\Vert ^2. \end{aligned}$$

We get the following bound for \(\varGamma _{m,s}\):

$$\begin{aligned} \varGamma _{m,s}\le & {} 3\underset{(m',s')\in {\mathcal {C}}}{\max } \left( \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2-\frac{1}{6}\text {pen}({m',s'})\right) _+ \nonumber \\&+\, 3\underset{(m',s')\in {\mathcal {C}}}{\max } \left( \Vert {\widetilde{f}}_{(m',s')\wedge (m,s)}-f_{m \wedge m'} \Vert ^2 -\frac{1}{6}\text {pen}({m',s'}) \right) _+ \nonumber \\&+\, 3 \Vert f-f_{m}\Vert ^2. \end{aligned}$$
(26)

Then we gather Eqs. (25) and (26):

$$\begin{aligned} \Vert {\widetilde{f}}_{{\widetilde{m}},{\widetilde{s}}}-f\Vert ^2\le & {} 6\text {pen}(m,s)+3\Vert {\widetilde{f}}_{m,s}-f\Vert ^2 + 18 \Vert f-f_{m}\Vert ^2\\&+\, \underset{(m',s')\in {\mathcal {C}}}{\max } 18\left( \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2-\frac{1}{6}\text {pen}({m',s'})\right) _+\\&+\, \underset{(m',s')\in {\mathcal {C}}}{\max }18 \left( \Vert {\widetilde{f}}_{(m',s')\wedge (m,s)}-f_{m \wedge m'} \Vert ^2 -\frac{1}{6}\text {pen}({m',s'}) \right) _+ . \end{aligned}$$

We first notice that our penalty function is increasing in s and m, thus we get the following bound for the last term:

$$\begin{aligned}&\mathbb {E}\left[ \underset{(m',s')\in {\mathcal {C}}}{\max } \left( \Vert {\widetilde{f}}_{(m',s')\wedge (m,s)}-f_{m \wedge m'} \Vert ^2 -\frac{1}{6}\text {pen}((m',s')\wedge (m,s)) \right) _+\right] \\&\quad \le \mathbb {E}\left[ \underset{m' \le m,s'\le s}{\max } \left( \Vert {\widetilde{f}}_{m',s'}-f_{m'} \Vert ^2 -\frac{1}{6}\text {pen}(m',s') \right) _+\right] \\&\qquad +\,\mathbb {E}\left[ \underset{ m \le m',s\le s' }{\max } \left( \Vert {\widetilde{f}}_{m,s}-f_{m} \Vert ^2 -\frac{1}{6}\text {pen}(m,s) \right) _+\right] \\&\qquad +\,\mathbb {E}\left[ \underset{ m \le m' ,s'\le s }{\max } \left( \Vert {\widetilde{f}}_{m,s'}-f_{m} \Vert ^2 -\frac{1}{6}\text {pen}(m,s') \right) _+\right] \\&\qquad +\,\mathbb {E}\left[ \underset{ m' \le m, s\le s'}{\max } \left( \Vert {\widetilde{f}}_{m',s}-f_{ m'} \Vert ^2 -\frac{1}{6}\text {pen}(m',s) \right) _+\right] \\&\quad \le 4 \sum _{m' \in {\mathcal {M}}} \sum _{s' \in {\mathcal {S}}}\mathbb {E}\left[ \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2 -\frac{1}{6}\text {pen}({m',s'})\right] _+. \end{aligned}$$

Moreover, according to Proposition 4.1 and using the inequality \(\int _0^1 e^{\sigma ^2s^2v^2}dv \le e^{\sigma ^2s^2}\), we obtain, for all \( (m,s) \in {\mathcal {C}}\),

$$\begin{aligned} \mathbb {E}\left[ \Vert {\widetilde{f}}_{{\widetilde{m}},{\widetilde{s}}}-f\Vert ^2\right]\le & {} 5\times 18 \sum _{m' \in {\mathcal {M}}}\sum _{s' \in {\mathcal {S}}} \mathbb {E}\left[ \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2-\frac{1}{6}\text {pen}({m',s'})\right] _+ + 6\text {pen}(m,s)\\&+ \,3\frac{m}{\pi N} e^{\sigma ^2s^2} + 21 \Vert f-f_{m}\Vert ^2 . \end{aligned}$$

Then we obtain the announced result with the following Lemma.

Lemma 8.1

There exists a constant \(C'>0\) such that for \(\text {pen}(m,s)\) defined by \(\text {pen}(m,s)=\kappa \frac{m}{ N} e^{\sigma ^2s^2}\),

$$\begin{aligned} \sum _{m' \in {\mathcal {M}}} \sum _{s' \in {\mathcal {S}}}\mathbb {E}\left[ \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2 -\frac{1}{6}\text {pen}({m',s'})\right] _+ \le \frac{C'(P+1)}{N}. \end{aligned}$$

According to Lemma 8.1, to be proved next, we choose \(\text {pen}(m,s)=\kappa \frac{m}{ N} e^{\sigma ^2s^2}\); thus, with \(C=145\) and a constant \(C'>0\),

$$\begin{aligned} \mathbb {E}\left[ \Vert {\widetilde{f}}_{{\widetilde{m}},{\widetilde{s}}}-f\Vert ^2\right]\le & {} 5\times 18 \sum _{m' \in {\mathcal {M}}} \sum _{s' \in {\mathcal {S}}}\mathbb {E}\left[ \Vert {\widetilde{f}}_{m',s'}-f_{m'}\Vert ^2 -\frac{1}{6}\text {pen}({m',s'})\right] _+\\&+ \left( 6\kappa +\frac{3}{\pi }\right) \frac{m}{ N} e^{\sigma ^2s^2} + 21 \Vert f-f_{m}\Vert ^2 \\\le & {} C \underset{(m,s)\in {\mathcal {C}}}{\inf }\left\{ \Vert f-f_{m}\Vert ^2 +\frac{m}{N} e^{\sigma ^2 s^2 } \right\} + \frac{C'}{N}.~~ \end{aligned}$$

\(\Box \)

Proof of Lemma 8.1

For a couple \((m,s)\in {\mathcal {C}}\) fixed, let us consider the subset \(S_{m}:= \{t \in \mathbb {L}^1(\mathbb {R})\cap \mathbb {L}^2(\mathbb {R}), \text {supp}(t^*)=[-m,m] \}\). For \(t \in S_{m}\),

$$\begin{aligned} \nu _N(t)=\frac{1}{N} \sum _{j=1}^{N} \left( \varphi _t(Z_{j,m^2/s^2})-\mathbb {E}\left[ \varphi _t\left( Z_{j,m^2/s^2}\right) \right] \right) \end{aligned}$$

with \(\varphi _t(x):=\frac{1}{2\pi } \int {\overline{t^*(u)}}e^{iux+\sigma ^2u^2s^2/(2m^2)}du\), then \(\nu _N(t)=\frac{1}{2\pi } \langle t^*,({\widetilde{f}}_{m,s}-f_{m})^*\rangle \). This leads to

$$\begin{aligned} \Vert {\widetilde{f}}_{m,s}-f_{m}\Vert ^2= \underset{t \in S_{m},~ \Vert t\Vert =1}{\sup } |\nu _N(t)|^2. \end{aligned}$$
(27)

We also have by Cauchy–Schwarz inequality

$$\begin{aligned} \Vert \varphi _t\Vert _{\infty }\le & {} \frac{1}{2\pi } \int |t^*(u)|e^{\sigma ^2u^2s^2/(2m^2)}du \le \frac{1}{2\pi } \left( \int _{-m}^{m} |t^*(u)|^2du\right) ^{1/2} \left( \int _{-m}^{m} e^{\sigma ^2u^2s^2/m^2}du \right) ^{1/2} \\\le & {} \frac{\sqrt{2m}}{\sqrt{2\pi }}e^{\sigma ^2{s}^2/2} \end{aligned}$$

thus

$$\begin{aligned} \underset{t\in S_{m},\Vert t\Vert =1}{\sup } \Vert \varphi _t\Vert _{\infty } \le \frac{\sqrt{m}}{\sqrt{\pi }}e^{\sigma ^2{s}^2/2}:=M. \end{aligned}$$

Then, by Proposition 4.1,

$$\begin{aligned} \mathbb {E}\left[ \underset{t\in S_{m},\Vert t\Vert =1}{\sup } |\nu _N(t)|^2\right] = \mathbb {E}\left[ \Vert {\widetilde{f}}_{m,s}-f_{m}\Vert ^2\right] \le \frac{m}{\pi N} \int _0^1 e^{\sigma ^2s^2v^2}dv\le \frac{m}{\pi N} e^{\sigma ^2s^2}:=H^2. \end{aligned}$$

Using Fubini and Cauchy–Schwarz inequalities we obtain for all \((m,s)\in {\mathcal {C}}\):

$$\begin{aligned} 4\pi \underset{t\in S_{m},~\Vert t\Vert =1}{\sup } \mathrm {Var}(\varphi _t(Z_{j,m^2/s^2}))\le & {} \underset{t\in S_{m},\Vert t\Vert =1}{\sup } \iint t^*(u)t^*(-v)\mathbb {E}\left[ e^{i(u-v)Z_{j,m^2/s^2}}\right] \\&\qquad \times e^{(u^2+v^2)\sigma ^2s^2/(2m^2)} dudv\\\le & {} 2\pi \left( \iint _{[-m,m]^2} |f^*(u-v)|^2 e^{(u^2+v^2)\sigma ^2s^2/m^2} dudv \right) ^{1/2}\\\le & {} 2\pi \left( e^{2\sigma ^2s^2} \iint _{[-m,m]^2} |f^*(u-v)|^2 dudv \right) ^{1/2}\\\le & {} 2\pi e^{\sigma ^2s^2} \sqrt{2m} \left( \int _{-2m}^{2m} |f^*(z)|^2dz\right) ^{1/2}\\\le & {} 2\sqrt{2m}\sqrt{2}\pi \sqrt{\pi } e^{\sigma ^2 s^2} \Vert f\Vert {=:} 4\pi ^2v,\\ v:= & {} \frac{\sqrt{m}e^{\sigma ^2s^2} \Vert f\Vert }{\sqrt{\pi }}. \end{aligned}$$

Finally, using that \(m \le N\), \(s \le 2/\sigma \) and \(\sum _{s \in {\mathcal {S}}} s=(4/\sigma )(1-(1/2)^{P+1})< 4/\sigma \), Talagrand's inequality with \(\alpha =1/2\), provided that \(4H^2 \le \text {pen}(m,s)/6\), implies

$$\begin{aligned}&\sum _{s\in {\mathcal {S}}} \sum _{m \in {\mathcal {M}}} \mathbb {E} \left[ \Vert {\widetilde{f}}_{m,s}-f_{m}\Vert ^2-\frac{1}{6}\text {pen}({s,m})\right] _+\\&\quad \le \sum _{s\in {\mathcal {S}}} \sum _{m \in {\mathcal {M}}} \left( \frac{C_1 \Vert f\Vert }{N} e^{\sigma ^2 s^2} \sqrt{m} e^{-C_2 \frac{\sqrt{m}}{\Vert f\Vert }} + C_3\frac{m}{N^2} e^{\sigma ^2 s^2} e^{-C_4\sqrt{N}} \right) \\&\quad \le \sum _{s\in {\mathcal {S}}} \frac{C_1 \Vert f\Vert }{N} e^{\sigma ^2 s^2} \left( \sum _{m \in {\mathcal {M}}} \sqrt{m} e^{-C_2 \frac{\sqrt{m}}{\Vert f\Vert }} \right) + \sum _{s\in {\mathcal {S}}} \sum _{m \in {\mathcal {M}}} C_3 e^{4} \frac{1}{N} e^{-C_4\sqrt{m}}\\&\quad \le \frac{C_1 \Vert f\Vert (P+1) e^{4}}{N} \left( \sum _{m \in {\mathcal {M}}} \sqrt{m} e^{-C_2 \frac{\sqrt{m}}{\Vert f\Vert }} \right) + C_3 e^{4}\frac{P+1}{N} \sum _{m \in {\mathcal {M}}} e^{-C_4\sqrt{m}}\\&\quad \le \frac{C'(P+1)}{N} \end{aligned}$$

because with the definition of \({\mathcal {M}}\), \( \sum _{m \in {\mathcal {M}}} \sqrt{m} e^{-C_2 \frac{\sqrt{m}}{\Vert f\Vert }}\le a_1 \sum _{k \in \mathbb {N}} k^{1/4} e^{-a_2 k^{1/4}} < +\infty \), and \(\sum _{m \in {\mathcal {M}}} e^{-C_4 m^{1/2}}\le \sum _{k\in \mathbb {N}} e^{-a_3 k^{1/4}} <+\infty \), with \(a_1, a_2, a_3\) three positive constants. Notice that \(C'>0\) depends on \(\sigma , \Vert f\Vert \), \(\varDelta \).

We choose \(\text {pen}(m,s)=\kappa m e^{\sigma ^2 s^2} /N\) with \(\kappa \ge 24\). \(\square \)

Appendix 2

1.1 Young's inequality

This inequality can be found in Briane and Pagès (2006) for example.

Theorem 9.1

Let \(p\), \(q\), \(r\) be real numbers in \([1, +\infty ]\) such that

$$\begin{aligned} \frac{1}{p}+\frac{1}{q}=\frac{1}{r}+1. \end{aligned}$$

Then, for all \(f \in \mathbb {L}^p(\mathbb {R})\) and \(g \in \mathbb {L}^q(\mathbb {R})\),

$$\begin{aligned} \Vert f{\star } g\Vert _r \le \Vert f\Vert _p \Vert g\Vert _q. \end{aligned}$$
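As a sanity check (an illustration added here, not part of the original text), the inequality can be verified numerically on a grid in the exact form used in the proof of Theorem 3.1, namely \(p=1\), \(q=r=2\); the choices of K and g below are arbitrary:

```python
import numpy as np

# Grid-based check of Young's inequality in the form used in the
# proof of Theorem 3.1 (p = 1, q = r = 2): ||K * g||_2 <= ||K||_1 ||g||_2.
# K and g are arbitrary illustrative functions.
dx = 0.01
x = np.arange(-10.0, 10.0, dx)
K = np.exp(-x**2)                       # plays the role of the kernel
g = np.sign(x) * np.exp(-np.abs(x))     # any function in L^2

conv = np.convolve(K, g, mode="same") * dx   # (K * g)(x) on the grid

norm1_K = np.sum(np.abs(K)) * dx             # ||K||_1
norm2_g = np.sqrt(np.sum(g**2) * dx)         # ||g||_2
norm2_conv = np.sqrt(np.sum(conv**2) * dx)   # ||K * g||_2
```

On this example \(\Vert K{\star }g\Vert _2\) is well below \(\Vert K\Vert _1\Vert g\Vert _2\), as the inequality predicts.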

1.2 Talagrand’s inequality

The following result is a consequence of the concentration inequality of Talagrand (1996), as given in Birgé and Massart (1997).

Theorem 9.2

Consider \(N \in \mathbb {N}^*\), \({\mathcal {F}}\) an at most countable class of measurable functions, and \((X_i)_{i\in \{1,\ldots ,N\}}\) a family of independent real random variables. One defines, for all \(f\in {\mathcal {F}}\),

$$\begin{aligned} \nu _N(f) =\frac{1}{N}\sum _{i=1}^{N} (f(X_i)-\mathbb {E}[f(X_i)]). \end{aligned}$$

Suppose there exist three positive constants M, H and v such that \(\underset{f\in {\mathcal {F}}}{\sup } \Vert f\Vert _{\infty } \le M\), \(\mathbb {E}[\underset{f\in {\mathcal {F}}}{\sup } |\nu _N(f)| ] \le H\), and \(\underset{f\in {\mathcal {F}}}{\sup } ({1}/{N})\sum _{i=1}^{N} \mathrm {Var}(f(X_i)) \le v\). Then, for all \(\alpha >0\),

$$\begin{aligned} \mathbb {E}\left[ \left( \underset{f\in {\mathcal {F}}}{\sup } |\nu _N(f)|^2-2(1+2\alpha )H^2 \right) _+ \right]\le & {} \frac{4}{a} \left( \frac{v}{N} \exp \left( -a \alpha \frac{N H^2}{v} \right) \right. \\&+ \left. \frac{49M^2}{a C^2(\alpha )N^2} \exp \left( -\frac{\sqrt{2}a C(\alpha )\sqrt{\alpha }}{7}\frac{NH}{M} \right) \right) \end{aligned}$$

with \(C(\alpha )=(\sqrt{1+\alpha }-1) \wedge 1\), and \(a=\frac{1}{6}\).

1.3 Discretization

Assume now that the observation times are \(t_k=k\delta \), \(k=1,\ldots ,N\), with \(0<\delta <1\). We must then study the error induced by the discretization of the \(Z_{j,\tau }\). For any \(0<m^2/s^2 \le T\) we use:

$$\begin{aligned} {\widehat{Z}}_{j,m^2/s^2}=\frac{s^2}{m^2}\left[ X_j(\delta [ m^2/(s^2\delta )])-X_j(0)+\frac{\delta }{\alpha }\sum _{k=1}^{[ m^2/(s^2\delta )]} X_j((k-1)\delta ) \right] \end{aligned}$$
(28)

to approximate \(Z_{j,m^2/s^2}\) given by (2). The corresponding estimator of f is

$$\begin{aligned} {\widehat{{\widetilde{f}}}}_{m,s}(x)=\frac{1}{2\pi } \int _{-m}^{m} e^{-iux} \frac{1}{N} \sum _{j=1}^N e^{iu{\widehat{Z}}_{j,m^2/s^2}} e^{\frac{u^2\sigma ^2s^2}{2m^2}} du. \end{aligned}$$
(29)
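Formulas (28) and (29) are directly implementable. Below is a minimal Python sketch, where all parameter values, the Gaussian choice for the density f of the random effects, and the Euler simulation scheme are illustrative assumptions, not taken from the paper: the paths are simulated on the grid \(t_k=k\delta \), the statistics \({\widehat{Z}}_{j,m^2/s^2}\) are formed as in (28), and the Fourier integral in (29) is approximated by a Riemann sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): N paths of the mixed-effect OU
# model dX_j = (phi_j - X_j/alpha) dt + sigma dW_j, observed at step delta.
N, alpha, sigma = 200, 1.0, 0.3
delta, T = 1e-3, 10.0
n_obs = round(T / delta)

phi = rng.normal(1.0, 0.5, size=N)   # random effects; their density f is the target

# Euler scheme on the observation grid t_k = k*delta, with X_j(0) = 0.
X = np.zeros((N, n_obs + 1))
for k in range(n_obs):
    X[:, k + 1] = (X[:, k] + delta * (phi - X[:, k] / alpha)
                   + sigma * np.sqrt(delta) * rng.normal(size=N))

def Z_hat(m, s):
    """Discretized statistic (28), approximating Z_{j, m^2/s^2}."""
    n = int(m ** 2 / (s ** 2 * delta))       # integer part [m^2/(s^2 delta)]
    return (s ** 2 / m ** 2) * (X[:, n] - X[:, 0]
                                + (delta / alpha) * X[:, :n].sum(axis=1))

def f_hat(x, m, s):
    """Deconvolution estimator (29) evaluated on a grid x (Riemann sum in u)."""
    u = np.linspace(-m, m, 401)
    du = u[1] - u[0]
    # Empirical characteristic function of the Z_hat_j, corrected by the
    # Gaussian deconvolution factor exp(u^2 sigma^2 s^2 / (2 m^2)).
    ecf = np.exp(1j * np.outer(u, Z_hat(m, s))).mean(axis=1)
    integrand = (np.exp(-1j * np.outer(x, u)) * ecf
                 * np.exp(u ** 2 * sigma ** 2 * s ** 2 / (2 * m ** 2)))
    return integrand.sum(axis=1).real * du / (2 * np.pi)

x = np.linspace(-1.0, 3.0, 81)
est = f_hat(x, m=3.0, s=1.0)                  # m^2/s^2 = 9 <= T
```

With these values the noise variance \(\sigma ^2s^2/m^2\) is small, so `est` should roughly track the density of the \(\phi _j\); in practice the pair \((m,s)\) would of course be selected by the data-driven penalized criterion, not fixed by hand.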

We investigate the error:

$$\begin{aligned} \mathbb {E}[\Vert {\widehat{{\widetilde{f}}}}_{m,s}-f\Vert ^2] \le 2\mathbb {E}[\Vert {\widehat{{\widetilde{f}}}}_{m,s}-{\widetilde{f}}_{m,s}\Vert ^2] +2 \mathbb {E}\left[ \Vert {\widetilde{f}}_{m,s}-f\Vert ^2\right] \end{aligned}$$

where the second term on the right-hand side is bounded by Proposition 4.1. The Plancherel–Parseval theorem then implies:

$$\begin{aligned} \mathbb {E}[\Vert {\widehat{{\widetilde{f}}}}_{m,s}-{\widetilde{f}}_{m,s}\Vert ^2]\le & {} \frac{1}{2\pi }\mathbb {E}\left[ \int _{-m}^{m} \frac{1}{N}\sum _{j=1}^N e^{u^2\sigma ^2s^2/m^2} \left| e^{iu{\widehat{Z}}_{j,m^2/s^2}}-e^{iu{Z}_{j,m^2/s^2}}\right| ^2du\right] \\\le & {} \frac{1}{2\pi }\int _{-m}^{m} e^{u^2\sigma ^2s^2/m^2} \mathbb {E}\left[ \left| e^{iu{\widehat{Z}}_{1,m^2/s^2}} -e^{iu{Z}_{1,m^2/s^2}}\right| ^2\right] du \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\left[ \left| e^{iu{\widehat{Z}}_{1,m^2/s^2}} -e^{iu{Z}_{1,m^2/s^2}}\right| ^2\right] \le |u|^2\mathbb {E}\left[ \left| {\widehat{Z}}_{1,m^2/s^2}-{Z}_{1,m^2/s^2}\right| ^2\right] \end{aligned}$$

thus we study the last term. For all \((m,s) \in {\mathcal {C}}\), \(m^2/s^2 \le T\),

$$\begin{aligned} {Z}_{1,m^2/s^2}-{\widehat{Z}}_{1,m^2/s^2}= & {} \frac{s^2}{m^2}\left( X_1(m^2/s^2) -X_1(\delta [ m^2/(s^2\delta ) ]) \right) \\&+\frac{s^2}{\alpha m^2} \sum _{k=1}^{[ m^2/(s^2\delta ) ]} \int _{(k-1)\delta }^{k\delta }(X_1(s)-X_1((k-1)\delta ))ds \end{aligned}$$

then by Cauchy–Schwarz’s inequality we obtain

$$\begin{aligned} ({Z}_{1,m^2/s^2}-{\widehat{Z}}_{1,m^2/s^2})^2\le & {} \frac{2 s^4}{m^4}\left( X_1(m^2/s^2)-X_1(\delta [ m^2/(s^2\delta ) ]) \right) ^2\\&+\frac{2s^4}{\alpha ^2 m^4} \left[ \sum _{k=1}^{[ m^2/(s^2\delta ) ]}\int _{(k-1)\delta }^{k\delta }(X_1(s)-X_1((k-1)\delta ))ds\right] ^2. \end{aligned}$$

Hölder's inequality yields

$$\begin{aligned} \left[ \sum _{k=1}^{\left[ \frac{m^2}{s^2\delta } \right] }\int _{(k-1)\delta }^{k\delta }(X_1(s)-X_1((k-1)\delta ))ds\right] ^2\le & {} \left[ \frac{m^2}{s^2\delta } \right] \sum _{k=1}^{\left[ \frac{m^2}{s^2\delta } \right] }\left[ \int _{(k-1)\delta }^{k\delta }(X_1(s)-X_1((k-1)\delta ))ds\right] ^2 \\\le & {} \left[ \frac{m^2}{s^2\delta } \right] \delta \sum _{k=1}^{\left[ \frac{m^2}{s^2\delta } \right] }\int _{(k-1)\delta }^{k\delta }(X_1(s)-X_1((k-1)\delta ))^2 ds . \end{aligned}$$

Let us study \(\mathbb {E}[(X_1(s)-X_1((k-1)\delta ))^2]\), for \((k-1)\delta \le s \le k\delta \):

$$\begin{aligned} X_1(s)-X_1((k-1)\delta )=\int _{(k-1)\delta }^s \left( \phi _1-\frac{X_1(u)}{\alpha }\right) du+\int _{(k-1)\delta }^s \sigma dW_1(u) \end{aligned}$$

and Cauchy–Schwarz’s inequality gives

$$\begin{aligned} \mathbb {E}[(X_1(s)-X_1((k-1)\delta ))^2]\le & {} 2\mathbb {E}\left[ \left( \int _{(k-1)\delta }^s \left( \phi _1-\frac{X_1(u)}{\alpha }\right) du\right) ^2\right] + 2\mathbb {E}\left[ \left( \int _{(k-1)\delta }^s \sigma dW_1(u)\right) ^2\right] \nonumber \\\le & {} 2\delta \, \mathbb {E}\left[ \int _{(k-1)\delta }^s \left( \phi _1-\frac{X_1(u)}{\alpha } \right) ^2du \right] + 2\delta \sigma ^2 \nonumber \\\le & {} 4\delta ^2 \left( \mathbb {E}(\phi _1^2)+\frac{1}{\alpha ^2} \underset{s\ge 0}{\sup }\mathbb {E}[ X_1(s)^2]\right) +2\delta \sigma ^2. \end{aligned}$$
(30)

Finally, after simplification, and using that \([ x] \le x\) for all \(x\in \mathbb {R}^+\),

$$\begin{aligned} \mathbb {E}\left[ ({Z}_{1,m^2/s^2}-{\widehat{Z}}_{1,m^2/s^2})^2\right]\le & {} \frac{2s^4}{m^4}\mathbb {E}[\left( X_1(m^2/s^2)-X_1(\delta [ m^2/(s^2\delta )]) \right) ^2]\\&+\frac{2}{\alpha ^2} \left( 4\delta ^2 \left( \mathbb {E}(\phi _1^2)+\frac{1}{\alpha ^2} \underset{s\ge 0}{\sup }\mathbb {E}[ X_1(s)^2]\right) +2\delta \sigma ^2\right) \end{aligned}$$

and we can deal with the term \(\mathbb {E}[\left( X_1(m^2/s^2)-X_1(\delta [ m^2/(s^2\delta )]) \right) ^2]\) using formula (30) and \(m^2/s^2-\delta [ m^2/(s^2\delta )] \le \delta \). Thus:

$$\begin{aligned} \mathbb {E}\left[ ({Z}_{1,m^2/s^2}-{\widehat{Z}}_{1,m^2/s^2})^2\right]\le & {} \left( \frac{2s^4}{m^4}+\frac{2}{\alpha ^2}\right) \left( 4\delta ^2 \left( \mathbb {E}(\phi _1^2)+\frac{1}{\alpha ^2} \underset{s\ge 0}{\sup }\mathbb {E}[ X_1(s)^2]\right) +2\delta \sigma ^2\right) . \end{aligned}$$

Besides, for model (1), Eq. (17) implies \(\mathbb {E}[X_j(s)^2]\le 3x_j^2+3\alpha ^2\mathbb {E}[\phi _j^2]+3\sigma ^2\), and \(0<\delta <1\) implies

$$\begin{aligned} \mathbb {E}\left[ ({Z}_{1,m^2/s^2}-{\widehat{Z}}_{1,m^2/s^2})^2\right]\le & {} C \delta \left( \frac{2s^4}{m^4}+\frac{2}{\alpha ^2}\right) \end{aligned}$$

with C a positive constant which does not depend on \(\delta \) or \(m^2/s^2\). Finally,

$$\begin{aligned} \mathbb {E}[\Vert {\widehat{{\widetilde{f}}}}_{m,s}-{\widetilde{f}}_{m,s}\Vert ^2]\le & {} C \delta \left( \frac{2s^4}{m^4}+\frac{2}{\alpha ^2}\right) \frac{1}{2\pi }\int _{-m}^{m} u^2 e^{u^2\sigma ^2s^2/m^2} du\\\le & {} C'\delta \left( \int _{0}^{1} v^2 e^{v^2\sigma ^2s^2} dv \right) \left( \frac{s^4}{m}+\frac{ m^3}{\alpha ^2}\right) . \end{aligned}$$

But \(s \le 2/\sigma \) and \(m=\sqrt{k\varDelta }/\sigma \), with \(k\in \mathbb {N}^*\) and \(0<\varDelta <1\), thus we obtain

$$\begin{aligned} \mathbb {E}[\Vert {\widehat{{\widetilde{f}}}}_{m,s}-{\widetilde{f}}_{m,s}\Vert ^2]\le & {} \frac{C'}{\sigma ^3}\left( \int _{0}^{1} v^2 e^{v^2\sigma ^2s^2} dv \right) \left( 2^4\sqrt{k} \left( \frac{\delta }{\sqrt{\varDelta }}\right) +\frac{k^{3/2}}{ \alpha ^2}\left( \delta \varDelta ^{3/2}\right) \right) . \end{aligned}$$

Proposition 9.1

Under (A), assuming \(\mathbb {E}[\phi _j^2]<+\infty \), the estimator \({\widehat{{\widetilde{f}}}}_{m,s}\) given by (29) satisfies

$$\begin{aligned} \mathbb {E}\left[ \Vert {\widehat{{\widetilde{f}}}}_{m,s}-f\Vert ^2 \right] \le \Vert {f}_{m}-f\Vert ^2+\frac{\sqrt{k\varDelta }}{\sigma \pi N} e^{\sigma ^2s^2} +\frac{C'}{\sigma ^3} \frac{e^{\sigma ^2s^2}}{2\sigma ^2 s^2} \left( 2^4\sqrt{k} \left( \frac{\delta }{\sqrt{\varDelta }}\right) +\frac{k^{3/2}}{ \alpha ^2}\left( \delta \varDelta ^{3/2}\right) \right) . \end{aligned}$$

Finally, if \(\varDelta \) is fixed and \(\delta \) is small, the additional error is acceptable. For example, if \(\delta =\varDelta \), the additional term is of order \(\sqrt{\delta }\).
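The order of this discretization error can be checked numerically (a sketch under illustrative, assumed parameter values, not taken from the paper): in simulation the target \(Z_{j,\tau }=\phi _j+\sigma W_j(\tau )/\tau \) is known exactly, so the RMSE of the approximation (28) can be computed for two values of \(\delta \). It should decrease with \(\delta \), consistently with the \(O(\sqrt{\delta })\) bound above; the observed decay can even be faster when \(\tau /\delta \) is an integer, since the endpoint term then vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values (assumptions, not from the paper).
M, alpha, sigma, tau = 400, 1.0, 0.3, 1.0
phi = rng.normal(1.0, 0.5, size=M)

d0 = 1e-4                          # fine simulation step
n0 = round(tau / d0)
dW = np.sqrt(d0) * rng.normal(size=(M, n0))
X = np.zeros((M, n0 + 1))
for k in range(n0):
    X[:, k + 1] = X[:, k] + d0 * (phi - X[:, k] / alpha) + sigma * dW[:, k]

# In simulation the target is known exactly: Z_{j,tau} = phi_j + sigma*W_j(tau)/tau.
Z = phi + sigma * dW.sum(axis=1) / tau

def Z_hat(delta):
    """Riemann-sum approximation (28) of Z_{j,tau}, read off the step-delta grid."""
    step = round(delta / d0)       # subsample the fine grid
    n = round(tau / delta)
    return (X[:, n * step] - X[:, 0]
            + (delta / alpha) * X[:, 0:n * step:step].sum(axis=1)) / tau

rmse = {d: float(np.sqrt(np.mean((Z_hat(d) - Z) ** 2))) for d in (1e-2, 1e-3)}
```

Here `rmse[1e-3]` should come out noticeably smaller than `rmse[1e-2]`, in line with the bound \(\mathbb {E}[(Z_{1,\tau }-{\widehat{Z}}_{1,\tau })^2]\le C\delta (2s^4/m^4+2/\alpha ^2)\).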

For the corresponding study of the kernel estimator, we refer to Comte et al. (2013).


Cite this article

Dion, C. Nonparametric estimation in a mixed-effect Ornstein–Uhlenbeck model. Metrika 79, 919–951 (2016). https://doi.org/10.1007/s00184-016-0583-y
