
Abstract

The aim of this paper is to study the asymptotic properties of the maximum likelihood estimator (MLE) of the drift coefficient for the fractional stochastic heat equation driven by an additive space-time noise. We consider the statistical experiment traditional for stochastic partial differential equations, in which the measurements are performed in the spectral domain, and, in contrast to the existing literature, we study the asymptotic properties of the maximum likelihood (type) estimators (MLEs) when both the number of Fourier modes and the time horizon go to infinity. In the first part of the paper we consider the usual setup of continuous-time observations of the Fourier coefficients of the solution, and show that the MLE is consistent, asymptotically normal, and optimal in the mean-square sense. In the second part of the paper we investigate the natural time discretization of the MLE, assuming that the first N Fourier modes are measured at M time grid points, uniformly spaced over the time interval [0, T]. We provide a rigorous asymptotic analysis of the proposed estimators when \(N\rightarrow \infty \) and/or \(T,M\rightarrow \infty \). We establish sufficient conditions on the growth rates of N, M and T that guarantee consistency and asymptotic normality of these estimators.
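
To make the statistical experiment concrete, the following minimal Python sketch (an illustration added for the reader, not the authors' code) simulates the spectral observation scheme and evaluates a discretized MLE-type estimator. The diagonal mode dynamics \(du_k(t)=-\theta _0\lambda _k^{2\beta }u_k(t)\,dt+\sigma \lambda _k^{-\gamma }\,dw_k(t)\) with \(\lambda _k\sim k^{1/d}\), as well as the estimator weights, are inferred from the notation of the appendix below and should be read as assumptions of the sketch; all parameter values are arbitrary.

```python
import numpy as np

# Hedged sketch: simulate N Fourier modes of the (assumed) diagonal model
#   du_k(t) = -theta0 * lam_k^{2*beta} * u_k(t) dt + sigma * lam_k^{-gamma} dw_k(t),
# observed at M uniform time points on [0, T], and compute the discretized
# MLE-type estimator suggested by the appendix notation (Y_{N,M,T}, I_{N,M,T}).
rng = np.random.default_rng(0)
theta0, sigma, beta, gamma, d = 1.0, 0.5, 0.5, 0.0, 1
N, M, T = 20, 20_000, 10.0
dt = T / M

lam = np.arange(1, N + 1) ** (1.0 / d)                 # lambda_k ~ k^{1/d}
a = theta0 * lam ** (2 * beta)                         # mode-wise mean reversion
v_stat = sigma**2 * lam ** (-2 * beta - 2 * gamma) / (2 * theta0)  # stationary variance

# exact Ornstein-Uhlenbeck transition on the observation grid
u = np.empty((M + 1, N))
u[0] = rng.normal(0.0, np.sqrt(v_stat))                # start from stationarity
phi = np.exp(-a * dt)
q = np.sqrt(v_stat * (1.0 - phi**2))
for i in range(M):
    u[i + 1] = phi * u[i] + q * rng.normal(size=N)

du = np.diff(u, axis=0)
num = np.sum(lam ** (2 * beta + 2 * gamma) * np.sum(u[:-1] * du, axis=0))
den = np.sum(lam ** (4 * beta + 2 * gamma) * np.sum(u[:-1] ** 2, axis=0) * dt)
print("theta0 =", theta0, " theta_hat =", -num / den)  # close to theta0 for large N, M, T
```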


Notes

  1. Whenever convenient, we will also use the notation ‘\(\overset{d}{\longrightarrow }\)’ to denote the convergence in distribution of random variables.

  2. Throughout the text we will use the notation \({\mathcal {N}}(\mu _0,\sigma _0^2)\) to denote a Gaussian random variable with mean \(\mu _0\) and variance \(\sigma _0^2\).

  3. We will denote by C with subindexes generic constants that may change from line to line.

  4. Throughout we will use the following version of the Chebyshev inequality: \({\mathbb {P}}(|X|>a)\le {\mathbb {E}}(X^2)/a^2\), for \(a>0\).

References

  1. Bishwal, J.P.N.: Parameter Estimation in Stochastic Differential Equations. Lecture Notes in Mathematics, vol. 1923. Springer, Berlin (2008)


  2. Bibinger, M., Trabs, M.: On central limit theorems for power variations of the solution to the stochastic heat equation. In: Proceedings in Mathematics and Statistics. Springer (Forthcoming) (2019)

  3. Bibinger, M., Trabs, M.: Volatility estimation for stochastic PDEs using high-frequency observations. Stochastic Processes and their Applications (2019). https://doi.org/10.1016/j.spa.2019.09.002

  4. Cheng, Z., Cialenco, I., Gong, R.: Bayesian estimations for diagonalizable bilinear SPDEs. Stochastic Processes and their Applications (Forthcoming) (2019). https://doi.org/10.1016/j.spa.2019.03.020

  5. Cialenco, I., Glatt-Holtz, N.: Parameter estimation for the stochastically perturbed Navier–Stokes equations. Stoch. Process. Appl. 121(4), 701–724 (2011)


  6. Cialenco, I., Gong, R., Huang, Y.: Trajectory fitting estimators for SPDEs driven by additive noise. Stat. Inference Stoch. Process. 21(1), 1–19 (2018)


  7. Cialenco, I., Huang, Y.: A note on parameter estimation for discretely sampled SPDEs. Stochastics and Dynamics (2019). https://doi.org/10.1142/S0219493720500161

  8. Chow, P.: Stochastic Partial Differential Equations. Applied Mathematics and Nonlinear Science Series. Chapman & Hall, Boca Raton (2007)


  9. Chong, C.: High-frequency analysis of parabolic stochastic PDEs. arXiv:1806.06959 (2019)

  10. Cialenco, I.: Statistical inference for SPDEs: an overview. Stat. Inference Stoch. Process. 21(2), 309–329 (2018)


  11. Cialenco, I., Lototsky, S.V., Pospíšil, J.: Asymptotic properties of the maximum likelihood estimator for stochastic parabolic equations with additive fractional Brownian motion. Stoch. Dyn. 9(2), 169–185 (2009)


  12. Cialenco, I., Xu, L.: Hypothesis testing for stochastic PDEs driven by additive noise. Stoch. Process. Appl. 125(3), 819–866 (2015)


  13. Lototsky, S.V.: Statistical inference for stochastic parabolic equations: a spectral approach. Publ. Mat. 53(1), 3–45 (2009)


  14. Lototsky, S.V., Rozovsky, B.L.: Stochastic Partial Differential Equations. Springer, New York (2017)


  15. Lototsky, S.V., Rozovsky, B.L.: Stochastic Evolution Systems. Linear Theory and Applications to Non-linear Filtering. Probability Theory and Stochastic Modelling, vol. 89, 2nd edn. Springer, New York (2018)


  16. Markussen, B.: Likelihood inference for a discretely observed stochastic partial differential equation. Bernoulli 9(5), 745–762 (2003)


  17. Nourdin, I., Peccati, G.: Normal Approximations with Malliavin Calculus, from Stein’s Method to Universality. Cambridge Tracts in Mathematics, vol. 192. Cambridge University Press, Cambridge (2012)


  18. Nualart, D.: The Malliavin Calculus and Related Topics, Probability and Its Applications (New York), 2nd edn. Springer, Berlin (2006)


  19. Piterbarg, L.I., Rozovskii, B.L.: On asymptotic problems of parameter estimation in stochastic PDE’s: discrete time sampling. Math. Methods Stat. 6(2), 200–223 (1997)


  20. Prakasa Rao, B.L.S.: Nonparametric inference for a class of stochastic partial differential equations based on discrete observations. Sankhyā Ser. A 64(1), 1–15 (2002)


  21. Prakasa Rao, B.L.S.: Estimation for some stochastic partial differential equations based on discrete observations. II. Calcutta Stat. Assoc. Bull. 54(215–216), 129–141 (2003)


  22. Pasemann, G., Stannat, W.: Drift estimation for stochastic reaction–diffusion systems. Preprint arXiv:1904.04774v1 (2019)

  23. Pospíšil, J., Tribe, R.: Parameter estimates and exact variations for stochastic heat equations driven by space-time white noise. Stoch. Anal. Appl. 25(3), 593–611 (2007)


  24. Shiryaev, A.N.: Probability. Graduate Texts in Mathematics, vol. 2, 2nd edn. Springer, New York (1996)


  25. Shubin, M.A.: Pseudodifferential Operators and Spectral Theory, 2nd edn. Springer, Berlin (2001)



Acknowledgements

The second author would like to thank David Nualart for fruitful discussions and advice that improved an early version of this paper. The authors are also grateful to the editors and the anonymous referees for their helpful comments and suggestions which helped to improve greatly the paper.

Author information


Corresponding author

Correspondence to Hyun-Jung Kim.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proofs of technical Lemmas

Proof of Lemma 1

We start by computing the Malliavin derivative of \(\widehat{F}_{N,T}\). For \(r\le t\) and \(1\le k\le N\), we have

$$\begin{aligned} D_{r,k}u_k(t)&= \sigma \lambda _k^{-\gamma } D_{r,k} \int _0^t e^{-\theta _0 \lambda _k^{2\beta }(t-s)}d \!w_k(s)\\&= \sigma \lambda _k^{-\gamma } e^{-\theta _0 \lambda _k^{2\beta }(t-r)}. \end{aligned}$$

Moreover, one has that \(D_{r,j}u_k(t)=0\) if \(j\ne k\) or \(r>t\). Therefore, for \(r\le T\) and \(1\le j\le N\), we have by (45),

$$\begin{aligned} D_{r,j}\widehat{F}_{N,T}&= -\frac{\sigma }{C_{N,T}^2} \lambda _j^{2\beta +\gamma } u_j(r)- \frac{\sigma }{C_{N,T}^2} \sum _{k=1}^N \lambda _k^{2\beta +\gamma } \int _r^{T} D_{r,j}u_k(t)d \!w_k(t) \nonumber \\&= -\frac{\sigma }{C_{N,T}^2} \lambda _j^{2\beta +\gamma } u_j(r)-\frac{\sigma }{C_{N,T}^2} \lambda _j^{2\beta +\gamma } \int _r^{T} D_{r,j}u_j(t)d \!w_j(t) \nonumber \\&= -\frac{\sigma }{C_{N,T}^2} \lambda _j^{2\beta +\gamma } u_j(r)- \frac{\sigma ^2}{C_{N,T}^2} \lambda _j^{2\beta } \int _r^{T} e^{-\theta _0 \lambda _j^{2\beta }(t-r)} d \!w_j(t). \end{aligned}$$
(33)

We continue by setting

$$\begin{aligned} A:=\left\| C_{N,T} D\widehat{F}_{N,T}\right\| _{{\mathcal {H}}}^2 = C_{N,T}^2\Vert D\widehat{F}_{N,T}\Vert _{{\mathcal {H}}}^2, \end{aligned}$$

and in view of (33), we obtain

$$\begin{aligned} A&=C_{N,T}^2\int _0^T \sum _{k=1}^N \left[ \frac{\sigma }{C_{N,T}^2} \lambda _k^{2\beta +\gamma } u_k(r)+ \frac{\sigma ^2}{C_{N,T}^2} \lambda _k^{2\beta } \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta } (t-r)} dw_k(t)\right] ^2 d \!r \nonumber \\&= \int _0^{T}\sum _{k=1}^N \Bigg [\frac{\sigma ^2}{C_{N,T}^2} \lambda _k^{4\beta +2\gamma } u_k^2(r) + 2\frac{\sigma ^3}{C_{N,T}^2} \lambda _k^{4\beta +\gamma } u_k(r) \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} d \!w_k(t)\\ \nonumber&\qquad \qquad \qquad \qquad + \frac{\sigma ^4}{C_{N,T}^2} \lambda _k^{4\beta } \left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta } (t-r)} d \!w_k(t)\right) ^2 \Bigg ]d \!r \nonumber \\&=:A_1+ A_2+A_3. \end{aligned}$$

It is easy to see that for any process \(\varPhi =\{\varPhi (s), s\in [0,t]\}\) such that \(\sqrt{{\mathbb {V}}\mathrm {ar}(\varPhi (s))}\) is integrable on [0, t], it holds that \(\sqrt{ {\mathbb {V}}\mathrm {ar}\left( \int _0^t \varPhi (s) d \!s \right) } \le \int _0^t \sqrt{{\mathbb {V}}\mathrm {ar}(\varPhi (s))} d \!s.\) Therefore, we have

$$\begin{aligned} \sqrt{ {\mathbb {V}}\mathrm {ar}\left( \frac{1}{2} \Vert C_{N,T} D\widehat{F}_{N,T}\Vert _{{\mathcal {H}}}^2 \right) }&\le \frac{1}{2}\left( \sqrt{ {\mathbb {V}}\mathrm {ar}(A_1) } + \sqrt{ {\mathbb {V}}\mathrm {ar}(A_2) } + \sqrt{{\mathbb {V}}\mathrm {ar}(A_3) } \right) \\&\le \frac{1}{2}\left( B_1+B_2+B_3\right) , \end{aligned}$$

where

$$\begin{aligned} B_1&:=\frac{\sigma ^2}{C_{N,T}^2}\int _0^{T} \left[ {\mathbb {V}}\mathrm {ar}\left( \sum _{k=1}^N \lambda _k^{4\beta +2\gamma } u_k^2(r)\right) \right] ^{1/2} d \!r\\ B_2&:=2\frac{\sigma ^3}{C_{N,T}^2}\int _0^{T} \left[ {\mathbb {V}}\mathrm {ar}\left( \sum _{k=1}^N \lambda _k^{4\beta +\gamma } u_k(r) \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} dw_k(t)\right) \right] ^{1/2} d \!r\\ B_3&:= \frac{\sigma ^4}{C_{N,T}^2} \int _0^{T} \left[ {\mathbb {V}}\mathrm {ar}\left( \sum _{k=1}^N \lambda _k^{4\beta } \left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} dw_k(t)\right) ^2\right) \right] ^{1/2} d \!r. \end{aligned}$$

Note that by (10) and (11),

$$\begin{aligned} {\mathbb {V}}\mathrm {ar}\left( u_k^2(r)\right)&={\mathbb {E}}\left( u_k^4(r)\right) -{\mathbb {E}}^2\left( u_k^2(r)\right) =\frac{\sigma ^4 \lambda _k^{-4\beta -4\gamma }}{2\theta _0^2} \left( 1-e^{-2\theta _0 \lambda _k^{2\beta } r}\right) ^2. \end{aligned}$$
(34)
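
Since, by (10) and (11), \(u_k(r)\) is a centered Gaussian random variable with variance \(\frac{\sigma ^2\lambda _k^{-2\beta -2\gamma }}{2\theta _0}\left( 1-e^{-2\theta _0\lambda _k^{2\beta }r}\right) \), identity (34) is simply the statement \({\mathbb {V}}\mathrm {ar}(\xi ^2)=2\left( {\mathbb {E}}\xi ^2\right) ^2\) for a centered Gaussian \(\xi \). A quick Monte Carlo check of (34), with illustrative parameter values:

```python
import numpy as np

# Monte Carlo check of (34): u_k(r) is centered Gaussian, so Var(u_k^2) = 2*(E u_k^2)^2.
rng = np.random.default_rng(1)
theta0, sigma, beta, gamma, lam, r = 2.0, 1.0, 1.0, 0.5, 3.0, 0.7

v = sigma**2 * lam ** (-2 * beta - 2 * gamma) / (2 * theta0) \
    * (1 - np.exp(-2 * theta0 * lam ** (2 * beta) * r))         # variance of u_k(r)
xi = rng.normal(0.0, np.sqrt(v), size=1_000_000)
rhs = sigma**4 * lam ** (-4 * beta - 4 * gamma) / (2 * theta0**2) \
      * (1 - np.exp(-2 * theta0 * lam ** (2 * beta) * r)) ** 2  # right-hand side of (34)
print(np.var(xi**2), rhs)  # the two values agree to Monte Carlo accuracy
```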

For \(B_1\), by the independence of \(\{u_k\}_{k\ge 1}\) and (34),

$$\begin{aligned} B_1&=\frac{\sigma ^2}{C_{N,T}^2}\int _0^{T} \left( \sum _{k=1}^N \lambda _k^{8\beta +4\gamma } {\mathbb {V}}\mathrm {ar}\left( u_k^2(r)\right) \right) ^{1/2}d \!r\nonumber \\&=\frac{\sigma ^4}{\sqrt{2}\theta _0C_{N,T}^2}\int _0^{T} \left( \sum _{k=1}^N \lambda _k^{4\beta } \left( 1-e^{-2\theta _0 \lambda _k^{2\beta } r}\right) ^2 \right) ^{1/2} d \!r \nonumber \\&\simeq \frac{\sigma ^4T}{\sqrt{2}\theta _0C_{N,T}^2}\left( \sum _{k=1}^N\lambda _k^{4\beta }\right) ^{1/2}\quad \text{ as }\ T\rightarrow \infty \nonumber \\&\sim \frac{1}{N^{1/2}}\rightarrow 0, \quad \text{ as }\ N,T \rightarrow \infty . \end{aligned}$$
(35)

For \(B_2\), we note that \(u_k\) and \(w_l\) are independent if \(k\ne l\). Therefore, we rewrite \(B_2\) as

$$\begin{aligned} B_2&=\frac{2\sigma ^3}{C_{N,T}^2}\int _0^{T} \left[ \sum _{k=1}^N \lambda _k^{8\beta +2\gamma } {\mathbb {V}}\mathrm {ar}\left( u_k(r) \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} d \!w_k(t) \right) \right] ^{1/2} d \!r. \end{aligned}$$

By straightforward calculations, we have that

$$\begin{aligned}&{\mathbb {V}}\mathrm {ar}\left( u_k(r) \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta } (t-r)} d \!w_k(t) \right) \\&\le {\mathbb {E}}\left[ u_k^2(r)\left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta } (t-r)} d \!w_k(t)\right) ^2 \right] \nonumber \\&= {\mathbb {E}}\left[ u^2_k(r){\mathbb {E}}\left[ \left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta } (t-r)} d \!w_k(t)\right) ^2\Big | {\mathcal {F}}_r \right] \right] \nonumber \\&= \frac{\sigma ^2 \lambda _k^{-2\beta -2\gamma } }{2\theta _0} \left( 1-e^{-2\theta _0 \lambda _k^{2\beta }r}\right) \int _r^{T} e^{-2\theta _0 \lambda _k^{2\beta }(t-r)} d \!t\nonumber \\&\le \frac{\sigma ^2\lambda _k^{-4\beta -2\gamma } }{4\theta ^2_0}. \end{aligned}$$

Therefore, we get

$$\begin{aligned} B_2\le \frac{\sigma ^4}{\theta _0 C_{N,T}^2}\int _0^{T} \left( \sum _{k=1}^N \lambda _k^{4\beta } \right) ^{1/2} d \!r \sim \frac{1}{N^{1/2}}\rightarrow 0, \quad \text{ as } N,T\rightarrow \infty . \end{aligned}$$
(36)

Let us now consider \(B_3\). Since \(w_k\) and \(w_j\) are independent for \(k\ne j\), we have

$$\begin{aligned} B_3&= \frac{\sigma ^4}{C_{N,T}^2} \int _0^{T} \left[ {\mathbb {V}}\mathrm {ar}\left( \sum _{k=1}^N \lambda _k^{4\beta } \left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} d \!w_k(t)\right) ^2\right) \right] ^{1/2} d \!r\nonumber \\&= \frac{\sigma ^4}{C_{N,T}^2} \int _0^{T} \left[ \sum _{k=1}^N \lambda _k^{8\beta } {\mathbb {V}}\mathrm {ar}\left[ \left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} d \!w_k(t)\right) ^2\right] \right] ^{1/2} d \!r\nonumber \\&\le \frac{\sigma ^4}{C_{N,T}^2} \int _0^{T} \left[ \sum _{k=1}^N \lambda _k^{8\beta } {\mathbb {E}}\left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} d \!w_k(t)\right) ^4 \right] ^{1/2} d \!r\nonumber \\&= \frac{\sigma ^4}{C_{N,T}^2} \int _0^{T} \left[ 3 \sum _{k=1}^N \lambda _k^{8\beta } \left[ {\mathbb {E}}\left( \int _r^{T} e^{-\theta _0 \lambda _k^{2\beta }(t-r)} d \!w_k(t)\right) ^2\right] ^2 \right] ^{1/2} d \!r\nonumber \\&\le \frac{\sqrt{3}\sigma ^4}{2\theta _0 C_{N,T}^2} T \left( \sum _{k=1}^N \lambda _k^{4\beta } \right) ^{1/2}\sim \frac{1}{N^{1/2}} \rightarrow 0, \text{ as } N,T \rightarrow \infty . \end{aligned}$$
(37)

Finally, combining (35), (36) and (37), we have that for every \(\varepsilon >0\), there exist constants \(N_0,T_0>0\), independent of each other, such that for all \(N\ge N_0\) and \(T\ge T_0\),

$$\begin{aligned} B_1+B_2+B_3 < \varepsilon . \end{aligned}$$

This completes the proof. \(\square \)

Proof of Lemma 2

Using (9), (21) follows by direct evaluation. As for (22) and (23), since \(u_k(t)-u_k(s)\) is a Gaussian random variable, it is enough to prove (22) and (23) for \(l=1\). We note that for \(t<s\), using (21), we deduce that

$$\begin{aligned}&{\mathbb {E}}|u_k(t)-u_k(s)|^2\\&\quad =\frac{\sigma ^2\lambda _k^{-2\gamma -2\beta }}{2\theta _0} \left[ 2(1-e^{-\theta _0 \lambda _k^{2\beta }(s-t)}) + (e^{-\theta _0 \lambda _k^{2\beta } t}+e^{-\theta _0 \lambda _k^{2\beta }s}) (e^{-\theta _0 \lambda _k^{2\beta } s}-e^{-\theta _0 \lambda _k^{2\beta } t})\right] \\&\quad \le C\sigma ^2\lambda _k^{-2\gamma }|t-s|, \end{aligned}$$

for some \(C>0\), and where in the last inequality we used the fact that \(e^{-x}\) is Lipschitz continuous on \([0,\infty )\). Hence, (22) is proved. Similarly, we have

$$\begin{aligned} {\mathbb {E}}|u_k(t)+u_k(s)|^2&\le C\sigma ^2\lambda _k^{-2\gamma -2\beta }, \end{aligned}$$

for some \(C>0\), which implies (23). The proof is complete. \(\square \)
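
Since the display above gives \({\mathbb {E}}|u_k(t)-u_k(s)|^2\) in closed form, the bound (22) is also easy to check numerically; in the following sketch (with arbitrary parameter values), the constant \(C=1\) already suffices.

```python
import numpy as np

# Numerical check of (22) via the closed-form increment variance displayed above:
# the exact E|u_k(t) - u_k(s)|^2 never exceeds sigma^2 * lam^{-2*gamma} * |t - s|.
theta0, sigma, beta, gamma, lam = 2.0, 1.0, 1.0, 0.5, 3.0
a = theta0 * lam ** (2 * beta)

t = np.linspace(0.0, 5.0, 201)
s = t[:, None] + np.linspace(0.01, 5.0, 200)[None, :]   # grid of pairs with s > t
exact = sigma**2 * lam ** (-2 * gamma - 2 * beta) / (2 * theta0) * (
    2 * (1 - np.exp(-a * (s - t[:, None])))
    + (np.exp(-a * t[:, None]) + np.exp(-a * s)) * (np.exp(-a * s) - np.exp(-a * t[:, None]))
)
bound = sigma**2 * lam ** (-2 * gamma) * (s - t[:, None])  # right-hand side of (22), C = 1
print(bool(np.all(exact <= bound + 1e-12)))                # True
```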

Proof of Lemma 3

Since \(u_k, \ k \ge 1\), are independent, and taking into account that

$$\begin{aligned} \int _0^T u_k(t)d \!w_k(t)=\sum _{i=1}^M \int _{t_{i-1}}^{t_i}u_k(t)d \!w_k(t), \end{aligned}$$

we have that

$$\begin{aligned}&{\mathbb {E}}|Y_{N,M,T}-Y_{N,T}|^2 \\&\quad ={\mathbb {E}}\left| \sum _{k=1}^N \lambda _k^{2\beta +\gamma } \left[ \sum _{i=1}^M u_k(t_{i-1})\left( w_k(t_i)-w_k(t_{i-1})\right) -\int _0^T u_k(t)d \!w_k(t)\right] \right| ^2\\&\quad =\sum _{k=1}^N \lambda _k^{4\beta +2\gamma }{\mathbb {E}} \left| \sum _{i=1}^M u_k(t_{i-1})\left( w_k(t_i)-w_k(t_{i-1})\right) -\int _0^T u_k(t)d \!w_k(t)\right| ^2 \\&\quad =\sum _{k=1}^N \lambda _k^{4\beta +2\gamma } {\mathbb {E}} \left| \sum _{i=1}^M \int _{t_{i-1}}^{t_i} \left( u_k(t_{i-1})-u_k(t)\right) d \!w_k(t) \right| ^2\\&\quad =\sum _{k=1}^N \lambda _k^{4\beta +2\gamma }\sum _{i=1}^M \int _{t_{i-1}}^{t_i} {\mathbb {E}}\left( u_k(t_{i-1})-u_k(t)\right) ^2d \!t \end{aligned}$$

and hence, by (22), there exist constants \(C_1,C_2>0\), such that

$$\begin{aligned} {\mathbb {E}}|Y_{N,M,T}-Y_{N,T}|^2&\le C_1\sum _{k=1}^N \lambda _k^{4\beta }\sum _{i=1}^M \int _{t_{i-1}}^{t_i} |t_{i-1}-t|d \!t\\&=C_1\left( \sum _{k=1}^N \lambda _k^{4\beta }\right) \frac{T^2}{2M} \le C_2\frac{T^2 N^{\frac{4\beta }{d}+1}}{M}. \end{aligned}$$

Hence, (24) follows at once.
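
Before turning to (25), here is a minimal numerical illustration (not part of the proof) of the \(O(T^2/M)\) rate behind (24), for a single mode with \(\lambda _k=1\): the mean-square gap between the coarse Itô–Riemann sum and a fine-grid proxy of \(\int _0^T u(t)d \!w(t)\) roughly halves each time M doubles.

```python
import numpy as np

# Hedged illustration of the O(T^2/M) rate in (24) for one mode (lambda_k = 1).
rng = np.random.default_rng(2)
theta0, sigma, T = 2.0, 1.0, 10.0
M_fine, reps = 2**12, 500
dt = T / M_fine

dw = rng.normal(0.0, np.sqrt(dt), size=(reps, M_fine))
u = np.zeros((reps, M_fine + 1))
for i in range(M_fine):                                # Euler scheme on the fine grid
    u[:, i + 1] = u[:, i] - theta0 * u[:, i] * dt + sigma * dw[:, i]
fine = np.sum(u[:, :-1] * dw, axis=1)                  # proxy for int_0^T u dw

for M in (2**5, 2**6, 2**7):
    step = M_fine // M
    u_left = u[:, :-1].reshape(reps, M, step)[:, :, 0]  # u(t_{i-1}) on the coarse grid
    dw_c = dw.reshape(reps, M, step).sum(axis=2)        # coarse Brownian increments
    err2 = (np.sum(u_left * dw_c, axis=1) - fine) ** 2
    print(M, err2.mean())                               # roughly halves as M doubles
```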

Next we will prove (25). We note that

$$\begin{aligned} {\mathbb {E}}|I_{N,M,T}-I_{N,T}|^2&= {\mathbb {E}}\left| \sum _{k=1}^N \lambda _k^{4\beta +2\gamma }\left( \sum _{i=1}^M u_k^2(t_{i-1})(t_i-t_{i-1}) -\int _0^T u_k^2(t)d \!t\right) \right| ^2\\&=\sum _{k=1}^N \lambda _k^{8\beta +4\gamma } {\mathbb {E}}\left| \sum _{i=1}^M \int _{t_{i-1}}^{t_i} \left( u_k^2(t_{i-1})-u_k^2(t)\right) d \!t \right| ^2. \end{aligned}$$

Consequently, letting \(U_i(t):=u_k^2(t_{i-1})-u_k^2(t)\) (suppressing, as usual, the dependence on k in the notation), we continue

$$\begin{aligned} {\mathbb {E}}|I_{N,M,T}-I_{N,T}|^2&=\sum _{k=1}^N \lambda _k^{8\beta +4\gamma } \sum _{i=1}^M {\mathbb {E}}\left| \int _{t_{i-1}}^{t_i} U_i(t)d \!t \right| ^2\\&\quad +\,2\sum _{k=1}^N \lambda _k^{8\beta +4\gamma } \sum _{i<j}{\mathbb {E}} \int _{t_{i-1}}^{t_i}\int _{t_{j-1}}^{t_j} U_i(t)U_j(s) d \!sd \!t =: I_1+I_2. \end{aligned}$$

Note that by the Cauchy–Schwarz inequality,

$$\begin{aligned} {\mathbb {E}}|U_i^2(t)|&={\mathbb {E}}|u_k^2(t_{i-1})-u_k^2(t)|^2 ={\mathbb {E}}|u_k(t_{i-1})-u_k(t)|^2|u_k(t_{i-1})+u_k(t)|^2\\&\le \left( {\mathbb {E}}|u_k(t_{i-1})-u_k(t)|^4\right) ^{1/2} \left( {\mathbb {E}}|u_k(t_{i-1})+u_k(t)|^4\right) ^{1/2}. \end{aligned}$$

Moreover, by (22) and (23),

$$\begin{aligned} {\mathbb {E}}|U_i^2(t)|\le c_1\lambda _k^{-4\gamma -2\beta }|t-t_{i-1}|,\quad \text{ for } \text{ some }\ c_1>0. \end{aligned}$$
(38)

Again by the Cauchy–Schwarz inequality and (38), we have that

$$\begin{aligned} I_1&=\sum _{k=1}^N \lambda _k^{8\beta +4\gamma } \sum _{i=1}^M {\mathbb {E}}\left| \int _{t_{i-1}}^{t_i} U_i(t)d \!t \right| ^2 \le \sum _{k=1}^N \lambda _k^{8\beta +4\gamma } \sum _{i=1}^M (t_i-t_{i-1}) \int _{t_{i-1}}^{t_i} {\mathbb {E}}|U_i^2(t)|d \!t\\&\le c_1\sum _{k=1}^N \lambda _k^{6\beta } \sum _{i=1}^M (t_i-t_{i-1}) \int _{t_{i-1}}^{t_i} (t-t_{i-1})d \!t =c_1\sum _{k=1}^N \lambda _k^{6\beta } \frac{T^3}{2M^2}. \end{aligned}$$

Turning to \(I_2\), we first notice that

$$\begin{aligned}&{\mathbb {E}}|U_i(t)U_j(s)|\\&\quad ={\mathbb {E}}\Big [\left( u_k^2(t_{i-1}){-}u_k^2(t) \right) \left( u_k^2(t_{j-1}){-}u_k^2(s) \right) \Big ]\\&\quad ={\mathbb {E}}\Big [\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{i-1}){+}u_k(t)\right) \left( u_k(t_{j-1}){-}u_k(s)\right) \left( u_k(t_{j-1}){+}u_k(s)\right) \Big ]\\&\quad ={\mathbb {E}}\Big [\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) u_k(t_{i-1})u_k(t_{j-1})\Big ]\\&\qquad +{\mathbb {E}}\Big [\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) (u_k(t_{i-1})u_k(s)\Big ]\\&\qquad +{\mathbb {E}}\Big [\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) u_k(t)u_k(t_{j-1})\Big ]\\&\qquad +{\mathbb {E}}\Big [\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) u_k(t)u_k(s)\Big ]. \end{aligned}$$

By Wick’s Lemma [1, Lemma 3.1], we continue

$$\begin{aligned}&{\mathbb {E}}|U_i(t)U_j(s)|\\&\quad ={\mathbb {E}}\left[ \left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{i-1})+u_k(t)\right) \right] {\mathbb {E}}\left[ \left( u_k(t_{j-1})\right. \right. \\&\qquad \left. \left. -\,u_k(s)\right) \left( u_k(t_{j-1})+u_k(s)\right) \right] \\&\qquad +{\mathbb {E}}\left[ \left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) \right] {\mathbb {E}}\left[ \left( u_k(t_{i-1})\right. \right. \\&\qquad \left. \left. +\,u_k(t)\right) \left( u_k(t_{j-1})+u_k(s)\right) \right] \\&\qquad +{\mathbb {E}}\left[ \left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})+u_k(s)\right) \right] {\mathbb {E}}\left[ \left( u_k(t_{i-1})\right. \right. \\&\qquad \left. \left. +\,u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) \right] \\&\quad =:J_1+J_2+J_3. \end{aligned}$$

For \(J_2\), we have

$$\begin{aligned} {\mathbb {E}}\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right)&= {\mathbb {E}}\left( u_k(t_{i-1})u_k(t_{j-1})\right) -{\mathbb {E}}\left( u_k(t_{i-1})u_k(s)\right) \\&\quad -{\mathbb {E}}\left( u_k(t)u_k(t_{j-1})\right) +{\mathbb {E}}\left( u_k(t)u_k(s)\right) . \end{aligned}$$

By (21), for \(i<j\) and \(t<s\),

$$\begin{aligned}&{\mathbb {E}}\left( u_k(t_{i-1})-u_k(t)\right) \left( u_k(t_{j-1})-u_k(s)\right) \\&\quad = \frac{\sigma ^2\lambda _k^{-2\gamma -2\beta }}{2\theta _0} \Bigg [e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}-t_{i-1})}-e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}+t_{i-1})} -e^{-\theta _0 \lambda _k^{2\beta }(s-t_{i-1})}\\&\qquad +e^{-\theta _0 \lambda _k^{2\beta }(t_{i-1}+s)} -e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}-t)}+e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}+t)} +e^{-\theta _0 \lambda _k^{2\beta }(s-t)}-e^{-\theta _0 \lambda _k^{2\beta }(s+t)} \Bigg ] \\&\quad = \frac{\sigma ^2\lambda _k^{-2\gamma -2\beta }}{2\theta _0} \Bigg [\left( e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}-t_{i-1})} -e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}-t)}\right) \\&\qquad +\left( e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}+t)} -e^{-\theta _0 \lambda _k^{2\beta }(t_{j-1}+t_{i-1})}\right) +\left( e^{-\theta _0 \lambda _k^{2\beta }(s-t)} -e^{-\theta _0 \lambda _k^{2\beta }(s-t_{i-1})}\right) \\&\qquad + \left( e^{-\theta _0 \lambda _k^{2\beta }(t_{i-1}+s)} -e^{-\theta _0 \lambda _k^{2\beta }(s+t)}\right) \Bigg ] \le c_2 \lambda _k^{-2\gamma }(t-t_{i-1}), \end{aligned}$$

for some \(c_2>0\). By similar arguments, we also obtain

$$\begin{aligned} {\mathbb {E}}\left( u_k(t_{i-1})+u_k(t)\right) \left( u_k(t_{j-1})+u_k(s)\right) \le c_3\lambda _k^{-4\gamma }(s-t_{j-1}), \end{aligned}$$

for some \(c_3>0\). Thus,

$$\begin{aligned} J_2 \le c_4\lambda _k^{-4\gamma }(t-t_{i-1})(s-t_{j-1}), \end{aligned}$$

for some \(c_4>0\). By analogy, one can treat \(J_1\) and \(J_3\), and derive the following upper bounds:

$$\begin{aligned} J_1\le c_5\lambda _k^{-4\gamma }(t-t_{i-1})(s-t_{j-1}), \qquad J_3 \le c_6\lambda _k^{-4\gamma }(t-t_{i-1})(s-t_{j-1}), \end{aligned}$$

for some \(c_5,c_6>0\). Finally, combining the above, we have

$$\begin{aligned} I_2\le & {} c_7\sum _{k=1}^N \lambda _k^{8\beta } \sum _{i<j} \int _{t_{j-1}}^{t_j}\int _{t_{i-1}}^{t_i}(t-t_{i-1})(s-t_{j-1})d \!t d \!s \\\le & {} c_8\sum _{k=1}^N \lambda _k^{8\beta } \frac{T^4}{M^2}, \quad \text{ for } \text{ some }\ c_7,c_8>0. \end{aligned}$$

Thus, using the estimates for \(I_1,I_2\), and the fact that \(\lambda _k\sim k^{1/d}\), we conclude that

$$\begin{aligned} I_1+I_2 \le c_9\sum _{k=1}^N \lambda _k^{8\beta } \frac{T^4}{M^2} \le C_2 \frac{T^4 N^{\frac{8\beta }{d}+1}}{M^2}, \end{aligned}$$

and hence (25) is proved. The estimate (26) is proved by similar arguments, and we omit the details here. This completes the proof. \(\square \)

1.1 Auxiliary results

For the reader’s convenience, we present here some simple, or well-known, results from probability. Let X, Y, Z be random variables, and assume that \(Z>0\) a.s. For any \(\varepsilon >0\) and \(\delta \in (0,\varepsilon /2)\), the following inequalities hold true.

$$\begin{aligned}&{\mathbb {P}}(|Y/Z|>\varepsilon ) \le {\mathbb {P}}(|Y|>\delta ) + {\mathbb {P}}(|Z-1|> (\varepsilon -\delta )/\varepsilon ), \end{aligned}$$
(39)
$$\begin{aligned}&\sup _{x\in {\mathbb {R}}} \Big | {\mathbb {P}}(X+Y\le x) - \varPhi (x)\Big | \le \sup _{x\in {\mathbb {R}}} \Big | {\mathbb {P}}(X\le x) - \varPhi (x)\Big | + {\mathbb {P}}(|Y|>\varepsilon ) + \varepsilon , \end{aligned}$$
(40)
$$\begin{aligned}&\sup _{x\in {\mathbb {R}}} \Big | {\mathbb {P}}(Y/Z\le x) - \varPhi (x)\Big | \le \sup _{x\in {\mathbb {R}}} \Big | {\mathbb {P}}(Y\le x) - \varPhi (x)\Big | + {\mathbb {P}}(|Z-1|>\varepsilon ) + \varepsilon , \end{aligned}$$
(41)

where \(\varPhi \) denotes the cumulative distribution function of a standard Gaussian random variable.
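
For instance, (39) follows from a one-line union bound (a verification added here for completeness): on the event \(\{|Y|\le \delta \}\cap \{|Y/Z|>\varepsilon \}\) one has \(|Z|<|Y|/\varepsilon \le \delta /\varepsilon \), and since \(Z>0\), this forces \(|Z-1|>1-\delta /\varepsilon =(\varepsilon -\delta )/\varepsilon \). Hence

$$\begin{aligned} \{|Y/Z|>\varepsilon \}\subseteq \{|Y|>\delta \}\cup \left\{ |Z-1|>(\varepsilon -\delta )/\varepsilon \right\} , \end{aligned}$$

and (39) follows by subadditivity of \({\mathbb {P}}\); (40) and (41) are proved in a similar spirit.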

Elements of Malliavin calculus

In this section, we recall some facts from the Malliavin calculus associated with a Gaussian process that we use in Sect. 3.1. For more details, we refer to [18]. Toward this end, let \(T>0\) be given. We consider the space \({\mathcal {H}}=L^2\left( [0,T]\times {\mathcal {M}}\right) \), where \({\mathcal {M}}\) is the counting measure on \({\mathbb {N}}\); that is, each \(v\in {\mathcal {H}}\) is identified with a sequence of components \(v_k\in L^2([0,T])\), \(k\ge 1\), written formally as

$$\begin{aligned} v(t)=\sum _{k=1}^\infty v_k(t). \end{aligned}$$

We endow \({\mathcal {H}}\) with the inner product and the norm

$$\begin{aligned} \langle u,v\rangle _{{\mathcal {H}}}:= \sum _{k=1}^\infty \int _0^T u_k(t) v_k(t) d \!t,\quad \text{ and }\quad \Vert v\Vert _{{\mathcal {H}}}:= \sqrt{\langle v,v\rangle _{{\mathcal {H}}}}, \quad \ u,v\in {\mathcal {H}}. \end{aligned}$$
(42)

We fix an isonormal Gaussian process \(W=\{W(h)\}_{h\in {\mathcal {H}}}\) on \({\mathcal {H}}\), defined on a suitable probability space \((\varOmega , {\mathscr {F}},{\mathbb {P}})\), such that \({\mathscr {F}}=\sigma (W)\) is the \(\sigma \)-algebra generated by W. Denote by \(C_p^{\infty }({\mathbb {R}}^n)\) the space of all smooth functions on \({\mathbb {R}}^n\) whose partial derivatives have at most polynomial growth. Let \({\mathcal {S}}\) be the space of simple functionals of the form

$$\begin{aligned} F = f(W(h_1), \dots , W(h_n)),\quad f\in C_p^{\infty }({\mathbb {R}}^n), \ h_i \in {\mathcal {H}},\ 1\le i \le n. \end{aligned}$$

As usual, we define the Malliavin derivative D on \({\mathcal {S}}\) by

$$\begin{aligned} DF=\sum _{i=1}^n \frac{\partial f}{\partial x_i} (W(h_1), \dots , W(h_n)) h_i, \quad F\in {\mathcal {S}}. \end{aligned}$$
(43)

We note that the derivative operator D is a closable operator from \(L^p(\varOmega )\) into \(L^p(\varOmega ; {\mathcal {H}})\), for any \(p \ge 1\). Let \({\mathbb {D}}^{1,p}\), \(p \ge 1\), be the completion of \({\mathcal {S}}\) with respect to the norm

$$\begin{aligned} \Vert F\Vert _{1,p} = \left( {\mathbb {E}}\big [ |F|^p \big ] + {\mathbb {E}}\big [ \Vert D F\Vert ^p_{\mathcal {H}}\big ] \right) ^{1/p}. \end{aligned}$$

Also, for F of the form

$$\begin{aligned} F=f\left( W\left( \mathbb {1}_{[0,t_1]}\right) ,\dots ,W\left( \mathbb {1}_{[0,t_n]}\right) \right) , \quad t_1,\dots ,t_n\in [0,T], \end{aligned}$$

we define the Malliavin derivative of F at the point t as

$$\begin{aligned} D_t F=\sum _{i=1}^n \frac{\partial f}{\partial x_i}\left( W\left( \mathbb {1}_{[0,t_1]}\right) ,\dots ,W\left( \mathbb {1}_{[0,t_n]}\right) \right) \mathbb {1}_{[0,t_i]}(t),\quad t\in [0,T], \end{aligned}$$

where \(\mathbb {1}_{A}\) denotes the indicator function of set A. For simplicity, from now on, we define \(W(t):=W\left( \mathbb {1}_{[0,t]}\right) \), \(t\in [0,T]\), to represent a standard Brownian motion on [0, T]. If \({\mathscr {F}}\) is generated by a collection of independent standard Brownian motions \(\{W_k,\ k\ge 1\}\) on [0, T], we define the Malliavin derivative of F at the point t by

$$\begin{aligned} D_tF:=\sum _{k=1}^{\infty }D_{t,k}F:=\sum _{k=1}^{\infty }\sum _{i=1}^n \frac{\partial f}{\partial x_i}\left( W_k(t_1),\dots ,W_k(t_n)\right) \mathbb {1}_{[0,t_i]}(t),\quad t\in [0,T].\nonumber \\ \end{aligned}$$
(44)
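
As a simple illustration of (43)–(44) (a worked example added for the reader): for \(F=W_1(t_1)^2\),

$$\begin{aligned} D_{t,1}F=2W_1(t_1)\mathbb {1}_{[0,t_1]}(t),\qquad D_{t,k}F=0\ \ \text{ for }\ k\ge 2, \end{aligned}$$

so that \(\Vert DF\Vert _{{\mathcal {H}}}^2=4W_1(t_1)^2\,t_1\). The computation of \(D_{r,k}u_k(t)\) at the beginning of the proof of Lemma 1 follows the same pattern.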

Next, we denote by \(\varvec{\delta }\), the adjoint of the Malliavin derivative D (as defined in (43)) given by the duality formula

$$\begin{aligned} {\mathbb {E}}\left( \varvec{\delta }(v) F\right) = {\mathbb {E}}\left( \langle v, DF \rangle _{\mathcal {H}}\right) , \end{aligned}$$

for \(F \in {\mathbb {D}}^{1,2}\) and \(v\in {\mathcal {D}}(\varvec{\delta })\), where \({\mathcal {D}}(\varvec{\delta })\) is the domain of \(\varvec{\delta }\). If \(v\in L^2(\varOmega ;{\mathcal {H}})\cap {\mathcal {D}}(\varvec{\delta })\) is a square integrable process, then the adjoint \(\varvec{\delta }(v)\) is called the Skorokhod integral of the process v (cf. [18]), and it can be written as

$$\begin{aligned} \varvec{\delta }(v)=\int _0^T v(t) d \!W(t). \end{aligned}$$

Proposition 1

 [18, Theorem 1.3.8] Suppose that \(v \in L^2(\varOmega ;{\mathcal {H}})\) is a square integrable process such that \(v(t)\in {\mathbb {D}}^{1,2}\) for almost all \(t\in [0,T]\). Assume that the two-parameter process \(\{D_tv(s)\}\) belongs to \(L^2\left( [0,T]\times \varOmega ;{\mathcal {H}}\right) \). Then, \(\varvec{\delta }(v) \in {\mathbb {D}}^{1,2}\) and

$$\begin{aligned} D_t \left( \varvec{\delta }(v)\right) = v(t) + \int _0^{T} D_tv(s) d \!W(s),\quad t\in [0,T]. \end{aligned}$$
(45)
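
As a sanity check of (45) (a worked example, with a single Brownian motion W): take \(v(t)=W(t)\), so that \(\varvec{\delta }(v)=\int _0^T W(t)d \!W(t)=\frac{1}{2}\left( W(T)^2-T\right) \) and \(D_tv(s)=\mathbb {1}_{[0,s]}(t)\). Then both sides of (45) equal W(T):

$$\begin{aligned} D_t\left( \tfrac{1}{2}\left( W(T)^2-T\right) \right) =W(T)=W(t)+\int _t^T d \!W(s)=v(t)+\int _0^{T} D_tv(s)d \!W(s). \end{aligned}$$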

Next, we present a connection between Malliavin calculus and Stein’s method. For symmetric functions \(f\in L^2\big ([0,T]^q\big )\), \(q\ge 1\), let us define the following multiple integral of order q

$$\begin{aligned} {\mathbb {I}}_q(f)=q!\int _0^T d \!W(t_1)\int _0^{t_1} d \!W(t_2)\cdots \int _0^{t_{q-1}} d \!W(t_q) f(t_1,\ldots ,t_q), \end{aligned}$$

with \(0<t_1<t_2<\cdots<t_q<T\). Note that \({\mathbb {I}}_q(f)\) belongs to the so-called q-th Wiener chaos [17, Theorem 2.7.7]. Denote by \(d_{TV}(F,G)\) the total variation distance between the laws of two random variables F and G.
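
For instance (an illustrative computation), taking \(q=2\) and \(f\equiv 1\) on \([0,T]^2\) yields

$$\begin{aligned} {\mathbb {I}}_2(1)=2\int _0^T W(t_1)d \!W(t_1)=W(T)^2-T,\qquad {\mathbb {E}}\left[ {\mathbb {I}}_2(1)^2\right] =2T^2, \end{aligned}$$

a centered element of the second Wiener chaos with a (shifted) chi-square law.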

Theorem 4

 [17, Corollary 5.2.8] Let \(F_N={\mathbb {I}}_q(f_N)\), \(N\ge 1\), be a sequence of random variables for some fixed integer \(q\ge 2\). Assume that \({\mathbb {E}}\left( F_N^2\right) \rightarrow \sigma ^2>0\), as \(N\rightarrow \infty \). Then, as \(N \rightarrow \infty \), the following assertions are equivalent:

  1. \(F_N\overset{d}{\longrightarrow }{\mathcal {N}}:={\mathcal {N}}(0,\sigma ^2)\);

  2. \(d_{TV}\left( F_N,{\mathcal {N}}\right) \longrightarrow 0\).

We conclude this section with an upper bound on the total variation distance between a multiple integral of order q and a Gaussian random variable.

Proposition 2

[17, Theorem 5.2.6] Let \(q\ge 2\) be an integer, and let \(F={\mathbb {I}}_q(f)\) be a multiple integral of order q such that \({\mathbb {E}}(F^2)=\sigma ^2>0\). Then, for \({\mathcal {N}}={\mathcal {N}}(0,\sigma ^2)\),

$$\begin{aligned} d_{TV}(F,{\mathcal {N}})\le \frac{2}{\sigma ^2}\sqrt{{\mathbb {V}}\mathrm {ar}\left( \frac{1}{q}\Vert DF \Vert _{{\mathcal {H}}}^2 \right) }. \end{aligned}$$
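
To see the bound in action (an example added for illustration, with a single Brownian motion W), apply it to the normalized second-chaos element \(F=\left( W(T)^2-T\right) /(\sqrt{2}T)\), for which \({\mathbb {E}}(F^2)=1\) and \(D_tF=\sqrt{2}W(T)/T\) for \(t\in [0,T]\). Then

$$\begin{aligned} \frac{1}{2}\Vert DF\Vert _{{\mathcal {H}}}^2=\frac{W(T)^2}{T},\qquad {\mathbb {V}}\mathrm {ar}\left( \frac{1}{2}\Vert DF\Vert _{{\mathcal {H}}}^2\right) =2,\qquad d_{TV}\left( F,{\mathcal {N}}(0,1)\right) \le 2\sqrt{2}. \end{aligned}$$

The bound does not vanish here, and it should not: a single (normalized) chi-square mode is not Gaussian. In the proof of Lemma 1, by contrast, the corresponding variance tends to zero as \(N,T\rightarrow \infty \), which is what yields asymptotic normality.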

Cite this article

Cialenco, I., Delgado-Vences, F. & Kim, HJ. Drift estimation for discretely sampled SPDEs. Stoch PDE: Anal Comp 8, 895–920 (2020). https://doi.org/10.1007/s40072-019-00164-4
