Trajectory fitting estimators for SPDEs driven by additive noise


In this paper we study the problem of estimating the drift/viscosity coefficient for a large class of linear parabolic stochastic partial differential equations (SPDEs) driven by additive space-time noise. We propose a new class of estimators, called trajectory fitting estimators (TFEs). These estimators are constructed by fitting the observed trajectory with an artificial one, and can be viewed as an analog of the classical least squares estimators from time-series analysis. As in the existing literature on statistical inference for SPDEs, we take a spectral approach: we assume that the first N Fourier modes of the solution are observed, and we study the consistency and asymptotic normality of the TFE as \(N\rightarrow \infty \).


  1. A diagonalizable SPDE is an equation for which the first N Fourier coefficients of the solution form an N-dimensional decoupled system of ordinary stochastic differential equations. For a formal definition, in terms of the differential operators and the structure of the noise term, see for instance Lototsky (2009).

  2. Without loss of generality, we will assume that \(\nu _{k}\ge 0\), for all \(k\in \mathbb {N}\).

  3. Of course, one could simply take \(\lambda _{k}=\nu _{k}\). We introduce m to keep the results consistent with the notation in the existing literature. As mentioned later, if \(\mathcal {A}_{0}\) and \(\mathcal {A}_{1}\) are pseudo-differential operators, it is convenient to denote by 2m the order of the leading-order operator.

  4. The terminology comes from the fact that the estimator is obtained by fitting the observed trajectory with the artificial one.


References

  • Bishwal JPN (2008) Parameter estimation in stochastic differential equations. Lecture notes in mathematics, vol 1923. Springer, Berlin

  • Chow P (2007) Stochastic partial differential equations. Chapman & Hall/CRC applied mathematics and nonlinear science series. Chapman & Hall/CRC, Boca Raton

  • Cialenco I, Glatt-Holtz N (2011) Parameter estimation for the stochastically perturbed Navier-Stokes equations. Stoch Process Appl 121(4):701–724

  • Cialenco I, Lototsky SV (2009) Parameter estimation in diagonalizable bilinear stochastic parabolic equations. Stat Inference Stoch Process 12(3):203–219

  • Cialenco I, Xu L (2014) A note on error estimation for hypothesis testing problems for some linear SPDEs. Stoch Partial Differ Equ Anal Comput 2(3):408–431

  • Cialenco I, Xu L (2015) Hypothesis testing for stochastic PDEs driven by additive noise. Stoch Process Appl 125(3):819–866

  • Huebner M, Rozovskii BL (1995) On asymptotic properties of maximum likelihood estimators for parabolic stochastic PDE’s. Probab Theory Relat Fields 103(2):143–163

  • Huebner M, Lototsky SV, Rozovskii BL (1997) Asymptotic properties of an approximate maximum likelihood estimator for stochastic PDEs. In: Statistics and control of stochastic processes (Moscow, 1995/1996). World Scientific Publishing, Singapore, pp 139–155

  • Kutoyants YA (2004) Statistical inference for ergodic diffusion processes. Springer series in statistics. Springer, London

  • Kutoyants YA (1991) Minimum-distance parameter estimation for diffusion-type observations. C R Acad Sci Paris Sér I Math 312(8):637–642

  • Lototsky SV (2009) Statistical inference for stochastic parabolic equations: a spectral approach. Publ Mat 53(1):3–45

  • Markussen B (2003) Likelihood inference for a discretely observed stochastic partial differential equation. Bernoulli 9(5):745–762

  • Mishra MN, Bishwal JPN (1995) Approximate maximum likelihood estimation for diffusion processes from discrete observations. Stoch Stoch Rep 52(1–2):1–13

  • Mishra MN, Prakasa Rao BLS (2002) Approximation of maximum likelihood estimator for diffusion processes from discrete observations. Stoch Anal Appl 20(6):1309–1329

  • Piterbarg LI, Rozovskii BL (1997) On asymptotic problems of parameter estimation in stochastic PDE’s: discrete time sampling. Math Methods Stat 6(2):200–223

  • Prakasa Rao BLS (2003) Estimation for some stochastic partial differential equations based on discrete observations. II. Calcutta Stat Assoc Bull 54(215–216):129–141

  • Rozovskii BL (1990) Stochastic evolution systems. Linear theory and applications to nonlinear filtering. Mathematics and its applications (Soviet series), vol 35. Kluwer Academic Publishers Group, Dordrecht

  • Shiryaev AN (1996) Probability. Graduate texts in mathematics, vol 95, 2nd edn. Springer, New York

  • Shubin MA (2001) Pseudodifferential operators and spectral theory, 2nd edn. Springer, Berlin


Acknowledgements

Part of the research was performed while Igor Cialenco was visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation. The authors would like to thank the anonymous referees, the associate editor and the editor for their helpful comments and suggestions, which greatly improved the final manuscript.

Author information


Corresponding author

Correspondence to Igor Cialenco.



Proof of Theorem 3.1

Due to the nature of the desired asymptotic results, the underlying computations are somewhat extensive and tedious. For brevity, we will only provide the proof for the special case when \(u_{0}=0\), \(\gamma =0\) and \(\sigma =1\); the general case can be verified using similar arguments, and the details can be obtained from the authors upon request. Most of the evaluations were performed using symbolic computations in Mathematica. For each \(k\in \mathbb {N}\), when \(u_{0}=0\), \(\gamma =0\) and \(\sigma =1\),

$$\begin{aligned} u_{k}(t)=e^{-\mu _{k}(\theta )t}\int _{0}^{t}e^{\mu _{k}(\theta )s}\,dw_{k}(s),\quad t\in [0,T], \end{aligned}$$

and with the notations introduced in (2.9) and (3.1), we get

$$\begin{aligned} A_{k}=\frac{1}{2}\xi _{k}^{2}-X_{k}+2\mu _{k}(\theta )Z_{k}. \end{aligned}$$

Note that for any \(t\in [0,T]\), \(u_{k}(t)\) is a centered normal random variable with variance \((1-e^{-2\mu _{k}(\theta )t})/(2\mu _{k}(\theta ))\), and thus,

$$\begin{aligned} \mathbb {E}\left( u_{k}^{2n}(t)\right) =(2n-1)!!\cdot \left( \frac{1-e^{-2\mu _{k}(\theta )t}}{2\mu _{k}(\theta )}\right) ^{n}, \quad n\in \mathbb {N}. \end{aligned}$$
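This even-moment identity is easy to sanity-check numerically. The sketch below is ours and purely illustrative (the values of \(\mu _{k}(\theta )\), t and n are arbitrary choices): it samples from the centered normal law above and compares the empirical moment with \((2n-1)!!\cdot \text {Var}^{n}\).

```python
import numpy as np

# Monte Carlo check of E[u^{2n}] = (2n-1)!! * Var(u)^n for a centered
# normal u with Var(u) = (1 - exp(-2*mu*t)) / (2*mu).
rng = np.random.default_rng(0)
mu, t, n = 2.0, 1.0, 2                 # illustrative values; n = 2 checks E[u^4]
var = (1.0 - np.exp(-2.0 * mu * t)) / (2.0 * mu)

u = rng.normal(0.0, np.sqrt(var), size=1_000_000)
double_fact = np.prod(np.arange(2 * n - 1, 0, -2))   # (2n-1)!! = 3 for n = 2
mc = np.mean(u ** (2 * n))
exact = double_fact * var ** n

assert abs(mc - exact) / exact < 0.02  # empirical fourth moment matches 3*Var^2
```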

We first verify (3.2), which now reduces to

$$\begin{aligned} \mathbb {E}(Z_{k})\asymp \frac{T^{2}}{\mu _{k}^{2}(\theta )},\quad k\rightarrow \infty , \end{aligned}$$

by computing

$$\begin{aligned} \mathbb {E}(Z_{k})=\mathbb {E}\left( \int _{0}^{T}\xi _{k}^{2}(t)dt\right) =\int _{0}^{T}\mathbb {E}\left( \xi _{k}^{2}(t)\right) dt =2\int _{0}^{T}\int _{0}^{t}\mathbb {E}\left( \xi _{k}(s)u_{k}^{2}(s)\right) ds\,dt, \end{aligned}$$

where the last equality follows from the definition of \(\xi _{k}\) in (2.9). By Itô’s formula,

$$\begin{aligned} d\left( \xi _{k}(t)u_{k}^{2}(t)\right) = \left( u_{k}^{4}(t)+\xi _{k}(t)\right) dt-2\mu _{k}(\theta )\xi _{k}(t)u_{k}^{2}(t)\,dt+2u_{k}(t)\xi _{k}(t)\,dw_{k}(t). \end{aligned}$$

Taking the expectations on both sides above, using the definition of \(\xi _{k}(t)\) in (2.9) and (5.1), we obtain that, for \(t\in [0,T]\),

$$\begin{aligned} \mathbb {E}\left( \xi _{k}(t)u_{k}^{2}(t)\right) =\frac{1-e^{-2\mu _{k}(\theta )t}}{8\mu _{k}^{3}(\theta )}-\frac{5te^{-2\mu _{k}(\theta )t}}{4\mu _{k}^{2}(\theta )}+\frac{3\left( 1-e^{-2\mu _{k}(\theta )t}\right) e^{-2\mu _{k}(\theta )t}}{8\mu _{k}^{3}(\theta )}+\frac{t}{4\mu _{k}^{2}(\theta )}. \end{aligned}$$

Therefore, by (5.3),

$$\begin{aligned} \mathbb {E}(Z_{k})=\frac{35-3e^{-4\mu _{k}(\theta )T}-32e^{-2\mu _{k}(\theta )T}}{64\mu _{k}^{5}(\theta )}-\frac{9T+10T\,e^{-2\mu _{k}(\theta )T}}{16\mu _{k}^{4}(\theta )}+\frac{T^{2}}{8\mu _{k}^{3}(\theta )}+\frac{T^{3}}{12\mu _{k}^{2}(\theta )}, \end{aligned}$$

which leads to (5.2), since, by assumption (ii), the first three terms above decay faster than the last term as \(k\rightarrow \infty \), and since \(T>0\) is a fixed constant.
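Since the closed-form expression for \(\mathbb {E}(Z_{k})\) was obtained by symbolic computation, it can be verified independently. The SymPy sketch below (the symbol names mu and T are ours, standing for \(\mu _{k}(\theta )\) and the time horizon) integrates the expression for \(\mathbb {E}(\xi _{k}(s)u_{k}^{2}(s))\) in (5.4) according to (5.3), and checks that the result agrees with the closed form in (5.5).

```python
import sympy as sp

mu, T, s, t = sp.symbols('mu T s t', positive=True)

# E(xi_k(s) u_k^2(s)) as given in (5.4)
f = ((1 - sp.exp(-2*mu*s)) / (8*mu**3)
     - 5*s*sp.exp(-2*mu*s) / (4*mu**2)
     + 3*(1 - sp.exp(-2*mu*s))*sp.exp(-2*mu*s) / (8*mu**3)
     + s / (4*mu**2))

# E(Z_k) = 2 * int_0^T int_0^t E(xi_k(s) u_k^2(s)) ds dt, per (5.3)
EZ = 2 * sp.integrate(sp.integrate(f, (s, 0, t)), (t, 0, T))

# Closed form claimed in (5.5)
claimed = ((35 - 3*sp.exp(-4*mu*T) - 32*sp.exp(-2*mu*T)) / (64*mu**5)
           - (9*T + 10*T*sp.exp(-2*mu*T)) / (16*mu**4)
           + T**2 / (8*mu**3) + T**3 / (12*mu**2))

assert sp.simplify(EZ - claimed) == 0   # the two expressions coincide
```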

Next, we study the asymptotic order of \(\text {Var}(Z_{k})\), \(k\rightarrow \infty \), given by (3.3), which now reduces to

$$\begin{aligned} \text {Var}(Z_{k})\asymp \frac{T^{3}}{\mu _{k}^{5}(\theta )},\quad k\rightarrow \infty . \end{aligned}$$

In light of (5.5), we are left to compute \(\mathbb {E}(Z_{k}^{2})\). By Itô’s formula, and since the Itô integral terms have zero expectation, we have

$$\begin{aligned} \mathbb {E}(Z_{k}^{2})&=2\int _{0}^{T}\mathbb {E}\left( Z_{k}(t)\xi _{k}^{2}(t)\right) dt=2\int _{0}^{T}\int _{0}^{t}\mathbb {E}\left( \xi _{k}^{4}(s)\right) ds\,dt\nonumber \\&\quad +\,4\int _{0}^{T}\int _{0}^{t}\mathbb {E}\left( Z_{k}(s)\xi _{k}(s)u_{k}^{2}(s)\right) ds\,dt. \end{aligned}$$

To compute the first expectation in (5.7), we apply Itô’s formula again:

$$\begin{aligned} d\xi _{k}^{4}(t)= & {} 4\xi _{k}^{3}(t)u_{k}^{2}(t)\,dt,\\ d\left( \xi _{k}^{3}(t)u_{k}^{2}(t)\right)= & {} \left( 3\xi _{k}^{2}(t)u_{k}^{4}(t)+\xi _{k}^{3}(t)-2\mu _{k}(\theta )\xi _{k}^{3}(t)u_{k}^{2}(t)\right) dt+2\xi _{k}^{3}(t)u_{k}(t)\,dw_{k}(t),\\ d\xi _{k}^{3}(t)= & {} 3\xi _{k}^{2}(t)u_{k}^{2}(t)\,dt,\\ d\left( \xi _{k}^{2}(t)u_{k}^{4}(t)\right)= & {} 2\left( \xi _{k}(t)u_{k}^{6}(t)+3\xi _{k}^{2}(t)u_{k}^{2}(t)-2\mu _{k}(\theta )\xi _{k}^{2}(t)u_{k}^{4}(t)\right) dt+4\xi _{k}^{2}(t)u_{k}^{3}(t)\,dw_{k}(t). \end{aligned}$$

Hence, we only need to compute \(\mathbb {E}(\xi _{k}(s)u_{k}^{6}(s))\) and \(\mathbb {E}(\xi _{k}^{2}(s)u_{k}^{2}(s))\). Again by Itô’s formula, we get

$$\begin{aligned} d\left( \xi _{k}(t)u_{k}^{6}(t)\right)= & {} \left( u_{k}^{8}(t)+15\xi _{k}(t)u_{k}^{4}(t)-6\mu _{k}(\theta )\xi _{k}(t)u_{k}^{6}(t)\right) dt+6\xi _{k}(t)u_{k}^{5}(t)\,dw_{k}(t),\\ d\left( \xi _{k}^{2}(t)u_{k}^{2}(t)\right)= & {} \left( 2\xi _{k}(t)u_{k}^{4}(t)+\xi _{k}^{2}(t)-2\mu _{k}(\theta )\xi _{k}^{2}(t)u_{k}^{2}(t)\right) dt+2\xi _{k}^{2}(t)u_{k}(t)\,dw_{k}(t),\\ d\left( \xi _{k}(t)u_{k}^{4}(t)\right)= & {} \left( u_{k}^{6}(t)+6\xi _{k}(t)u_{k}^{2}(t)-4\mu _{k}(\theta )\xi _{k}(t)u_{k}^{4}(t)\right) dt+4\xi _{k}(t)u_{k}^{3}(t)\,dw_{k}(t). \end{aligned}$$

Therefore, by (5.1) and (5.4), we can obtain first \(\mathbb {E}(\xi _{k}(s)u_{k}^{6}(s))\) and \(\mathbb {E}(\xi _{k}^{2}(s)u_{k}^{2}(s))\), and then \(\mathbb {E}(\xi _{k}^{3}(s))\), \(\mathbb {E}(\xi _{k}^{2}(s)u_{k}^{4}(s))\) and \(\mathbb {E}(\xi _{k}^{3}(s)u_{k}^{2}(s))\), and finally \(\mathbb {E}(\xi _{k}^{4}(s))\). A similar argument leads to the computation of the second expectation in (5.7).

To sum up, with the help of Mathematica, we obtain that

$$\begin{aligned} \text {Var}(Z_{k})= & {} -\frac{16917}{512\,\mu _{k}^{10}(\theta )}+\frac{3\,e^{-8\mu _{k}(\theta )T}}{128\,\mu _{k}^{10}(\theta )}+\frac{79\,e^{-6\mu _{k}(\theta )T}}{128\,\mu _{k}^{10}(\theta )}+\frac{2953\,e^{-4\mu _{k}(\theta )T}}{512\,\mu _{k}^{10}(\theta )}+\frac{3409\,e^{-2\mu _{k}(\theta )T}}{128\,\mu _{k}^{10}(\theta )}\\&\quad +\frac{1093T}{32\,\mu _{k}^{9}(\theta )}+\frac{45T\,e^{-6\mu _{k}(\theta )T}}{64\,\mu _{k}^{9}(\theta )}+\frac{1165T\,e^{-4\mu _{k}(\theta )T}}{128\,\mu _{k}^{9}(\theta )}+\frac{2321T\,e^{-2\mu _{k}(\theta )T}}{64\,\mu _{k}^{9}(\theta )}\\&\quad -\frac{659T^{2}}{64\,\mu _{k}^{8}(\theta )}+\frac{53T^{2}e^{-4\mu _{k}(\theta )T}}{16\,\mu _{k}^{8}(\theta )}+\frac{71T^{2}e^{-2\mu _{k}(\theta )T}}{8\,\mu _{k}^{8}(\theta )}-\frac{5T^{3}}{12\,\mu _{k}^{7}(\theta )}-\frac{5T^{3}e^{-4\mu _{k}(\theta )T}}{8\,\mu _{k}^{7}(\theta )}\\&\quad -\frac{113T^{3}e^{-2\mu _{k}(\theta )T}}{24\,\mu _{k}^{7}(\theta )}+\frac{23T^{4}}{48\,\mu _{k}^{6}(\theta )}-\frac{5T^{4}e^{-2\mu _{k}(\theta )T}}{2\mu _{k}^{6}(\theta )}+\frac{T^{5}}{15\mu _{k}^{5}(\theta )}, \end{aligned}$$

which clearly implies (5.6), and thus completes the proof. \(\square \)

Proof of Lemma 3.5

By (2.4) and the Cauchy–Schwarz inequality, for any \(0\le s\le t\le T\),

$$\begin{aligned} u_{k}^{2}(s)= & {} e^{-2\mu _{k}(\theta )s}\left( u_{k}(0)+\sigma \lambda _{k}^{-\gamma }\int _{0}^{s}e^{\mu _{k}(\theta )r}\,dw_{k}(r)\right) ^{2}\\\le & {} e^{-2\mu _{k}(\theta )s}\left( u_{k}^{2}(0)+\sigma ^{2}\lambda _{k}^{-2\gamma }t\right) \left( 1+\frac{1}{t}\left( \int _{0}^{s}e^{\mu _{k}(\theta )r}\,dw_{k}(r)\right) ^{2}\right) . \end{aligned}$$

Hence, for any \(t\in [0,T]\), and \(n\in \mathbb {N}\),

$$\begin{aligned} \xi _{k}^{n}(t)\le & {} \left( u_{k}^{2}(0)+\sigma ^{2}\lambda _{k}^{-2\gamma }t\right) ^{n}\left\{ \int _{0}^{t}e^{-2\mu _{k}(\theta )s}\left[ 1+\frac{1}{t}\left( \int _{0}^{s}e^{\mu _{k}(\theta )r}\,dw_{k}(r)\right) ^{2}\right] ds\right\} ^{n}\\= & {} \left( u_{k}^{2}(0)+\sigma ^{2}\lambda _{k}^{-2\gamma }t\right) ^{n}\left[ \frac{1-e^{-2\mu _{k}(\theta )t}}{2\mu _{k}(\theta )}+\frac{1}{t}\int _{0}^{t}e^{-2\mu _{k}(\theta )s}\left( \int _{0}^{s}e^{\mu _{k}(\theta )r}\,dw_{k}(r)\right) ^{2}ds\right] ^{n}\\\le & {} \left( u_{k}^{2}(0)+\sigma ^{2}\lambda _{k}^{-2\gamma }t\right) ^{n}\left\{ \left( \frac{1-e^{-2\mu _{k}(\theta )t}}{\mu _{k}(\theta )}\right) ^{n}\right. \\&\left. +\,\frac{2^{n}}{t^{n}}\left[ \int _{0}^{t}e^{-2\mu _{k}(\theta )s}\left( \int _{0}^{s}e^{\mu _{k}(\theta )r}\,dw_{k}(r)\right) ^{2}ds\right] ^{n}\right\} . \end{aligned}$$

By Lototsky (2009, Theorem 2.1), there exists a constant \(\widetilde{D}_{n}=\widetilde{D}_{n}(t)>0\), such that

$$\begin{aligned} \mathbb {E}\left( \left[ \int _{0}^{t}e^{-2\mu _{k}(\theta )s}\left( \int _{0}^{s}e^{\mu _{k}(\theta )r}\,dw_{k}(r)\right) ^{2}ds\right] ^{n}\right) \le \frac{\widetilde{D}_{n}}{\mu _{k}^{n}(\theta )}. \end{aligned}$$

Therefore, for any \(t\in [0,T]\) and \(n\in \mathbb {N}\),

$$\begin{aligned} \mathbb {E}\left( \xi _{k}^{n}(t)\right) \le \left( u_{k}^{2}(0)+\sigma ^{2}\lambda _{k}^{-2\gamma }t\right) ^{n}\left( \frac{1}{\mu _{k}^{n}(\theta )}+\frac{2^{n}\widetilde{D}_{n}}{t^{n}}\frac{1}{\mu _{k}^{n}(\theta )}\right) =D_{n}\left( \frac{u_{k}^{2}(0)+\sigma ^{2}\lambda _{k}^{-2\gamma }t}{\mu _{k}(\theta )}\right) ^{n}, \end{aligned}$$

where \(D_{n}=D_{n}(t):=1+2^{n}t^{-n}\widetilde{D}_{n}\). \(\square \)

We conclude the appendix by listing, for the sake of completeness, a version of the law of large numbers and a version of the central limit theorem used in the proofs of the main results in this paper. For detailed proofs of these results we refer the reader to Shiryaev (1996).

Theorem 5.1

(Strong Law of Large Numbers) Let \(\{\eta _{n}\}_{n\in \mathbb {N}}\) be a sequence of independent random variables, and let \(\{b_{n}\}_{n\in \mathbb {N}}\) be a sequence of non-decreasing positive numbers such that \(\lim _{n\rightarrow \infty }b_{n}=\infty \). If

$$\begin{aligned} \sum _{n=1}^{\infty }\frac{\text {Var}\left( \eta _{n}\right) }{b_{n}^{2}}<\infty , \end{aligned}$$


then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{b_{n}}\sum _{k=1}^{n}\left( \eta _{k}-\mathbb {E}(\eta _{k})\right) =0,\quad \mathbb {P}-\text {a.}\,\text {s}. \end{aligned}$$

Remark 5.2

As an immediate corollary, if \(\{\eta _{n}\}_{n\in \mathbb {N}}\) is a sequence of independent non-negative random variables with

$$\begin{aligned} \sum _{n=1}^{\infty }\mathbb {E}(\eta _{n})=\infty \quad \text {and}\quad \sum _{n=1}^{\infty }\frac{\text {Var}\left( \eta _{n}\right) }{\left( \sum _{k=1}^{n}\mathbb {E}(\eta _{k})\right) ^{2}}<\infty , \end{aligned}$$


then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{k=1}^{n}\eta _{k}}{\sum _{k=1}^{n}\mathbb {E}(\eta _{k})}=1,\quad \mathbb {P}-\text {a.}\,\text {s}. \end{aligned}$$
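This corollary is easy to illustrate numerically. In the sketch below (the choice \(\eta _{n}\sim \text {Exponential}\) with mean n is ours, purely illustrative), \(\sum _{k\le n}\mathbb {E}(\eta _{k})=n(n+1)/2\rightarrow \infty \) and \(\text {Var}(\eta _{n})=n^{2}\), so the second condition holds because \(\sum _{n}n^{2}/(n(n+1)/2)^{2}<\infty \), and the ratio of partial sums tends to 1.

```python
import numpy as np

# Illustration of the corollary with eta_n ~ Exponential(mean n):
#   sum E(eta_n) = n(n+1)/2 -> infinity,
#   Var(eta_n) = n^2, and sum n^2 / (n(n+1)/2)^2 < infinity.
rng = np.random.default_rng(1)
N = 20_000
means = np.arange(1, N + 1, dtype=float)
eta = rng.exponential(means)           # independent, E(eta_n) = n, Var = n^2

ratio = eta.cumsum() / means.cumsum()  # partial sums over their expectations
assert abs(ratio[-1] - 1.0) < 0.05     # ratio is close to 1 for large N
```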

Theorem 5.3

(Lyapunov Central Limit Theorem) Let \(\{\eta _{n}\}_{n\in \mathbb {N}}\) be a sequence of independent random variables with finite second moments. If there exists some \(\delta >0\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{\left( \sum _{k=1}^{n}\text {Var}(\eta _{k})\right) ^{(2+\delta )/2}}\sum _{k=1}^{n}\mathbb {E}\left( \left| \eta _{k}-\mathbb {E}(\eta _{k})\right| ^{2+\delta }\right) =0, \end{aligned}$$


then

$$\begin{aligned} \frac{\sum _{k=1}^{n}\left( \eta _{k}-\mathbb {E}(\eta _{k})\right) }{\sqrt{\sum _{k=1}^{n}\text {Var}(\eta _{k})}}\;{\mathop {\longrightarrow }\limits ^{d}}\;\mathcal {N}(0,1),\quad n\rightarrow \infty . \end{aligned}$$
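For a quick numerical illustration (the choice of uniform variables is ours): bounded i.i.d. variables satisfy the Lyapunov condition for every \(\delta >0\), since the numerator grows like n while the denominator grows like \(n^{(2+\delta )/2}\). The sketch below checks that the standardized sums are approximately standard normal.

```python
import numpy as np

# Standardized sums of independent Uniform(0,1) variables
# (E = 1/2, Var = 1/12) should be approximately N(0,1).
rng = np.random.default_rng(2)
n, reps = 2_000, 50_000
eta = rng.uniform(0.0, 1.0, size=(reps, n))

z = (eta.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)
assert abs(z.mean()) < 0.05            # sample mean near 0
assert abs(z.std() - 1.0) < 0.05       # sample std near 1
```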

Cite this article

Cialenco, I., Gong, R. & Huang, Y. Trajectory fitting estimators for SPDEs driven by additive noise. Stat Inference Stoch Process 21, 1–19 (2018).


Keywords

  • Stochastic partial differential equations
  • Trajectory fitting estimator
  • Parameter estimation
  • Inverse problems
  • Estimation of viscosity coefficient

Mathematics Subject Classification

  • 60H15
  • 35Q30
  • 65L09