Abstract
In the accompanying paper Kohatsu-Higa et al. (submitted, 2013), we carried out a theoretical study of the consistency of a computationally intensive parameter estimation method for Markovian models. The method can be viewed either as an approximate Bayesian estimation method or as a filtering problem approximated using particle methods. In Kohatsu-Higa et al. (submitted, 2013) we showed that, under certain conditions which explicitly relate the number of observations, the number of simulations and the size of the kernel window, one obtains the rate of convergence of the method. In that first study, the conditions do not seem easy to verify, and for this reason we show in this paper how to verify them in the toy example of the Ornstein–Uhlenbeck process. We hope that this article will help the reader understand the theoretical background of our previous studies and how to interpret the required hypotheses.
Notes
1. Note that the solution \(X^{x}_{(m)}(\theta)\) is twice continuously differentiable in θ, since from the definition of the Euler–Maruyama approximation, the OU process is polynomial in θ and the kernel K(x) is infinitely differentiable in x.
References
Ait-Sahalia, Y., Mykland, P.A.: Estimators of diffusions with randomly spaced discrete observations: a general theory. Ann. Stat. 32(5), 2186–2222 (2004)
Bain, A., Crisan, D.: Fundamentals of Stochastic Filtering. Springer, New York (2009)
Cano, J.A., Kessler, M., Salmeron, D.: Approximation of the posterior density for diffusion processes. Stat. Probab. Lett. 76(1), 39–44 (2006)
Del Moral, P., Jacod, J., Protter, P.: The Monte Carlo method for filtering with discrete-time observations. Probab. Theory Relat. Fields 120, 346–368 (2001)
Doukhan, P.: Mixing: Properties and Examples. Lecture Notes in Statistics, vol. 85. Springer, Berlin (1994)
Jacod, J.: Parametric inference for discretely observed non-ergodic diffusions. Bernoulli 12(3), 383–401 (2006)
Kelly, L., Platen, E., Sorensen, M.: Estimation for discretely observed diffusions using transform functions. Stochastic methods and their applications. J. Appl. Probab. 41A, 99–118 (2004)
Kessler, M.: Estimation of an ergodic diffusion from discrete observations. Scand. J. Stat. 24(2), 211–229 (1997)
Kohatsu-Higa, A., Vayatis, N., Yasuda, K.: Tuning of a Bayesian estimator under discrete time observations and unknown transition density (2013, submitted)
Roberts, G.O., Stramer, O.: On inference for partially observed nonlinear diffusion models using the Metropolis-Hastings algorithm. Biometrika 88, 603–621 (2001)
Yoshida, N.: Estimation for diffusion processes from discrete observation. J. Multivar. Anal. 41(2), 220–242 (1992)
Appendix
Here we give some lemmas, which are used in the parameter tuning sections.
Lemma 6
For c>1, we have
(i) \((x+y)^{2} \le\frac{c}{c-1}x^{2}+cy^{2}\),
(ii) \(\frac{c-1}{c}x^{2}-(c-1)y^{2} \le(x-y)^{2}\).
The proofs are based on Young's inequality and follow from simple calculations.
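As a quick numerical sanity check, both inequalities can be tested on random inputs. The following sketch is illustrative only and not part of the original proof; inequality (ii) is tested in the form \((x-y)^{2}\ge\frac{c-1}{c}x^{2}-(c-1)y^{2}\), which follows from (i) by writing \(x=(x-y)+y\):

```python
import random

def check_lemma6(trials=10_000, seed=0):
    """Numerically verify the Young-type inequalities of Lemma 6
    for random x, y and random c > 1."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-10.0, 10.0)
        y = rng.uniform(-10.0, 10.0)
        c = 1.0 + rng.uniform(1e-3, 10.0)  # any c > 1
        # (i): (x + y)^2 <= c/(c-1) * x^2 + c * y^2
        assert (x + y) ** 2 <= c / (c - 1) * x ** 2 + c * y ** 2 + 1e-9
        # (ii), rearranged: (x - y)^2 >= (c-1)/c * x^2 - (c-1) * y^2
        assert (x - y) ** 2 >= (c - 1) / c * x ** 2 - (c - 1) * y ** 2 - 1e-9
    return True
```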
Lemma 7
For m≥2βΔ, we have \(| (1-\frac{\theta\varDelta}{m})^{m} - e^{-\theta\varDelta} | \le e^{-\alpha \varDelta} (\beta\varDelta)^{2} \frac{1}{m}\).
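The bound of Lemma 7 can be checked numerically on a grid of θ∈[α,β] and m≥2βΔ. The parameter values below (α=0.5, β=2, Δ=1) are illustrative choices, not values taken from the text:

```python
import math

def check_lemma7(alpha=0.5, beta=2.0, delta=1.0, n_theta=50):
    """Check |(1 - theta*Delta/m)^m - exp(-theta*Delta)|
    <= exp(-alpha*Delta) * (beta*Delta)^2 / m  for m >= 2*beta*Delta."""
    m_min = math.ceil(2 * beta * delta)
    for m in range(m_min, m_min + 100):
        for i in range(n_theta + 1):
            theta = alpha + (beta - alpha) * i / n_theta
            lhs = abs((1 - theta * delta / m) ** m - math.exp(-theta * delta))
            rhs = math.exp(-alpha * delta) * (beta * delta) ** 2 / m
            assert lhs <= rhs + 1e-12
    return True
```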
From this lemma, we obtain
Lemma 8
For k=0,1 and m≥2βΔ, we have the following estimates:
(i) \(|\frac{\partial^{k}}{\partial\theta^{k}}( \sigma ^{2}_{\varDelta}(\theta)-\sigma^{2}(m,\theta,h) )|\le C(\alpha,\beta,\varDelta)\{ \frac{1}{m}+h^{2}{\bf1}(k=0)\}\),
(ii) \(|\frac{\partial^{k}}{\partial\theta^{k}}( e^{-\theta \varDelta}-\mu(m,\theta) )|\le C(\alpha,\beta,\varDelta) \frac{1}{m}\),
where C(α,β,Δ) is some positive constant.
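The O(1/m) rate in (i) for k=0 can be illustrated numerically. The sketch below assumes \(\sigma^{2}_{\varDelta}(\theta)=\frac{1-e^{-2\theta\varDelta}}{2\theta}\) (the exact OU transition variance, which is not restated in this excerpt) and reads \(\sigma^{2}(m,\theta,h)\) off the formula appearing in Lemma 10(ii); since the constant C(α,β,Δ) is unspecified, the check verifies the rate indirectly, by confirming that doubling m roughly halves the error at h=0:

```python
import math

def sigma2_exact(theta, delta):
    # Assumed: exact transition variance of the OU process over a step Delta.
    return (1 - math.exp(-2 * theta * delta)) / (2 * theta)

def sigma2_approx(m, theta, delta, h):
    # Formula from Lemma 10(ii): ((1 - theta*Delta/m)^{2m} - 1) / (theta*(theta/m - 2)) + h^2
    mu = (1 - theta * delta / m) ** m
    return (mu ** 2 - 1) / (theta * (theta / m - 2)) + h ** 2

def error_ratio(theta=1.0, delta=1.0):
    """Ratio of the k=0, h=0 approximation errors at m=10 and m=20;
    an O(1/m) error makes this ratio close to 2."""
    d10 = abs(sigma2_exact(theta, delta) - sigma2_approx(10, theta, delta, 0.0))
    d20 = abs(sigma2_exact(theta, delta) - sigma2_approx(20, theta, delta, 0.0))
    return d10 / d20
```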
Lemma 9
For m>βΔ, we have \(\left( 1-\frac{\theta\varDelta}{m}\right )^{m} \le e^{-\theta\varDelta}\).
Proof
Set \(f(x)=(1+\frac{1}{x})^{x}\). Then f(x) is an increasing function for −∞<x<−1 with \(\lim_{x\to-\infty} f(x)=e\), so f(x)≥e on this interval. Applying this with \(x=-\frac{m}{\theta\varDelta}<-1\) (valid since m>βΔ≥θΔ) gives \((1-\frac{\theta\varDelta}{m})^{-m/(\theta\varDelta)}\ge e\); taking logarithms and rearranging yields the conclusion. □
Lemma 10
For k∈ℕ∪{0},
(i)
$$\begin{aligned} \sup_{m\ge\max(\frac{k}{2},\beta\varDelta )}\sup_{\theta\in\varTheta} \left| \frac{\partial^k}{\partial\theta^k}\mu (m,\theta) \right| &= \sup_{m\ge\max(\frac{k}{2},\beta\varDelta)}\sup _{\theta\in\varTheta}\left| \frac{\partial^k}{\partial\theta^k}(1-\frac {\theta\varDelta}{m})^m \right|\\ &\le (2\varDelta)^k3^{2\beta\varDelta} < +\infty, \end{aligned}$$
(ii)
$$\begin{aligned} & \sup_{0\le h\le1}\sup_{m\ge\max (\frac{k}{2},\beta\varDelta)}\sup_{\theta\in\varTheta}\left| \frac{\partial ^k}{\partial\theta^k}\sigma^2(m,\theta,h)\right| \\ &\quad = \sup_{0\le h\le1}\sup_{m\ge\max(\frac{k}{2},\beta \varDelta)}\sup_{\theta\in\varTheta} \left| \frac{\partial^k}{\partial\theta ^k}\frac{(1-\frac{\theta\varDelta}{m})^{2m}-1}{\theta(\frac{\theta }{m}-2)} + h^2{\bf1}_{\{k=0\}} \right| \\ &\quad \le C(k,\varDelta,\alpha)+1<+\infty, \end{aligned}$$
(iii)
$$\begin{aligned} & \inf_{0\le h\le1}\inf_{m\ge\max (\frac{k}{2},\beta\varDelta)}\inf_{\theta\in\varTheta}\left| \sigma^2(m,\theta ,h) \right| \\ &\quad = \inf_{0\le h\le1}\inf_{m\ge\max(\frac{k}{2},\beta \varDelta)}\inf_{\theta\in\varTheta} \left| \frac{(1-\frac{\theta\varDelta }{m})^{2m}-1}{\theta(\frac{\theta}{m}-2)}+h^2 \right| \ge\frac {2(1-e^{-2\alpha\varDelta})}{3\beta} > 0, \end{aligned}$$
where the positive constant C(k,Δ,α) is defined in the proof.
Proof
Now \(\mu(m,\theta)=(1-\frac{\theta\varDelta}{m})^{m}\) and set \(D^{k}_{\theta}=\frac{\partial^{k}}{\partial\theta^{k}}\). Note that from Lemma 9, we have \(0\le\mu(m,\theta)\le e^{-\theta\varDelta}\le\sup_{\theta}e^{-\theta\varDelta}=e^{-\alpha\varDelta}\). Note that
Then
Moreover, for 2m≥k, we have
Hence we obtain (i).
Recall that \(\sigma^{2}(m,\theta)=\frac{\mu(m,\theta)^{2}-1}{\theta(\frac {\theta}{m}-2)}\). From the Leibniz formula, we have
From the above, the Leibniz formula and the binomial theorem, we obtain, for i=0,1,…,k,
Moreover, for all i=0,1,…,k, we have, from the binomial theorem,
Then we have
so that (ii) holds.
Finally, for m≥βΔ, we have
and thus (iii) is valid. Here for m≥βΔ, \(0 \le\left( 1-\frac{\theta\varDelta}{m} \right)^{m} \le e^{-\theta\varDelta} \le e^{-\alpha\varDelta}\), and for m≥βΔ,
Thus the proof is complete. □
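The lower bound of Lemma 10(iii) can also be checked numerically over a grid. As before, the parameter values α=0.5, β=2, Δ=1 are illustrative choices only:

```python
import math

def check_lemma10_iii(alpha=0.5, beta=2.0, delta=1.0):
    """Check sigma^2(m, theta, h) >= 2*(1 - exp(-2*alpha*Delta)) / (3*beta)
    over a grid of theta in [alpha, beta], m >= beta*Delta, h in [0, 1]."""
    lower = 2 * (1 - math.exp(-2 * alpha * delta)) / (3 * beta)
    m_min = math.ceil(beta * delta)
    for m in range(m_min, m_min + 200):
        for i in range(21):
            theta = alpha + (beta - alpha) * i / 20
            for j in range(6):
                h = j / 5
                mu = (1 - theta * delta / m) ** m
                s2 = (mu ** 2 - 1) / (theta * (theta / m - 2)) + h ** 2
                assert s2 >= lower
    return True
```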
© 2014 Springer International Publishing Switzerland
Kohatsu-Higa, A., Vayatis, N., Yasuda, K. (2014). Strong Consistency of the Bayesian Estimator for the Ornstein–Uhlenbeck Process. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (eds) Inspired by Finance. Springer, Cham. https://doi.org/10.1007/978-3-319-02069-3_19
Print ISBN: 978-3-319-02068-6
Online ISBN: 978-3-319-02069-3