
Strong Consistency of the Bayesian Estimator for the Ornstein–Uhlenbeck Process


Abstract

In the accompanying paper Kohatsu-Higa et al. (2013, submitted), we carried out a theoretical study of the consistency of a computationally intensive parameter estimation method for Markovian models. The method can be regarded either as an approximate Bayesian estimator or as a filtering problem approximated by particle methods. We showed in Kohatsu-Higa et al. (2013, submitted) that, under certain conditions which explicitly relate the number of observations, the number of simulations and the size of the kernel window, one obtains the rate of convergence of the method. In that first study the conditions do not seem easy to verify, and for this reason we show in this paper how to verify them in the toy example of the Ornstein–Uhlenbeck process. We hope that this article will help the reader understand the theoretical background of our previous studies and how to interpret the required hypotheses.
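The method described above can be sketched as follows: simulate the model under each candidate parameter with an Euler–Maruyama scheme, and replace the unknown transition density by a kernel-smoothed Monte Carlo estimate. The sketch below is illustrative only; the function names, the choice of a Gaussian kernel, and all parameter values are our own assumptions, not the implementation of the paper.

```python
import math
import random

def euler_ou_step(x, theta, delta, m, rng):
    """Simulate one observation interval of the OU process
    dX_t = -theta * X_t dt + dW_t by an Euler-Maruyama scheme with m sub-steps."""
    h = delta / m
    for _ in range(m):
        x += -theta * x * h + math.sqrt(h) * rng.gauss(0.0, 1.0)
    return x

def gaussian_kernel(u, bandwidth):
    """Smoothing kernel standing in for the paper's K; the bandwidth plays
    the role of the kernel window."""
    return math.exp(-0.5 * (u / bandwidth) ** 2) / (bandwidth * math.sqrt(2.0 * math.pi))

def approximate_log_likelihoods(data, thetas, delta, m, n_sim, bandwidth, seed=0):
    """For each candidate theta, estimate the unknown transition density by
    simulation plus kernel smoothing and accumulate the log-likelihood."""
    rng = random.Random(seed)
    out = []
    for theta in thetas:
        log_lik = 0.0
        for x_prev, x_next in zip(data, data[1:]):
            # Monte Carlo estimate of the transition density p(x_next | x_prev; theta)
            dens = sum(
                gaussian_kernel(x_next - euler_ou_step(x_prev, theta, delta, m, rng), bandwidth)
                for _ in range(n_sim)
            ) / n_sim
            log_lik += math.log(max(dens, 1e-300))
        out.append(log_lik)
    return out
```

The estimator would then select the candidate maximizing (a posterior weighting of) these log-likelihoods; the tuning question studied in the paper is how the number of observations, the number of simulations `n_sim`, and the `bandwidth` must be related.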


Notes

  1. Note that the solution \(X^{x}_{(m)}(\theta)\) is twice continuously differentiable in θ since, from the definition of the Euler–Maruyama approximation, the OU process is polynomial in θ and the kernel K(x) is infinitely differentiable in x.

References

  1. Aït-Sahalia, Y., Mykland, P.A.: Estimators of diffusions with randomly spaced discrete observations: a general theory. Ann. Stat. 32(5), 2186–2222 (2004)

  2. Bain, A., Crisan, D.: Fundamentals of Stochastic Filtering. Springer, New York (2009)

  3. Cano, J.A., Kessler, M., Salmerón, D.: Approximation of the posterior density for diffusion processes. Stat. Probab. Lett. 76(1), 39–44 (2006)

  4. Del Moral, P., Jacod, J., Protter, P.: The Monte Carlo method for filtering with discrete-time observations. Probab. Theory Relat. Fields 120, 346–368 (2001)

  5. Doukhan, P.: Mixing: Properties and Examples. Lecture Notes in Statistics, vol. 85. Springer, Berlin (1994)

  6. Jacod, J.: Parametric inference for discretely observed non-ergodic diffusions. Bernoulli 12(3), 383–401 (2006)

  7. Kelly, L., Platen, E., Sørensen, M.: Estimation for discretely observed diffusions using transform functions. Stochastic methods and their applications. J. Appl. Probab. 41A, 99–118 (2004)

  8. Kessler, M.: Estimation of an ergodic diffusion from discrete observations. Scand. J. Stat. 24(2), 211–229 (1997)

  9. Kohatsu-Higa, A., Vayatis, N., Yasuda, K.: Tuning of a Bayesian estimator under discrete time observations and unknown transition density (2013, submitted)

  10. Roberts, G.O., Stramer, O.: On inference for partially observed nonlinear diffusion models using the Metropolis–Hastings algorithm. Biometrika 88, 603–621 (2001)

  11. Yoshida, N.: Estimation for diffusion processes from discrete observation. J. Multivar. Anal. 41(2), 220–242 (1992)


Author information

Correspondence to Arturo Kohatsu-Higa.

Appendix

Here we give some lemmas, which are used in the parameter tuning sections.

Lemma 6

For c>1, we have

(i). \((x+y)^{2} \le\frac{c}{c-1}x^{2}+cy^{2}\),

(ii). \(\frac{c-1}{c}x^{2}-(c-1)y^{2} \le(x-y)^{2}\).

The proofs are based on Young’s inequality and follow from simple calculations.
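As a sanity check, both bounds can be verified numerically on random inputs. This is a minimal sketch; note that it checks inequality (ii) in the Young-type form \(\frac{c-1}{c}x^{2}-(c-1)y^{2}\le(x-y)^{2}\).

```python
import random

def check_lemma6(trials=10_000, seed=1):
    """Spot-check Lemma 6 on random x, y and random c > 1."""
    rng = random.Random(seed)
    tol = 1e-9  # slack for floating-point rounding
    for _ in range(trials):
        x = rng.uniform(-10.0, 10.0)
        y = rng.uniform(-10.0, 10.0)
        c = rng.uniform(1.001, 10.0)
        # (i): (x + y)^2 <= c/(c-1) * x^2 + c * y^2
        assert (x + y) ** 2 <= c / (c - 1) * x ** 2 + c * y ** 2 + tol
        # (ii): (c-1)/c * x^2 - (c-1) * y^2 <= (x - y)^2
        assert (c - 1) / c * x ** 2 - (c - 1) * y ** 2 <= (x - y) ** 2 + tol
    return True
```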

Lemma 7

For m≥2βΔ, we have \(| (1-\frac{\theta\varDelta}{m})^{m} - e^{-\theta\varDelta} | \le e^{-\alpha \varDelta} (\beta\varDelta)^{2} \frac{1}{m}\).
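The bound of Lemma 7 can be spot-checked numerically; the sketch below assumes the illustrative values α=0.5, β=1, Δ=1, so that the condition m≥2βΔ reads m≥2.

```python
import math

def check_lemma7(alpha=0.5, beta=1.0, delta=1.0):
    """Check |(1 - theta*delta/m)^m - exp(-theta*delta)|
    <= exp(-alpha*delta) * (beta*delta)^2 / m on a grid of theta and m."""
    for m in range(2, 200):  # m >= 2*beta*delta = 2 for these parameter values
        for i in range(21):
            theta = alpha + (beta - alpha) * i / 20  # theta in [alpha, beta]
            gap = abs((1 - theta * delta / m) ** m - math.exp(-theta * delta))
            bound = math.exp(-alpha * delta) * (beta * delta) ** 2 / m
            assert gap <= bound
    return True
```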

From this lemma, we obtain

Lemma 8

For k=0,1 and m≥2βΔ, we have the following estimates:

(i). \(|\frac{\partial^{k}}{\partial\theta^{k}}( \sigma ^{2}_{\varDelta}(\theta)-\sigma^{2}(m,\theta,h) )|\le C(\alpha,\beta,\varDelta)\{ \frac{1}{m}+h^{2}{\bf1}(k=0)\}\),

(ii). \(|\frac{\partial^{k}}{\partial\theta^{k}}( e^{-\theta \varDelta}-\mu(m,\theta) )|\le C(\alpha,\beta,\varDelta) \frac{1}{m}\),

where C(α,β,Δ) is some positive constant.

Lemma 9

For m>βΔ, we have \(\left( 1-\frac{\theta\varDelta}{m}\right )^{m} \le e^{-\theta\varDelta}\).

Proof

Set \(f(x)=(1+\frac{1}{x})^{x}\). Then f(x) is an increasing function for −∞<x<−1 with \(\lim_{x\to-\infty}f(x)=e\), so that \(f(x)\ge e\) on this range. Applying this with \(x=-\frac{m}{\theta\varDelta}\), which satisfies x<−1 since m>βΔ≥θΔ, gives \((1-\frac{\theta\varDelta}{m})^{-\frac{m}{\theta\varDelta}}\ge e\); raising both sides to the negative power −θΔ reverses the inequality and yields the conclusion. □
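Both the monotonicity argument and the resulting inequality can be checked numerically; the values θ∈[0.5,1] and Δ=1 below are illustrative assumptions.

```python
import math

def check_lemma9():
    """Check that f(x) = (1 + 1/x)^x is increasing and above e on (-inf, -1),
    and that (1 - theta*delta/m)^m <= exp(-theta*delta) for m > beta*delta."""
    f = lambda x: (1 + 1 / x) ** x
    xs = [-1e6, -1e3, -50.0, -10.0, -3.0, -1.5, -1.01]  # increasing grid in (-inf, -1)
    vals = [f(x) for x in xs]
    assert all(a < b for a, b in zip(vals, vals[1:]))  # f increasing along the grid
    assert all(v > math.e for v in vals)               # the limit e is approached from above
    # The lemma itself, with illustrative values delta = 1 and beta = 1:
    delta = 1.0
    for theta in (0.5, 0.75, 1.0):
        for m in range(2, 100):  # m > beta*delta = 1
            assert (1 - theta * delta / m) ** m <= math.exp(-theta * delta)
    return True
```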

Lemma 10

For k∈N∪{0},

(i).
    $$\begin{aligned} \sup_{m\ge\max(\frac{k}{2},\beta\varDelta )}\sup_{\theta\in\varTheta} \left| \frac{\partial^k}{\partial\theta^k}\mu (m,\theta) \right| &= \sup_{m\ge\max(\frac{k}{2},\beta\varDelta)}\sup _{\theta\in\varTheta}\left| \frac{\partial^k}{\partial\theta^k}(1-\frac {\theta\varDelta}{m})^m \right|\\ &\le (2\varDelta)^k3^{2\beta\varDelta} < +\infty, \end{aligned}$$
(ii).
    $$\begin{aligned} & \sup_{0\le h\le1}\sup_{m\ge\max (\frac{k}{2},\beta\varDelta)}\sup_{\theta\in\varTheta}\left| \frac{\partial ^k}{\partial\theta^k}\sigma^2(m,\theta,h)\right| \\ &\quad = \sup_{0\le h\le1}\sup_{m\ge\max(\frac{k}{2},\beta \varDelta)}\sup_{\theta\in\varTheta} \left| \frac{\partial^k}{\partial\theta ^k}\frac{(1-\frac{\theta\varDelta}{m})^{2m}-1}{\theta(\frac{\theta }{m}-2)} + h^2{\bf1}_{\{k=0\}} \right| \\ &\quad \le C(k,\varDelta,\alpha)+1<+\infty, \end{aligned}$$
(iii).
    $$\begin{aligned} & \inf_{0\le h\le1}\inf_{m\ge\max (\frac{k}{2},\beta\varDelta)}\inf_{\theta\in\varTheta}\left| \sigma^2(m,\theta ,h) \right| \\ &\quad = \inf_{0\le h\le1}\inf_{m\ge\max(\frac{k}{2},\beta \varDelta)}\inf_{\theta\in\varTheta} \left| \frac{(1-\frac{\theta\varDelta }{m})^{2m}-1}{\theta(\frac{\theta}{m}-2)}+h^2 \right| \ge\frac {2(1-e^{-2\alpha\varDelta})}{3\beta} > 0, \end{aligned}$$

where the positive constant C(k,Δ,α) is defined in the proof.

Proof

Now \(\mu(m,\theta)=(1-\frac{\theta\varDelta}{m})^{m}\) and set \(D^{k}_{\theta}=\frac{\partial^{k}}{\partial\theta^{k}}\). Note that from Lemma 9, we have \(0\le\mu(m,\theta)\le e^{-\theta\varDelta}\le\sup_{\theta}e^{-\theta\varDelta}=e^{-\alpha\varDelta}\). Note that

$$\begin{aligned} D^k_{\theta}\mu(m,\theta)=(2m)(2m-1)\cdots(2m-(k-1))\left(1-\frac {\theta\varDelta}{m}\right)^{2m-k}\left(-\frac{\varDelta}{m}\right)^k. \end{aligned}$$

Then

$$\begin{aligned} & D^{k+1}_{\theta} \mu(m,\theta) \\ & \quad = (2m)(2m-1)\cdots(2m-(k-1))(2m-k)\left(1-\frac{\theta\varDelta }{m}\right)^{2m-(k+1)}\left(-\frac{\varDelta}{m}\right)^{k+1}. \end{aligned}$$

Moreover, for \(2m\ge k\), we have

$$\begin{aligned} \sup_{m} \sup_{\theta} | D^k_{\theta}\mu(m,\theta)| \le\sup_{m} \left \{ \frac{(2m\varDelta)^k}{m^k} \biggl( 1+\frac{\beta\varDelta}{m}\biggr)^{2m-k} \right\} \le(2\varDelta)^k3^{2\beta\varDelta}. \end{aligned}$$

Hence we obtain (i).

Recall that \(\sigma^{2}(m,\theta)=\frac{\mu(m,\theta)^{2}-1}{\theta(\frac {\theta}{m}-2)}\). From the Leibniz formula, we have

$$\begin{aligned} | D^k_{\theta}\sigma^2(m,\theta)| \le&\sum^k_{i=0} C_{k,i} \sup_{m} \sup_{\theta}\bigl| D^i_{\theta}(\mu (m,\theta)^2) \bigr| \sup_{m} \sup_{\theta} \biggl| D^{k-i}_{\theta}\frac {1}{\theta(\frac{\theta}{m}-2)}\biggr|\\ &{} + \sup_{m} \sup_{\theta} \left| D^k_{\theta}\frac{1}{\theta(\frac{\theta}{m}-2)}\right|. \end{aligned}$$

From the above, the Leibniz formula and the binomial theorem, we obtain, for i=0,1,…,k,

$$\begin{aligned} \sup_{m} \sup_{\theta}\bigl| D^i_{\theta}(\mu(m,\theta)^2)\bigr| \le& \sup_{m} \sup_{\theta} \left| \varDelta^i \biggl(1-\frac{\theta\varDelta}{m}\biggr)^{2m-i} \sum ^i_{j=0}\left( \begin{array}{c} i \\ j \end{array} \right) \right| \\ \le& \varDelta^i e^{-\alpha\varDelta}\sum^i_{j=0}\left( \begin{array}{c} i \\ j \end{array} \right) < \infty. \end{aligned}$$

Moreover, for all i=0,1,…,k, we have, from the binomial theorem,

$$\begin{aligned} \sup_{m} \sup_{\theta} \left| D^i_{\theta}\frac{1}{\theta(\frac{\theta }{m}-2)} \right| \le\sum^i_{j=0} C_{i,j} \frac{j!}{\alpha^{j+1}} \frac {(i-j)!}{2^{i-j}}. \end{aligned}$$

Then we have

$$\begin{aligned} & \sup_{m} \sup_{\theta\in[\alpha,\beta]}\bigl| D^k_{\theta}\sigma ^2(m,\theta)\bigr| \\ &\quad \le\sum^k_{i=0}C_{k,i} \left\{ \varDelta^i e^{-\alpha\varDelta}\sum ^i_{j=0}\left( \begin{array}{c} i \\ j \end{array} \right)\right\} \left\{ \sum^{k-i}_{j=0} C_{k-i,j} \frac{j!}{\alpha ^{j+1}} \frac{(k-i-j)!}{2^{k-i-j}} \right\} \\ &\qquad {} + \sum^k_{j=0} C_{k,j} \frac{j!}{\alpha^{j+1}} \frac{(k-j)!}{2^{k-j}} =: C(k,\varDelta,\alpha) < \infty, \end{aligned}$$
(18)

so that (ii) holds.

Finally, for \(m\ge\beta\varDelta\), we have

$$\begin{aligned} \sigma^2(m,\theta) \ge\frac{1-e^{-2\theta\varDelta}}{\theta\left( 2-\frac {\theta}{2\theta}\right)}=\frac{2}{3\theta}\left( 1-e^{-2\theta\varDelta }\right) \ge\frac{2}{3\beta}\left( 1-e^{-2\alpha\varDelta}\right)>0 \end{aligned}$$

and thus (iii) is valid. Here for \(m\ge\beta\varDelta\), \(0 \le\left( 1-\frac{\theta\varDelta}{m} \right)^{m} \le e^{-\theta\varDelta} \le e^{-\alpha\varDelta}\), and for \(m\ge\beta\varDelta\),

$$\begin{aligned} \frac{2(1-e^{-2\alpha\varDelta})}{3\beta} \le\frac{( 1-\frac{\theta\varDelta }{m} )^{2m}-1}{\theta(\frac{\theta}{m}-2)} \le\frac{1}{\alpha(2-\beta )}. \end{aligned}$$

Thus the proof is complete. □
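The lower bound of part (iii) can be spot-checked numerically; the parameter values α=0.5, β=1, Δ=1 and the choice h=0 (the worst case, since h² only increases σ²) below are illustrative assumptions.

```python
import math

def sigma2(m, theta, delta):
    """sigma^2(m, theta) = ((1 - theta*delta/m)^(2m) - 1) / (theta*(theta/m - 2))."""
    return ((1 - theta * delta / m) ** (2 * m) - 1) / (theta * (theta / m - 2))

def check_lemma10_iii(alpha=0.5, beta=1.0, delta=1.0):
    """Check sigma^2(m, theta) >= 2*(1 - exp(-2*alpha*delta)) / (3*beta)
    on a grid of theta in [alpha, beta] and m >= beta*delta."""
    lower = 2 * (1 - math.exp(-2 * alpha * delta)) / (3 * beta)
    for m in range(1, 200):  # m >= beta*delta = 1 for these parameter values
        for i in range(21):
            theta = alpha + (beta - alpha) * i / 20
            assert sigma2(m, theta, delta) >= lower
    return True
```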


Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Kohatsu-Higa, A., Vayatis, N., Yasuda, K. (2014). Strong Consistency of the Bayesian Estimator for the Ornstein–Uhlenbeck Process. In: Kabanov, Y., Rutkowski, M., Zariphopoulou, T. (eds) Inspired by Finance. Springer, Cham. https://doi.org/10.1007/978-3-319-02069-3_19
