
Random discretization of stationary continuous time processes

Published in Metrika.

Abstract

This paper investigates second order properties of a stationary continuous time process after random sampling. While a short memory process always gives rise to a short memory one, we prove that long-memory can disappear when the sampling law has very heavy tails. Despite the fact that the normality of the process is not maintained by random sampling, the normalized partial sum process converges to the fractional Brownian motion, at least when the long memory parameter is preserved.

References

  • Adorf HM (1995) Interpolation of irregularly sampled data series—a survey. In: Astronomical society of the pacific conference series, vol 77

  • Beran J, Feng Y, Ghosh S, Kulik R (2013) Long-memory processes. Probabilistic properties and statistical methods. Springer, Heidelberg


  • Bingham NH, Goldie CM, Teugels JL (1989) Regular variation. In: Encyclopedia of mathematics and its applications, vol 27, Cambridge University Press, Cambridge

  • Brockwell PJ, Davis RA, Yang Y (2007) Continuous-time Gaussian autoregression. Stat Sin 17(1):63–80


  • Broersen PM (2007) Time series models for spectral analysis of irregular data far beyond the mean data rate. Meas Sci Technol 19(1):015103


  • Chambers MJ (1996) The estimation of continuous parameter long-memory time series models. Econom Theory 12(2):374–390


  • Comte F (1996) Simulation and estimation of long memory continuous time models. J Time Ser Anal 17(1):19–36


  • Comte F, Renault E (1996) Long memory continuous time models. J Econom 73(1):101–149


  • Davydov YA (1970) The invariance principle for stationary processes. Theory Probab Appl 15:487–498


  • Duffie D, Glynn P (2004) Estimation of continuous-time Markov processes sampled at random time intervals. Econometrica 72:1773–1808


  • Elorrieta F, Eyheramendy S, Palma W (2019) Discrete-time autoregressive model for unequally spaced time-series observations. Astron Astrophys 627:A120


  • Eyheramendy S, Elorrieta F, Palma W (2018) An irregular discrete time series model to identify residuals with autocorrelation in astronomical light curves. Mon Not R Astron Soc 481:4311–4322


  • Feller W (1966) An introduction to probability theory and its applications, vol 2. Wiley, New York


  • Friedman M (1962) The interpolation of time series by related series. J Am Stat Assoc 57(300):729–757


  • Giraitis L, Koul HL, Surgailis D (2012) Large sample inference for long memory processes. Imperial College Press, London


  • Jones RH (1981) Fitting a continuous time autoregression to discrete data. In: Applied time series analysis II, Elsevier, pp 651–682

  • Jones RH, Tryon PV (1987) Continuous time series models for unequally spaced data applied to modeling atomic clocks. SIAM J Sci Stat Comput 8(1):71–81


  • Li D, Robinson PM, Shang HL (2019) Long-range dependent curve time series. J Am Stat Assoc. https://doi.org/10.1080/01621459.2019.1604362


  • Mandelbrot BB, Wallis JR (1969) Robustness of the rescaled range R/S in the measurement of noncyclic long run statistical dependence. Water Resour Res 5:967–988


  • Masry E, Lui M-C (1975) A consistent estimate of the spectrum by random sampling of the time series. SIAM J Appl Math 28(4):793–810


  • Lii K-S, Masry E (1994) Spectral estimation of continuous-time stationary processes from random sampling. Stoch Process Appl 52:39–64


  • Mayo WT (1978) Spectrum measurements with laser velocimeters. In: Hansen BW (ed) Proceedings of the dynamic flow conference 1978 on dynamic measurements in unsteady flows, Springer, Dordrecht, Netherlands, pp 851–868

  • Aït-Sahalia Y, Mykland PA (2003) The effects of random and discrete sampling when estimating continuous-time diffusions. Econometrica 71:483–549


  • Nieto-Barajas LE, Sinha T (2014) Bayesian interpolation of unequally spaced time series. Stoch Environ Res Risk Assess 29:577–587


  • Philippe A, Viano M-C (2010) Random sampling of long-memory stationary processes. J Stat Plan Inference 140(5):1110–1124


  • Scargle JD (1982) Studies in astronomical time series analysis. II. Statistical aspects of spectral analysis of unevenly spaced data. Astrophys J 263:835–853


  • Shi X, Wu Y, Liu Y (2010) A note on asymptotic approximations of inverse moments of nonnegative random variables. Stat Probab Lett 80(15–16):1260–1264


  • Stout WF (1974) Almost sure convergence. In: Probability and mathematical statistics, vol 24, Academic Press, New York

  • Taqqu MS (1975) Weak convergence to fractional Brownian motion and to the Rosenblatt process. Z Wahrscheinlichkeitstheorie Verw Gebiete 31:287–302


  • Tsai H (2009) On continuous-time autoregressive fractionally integrated moving average processes. Bernoulli 15(1):178–194


  • Tsai H, Chan KS (2005a) Maximum likelihood estimation of linear continuous time long memory processes with discrete time data. J R Stat Soc Ser B Stat Methodol 67(5):703–716


  • Tsai H, Chan KS (2005b) Quasi-maximum likelihood estimation for a class of continuous-time long-memory processes. J Time Ser Anal 26(5):691–713


  • Viano M-C, Deniau C, Oppenheim G (1994) Continuous-time fractional ARMA processes. Stat Probab Lett 21(4):323–336



Acknowledgements

We thank the Associate Editor and the referees for valuable comments that led to an improved version of this article.

Author information


Corresponding author

Correspondence to Anne Philippe.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

To prove Lemma 4.1, we need the following intermediate result:

Lemma 5.1

Assume that \({\mathbb {E}}[T_1]<\infty \) and \(\mathbf {X}\) has a regularly varying covariance function

$$\begin{aligned} \sigma _X(t) = L(t) t^{-1+2d} \end{aligned}$$

with \(0<d<1/2\) and L slowly varying at infinity and ultimately non-increasing. Then,

$$\begin{aligned} {\mathrm {Var}}( \sigma _X(T_h))=o\left( L(h)^2h^{-2+4d}\right) , \qquad \text {as } h\rightarrow \infty . \end{aligned}$$
(5.1)

Proof

By Theorem 3.1, we have \( {\mathbb {E}}[\sigma _X(T_h)] \underset{h\rightarrow \infty }{\sim }L(h)(h{\mathbb {E}}[T_1])^{-1+2d}\). To get the result, it is enough to prove that

$$\begin{aligned} {\mathbb {E}}[\sigma _X(T_h)^2] \underset{h\rightarrow \infty }{\sim } L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}. \end{aligned}$$

To establish the asymptotic behavior of \({\mathbb {E}}[\sigma _X(T_h)^2]\), we follow arguments similar to those in the proof of Theorem 3.1:

  • Let \(0<c<{\mathbb {E}}[T_1]\), and \(h\in {\mathbb {N}}\) such that \(ch\ge 1\),

    $$\begin{aligned} {\mathbb {E}}[\sigma _X(T_h)^2] \ge {\mathbb {E}}\left[ \sigma _X(T_h)^2\,\mathbb {I}_{T_h>ch}\right] \ge {\mathbb {E}}\left[ L(T_h)^2T_h^{-2+4d}\,\mathbb {I}_{T_h> ch}\right] \ge \inf _{t>ch}\{L(t)^2t^{4d}\}\,{\mathbb {E}}\left[ \frac{\mathbb {I}_{T_h> ch}}{T_h^2}\right] . \end{aligned}$$

    By the Jensen and Hölder inequalities,

    $$\begin{aligned} {\mathbb {E}}\left[ \frac{\,\mathbb {I}_{T_h> ch}}{T_h^2}\right] \ge {\mathbb {E}}\left[ \frac{\,\mathbb {I}_{T_h> ch}}{T_h}\right] ^2 \text { and } P(T_h> ch)^2 \le {\mathbb {E}}[T_h] {\mathbb {E}}\left[ \frac{\,\mathbb {I}_{T_h> ch}}{T_h}\right] , \end{aligned}$$

    that is

    $$\begin{aligned} {\mathbb {E}}\left[ \frac{\,\mathbb {I}_{T_h> ch}}{T_h^2}\right] \ge \frac{P(T_h> ch)^4}{{\mathbb {E}}[T_h]^2}. \end{aligned}$$

    Summarizing,

    $$\begin{aligned} \frac{{\mathbb {E}}[\sigma _X(T_h)^2] }{ L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}}\ge \frac{\inf _{t>ch}\{L(t)^2t^{4d}\}}{L(h)^2h^{4d}{\mathbb {E}}[T_1]^{4d}}P(T_h> ch)^4. \end{aligned}$$
    (5.2)

    Then, for \(c<{\mathbb {E}}[T_1]\), we have \(P(T_{h}>ch)\rightarrow 1\) and \(\inf _{t>ch}\{L(t)^2t^{4d}\}\sim L(ch)^2(ch)^{4d}\). Finally, for all \(c<{\mathbb {E}}[T_1]\),

    $$\begin{aligned} \liminf _{h\rightarrow \infty }\frac{{\mathbb {E}}[\sigma _X(T_h)^2] }{ L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}}\ge \left( \frac{c}{{\mathbb {E}}[T_1]}\right) ^{4d}. \end{aligned}$$

    Taking the limit as \(c\rightarrow {\mathbb {E}}[T_1]\), we get

    $$\begin{aligned} \liminf _{h\rightarrow \infty }\frac{{\mathbb {E}}[\sigma _X(T_h)^2] }{ L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}}\ge 1. \end{aligned}$$
  • Let \(\frac{1}{2}<s<\tau <1\), let \(t_0\) be such that \(L(\cdot )\) is non-increasing and positive on \([t_0,\infty )\), and let h be such that \(\mu _{h,s}-\mu _{h,s}^\tau \ge t_0\). With the same notation as in Theorem 3.1,

    $$\begin{aligned} {\mathbb {E}}[\sigma _X(T_h)^2]&= {\mathbb {E}}\left[ L(T_h)^2T_h^{-2+4d} \,\mathbb {I}_{T_{h,s}\ge \mu _{h,s}-\mu _{h,s}^\tau }\right] +{\mathbb {E}}\left[ \sigma _X(T_h)^2\,\mathbb {I}_{T_{h,s}<\mu _{h,s}-\mu _{h,s}^\tau }\right] \\&\le L(\mu _{h,s}-\mu _{h,s}^\tau )^2\left( \mu _{h,s}-\mu _{h,s}^\tau \right) ^{-2+4d}+\sigma _X(0)^2 P \left( T_{h,s}<\mu _{h,s}-\mu _{h,s}^\tau \right) . \end{aligned}$$

    We get

    $$\begin{aligned} \frac{{\mathbb {E}}[\sigma _X(T_h)^2] }{ L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}}&\le \left( \frac{L(\mu _{h,s}-\mu _{h,s}^\tau )}{L(h)}\right) ^{2} \left( \frac{\mu _{h,s}-\mu _{h,s}^\tau }{h{\mathbb {E}}[T_1]}\right) ^{-2+4d}\\&\quad +\sigma _X(0)^2 \frac{P \left( T_{h,s}<\mu _{h,s}-\mu _{h,s}^\tau \right) }{L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}}, \end{aligned}$$

    and finally

    $$\begin{aligned} \limsup _{h\rightarrow \infty }\frac{{\mathbb {E}}[\sigma _X(T_h)^2] }{ L(h)^2(h{\mathbb {E}}[T_1])^{-2+4d}}\le 1. \end{aligned}$$

\(\square \)
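The rate in (5.1) can be illustrated numerically. The sketch below is not from the paper: it assumes a concrete toy model with \(L\equiv 1\), \(d=1/4\), covariance \(\sigma _X(t)=(1+|t|)^{-1+2d}\), and i.i.d. standard exponential sampling intervals, so that \(T_h\) is Gamma\((h,1)\)-distributed. The ratio \({\mathrm {Var}}(\sigma _X(T_h))/h^{-2+4d}\) should then tend to 0 as \(h\) grows.

```python
import numpy as np

# Toy model (an assumption, not the paper's setting): L(t) = 1, d = 1/4,
# sigma_X(t) = (1 + |t|)^(-1 + 2d), i.i.d. Exp(1) sampling intervals,
# hence T_h = T_1 + ... + T_h ~ Gamma(h, 1).
d = 0.25
rng = np.random.default_rng(0)

def var_sigma_X_of_T(h, n_samples=200_000):
    """Monte Carlo estimate of Var(sigma_X(T_h))."""
    T_h = rng.gamma(shape=h, scale=1.0, size=n_samples)
    sigma = (1.0 + T_h) ** (-1.0 + 2.0 * d)
    return sigma.var()

# Lemma 5.1 asserts Var(sigma_X(T_h)) = o(L(h)^2 h^(-2+4d));
# here the bound is h^(-1), so the ratios below should shrink with h.
ratios = [var_sigma_X_of_T(h) / h ** (-2.0 + 4.0 * d) for h in (10, 100, 1000)]
print(ratios)
```

A delta-method computation for this toy model gives \({\mathrm {Var}}(\sigma _X(T_h))\approx h^{-2}/4\), one order faster than the bound, in line with the little-o statement.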

Proof of Lemma 4.1:

Denote

$$\begin{aligned} W_n&=L(n)^{-1} n^{-1-2d}\sum _{i=1}^n\sum _{j=1}^n\sigma _X(T_j-T_i)\\&=L(n)^{-1}n^{-1-2d}\,{\mathrm {Var}}\left( X_{T_1}+\dots +X_{T_n}\mid T_1,\dots ,T_n\right) . \end{aligned}$$

We want to prove that \(W_n\) converges in probability to \(\gamma _d\). To do this, we will show that \({\mathbb {E}}[W_n]\xrightarrow [n\rightarrow \infty ]{} \gamma _d\) and \({\mathrm {Var}}(W_n)\xrightarrow [n\rightarrow \infty ]{} 0\).

  • As \(\mathbf {X}\) is a centered process, \({\mathbb {E}}[W_n]=L(n)^{-1}n^{-1-2d}{\mathrm {Var}}(Y_1+\dots +Y_n)\). By Theorem 3.1, we have

    $$\begin{aligned} \sigma _Y(h) \sim L(h)(h{\mathbb {E}}[T_1])^{-1+2d} \qquad \text {as } h\rightarrow \infty , \end{aligned}$$

    then

    $$\begin{aligned} L(n)^{-1}n^{-1-2d}{\mathrm {Var}}(Y_1+\dots +Y_n)\xrightarrow [n\rightarrow \infty ]{} \gamma _d, \end{aligned}$$
    (5.3)

    [see Giraitis et al. (2012) Proposition 3.3.1, page 43]. Therefore we obtain

    $$\begin{aligned} {\mathbb {E}}[W_n]\xrightarrow [n\rightarrow \infty ]{} \gamma _d . \end{aligned}$$
  • Furthermore,

    $$\begin{aligned} {\mathrm {Var}}(W_n)&=L(n)^{-2}n^{-2-4d}{\mathrm {Var}}\left( \sum _{i=1}^n\sum _{j=1}^n \sigma _X(T_j-T_i)\right) \\&\le L(n)^{-2}n^{-2-4d}\left( \sum _{i=1}^n\sum _{j=1}^n \sqrt{{\mathrm {Var}}(\sigma _X(T_j-T_i))}\right) ^2\\&=\left( 2n^{-1-2d}L(n)^{-1} \sum _{h=1}^n(n-h)\sqrt{{\mathrm {Var}}(\sigma _X(T_h))}\right) ^2. \end{aligned}$$

    Then, by Lemma 5.1, \(\sqrt{{\mathrm {Var}}(\sigma _X(T_h))}=o(L(h)h^{-1+2d})\) and \( 2 \sum _{h=1}^n (n-h)L(h)h^{-1+2d}\sim \frac{L(n)n^{1+2d}}{d(1+2d)}\). We get

    $$\begin{aligned} 2 \sum _{h=1}^n(n-h)\sqrt{{\mathrm {Var}}(\sigma _X(T_h))}=o(L(n)n^{1+2d}). \end{aligned}$$

    Finally, \({\mathrm {Var}}(W_n)=o(1)\), that is, \({\mathrm {Var}}(W_n)\xrightarrow [n\rightarrow \infty ]{} 0\). We obtain

    $$\begin{aligned} W_{n}\xrightarrow [n\rightarrow \infty ]{L^2,~ p} \gamma _d. \end{aligned}$$
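The convergence \(W_n\rightarrow \gamma _d\) can also be checked by simulation. The sketch below reuses the toy model above (an assumption, not the paper's setting): \(L\equiv 1\), \(d=1/4\), \(\sigma _X(t)=(1+|t|)^{-1+2d}\), and Exp(1) intervals. For this model, combining (5.3) with Proposition 3.3.1 of Giraitis et al. (2012) suggests the limit \(\gamma _d=1/(d(1+2d))=8/3\); treat that value as part of the sketch's assumptions.

```python
import numpy as np

# Toy model (an assumption, not the paper's setting): L(t) = 1, d = 1/4,
# sigma_X(t) = (1 + |t|)^(-1 + 2d), i.i.d. Exp(1) sampling intervals.
# Assumed limit for this model, inferred from (5.3): gamma_d = 1/(d(1+2d)).
d = 0.25
gamma_d = 1.0 / (d * (1.0 + 2.0 * d))   # = 8/3
rng = np.random.default_rng(1)

def W(n):
    """One realization of W_n = L(n)^-1 n^(-1-2d) sum_{i,j} sigma_X(T_j - T_i)."""
    T = np.cumsum(rng.exponential(size=n))   # sampling times T_1 < ... < T_n
    gaps = np.abs(T[:, None] - T[None, :])   # matrix of |T_j - T_i|
    sigma = (1.0 + gaps) ** (-1.0 + 2.0 * d)
    return n ** (-1.0 - 2.0 * d) * sigma.sum()

# W_n concentrates around gamma_d as n grows; average a few replications.
W_avg = np.mean([W(500) for _ in range(5)])
print(W_avg, gamma_d)
```

At \(n=500\) the average is already close to \(8/3\approx 2.67\), with a visible finite-sample bias of a few percent coming from the non-asymptotic part of the double sum.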


About this article


Cite this article

Philippe, A., Robet, C. & Viano, MC. Random discretization of stationary continuous time processes. Metrika 84, 375–400 (2021). https://doi.org/10.1007/s00184-020-00783-1
