Exact Sampling for the Maximum of Infinite Memory Gaussian Processes

Chapter in Advances in Modeling and Simulation

Abstract

We develop an exact sampling algorithm for the all-time maximum of Gaussian processes with negative drift and general covariance structures. In particular, our algorithm can handle non-Markovian processes, even those with long-range dependence. Our development combines a milestone-event construction with rare-event simulation techniques, which allows us to find a random time beyond which the running maximum will never be exceeded again. The complexity of the algorithm is random but has finite moments of all orders. We also test the performance of the algorithm numerically.
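For intuition, the object being sampled can be illustrated with a plain finite-horizon Monte Carlo sketch. This is not the chapter's exact algorithm: truncating the horizon at T introduces exactly the bias that the milestone construction is designed to avoid, and all parameters below are illustrative (fractional Brownian motion with negative drift as a representative long-range-dependent example):

```python
import numpy as np

# Fractional Brownian motion with negative drift: S_k = -mu*k + sigma*B_H(k),
# where Cov(B_H(j), B_H(k)) = 0.5*(j^{2H} + k^{2H} - |j-k|^{2H}).
H, mu, sigma, T = 0.7, 0.5, 1.0, 200   # illustrative parameters

idx = np.arange(1, T + 1)
cov = 0.5 * (idx[:, None]**(2 * H) + idx[None, :]**(2 * H)
             - np.abs(idx[:, None] - idx[None, :])**(2 * H))
L = np.linalg.cholesky(sigma**2 * cov)  # exact Gaussian sampling on the grid

rng = np.random.default_rng(1)
B = L @ rng.standard_normal(T)          # fBm sample path on 1..T
S = -mu * idx + B                       # drifted path
M = max(0.0, S.max())                   # running maximum (with S_0 = 0)
```

Unlike this sketch, the exact algorithm identifies a random time beyond which the running maximum provably cannot be exceeded, so no horizon-truncation bias is incurred.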




Author information

Correspondence to Jose Blanchet.


Appendix

1.1 Proof of Lemma 1

Proof

Note that \(S_{k}\) conditional on \(\mathcal {S}_{n}\) is still a Gaussian random variable with conditional mean

$$ \mu _{n}(k)=\mathbb {E}[S_{k}|\mathcal {S}_{n}]=-k\mu +\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\tilde{\mathcal {S}}_{n}, $$

and conditional variance

$$ \sigma _{n}(k)^{2}={\text {Var}}[S_{k}|\mathcal {S}_{n}]=\sigma ^{2}k^{2H}-\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\boldsymbol{U}_{nk}. $$
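These conditional moments amount to standard Gaussian conditioning against the fBm covariance, with the convention \(\text{Var}(S_k)=\sigma^2 k^{2H}\). A minimal numerical sketch (all parameter values illustrative):

```python
import numpy as np

# Conditional law of S_k given S_n = (S_1, ..., S_n), where
# tilde S_j = S_j + j*mu is the centered (driftless) fBm part.
H, sigma, mu = 0.7, 1.0, 0.5   # illustrative parameters
n, k = 5, 8

def fbm_cov(j, l):
    return 0.5 * sigma**2 * (j**(2 * H) + l**(2 * H) - np.abs(j - l)**(2 * H))

idx = np.arange(1, n + 1)
Sigma_n = fbm_cov(idx[:, None], idx[None, :])   # Cov matrix of tilde S_n
U_nk = fbm_cov(idx, k)                          # Cov(tilde S_n, tilde S_k)

rng = np.random.default_rng(0)
tilde_S = np.linalg.cholesky(Sigma_n) @ rng.standard_normal(n)  # a sample of tilde S_n

w = np.linalg.solve(Sigma_n, U_nk)              # Sigma_n^{-1} U_nk
mu_nk = -k * mu + w @ tilde_S                   # conditional mean  mu_n(k)
var_nk = sigma**2 * k**(2 * H) - w @ U_nk       # conditional variance sigma_n(k)^2
```

The check \(0<\sigma_n(k)^2<\sigma^2 k^{2H}\) is exactly the law-of-total-variance bound invoked in the first step of the proof.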

The proof of the lemma is divided into three steps. We first establish bounds for the conditional mean \(\mu _{n}(k)\). Let \(\tilde{\mu }_{n}(k)=\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\tilde{\mathcal {S}}_{n}\). As \(\tilde{\mu }_{n}(k)\) is a linear combination of \(\tilde{\mathcal {S}}_{n}\), it follows a Normal distribution with mean 0 and variance \(\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\boldsymbol{U}_{nk}\). By the law of total variance, \(\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\boldsymbol{U}_{nk}<\sigma ^{2}k^{2H}\). In this case, for any fixed \(\delta \in (0,\mu )\),

$$\begin{aligned} \mathbb {P}(\tilde{\mu }_{n}(k)>\delta k) \le \mathbb {P}\left( \frac{\tilde{\mu }_{n}(k)}{\sqrt{\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\boldsymbol{U}_{nk}}}>\frac{\delta k}{\sigma k^{H}}\right) =\bar{\Phi }\left( \frac{\delta }{\sigma } k^{1-H}\right) . \end{aligned}$$
(11)

Then,

$$ \sum _{n=1}^{\infty }\sum _{k=n}^{\infty }\mathbb {P}(\tilde{\mu }_{n}(k)>\delta k) =\sum _{k=1}^{\infty }\sum _{n=1}^{k}\mathbb {P}(\tilde{\mu }_{n}(k)>\delta k) \le \sum _{k=1}^{\infty } k \bar{\Phi }\left( \frac{\delta }{\sigma }k^{1-H}\right) <\infty . $$

By the Borel–Cantelli lemma, there exists a random number \(L_{0}\ge n\), which is finite almost surely, such that when \(k>L_{0}\), \(\tilde{\mu }_{n}(k)\le \delta k\), which further implies that \(\mu _{n}(k)\le -(\mu -\delta )k\).
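The summability of the double sum above can be checked numerically; in the sketch below (illustrative values \(H=0.6\), \(\delta=\sigma=1\)) the partial sums of \(\sum_k k\,\bar{\Phi}\big(\frac{\delta}{\sigma}k^{1-H}\big)\) stabilize quickly:

```python
import math

# Partial sums of  sum_k  k * barPhi((delta/sigma) * k^{1-H})
H, sigma, delta = 0.6, 1.0, 1.0   # illustrative parameters

def bar_phi(x):
    """Standard normal tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def partial_sum(N):
    return sum(k * bar_phi((delta / sigma) * k**(1 - H)) for k in range(1, N + 1))

s200, s400 = partial_sum(200), partial_sum(400)   # nearly identical: tail is tiny
```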

We next establish bounds for \(\sum _{n=1}^{\infty }q(n)\). For \(k>L_{0}\), we have \(\mu _{n}(k)\le -(\mu -\delta )k\) and \(\sigma _{n}(k)^2\le \sigma ^{2}k^{2H}\). Thus, for any \(b\ge 0\),

$$\begin{aligned} \mathbb {P}_{n}(S_{k}>b)\le \mathbb {P}_{n}\left( \frac{S_{k}-\mu _{n}(k)}{\sigma _{n}(k)}>\frac{b+(\mu -\delta )k}{\sigma k^{H}}\right) \le \bar{\Phi }\left( \frac{\mu -\delta }{\sigma }k^{1-H}\right) . \end{aligned}$$
(12)

Based on the analysis above, let \(b=\max _{1\le l\le n} S_{l}\). We decompose \(\sum _{n=1}^{\infty }q(n)\) into three parts:

$$ \sum _{n=1}^{\infty }\sum _{k=n}^{\infty }\mathbb {P}_{n}(S_{k}>b) \le \underbrace{\sum _{n=1}^{L_{0}}\sum _{k=n}^{L_{0}}\mathbb {P}_{n}(S_{k}>b)}_{\text{(I) }} + \underbrace{\sum _{n=1}^{L_{0}}\sum _{k=L_{0}}^{\infty }\mathbb {P}_{n}(S_{k}>b)}_{\text{(II) }} + \underbrace{\sum _{n=L_{0}}^{\infty }\sum _{k=n}^{\infty }\mathbb {P}_{n}(S_{k}>b)}_{\text{(III) }}. $$

Part (I) only involves a finite number of terms. For part (II), from (12), we have

$$ \text{(II) } \le L_{0} \sum _{k=L_{0}}^{\infty }\bar{\Phi }\left( \frac{\mu -\delta }{\sigma }k^{1-H}\right) < \infty . $$

Similarly, for part (III), from (12), we have

$$ \text{(III) }=\sum _{k=L_0}^{\infty }\sum _{n=L_0}^{k}\mathbb {P}_{n}(S_{k}>b) \le \sum _{k=L_0}^{\infty }(k-L_0)\bar{\Phi }\left( \frac{\mu -\delta }{\sigma }k^{1-H}\right) <\infty . $$

Putting parts (I)–(III) together, we have \(\sum _{n=1}^{\infty }q(n)<\infty \). By the Borel–Cantelli lemma, there exists L, which is finite almost surely, such that for any \(n>L\), \(q(n)<a\).

Lastly, we show that \(\mathbb {E}[L^{\eta }]<\infty \) for any \(\eta >0\). Let \(L_1\) denote a large enough constant, such that \(\sum _{k=L_1}^{\infty }\bar{\Phi }\left( \frac{\mu -\delta }{\sigma }k^{1-H}\right) <a\). Then, \(L\le \max \{L_{0},L_1\}\). Thus, to prove \(\mathbb {E}[L^{\eta }]<\infty \), we only need to show that \(\mathbb {E}[L_{0}^{\eta }]<\infty \). Define \(\mathcal {A}_{n}=\bigcup _{k=n}^{\infty }\{\tilde{\mu }_{n}(k)>\delta k\}\). Then \(L_{0}^{\eta }\le \sum _{n=1}^{\infty }1\{\mathcal {A}_{n}\}n^{\eta }\), and

$$\begin{aligned} \mathbb {E}[L_{0}^{\eta }]\le \mathbb {E}\left[ \sum _{n=1}^{\infty }1\{\mathcal {A}_{n}\}n^{\eta }\right]&=\sum _{n=1}^{\infty }\sum _{k=n}^{\infty }\mathbb {P}(\tilde{\mu }_{n}(k)>\delta k)n^{\eta }\\&=\sum _{k=1}^{\infty }\sum _{n=1}^{k}n^{\eta } \mathbb {P}(\tilde{\mu }_{n}(k)>\delta k) \le \sum _{k=1}^{\infty } k^{\eta } \bar{\Phi }\left( \frac{\delta }{\sigma }k^{1-H}\right) <\infty , \end{aligned}$$

where the last inequality follows from (11). \(\square \)

1.2 Proof of Lemma 2

Proof

With a slight abuse of notation, let \(\mathbb {Q}_n\) denote the measure induced by the TBS procedure. First note that

$$\begin{aligned}&\mathbb {Q}_n((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k)= \sum _{m=n+1}^{\infty }f_{n}(m) \mathbb {P}_{n}((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k|S_{m}>b)\\ =&\sum _{m=n+1}^{\infty }f_{n}(m)\frac{\mathbb {P}_{n}((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k,S_{m}>b)}{\mathbb {P}_{n}(S_{m}>b)}\\ =&\sum _{m=n+1}^{\infty }\frac{\mathbb {P}_{n}((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k,S_{m}>b)}{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}\\ =&\sum _{m=n+1}^{\infty }\mathbb {P}_{n}((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k)\frac{\mathbb {P}_{n}(S_{m}>b|(S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k)}{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}\\ =&~\mathbb {P}_{n}((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k)\frac{\sum _{m=k}^{\infty }\mathbb {P}_{n}(S_{m}>b|(S_{n+1},...,S_{k})\in \cdot ,\tau (b)=k)}{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}. \end{aligned}$$

Thus, \(\frac{\textrm{d}\mathbb {P}_{n}}{\textrm{d}\mathbb {Q}_n}(S_{n+1},...,S_{\kappa _n}, \kappa _n<\infty )=\frac{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}{\sum _{m=\kappa _n}^{\infty }\mathbb {P}_{\kappa _n}(S_{m}>b)}\). \(\square \)
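The measure \(\mathbb{Q}_n\) first samples a record index \(m\) with probability \(f_n(m)\propto\mathbb{P}_n(S_m>b)\). The sketch below makes \(f_n\) explicit in the Markovian special case \(H=1/2\) (iid Gaussian increments), where \(\mathbb{P}_n(S_m>b)\) has a closed form; in the general non-Markovian case these probabilities come from the conditional moments used in the proof of Lemma 1. The truncation level M and all parameters are illustrative:

```python
import math

def bar_phi(x):
    """Standard normal tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Special case H = 1/2: given S_n = s_n, S_m - S_n ~ N(-(m-n)*mu, sigma^2*(m-n)).
mu, sigma = 0.5, 1.0
n, s_n, b = 10, -3.0, 1.0   # current time, current value, current running max
M = 2000                    # truncation for normalization (the tail is negligible)

probs = [bar_phi((b - s_n + (m - n) * mu) / (sigma * math.sqrt(m - n)))
         for m in range(n + 1, M + 1)]
q = sum(probs)              # q(n) = sum_{m > n} P_n(S_m > b)
f = [p / q for p in probs]  # proposal f_n(m) over record times m = n+1, ..., M
```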

1.3 Proof of Lemma 3

Proof

Let \(\mathbb {E}_{\mathbb {Q}}\) denote the expectation under measure \(\mathbb {Q}\). Suppose \(M_n=b\). First note that by Lemma 2,

$$\begin{aligned} \begin{aligned} \mathbb {Q}_n(I=1) =&~\mathbb {E}_{\mathbb {Q}_n}\left[ \Bigr (\sum _{\ell =\kappa _n}^{\infty }\mathbb {P}_{\kappa _n}(S_{\ell }>b)\Bigr )^{-1}\right] \\ =&~\mathbb {E}_{\mathbb {P}_{n}}\left[ \frac{1}{\sum _{\ell =\kappa _n}^{\infty }\mathbb {P}_{\kappa _n}(S_{\ell }>b)}\frac{\sum _{\ell =\kappa _n}^{\infty }\mathbb {P}_{\kappa _n}(S_{\ell }>b)}{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}1\{\kappa _n<\infty \}\right] \\ =&~\mathbb {E}_{\mathbb {P}_{n}}\left[ \Bigr (\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)\Bigr )^{-1}1\{\kappa _n<\infty \}\right] = \frac{\mathbb {P}_{n}(\kappa _n<\infty )}{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)} \end{aligned} \end{aligned}$$
(13)

Next, by Bayes rule,

$$\begin{aligned} \mathbb {Q}_n((S_{n+1},...,S_{\kappa _n})\in \cdot ,\kappa _n\in \cdot |I=1) =\frac{\mathbb {Q}_n(I=1|\kappa _n,\mathcal {S}_{\kappa _n})\mathbb {Q}_n((S_{n+1},...,S_{\kappa _n})\in \cdot ,\kappa _n\in \cdot )}{\mathbb {Q}_n(I=1)}. \end{aligned}$$
(14)

As \(\mathbb {Q}_n(I=1|\kappa _n,(S_{n+1},...,S_{\kappa _n}))=\frac{1}{\sum _{\ell =\kappa _n}^{\infty }\mathbb {P}_{\kappa _n}(S_{\ell }>b)}\), plugging (13) into (14), we have

$$\begin{aligned}&\mathbb {Q}_n((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k|I=1)\\ =&~ \frac{1}{\sum _{\ell =k}^{\infty }\mathbb {P}_{k}(S_{\ell }>b)}\mathbb {Q}_n((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k) \frac{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}{\mathbb {P}_{n}(\kappa _n<\infty )}\\ =&~ \mathbb {E}_{\mathbb {Q}_n}\left[ 1\{(S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k\}\frac{\sum _{\ell =n+1}^{\infty }\mathbb {P}_{n}(S_{\ell }>b)}{\sum _{\ell =k}^{\infty }\mathbb {P}_{k}(S_{\ell }>b)}\right] \frac{1}{\mathbb {P}_{n}(\kappa _n<\infty )}\\ =&~ \frac{\mathbb {E}_{\mathbb {P}_{n}}\left[ 1\{(S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k\}\right] }{\mathbb {P}_{n}(\kappa _n<\infty )} \text{ by } \text{ Lemma } \text{2 }\\ =&~ \mathbb {P}_{n}((S_{n+1},...,S_{k})\in \cdot ,\kappa _n=k|\kappa _n<\infty ). \end{aligned}$$

\(\square \)

1.4 Proof of Lemma 4

Proof

Given \(\mathcal {S}_n\), suppose \(M_n=b\). We also define

$$ N_{1}=\left( \frac{2\sigma ^{2}n^{H}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}\Vert \boldsymbol{\tilde{S}}_{n}\Vert _{1}}{\mu }\right) ^{\frac{1}{1-H}},~ N_{2}=\left( \frac{2\sigma ^{2}}{\pi \mu ^{2}}\right) ^{\frac{1}{2(1-H)}}, \text{ and } N_{3}=\left( \frac{2H-1}{1-H}\frac{16\sigma ^{2}}{\mu ^{2}}\right) ^{2}. $$

Note that for any \(k>n\), \(S_{k}\) conditional on \(\mathcal {S}_{n}\) is still a Gaussian random variable with conditional mean \(\mu _{n}(k)=\mathbb {E}[S_{k}|\mathcal {S}_{n}]=-k\mu +\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\tilde{\mathcal {S}}_{n}\), and conditional variance \(\sigma _{n}(k)^{2}={\text {Var}}[S_{k}|\mathcal {S}_{n}]=\sigma ^{2}k^{2H}-\boldsymbol{U}_{nk}^{\top }\boldsymbol{\Sigma }_{n}^{-1}\boldsymbol{U}_{nk}\).

We first establish the sequence of bounds. The lower bound is straightforward. For the upper bound, note that for \(k\ge N_{1}\),

$$ \mu _{n}(k) \le -k\mu +\sigma ^{2}(nk)^{H}\Vert \boldsymbol{\Sigma }_{n}^{-1}\tilde{\mathcal {S}}_{n}\Vert _{1} \le -k\mu +\sigma ^{2}(nk)^{H}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}\Vert \tilde{\mathcal {S}}_{n}\Vert _{1}\le -\frac{k\mu }{2}. $$

Next, note that for \(k\ge \max \{N_{1}, N_{2}\}\),

$$\begin{aligned} \mathbb {P}_{n}(S_{k}>b)\le \frac{1}{\sqrt{2\pi }}\frac{\sigma _{n}(k)}{b-\mu _{n}(k)}\exp \left( -\frac{(b-\mu _{n}(k))^{2}}{2\sigma _{n}(k)^{2}}\right) \le \exp \left( -\frac{\mu ^{2}}{8\sigma ^{2}}k^{2-2H}\right) . \end{aligned}$$
(15)

To see the second inequality, note that when \(k\ge N_{1}\), \(\mu _{n}(k)\le -k\mu /2\) and \(\sigma _{n}(k)\le \sigma k^{H}\). Thus, \(\frac{b-\mu _{n}(k)}{\sigma _{n}(k)} \ge \frac{b+k\mu }{2\sigma k^{H}}\ge \frac{\mu }{2\sigma }k^{1-H}\). And for \(k\ge N_{2}\), \(\frac{1}{\sqrt{2\pi }}\left( \frac{\mu }{2\sigma }k^{1-H}\right) ^{-1} \le 1\).
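The first inequality in (15) is the standard Gaussian tail (Mills-ratio) bound \(\bar{\Phi }(x)\le \frac{1}{\sqrt{2\pi }\,x}e^{-x^{2}/2}\) for \(x>0\), which is easy to verify numerically:

```python
import math

def bar_phi(x):
    """Standard normal tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    """(1/sqrt(2*pi)) * (1/x) * exp(-x^2/2), valid for x > 0."""
    return math.exp(-0.5 * x * x) / (math.sqrt(2.0 * math.pi) * x)

xs = [0.5, 1.0, 2.0, 4.0, 8.0]
ok = all(bar_phi(x) <= mills_bound(x) for x in xs)
```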

Lastly, we have for \(\ell \ge \max \{N_{1},N_{2},N_{3}\}\),

$$\begin{aligned} \sum _{k=\ell +1}^{\infty }\mathbb {P}_{n}(S_{k}>b)&\le \sum _{k=\ell +1}^{\infty } \exp \left( -\frac{\mu ^{2}}{8\sigma ^{2}}k^{2-2H}\right) ~~~ \text{ from (15) as }\, \ell \ge \max \{N_1, N_2\}\\&\le \int _{\ell }^{\infty } \exp \left( -\frac{\mu ^{2}}{8\sigma ^{2}}k^{2-2H}\right) \textrm{d}k\\&=\frac{1}{2-2H} \int _{\ell ^{2-2H}}^{\infty } y^{(2H-1)/(2-2H)}\exp \left( -\frac{\mu ^{2}}{8\sigma ^{2}}y\right) \textrm{d}y\\&\le \frac{1}{2-2H} \int _{\ell ^{2-2H}}^{\infty } \exp \left( -\frac{\mu ^{2}}{16\sigma ^{2}}y\right) \textrm{d}y ~~~ \text{ as }\, \ell \ge N_3\\&\le \frac{8\sigma ^{2}}{(1-H)\mu ^{2}} \exp \left( -\frac{\mu ^{2}}{16\sigma ^{2}}\ell ^{2-2H}\right) =h(\ell ). \end{aligned}$$
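The integral comparison above can be sanity-checked numerically: with the illustrative values \(H=0.6\), \(\mu =2\), \(\sigma =1\) (so that \(N_{3}=4\)), the truncated tail sum indeed stays below \(h(\ell )\):

```python
import math

H, mu, sigma = 0.6, 2.0, 1.0   # illustrative; here N_3 = ((2H-1)/(1-H)*16*s^2/m^2)^2 = 4
c = mu**2 / (8 * sigma**2)     # rate in the summand exp(-c * k^{2-2H})

def tail(ell, K=2000):
    """Truncated version of sum_{k > ell} exp(-c * k^{2-2H})."""
    return sum(math.exp(-c * k**(2 - 2 * H)) for k in range(ell + 1, K + 1))

def h(ell):
    """Closed-form upper bound from the integral comparison."""
    return (8 * sigma**2 / ((1 - H) * mu**2)) \
        * math.exp(-(mu**2 / (16 * sigma**2)) * ell**(2 - 2 * H))
```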

For \(\mathbb {E}[B(n)^{\eta }]\), we first note that \(N_{2}\) and \(N_{3}\) are finite constants. Thus, we only need to show that \(\mathbb {E}[N_{1}^{\eta }]<\infty \). For any fixed n,

$$\begin{aligned} \mathbb {E}[N_{1}^{\eta }]&=\mathbb {E}\left[ \left( \frac{2\sigma ^{2}n^{H}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}\Vert \boldsymbol{\tilde{S}}_{n}\Vert _{1}}{\mu }\right) ^{\tfrac{\eta }{1-H}}\right] \\&=\left( \frac{2\sigma ^{2}n^{H}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}}{\mu }\right) ^{\tfrac{\eta }{1-H}}\mathbb {E}\left[ \left( \sum _{k=1}^{n}|S_{k}+k\mu |\right) ^{\tfrac{\eta }{1-H}}\right] \\&\le \left( \frac{2\sigma ^{2}n^{H}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}}{\mu }\right) ^{\tfrac{\eta }{1-H}} n^{\tfrac{\eta }{1-H}-1}\sum _{k=1}^{n}\mathbb {E}\left[ |S_{k}+k\mu |^{\tfrac{\eta }{1-H}}\right] \\&=\left( \frac{2\sigma ^{2}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}}{\mu }\right) ^{\tfrac{\eta }{1-H}} n^{\tfrac{\eta H+\eta +H-1}{1-H}} \frac{\Gamma (\tfrac{\eta /(1-H)+1}{2})}{\sqrt{\pi }}(2\sigma ^{2})^{\tfrac{\eta }{2(1-H)}}\sum _{k=1}^{n}k^{\tfrac{\eta H}{1-H}} \\&\le \left( \frac{2^{3/2}\sigma ^{3}\Vert \boldsymbol{\Sigma }_{n}^{-1}\Vert _{1}}{\mu }\right) ^{\tfrac{\eta }{1-H}}\frac{\Gamma (\tfrac{\eta /(1-H)+1}{2})}{\sqrt{\pi }}n^{\tfrac{2\eta H+\eta }{1-H}}. \end{aligned}$$

\(\square \)

1.5 Proof of Lemma 5

Proof

Given \(\kappa _n\) and \(\mathcal {S}_{\kappa _n}\), suppose \(M_n=b\). First note that

$$ \tilde{q}(n,\ell ) \le \tilde{q}(n,\ell +1) \le \cdots \le \sum _{i=\kappa _n+1}^{\infty } \mathbb {P}_{\kappa _n}(S_{i}>b). $$

Next, following the proof of Lemma 4, we have for \(\ell \ge B(\kappa _n)\),

$$ \tilde{q}(n,\ell )+h(\ell )\ge \tilde{q}(n,\ell +1)+h(\ell +1)\ge \cdots \ge \sum _{i=\kappa _n+1}^{\infty } \mathbb {P}_{\kappa _n}(S_{i}>b). $$

Since \(\mathbb {P}_{k}(S_{k}>b)=1\), \(p(k)=(1+\sum _{i=k+1}^{\infty } \mathbb {P}_{k}(S_{i}>b))^{-1}\), and for \(\ell \ge B(\kappa _n)\), \((1+\tilde{q}(n,\ell )+h(\ell ))^{-1} \le p(\kappa _n)\le (1+\tilde{q}(n,\ell ))^{-1}\). The rest of the results follow similarly. \(\square \)

1.6 Proof of Lemma 6

Proof

We first note that in Step 2.1 in Algorithm 2, \(\mathbb {P}_{n}(N(n)=\ell , J=1)= \mathbb {P}_{n}(S_{\ell }>M_{n})\). Next, following the same lines of analysis as the proof of Lemma 1, we have for any \(\delta >0\), there exists \(L_{0}>0\) such that for \(\ell >L_{0}\), \(\mathbb {P}_{n}(S_{\ell }>M_{n})\le \bar{\Phi }\left( \frac{\mu -\delta }{\sigma } \ell ^{1-H}\right) \), and for any \(\eta >0\), \(\mathbb {E}[L_{0}^{\eta }]<\infty \). Then for any \(\eta >0\),

$$\begin{aligned} \mathbb {E}[N(n)^{\eta }|\mathcal {S}_{n}]= & {} \sum _{\ell =n+1}^{\infty }\ell ^{\eta }\mathbb {P}_{n}(N(n)=\ell ) \\\le & {} \mathbb {E}[L_{0}^{\eta }|\mathcal {S}_{n}] + \mathbb {E}\left[ \left. \sum _{\ell =L_{0}+1}^{\infty } \ell ^{\eta }\bar{\Phi }\left( \frac{\mu -\delta }{\sigma } \ell ^{1-H}\right) \right| \mathcal {S}_{n}\right] . \end{aligned}$$

Thus,

$$\mathbb {E}[N(n)^{\eta }]= \mathbb {E}[\mathbb {E}[N(n)^{\eta }|\mathcal {S}_{n}]]\le \mathbb {E}[L_{0}^{\eta }] + \mathbb {E}\left[ \sum _{\ell =L_{0}+1}^{\infty } \ell ^{\eta }\bar{\Phi }\left( \frac{\mu -\delta }{\sigma } \ell ^{1-H}\right) \right] <\infty .$$

\(\square \)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Blanchet, J., Chen, L., Dong, J. (2022). Exact Sampling for the Maximum of Infinite Memory Gaussian Processes. In: Botev, Z., Keller, A., Lemieux, C., Tuffin, B. (eds) Advances in Modeling and Simulation. Springer, Cham. https://doi.org/10.1007/978-3-031-10193-9_3
