Simulating space-time random fields with nonseparable Gneiting-type covariance functions


Two algorithms are proposed to simulate space-time Gaussian random fields with a covariance function belonging to an extended Gneiting class, the definition of which depends on a completely monotone function associated with the spatial structure and a conditionally negative definite function associated with the temporal structure. In both cases, the simulated random field is constructed as a weighted sum of cosine waves, with a Gaussian spatial frequency vector and a uniform phase. The difference lies in how the temporal component is handled. The first algorithm relies on a spectral decomposition in order to simulate a temporal frequency conditional upon the spatial one, while in the second algorithm the temporal frequency is replaced by an intrinsic random field whose variogram is proportional to the conditionally negative definite function associated with the temporal structure. Both algorithms are scalable, as their computational cost is proportional to the number of space-time locations, which may be irregularly spaced in both space and time. They are illustrated and validated through synthetic examples.



  1. Arroyo, D., Emery, X.: Simulation of intrinsic random fields of order \(k\) with Gaussian generalized increments by Gibbs sampling. Math. Geosci. 47(8), 955–974 (2015)

  2. Barabesi, L., Pratelli, L.: Universal methods for generating random variables with a given characteristic function. J. Stat. Comput. Simul. 85(8), 1679–1691 (2015)

  3. Bernstein, S.: Sur les fonctions absolument monotones. Acta Math. 52, 1–66 (1929)

  4. Bochner, S.: Harmonic Analysis and the Theory of Probability. University of California Press, Berkeley (1955)

  5. Bondesson, L.: On simulation from infinitely divisible distributions. Adv. Appl. Probab. 14(4), 855–869 (1982)

  6. Bourotte, M., Allard, D., Porcu, E.: A flexible class of non-separable cross-covariance functions for multivariate space-time data. Spat. Stat. 18, 125–146 (2016)

  7. Box, G., Muller, M.: A note on the generation of random normal deviates. Ann. Math. Stat. 29(2), 610–611 (1958)

  8. Brix, A.: Generalized gamma measures and shot-noise Cox processes. Adv. Appl. Probab. 31(4), 929–953 (1999)

  9. Carrizo-Vergara, R., Allard, D., Desassis, N.: A general framework for SPDE-based stationary random fields (2018). arXiv preprint arXiv:1806.04999

  10. Chambers, J., Mallows, C., Stuck, B.: A method for simulating stable random variables. J. Am. Stat. Assoc. 71(354), 340–344 (1976)

  11. Chan, G., Wood, A.T.: Simulation of stationary Gaussian vector fields. Stat. Comput. 9(4), 265–268 (1999)

  12. Chilès, J.-P., Delfiner, P.: Geostatistics: Modeling Spatial Uncertainty, 2nd edn. Wiley, London (2012)

  13. Cox, D.R., Isham, V.: A simple spatial-temporal model of rainfall. Proc. R. Soc. Lond. A Math. Phys. Sci. 415(1849), 317–328 (1988)

  14. Cuevas, F., Porcu, E., Bevilacqua, M.: Contours and dimple for the Gneiting class of space-time correlation functions. Biometrika 101(4), 995–1001 (2017)

  15. Davis, M.: Production of conditional simulations via the LU triangular decomposition of the covariance matrix. Math. Geol. 19(2), 91–98 (1987)

  16. De Iaco, S., Myers, D.E., Posa, D.: Space-time analysis using a general product-sum model. Stat. Probab. Lett. 52(1), 21–28 (2001)

  17. Devroye, L.: Non-Uniform Random Variate Generation. Springer, Berlin (1986)

  18. Devroye, L.: The computer generation of random variables with a given characteristic function. Comput. Math. Appl. 7, 547–552 (1981)

  19. Devroye, L.: Random variate generation for exponentially and polynomially tilted stable distributions. ACM Trans. Model. Comput. Simul. 19(4), 1–20 (2009)

  20. Dietrich, C., Newsam, G.: Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix. SIAM J. Sci. Comput. 18(4), 1088–1107 (1997)

  21. Emery, X.: Statistical tests for validating geostatistical simulation algorithms. Comput. Geosci. 34(11), 1610–1620 (2008a)

  22. Emery, X.: Substitution random fields with Gaussian and gamma distributions: theory and application to a pollution data set. Math. Geosci. 40(1), 83–99 (2008b)

  23. Emery, X., Lantuéjoul, C.: TBSIM: a computer program for conditional simulation of three-dimensional Gaussian random fields via the turning bands method. Comput. Geosci. 32(10), 1615–1628 (2006)

  24. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II. Wiley, London (1966)

  25. Gneiting, T.: Nonseparable, stationary covariance functions for space-time data. J. Am. Stat. Assoc. 97(458), 590–600 (2002)

  26. Gneiting, T., Guttorp, P.: Continuous parameter spatio-temporal processes. In: Handbook of Spatial Statistics, pp. 427–436 (2010)

  27. Gneiting, T., Sasvári, Z., Schlather, M.: Analogies and correspondences between variograms and covariance functions. Adv. Appl. Probab. 33(3), 617–630 (2001)

  28. Gneiting, T., Schlather, M.: Stochastic models that separate fractal dimension and the Hurst effect. SIAM Rev. 46(2), 269–282 (2004)

  29. Kent, J.T., Mohammadzadeh, M., Mosammam, A.M.: The dimple in Gneiting’s spatial-temporal covariance model. Biometrika 98(2), 489–494 (2011)

  30. Khintchine, A., Lévy, P.: Sur les lois stables. Comptes Rendus de l’Académie des Sciences de Paris 202, 374–376 (1936)

  31. Kolmogorov, A.: The local structure of turbulence in incompressible viscous fluid at very large Reynolds numbers. In: Friedlander, S., Topping, L. (eds.) Turbulence: Classic Papers on Statistical Theory, pp. 151–155. Interscience Publishers, New York (1961)

  32. Lancaster, H.-O.: Some properties of the bivariate normal distribution considered in the form of a contingency table. Biometrika 44(1–2), 289–292 (1957)

  33. Lantuéjoul, C.: Ergodicity and integral range. J. Microsc. 161(3), 387–404 (1991)

  34. Lantuéjoul, C.: Non conditional simulation of stationary isotropic multigaussian random functions. In: Armstrong, M., Dowd, P.A. (eds.) Geostat. Simul., pp. 147–177. Kluwer Academic Publishers, Dordrecht (1994)

  35. Lantuéjoul, C.: Geostatistical Simulation: Models and Algorithms. Springer, Berlin (2002)

  36. Ma, C.: Spatio-temporal covariance functions generated by mixtures. Math. Geol. 34(8), 965–975 (2002)

  37. Ma, C.: Families of spatio-temporal stationary covariance models. J. Stat. Plan. Inference 116(2), 489–501 (2003)

  38. Matheron, G.: Leçons sur les fonctions aléatoires d’ordre 2. Technical report C-53, Centre de Géostatistique, École des Mines de Paris (1972)

  39. Matheron, G.: The intrinsic random functions and their applications. Adv. Appl. Probab. 5(3), 439–468 (1973)

  40. Matheron, G.: La déstructuration des hautes teneurs et le krigeage des indicatrices. Technical report, Centre de Géostatistique, École des Mines de Paris (1982)

  41. Oberhettinger, F., Badii, L.: Tables of Laplace Transforms. Springer, New York (1973)

  42. Sato, K.: Absolute continuity of multivariate distributions of class L. J. Multivar. Anal. 12(1), 89–94 (1982)

  43. Schilling, R., Song, R., Vondraček, Z.: Bernstein Functions. De Gruyter, Berlin (2010)

  44. Schlather, M.: Some covariance models based on normal scale mixtures. Bernoulli 16(3), 780–797 (2010)

  45. Schlather, M.: Construction of covariance functions and unconditional simulation of random fields. In: Advances and Challenges in Space-Time Modelling of Natural Events, pp. 25–54. Springer (2012)

  46. Schlather, M., Malinowski, A., Menck, P.J., Oesting, M., Strokorb, K., et al.: Analysis, simulation and prediction of multivariate random fields with package RandomFields. J. Stat. Softw. 63(8), 1–25 (2015)

  47. Schlather, M., Malinowski, A., Oesting, M., Boecker, D., Strokorb, K., Engelke, S., Martini, J., Ballani, F., Moreva, O., Auel, J., Menck, P.J., Gross, S., Ober, U., Ribeiro, P., Ripley, B.D., Singleton, R., Pfaff, B., R Core Team: RandomFields: Simulation and Analysis of Random Fields. R package version 3.3.8 (2020)

  48. Schlather, M., Moreva, O.: A parametric model bridging between bounded and unbounded variograms. Stat 6(1), 47–52 (2017)

  49. Schoenberg, I.-J.: Metric spaces and completely monotone functions. Ann. Math. 39(4), 811–831 (1938)

  50. Shinozuka, M.: Simulation of multivariate and multidimensional random processes. J. Acoust. Soc. Am. 49(1B), 357–367 (1971)

  51. White, P., Porcu, E.: Towards a complete picture of stationary covariance functions on spheres cross time. Electron. J. Stat. 13, 2566–2594 (2019)

  52. Wood, A.T., Chan, G.: Simulation of stationary Gaussian processes in \([0, 1]^d\). J. Comput. Graph. Stat. 3(4), 409–432 (1994)

  53. Yaglom, A.: Some classes of random fields in \(n\)-dimensional space, related to stationary random processes. Theory Probab. Appl. 2(3), 273–320 (1957)

  54. Zastavnyi, V.P., Porcu, E.: Characterization theorems for the Gneiting class of space-time covariances. Bernoulli 17(1), 456–465 (2011)

Acknowledgements


Denis Allard and Christian Lantuéjoul acknowledge support of the RESSTE network funded by the Applied Mathematics and Informatics division of INRA. Xavier Emery acknowledges the support of Grant CONICYT PIA AFB180004 (AMTC) from the National Agency for Research and Development of Chile. We are grateful to three anonymous reviewers for their very careful reading and the many valuable comments that helped to improve the manuscript.

Author information



Corresponding author

Correspondence to Denis Allard.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 16 KB)

Supplementary material 2 (pdf 2377 KB)


A Proofs for the spectral approach

A.1 Proof of Theorem 1

We use the notation defined in Sect. 3. Substituting (7) into (15), one obtains

$$\begin{aligned}&{\mathbb {E}}\left[ e^{ \textstyle i \, u \, \Omega ^T(\varvec{\omega },r) } \right] \nonumber \\&\quad = \exp \bigl ( - \lambda (\varvec{\omega },r) \gamma (u) \bigr ) \nonumber \\&\quad = \exp \left( - \lambda (\varvec{\omega },r) \int _{{\mathbb {R}}}\left( 1 - \cos ( u x) \right) {\mathcal {X}}(dx)\right) \nonumber \\&\quad = \exp \left( \lambda (\varvec{\omega },r) \int _{{\mathbb {R}}}\left( e^{i u x}-1-\frac{i u x}{1+x^2}\right) {\mathcal {X}}(dx)\right) ,\nonumber \\&\quad u \in {\mathbb {R}}, \end{aligned}$$

where \(\lambda (\varvec{\omega },r)=\frac{|\varvec{\omega }|^2}{4r}\) and where the last equality stems from the symmetry of \({\mathcal {X}}\) and the integrability condition (8). Therefore, the distribution \(F_T ( d \tau \mid \varvec{\omega }, r)\) is an infinitely divisible distribution with Lévy measure \({\nu }_{\varvec{\omega },r}=\lambda (\varvec{\omega },r){\mathcal {X}}\).

Assume first that \({\mathcal {X}}({\mathbb {R}})=+\infty \). In this case, \({\nu }_{\varvec{\omega },r}({\mathbb {R}})=+\infty \) for \(\varvec{\omega }\ne 0\) and the Lévy measure \({\nu }_{\varvec{\omega },r}\) is absolutely continuous, since \({\mathcal {X}}\) is absolutely continuous. Then, by applying Lemma 1 of Sato (1982), the infinitely divisible distribution \(F_T ( d \tau \mid \varvec{\omega }, r)\) is absolutely continuous for every \(\varvec{\omega }\ne 0\), hence for a.e. \(\varvec{\omega }\). As a consequence, since \(F_S ( d\varvec{\omega }\mid r)\) is also absolutely continuous, it follows that

$$\begin{aligned} F(d\varvec{\omega },d\tau )=\int _0^{+\infty } F_S ( d\varvec{\omega }\mid r)F_T (d \tau \mid \varvec{\omega }, r)\mu (dr) \end{aligned}$$

is absolutely continuous.

Let us now assume \(\vartheta ={\mathcal {X}}({\mathbb {R}})<+\infty \). In this case, (22) can be rewritten as

$$\begin{aligned}&{\mathbb {E}}\left[ e^{ \textstyle i \, u \, \Omega ^T(\varvec{\omega },r) } \right] = \exp \left( \lambda (\varvec{\omega },r) \int _{{\mathbb {R}}}\left( e^{i u x}-1\right) {\mathcal {X}}(dx)\right) , \end{aligned}$$

by symmetry of the finite measure \({\mathcal {X}}\). Since \(0< \vartheta < +\infty \), \(\Omega ^T(\varvec{\omega },r)\) has a compound Poisson distribution with \({\mathbb {P}}\bigl (\Omega ^T(\varvec{\omega },r)=0\bigr ) =\exp \bigl (-\vartheta \lambda (\varvec{\omega },r)\bigr )\), which implies that

$$\begin{aligned} {\mathbb {P}}\left( \Omega ^T=0\right) ={\mathbb {E}}\left[ \exp \left( -\frac{\vartheta \vert \varvec{\Omega }^S\vert ^2 }{4R}\right) \right] >0. \end{aligned}$$

In conclusion, when \(\vartheta < +\infty \), the distribution of \(\Omega ^T\) has an atom at 0. Hence, F is not absolutely continuous in this case.
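The atom at zero can also be observed by simulation. In the sketch below (all numerical values are arbitrary), \({\mathcal {X}}\) is taken, purely for illustration, as \(\vartheta \) times the standard normal law, so that \(\Omega ^T(\varvec{\omega },r)\) is a compound Poisson sum of standard normal jumps with a Poisson(\(\vartheta \lambda \)) count and the mass at zero is exactly \(\exp (-\vartheta \lambda )\):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, lam = 1.5, 0.6   # arbitrary values of vartheta and lambda(omega, r)

# jump sizes ~ N(0, 1) (i.e. X taken as vartheta times the standard normal law),
# jump count ~ Poisson(vartheta * lambda)
n_jumps = rng.poisson(theta * lam, size=100_000)
omega_t = np.array([rng.normal(size=n).sum() for n in n_jumps])

# a sum of zero jumps is exactly 0, so the atom has mass exp(-vartheta * lambda)
p_atom = np.mean(omega_t == 0.0)
print(p_atom, np.exp(-theta * lam))   # both close to 0.407
```

The empirical frequency of exact zeros matches \(\exp (-\vartheta \lambda )\), while a distribution without an atom would return a frequency of zero.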

A.2 Proof of Theorem 2

Recall that

$$\begin{aligned}&C (\varvec{h}, u \mid r ) = \frac{1}{\bigl (\gamma (u)+1 \bigr )^{k/2}} \, \exp \left( - \frac{r |\varvec{h}|^2}{\gamma (u)+1} \right) \nonumber \\&:= \int _{{\mathbb {R}}^k} \int _{\mathbb {R}}e^{ \textstyle i \langle \varvec{h}, \varvec{\omega }\rangle + i u \tau } \, F_T ( d \tau \mid \varvec{\omega }, r) \, F_S ( d \varvec{\omega }\mid r) . \end{aligned}$$

If \(u=0\), then

$$\begin{aligned} C (\varvec{h}, 0 \mid r ) = \exp \left( - r |\varvec{h}|^2 \right) := \int _{{\mathbb {R}}^k} e^{ \textstyle i \langle \varvec{h}, \varvec{\omega }\rangle } \, F_S ( d \varvec{\omega }\mid r), \end{aligned}$$

which is nothing but (14). This shows that \( F_S\) possesses the density

$$\begin{aligned} f_S(\varvec{\omega }\mid r ) = \frac{1}{(4\pi r)^{k/2}} \exp \left( - \frac{ |\varvec{\omega }|^2}{4 r} \right) . \end{aligned}$$

Plugging (24) into (23), one obtains

$$\begin{aligned} C (\varvec{h}, u \mid r )&= \frac{1}{(4\pi r)^{k/2}} \int _{{\mathbb {R}}^k} e^{ \textstyle i \langle \varvec{h}, \varvec{\omega }\rangle } \nonumber \\&\quad \times \exp \left( - \frac{ |\varvec{\omega }|^2}{4 r} \right) \, \int _{\mathbb {R}}e^{ \textstyle i u \tau } \, F_T ( d \tau \mid \varvec{\omega }, r) \, d \varvec{\omega }. \end{aligned}$$

On the other hand, up to a multiplicative factor, the function \(\varvec{h}\, \mapsto \exp \left( -\, r |\varvec{h}|^2 / (\gamma (u)+1) \right) \) is the Fourier transform of a Gaussian random vector:

$$\begin{aligned} C (\varvec{h}, u \mid r )&= \frac{1}{(4\pi r)^{k/2}} \int _{{\mathbb {R}}^k} e^{ \textstyle i \langle \varvec{h}, \varvec{\omega }\rangle } \nonumber \\&\quad \times \, \exp \left( - \frac{ |\varvec{\omega }|^2 \, (\gamma (u)+1)}{4 r} \right) \, d \varvec{\omega }. \end{aligned}$$

Comparing (25) and (26), the injectivity of the Fourier transform implies

$$\begin{aligned}&\exp \left( - \frac{ |\varvec{\omega }|^2}{4 r} \right) \, \int _{\mathbb {R}}e^{ \textstyle i u \tau } \, F_T ( d \tau \mid \varvec{\omega }, r) \\&\quad = \exp \left( - \frac{ |\varvec{\omega }|^2 \, (\gamma (u)+1) }{4 r} \right) \qquad \varvec{\omega }\text {-a.e.}, \end{aligned}$$

or equivalently

$$\begin{aligned} \int _{\mathbb {R}}e^{ \textstyle i u \tau } \, F_T ( d \tau \mid \varvec{\omega }, r) = \exp \left( - \frac{ |\varvec{\omega }|^2 \gamma (u)}{4 r} \right) \qquad \varvec{\omega }\text {-a.e.}, \end{aligned}$$

which is precisely (15). \(\square \)
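As an aside, the identification of \(F_S\) can be checked numerically: the density (24) is that of a \({\mathcal {N}}(0, 2r I_k)\) random vector, whose characteristic function is \(\exp (-r|\varvec{h}|^2)\), in agreement with (14). A minimal Python sketch, with arbitrary values of \(r\), \(k\) and \(\varvec{h}\):

```python
import numpy as np

rng = np.random.default_rng(0)
r, k = 0.8, 2                     # arbitrary mixing value r and spatial dimension k
h = np.array([0.3, -0.5])         # arbitrary spatial lag

# f_S(. | r) is the density of a N(0, 2r I_k) random vector
omega = rng.normal(0.0, np.sqrt(2.0 * r), size=(500_000, k))

# empirical characteristic function E[exp(i <h, Omega>)]; the imaginary part
# vanishes by symmetry, so only the cosine part is averaged
ecf = np.mean(np.cos(omega @ h))
print(ecf, np.exp(-r * h @ h))    # both close to exp(-r |h|^2) = 0.762
```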

A.3 On the generic approach

The aim of this section is to show that \(\sum _{n \ge 1} X_{\mathrm{T}_n}\) and \(\Omega ^T (\varvec{\omega },r)\) have the same distribution. This is done by comparing their Fourier transforms. Recall that the spectral measure of the variogram is positive, symmetric, without an atom at the origin, and satisfies the integrability property

$$\begin{aligned} \int _{{\mathbb {R}}} \frac{x^2 \, {\mathcal {X}}(dx)}{1 + x^2} = A < +\infty . \end{aligned}$$

Let us start with

$$\begin{aligned} {\mathcal {X}}(dx) = \int _{{\mathbb {R}}_+} \exp \left( - t \frac{x^2}{1 + x^2} \right) \, \frac{x^2 \, {\mathcal {X}}(dx)}{1 + x^2} \, dt. \end{aligned}$$

Because of (27), the positive function \(\theta \) defined on \({\mathbb {R}}_+\) by

$$\begin{aligned} \theta (t) = \int _{{\mathbb {R}}} \exp \left( - t \frac{x^2}{1 + x^2} \right) \, \frac{x^2 \, {\mathcal {X}}(dx)}{1 + x^2} \end{aligned}$$

is upper bounded by A. It follows that, for each \(t>0\), the measure

$$\begin{aligned} {\mathcal {X}}_t (dx) = \frac{1}{\theta (t)} \exp \left( - t \frac{x^2}{1 + x^2} \right) \, \frac{x^2 \, {\mathcal {X}}(dx)}{1 + x^2} \end{aligned}$$

is a probability measure on \({\mathbb {R}}\). This measure is symmetric, and satisfies

$$\begin{aligned} {\mathcal {X}}(dx) = \int _{{\mathbb {R}}_+} {\mathcal {X}}_t (dx) \, \theta (t) \, dt. \end{aligned}$$

Consider now a Poisson point process \(\bigl (\mathrm{T}_n: n \ge 1 \bigr )\) with intensity \(\lambda (t) = \lambda \, \theta (t)\) on \({\mathbb {R}}_+\) (\(\lambda \) is put here as a short notation for \(\frac{|\varvec{\omega }|^2}{4r}\)). Since \(\lambda (t)\) is upper bounded by \(\lambda A\), this process has no accumulation point. Consider also a family \(\bigl ( X_t: t \in {\mathbb {R}}_+ \bigr )\) of independent random variables, with \(X_t\) being distributed as \({\mathcal {X}}_t\). Because \({\mathcal {X}}_t\) is symmetric, the Fourier transform of \(X_t\) can be written as

$$\begin{aligned} {\mathbb {E}}\bigl [ \exp ( i u X_t ) \bigr ] = \int _{{\mathbb {R}}} \cos (u x) \, {\mathcal {X}}_t (dx). \end{aligned}$$

In what follows, we calculate the Fourier transform of \(\mathrm{T}= \sum _{n \ge 1} X_{\mathrm{T}_n}\). Denoting by \( \Lambda (t_0)\) the integral of \(\lambda (t)\) on \(]0,t_0[\), we have

$$\begin{aligned}&{\mathbb {E}}\bigl [ \exp ( i u \mathrm{T}) \bigr ]\\&\quad = \lim _{t_0 \longrightarrow +\infty } \sum _{n=0}^{+\infty } \exp \bigl ( - \Lambda (t_0) \bigr ) \, \frac{\Lambda ^n (t_0)}{n!} \\&\qquad \times \left[ \int _0^{t_0} \frac{\lambda (t)}{\Lambda (t_0)} \, {\mathbb {E}}\bigl [ \exp ( i u X_t ) \bigr ] \, dt\right] ^n \\&\quad = \lim _{t_0 \longrightarrow +\infty } \exp \left( \int _0^{t_0} {\mathbb {E}}\bigl [ \exp ( i u X_t ) - 1 \bigr ] \, \lambda (t)\, dt \right) \\&\quad = \exp \left( \int _0^{+\infty } {\mathbb {E}}\bigl [ \exp ( i u X_t ) - 1 \bigr ] \, \lambda (t) \, dt \right) . \end{aligned}$$

This implies, owing to (29),

$$\begin{aligned} {\mathbb {E}}\bigl [ \exp ( i u \mathrm{T}) \bigr ] = \exp \left( \int _0^{+\infty } \int _{{\mathbb {R}}} \bigl [ \cos (u x) - 1 \bigr ] \, {\mathcal {X}}_t (dx) \, \lambda (t) \, dt \right) . \end{aligned}$$

Permuting the integrals and replacing \(\lambda (t)\) by its expression, we obtain

$$\begin{aligned} {\mathbb {E}}\bigl [ \exp ( i u \mathrm{T}) \bigr ] =\exp \left( \frac{ \vert \varvec{\omega }|^2}{4 r} \, \int _{{\mathbb {R}}} \bigl [ \cos (u x) - 1 \bigr ] \, {\mathcal {X}}(dx) \right) . \end{aligned}$$

Finally, the spectral representation (7) of \(\gamma \) gives

$$\begin{aligned} {\mathbb {E}}\bigl [ \exp ( i u \mathrm{T}) \bigr ] =\exp \left( - \frac{ \vert \varvec{\omega }|^2}{4 r} \gamma (u) \right) , \end{aligned}$$

which is precisely the Fourier transform (15) of \(\Omega ^T (\varvec{\omega },r)\). \(\square \)

A.4 Implementing the generic approach for logarithmic variograms

The construction of \(\theta \) and \({\mathcal {X}}_t\) proposed in Appendix A.3 is not necessarily unique. Starting from

$$\begin{aligned} {\mathcal {X}}(d x)= & {} \frac{\exp (- a |x |)}{ |x |\, \ln a^2} \\= & {} \frac{1}{\ln a^2} \int _a^{+\infty } \exp \bigl (- t \, |x |\bigr ) \, d t , \end{aligned}$$

it appears that a possible decomposition of the form (16) can be obtained by taking

$$\begin{aligned} \theta (t) = \frac{2}{t \, \ln a^2} \, 1_{t \ge a} =\frac{1}{t \, \ln a} \, 1_{t \ge a} \end{aligned}$$

and
$$\begin{aligned} {\mathcal {X}}_t (dx) = \frac{1}{2} t \, e^{ - t |x \vert } \, dx, \qquad t > a . \end{aligned}$$

Consider now a Poisson point process \( \bigl ( \mathrm{T}_n : \, n \ge 1 \bigr ) \) on \({\mathbb {R}}_+\) with intensity function

$$\begin{aligned} \lambda (t) = \frac{ |\varvec{\omega }|^2}{4 \, r } \, \theta (t) := \frac{\lambda }{t} \, 1_{t \ge a}. \end{aligned}$$

A simple approach to simulate a nonhomogeneous Poisson point process is given by Devroye (1986, Chapter 6). It consists in evaluating the inverse of the primitive of the intensity function \(\lambda (t)\) that vanishes at a, i.e. \(\Lambda (t) = \lambda \, \ln ( t / a) \), at the points of a homogeneous Poisson process with unit intensity on the positive half-line, as shown in Fig. 3.

In this figure, the \(U_i\)’s are independent standard uniform variables and are related to the Poisson time \(\mathrm{T}_n\) by the formula \( \Lambda ( \mathrm{T}_n ) = \lambda \, \ln ( \mathrm{T}_n / a ) = - \ln \bigl ( U_1 \cdots U_n \bigr )\), which gives

$$\begin{aligned} \mathrm{T}_n = \frac{a}{(U_1 \cdots U_n )^{1 / \lambda }} . \end{aligned}$$
Fig. 3 Simulation of a nonhomogeneous Poisson point process by inversion of a homogeneous Poisson process with unit intensity
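In code, this inversion amounts to a cumulative product of uniform variables. A short Python sketch (the helper name poisson_times is ours; the values of a and \(\lambda \) are arbitrary):

```python
import numpy as np

def poisson_times(a, lam, n, rng):
    """First n points of the Poisson process with intensity (lam / t) * 1_{t >= a},
    obtained by inverting Lambda(t) = lam * ln(t / a) at unit-rate Poisson points."""
    u = rng.uniform(size=n)
    # Lambda(T_n) = -ln(U_1 ... U_n)  <=>  T_n = a / (U_1 ... U_n)^(1 / lam)
    return a / np.cumprod(u) ** (1.0 / lam)

rng = np.random.default_rng(2)
t = poisson_times(a=2.0, lam=1.0, n=10, rng=rng)
print(t)   # an increasing sequence starting above a = 2
```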

Now, recall that \(\Omega ^T(\varvec{\omega },r)\) has the same distribution as \(\sum _{n=1}^{+\infty } X_{\mathrm{T}_n}\), where each \(X_t\) is distributed as \({\mathcal {X}}_t\). Because the \(X_t\)’s are independent, we have

$$\begin{aligned} \mathrm {Var} \Bigl [ \sum _{n \ge 1} X_{\mathrm{T}_n} \Bigr ] =\sum _{n \ge 1} \mathrm {Var} \bigl [ X_{\mathrm{T}_n} \bigr ]. \end{aligned}$$

Moreover, (30) implies

$$\begin{aligned} \mathrm {Var} \bigl [ X_{\mathrm{T}_n} \bigr ]= & {} {\mathbb {E}}\Bigl [ \mathrm {Var} \bigl [ X_{\mathrm{T}_n} \mid \mathrm{T}_n \bigr ] \Bigr ] = {\mathbb {E}}\Bigl [ \frac{2}{\mathrm{T}^2_n} \Bigr ]\\= & {} \frac{2}{a^2} \, {\mathbb {E}}\Bigl [ (U_1 \cdots U_n)^{2 / \lambda } \Bigr ] =\frac{2}{a^2} \, \left( \frac{\lambda }{\lambda + 2}\right) ^n. \end{aligned}$$

Hence
$$\begin{aligned} \mathrm {Var} \Bigl [ \sum _{n \ge 1} X_{\mathrm{T}_n} \Bigr ] =\frac{2}{a^2} \, \sum _{n \ge 1} \left( \frac{\lambda }{\lambda + 2}\right) ^n = \frac{\lambda }{a^2}. \end{aligned}$$

Similarly, if the series is truncated at order \(n_0\), then the same calculation leads to the residual variance

$$\begin{aligned} \mathrm {Var} \Bigl [ \sum _{n \ge n_0+1} X_{\mathrm{T}_n} \Bigr ]= & {} \frac{2}{a^2} \sum _{n \ge n_0+1} \left( \frac{\lambda }{\lambda + 2}\right) ^n \\= & {} \left( \frac{\lambda }{\lambda + 2} \right) ^{n_0} \frac{\lambda }{a^2}. \end{aligned}$$

Let \(\varepsilon > 0\) be arbitrarily small. From the previous calculations, it follows that

$$\begin{aligned} \frac{ \mathrm {Var} \Bigl [ \sum _{n \ge n_0+1} X_{\mathrm{T}_n} \Bigr ]}{\mathrm {Var} \Bigl [ \sum _{n \ge 1} X_{\mathrm{T}_n} \Bigr ] }< \varepsilon \ \Longleftrightarrow \ \left( \frac{\lambda }{\lambda + 2} \right) ^{n_0}< \varepsilon \ \Longleftrightarrow \ n_0 > \frac{- \ln \varepsilon }{\ln ( 1 + 2 / \lambda ) }. \end{aligned}$$

\(\square \)

B Proofs for the substitution approach

Proof of Theorem 3

For \((\varvec{x},t) \in {\mathbb {R}}^k \times {\mathbb {R}}\), \(Z(\varvec{x},t)\) conditional on \((R,\tilde{\varvec{\Omega }},U,W)\) (i.e. only letting \(\Phi \) vary randomly) has a zero expectation, insofar as it is proportional to the cosine of a random variable uniformly distributed on an interval of length \(2 \pi \). The prior expectation of \(Z(\varvec{x},t)\) is therefore zero:

$$\begin{aligned} {\mathbb {E}}[Z(\varvec{x},t) ] = {\mathbb {E}}[ {\mathbb {E}}[ Z(\varvec{x},t) \mid R, \tilde{\varvec{\Omega }}, U, W ] ] = 0. \end{aligned}$$

Let us now calculate the covariance between the random variables \(Z(\varvec{x},t)\) and \(Z(\varvec{x}^{\prime },t^{\prime })\), with \((\varvec{x},t) \in {\mathbb {R}}^k \times {\mathbb {R}}\) and \((\varvec{x}^{\prime },t^{\prime }) \in {\mathbb {R}}^k \times {\mathbb {R}}\):

$$\begin{aligned}&{\mathbb {E}}[ Z(\varvec{x},t) Z(\varvec{x}^{\prime },t^{\prime })] \\&\quad = 2 {\mathbb {E}}\bigg [-\ln (U) \cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }}, \varvec{x}\rangle + \frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} W(t) + \Phi \right) \\&\qquad \times \cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }}, \varvec{x}^{\prime } \rangle + \frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} W(t^{\prime }) + \Phi \right) \bigg ] \\&\quad = 2 {\mathbb {E}}\bigg [\cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }},\varvec{x}\rangle + \frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} W(t) + \Phi \right) \\&\qquad \times \cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }}, \varvec{x}^{\prime } \rangle + \frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} W(t^{\prime }) + \Phi \right) \bigg ]. \end{aligned}$$

The last equality stems from the fact that \(-\ln (U)\) is an exponential random variable with mean 1 and is independent of \((R,\tilde{\varvec{\Omega }},W)\). Using the product-to-sum trigonometric identities, one can write the product of cosines as half the sum of two cosines, namely:

  • the cosine of the difference: \(\cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }}, \varvec{x}-\varvec{x}^{\prime } \rangle + \frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} (W(t)-W(t^{\prime })) \right) \)

  • the cosine of the sum: \(\cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }}, \varvec{x}+\varvec{x}^{\prime } \rangle + \frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} (W(t)+W(t^{\prime })) + 2 \Phi \right) \).

However, because \(\Phi \) is uniformly distributed on \((0, 2\pi )\) and independent of \((R, \tilde{\varvec{\Omega }},W)\), the expectation of the cosine of the sum is 0. It remains

$$\begin{aligned}&{\mathbb {E}}[Z(\varvec{x},t) Z(\varvec{x}^{\prime },t^{\prime })]\\&\quad = {\mathbb {E}}\bigg [\cos \left( \sqrt{2R} \, \langle \tilde{\varvec{\Omega }}, \varvec{x}-\varvec{x}^{\prime } \rangle +\frac{|\tilde{\varvec{\Omega }}|}{\sqrt{2}} (W(t)-W(t^{\prime })) \right) \bigg ]. \end{aligned}$$

The increment \(W(t)-W(t^{\prime })\) is a Gaussian random variable with zero mean and variance \(2 \gamma (t-t^{\prime })\) and is independent of \((R,\tilde{\varvec{\Omega }})\), i.e.:

$$\begin{aligned} W(t)-W(t^{\prime }) = \sqrt{2 \gamma (t-t^{\prime })} Y, \end{aligned}$$

with \(Y \sim {\mathcal {N}} (0,1)\) independent of \((R,\tilde{\varvec{\Omega }})\). Defining \(\varvec{h}= \varvec{x}- \varvec{x}^{\prime }\) and \(u=t - t^{\prime }\) and denoting by g the standard Gaussian probability density, one therefore obtains:

$$\begin{aligned}&{\mathbb {E}}[ Z(\varvec{x},t) Z(\varvec{x}^{\prime },t^{\prime })] \nonumber \\&\quad = \int _{{\mathbb {R}}_+} \int _{{\mathbb {R}}^k} \int _{{\mathbb {R}}} \cos \left( \sqrt{2r} \, \langle \tilde{\varvec{\omega }}, \varvec{h}\rangle + |\tilde{\varvec{\omega }} |\sqrt{\gamma (u)} y \right) \nonumber \\&\qquad \times g(y) dy \frac{1}{(2\pi )^{k/2}} \, \exp \left( -\frac{|\tilde{\varvec{\omega }} |^2}{2} \right) d \tilde{\varvec{\omega }} \mu (dr) \nonumber \\&\quad = \frac{1}{(2\pi )^{k/2}} \int _{{\mathbb {R}}_+} \int _{{\mathbb {R}}^k} \cos \left( \sqrt{2r} \, \langle \tilde{\varvec{\omega }}, \varvec{h}\rangle \right) \nonumber \\&\qquad \times \int _{{\mathbb {R}}} \cos \left( |\tilde{\varvec{\omega }}|\sqrt{\gamma (u)} y \right) g(y) dy \, \exp \left( -\frac{|\tilde{\varvec{\omega }} |^2}{2} \right) d\tilde{\varvec{\omega }} \mu (dr) . \end{aligned}$$

The last equality in (31) stems from the angle-sum trigonometric identity and the fact that \(\langle \tilde{\varvec{\omega }}, \varvec{h}\rangle \) is an odd function of \(\tilde{\varvec{\omega }}\) and \(|\tilde{\varvec{\omega }}|\sqrt{\gamma (u)} y\) is an even function of \(\tilde{\varvec{\omega }}\).

The simulated random field Z is therefore second-order stationary, since its expectation is identically zero and the covariance between any two variables \(Z(\varvec{x},t)\) and \(Z(\varvec{x}^{\prime },t^{\prime })\) only depends on \(\varvec{h}= \varvec{x}- \varvec{x}^{\prime }\) and \(u = t - t^{\prime }\). Up to a multiplicative factor, the last integral in (31) appears as the Fourier transform of the standard Gaussian probability density g(y) on \({\mathbb {R}}\). Specifically:

$$\begin{aligned} \int _{{\mathbb {R}}} \cos \left( |\tilde{\varvec{\omega }}|\sqrt{\gamma (u)} y \right) g(y) dy = \exp \left( -|\tilde{\varvec{\omega }}|^2 \frac{\gamma (u)}{2} \right) . \end{aligned}$$

It follows that
$$\begin{aligned}&{\mathbb {E}}[ Z(\varvec{x},t) Z(\varvec{x}^{\prime },t^{\prime })]\nonumber \\&\quad = \frac{1}{(2\pi )^{k/2}} \int _{{\mathbb {R}}_+} \int _{{\mathbb {R}}^k} \cos (\sqrt{2r} \, \langle \tilde{\varvec{\omega }},\varvec{h}\rangle ) \nonumber \\&\qquad \times \exp \left( - |\tilde{\varvec{\omega }}|^2 \frac{\gamma (u)+1}{2} \right) d\tilde{\varvec{\omega }} \mu (dr) \nonumber \\&\quad = C(\varvec{h},u). \end{aligned}$$

The last equality in (32) stems from (11) and completes the proof. \(\square \)
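Theorem 3 can also be validated by Monte Carlo with concrete, purely illustrative choices: \(\mu = \delta _1\) (so \(R=1\) and the spatial margin is Gaussian), \(k=1\), and \(\gamma (u)=u^2\), realized by the intrinsic random field \(W(t)=\sqrt{2}\, t\, Y\) with \(Y\) standard normal, for which the target covariance reduces to \(C(h,u) = \exp \bigl (-h^2/(u^2+1)\bigr )/\sqrt{u^2+1}\):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# illustrative choices (not the only ones allowed by Theorem 3):
# mu = delta_1, so R = 1 and the spatial structure is Gaussian;
# gamma(u) = u^2, realized by the intrinsic field W(t) = sqrt(2) * t * Y
R = 1.0
omega = rng.normal(size=n)                    # Omega-tilde ~ N(0, 1), k = 1
Y = rng.normal(size=n)
phi = rng.uniform(0.0, 2.0 * np.pi, size=n)   # uniform phase
amp = np.sqrt(2.0 * rng.exponential(size=n))  # sqrt(-2 ln U)

def Z(x, t):
    W = np.sqrt(2.0) * t * Y
    return amp * np.cos(np.sqrt(2.0 * R) * omega * x
                        + np.abs(omega) / np.sqrt(2.0) * W + phi)

h, u = 0.5, 0.7                               # arbitrary space and time lags
cov_mc = np.mean(Z(0.0, 0.0) * Z(h, u))
cov_th = np.exp(-h**2 / (u**2 + 1.0)) / np.sqrt(u**2 + 1.0)
print(cov_mc, cov_th)                         # both close to 0.69
```

The empirical mean of \(Z(\varvec{x},t)Z(\varvec{x}^{\prime },t^{\prime })\) agrees with the closed-form Gneiting covariance, and the empirical variance \({\mathbb {E}}[Z^2]\) is 1, as required by \(C(\varvec{0},0)=1\).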


Cite this article

Allard, D., Emery, X., Lacaux, C. et al. Simulating space-time random fields with nonseparable Gneiting-type covariance functions. Stat Comput 30, 1479–1495 (2020).



Keywords: Spectral simulation · Spectral measure · Substitution random field · Gaussian random field