
Stochastic synthesis approximating any process dependence and distribution

  • Original Paper
  • Published in Stochastic Environmental Research and Risk Assessment

Abstract

An extension of the symmetric-moving-average (SMA) scheme is presented for the stochastic synthesis of a stationary process that approximates any dependence structure and marginal distribution. The extended SMA model can exactly preserve an arbitrary second-order structure as well as the high-order moments of a process, thus enabling a better approximation of any type of dependence (through the second-order statistics) and of the marginal distribution function (through the statistical moments), respectively. Interestingly, by explicitly preserving the coefficient of kurtosis, it can also simulate certain aspects of intermittency, which often characterize geophysical processes. Several applications with alternative hypothetical marginal distributions, as well as with real-world processes such as precipitation, wind speed and grid turbulence, highlight the scheme's wide range of applicability in stochastic generation and Monte Carlo analysis. Particular emphasis is given to turbulence, in an attempt to simulate in a simple way several of its characteristics that are regarded as puzzles.

References

  • Aksoy H, Toprak ZF, Aytek A, Ünal NE (2004) Stochastic generation of hourly mean wind speed data. Renew Energy 29:2111–2131

  • Balestrino A, Caiti A, Crisostomi E (2006) Efficient numerical approximation of maximum entropy estimates. Int J Control 79(9):1145–1155

  • Barndorff-Nielsen OE (1978) Hyperbolic distributions and distributions on hyperbolae. Scand J Stat 5:151–157

  • Batchelor GK, Townsend AA (1949) The nature of turbulent motion at large wave-numbers. Proc R Soc Lond A 199:238–255

  • Castro JJ, Carsteanu AA, Fuentes JD (2011) On the phenomenology underlying Taylor's hypothesis in atmospheric turbulence. Rev Mex Fis 57(1):60–64

  • Cerutti S, Meneveau C (2000) Statistics of filtered velocity in grid and wake turbulence. Phys Fluids 12(1):143–165

  • Chhikara R, Folks L (1989) The inverse Gaussian distribution: theory, methodology and applications. Marcel Dekker, New York

  • Conradsen K, Nielsen LB, Prahm LP (1984) Review of Weibull statistics for estimation of wind speed distributions. J Clim Appl Meteorol 23:1173–1183

  • Cordeiro GM, de Castro M (2011) A new family of generalized distributions. J Stat Comput Simul 81:883–898

  • Deligiannis H, Dimitriadis P, Daskalou O, Dimakos Y, Koutsoyiannis D (2016) Global investigation of double periodicity of hourly wind speed for stochastic simulation; application in Greece. Energy Procedia 97:278–285. https://doi.org/10.1016/j.egypro.2016.10.001

  • Dimitriadis P (2017) Hurst–Kolmogorov dynamics in hydrometeorological processes and in the microscale of turbulence. Ph.D. thesis, National Technical University of Athens, Athens, 167 pp

  • Dimitriadis P, Koutsoyiannis D (2015a) Climacogram versus autocovariance and power spectrum in stochastic modelling for Markovian and Hurst–Kolmogorov processes. Stoch Environ Res Risk Assess 29(6):1649–1669

  • Dimitriadis P, Koutsoyiannis D (2015b) Application of stochastic methods to double cyclostationary processes for hourly wind speed simulation. Energy Procedia 76:406–411. https://doi.org/10.1016/j.egypro.2015.07.851

  • Dimitriadis P, Koutsoyiannis D (2016) A parsimonious stochastic model for wind variability. In: 30 Years of Nonlinear Dynamics in Geosciences, Rhodes, Greece

  • Dimitriadis P, Koutsoyiannis D, Papanicolaou P (2016a) Stochastic similarities between the microscale of turbulence and hydrometeorological processes. Hydrol Sci J 61(9):1623–1640. https://doi.org/10.1080/02626667.2015.1085988

  • Dimitriadis P, Koutsoyiannis D, Tzouka K (2016b) Predictability in dice motion: how does it differ from hydrometeorological processes? Hydrol Sci J 61(9):1623–1640

  • Dimitriadis P, Gournary N, Koutsoyiannis D (2016c) Markov vs. Hurst–Kolmogorov behaviour identification in hydroclimatic processes. In: European Geosciences Union General Assembly 2016, Geophysical Research Abstracts, vol 18, Vienna, EGU2016-14577-4, European Geosciences Union

  • Dimitriadis P, Iliopoulou T, Tyralis H, Koutsoyiannis D (2017) Identifying the dependence structure of a process through pooled time series analysis. In: IAHS 2017 Scientific Assembly, 10–14 July 2017, Port Elizabeth, South Africa, Water and Development: scientific challenges in addressing societal issues, W17: Stochastic hydrology: simulation and disaggregation models

  • Efstratiadis A, Dialynas Y, Kozanis S, Koutsoyiannis D (2014) A multivariate stochastic model for the generation of synthetic time series at multiple time scales reproducing long-term persistence. Environ Model Softw 62:139–152. https://doi.org/10.1016/j.envsoft.2014.08.017

  • Fernandez B, Salas JD (1986) Periodic gamma autoregressive processes for operational hydrology. Water Resour Res 22(10):1385–1396

  • Frechet M (1951) Sur les tableaux de corrélation dont les marges sont données. Ann Univ Lyon Sect A 9:53–77

  • Frisch U (2006) Turbulence: the legacy of A. N. Kolmogorov. Cambridge University Press, Cambridge

  • Gneiting T (2000) Power-law correlations, related models for long-range dependence and their simulation. J Appl Prob 37(4):1104–1109

  • Gneiting T, Schlather M (2004) Stochastic models that separate fractal dimension and the Hurst effect. SIAM Rev 46(2):269–282

  • Gneiting T, Ševčíková H, Percival DB (2012) Estimators of fractal dimension: assessing the roughness of time series and spatial data. Stat Sci 27(2):247–277

  • Halliwell LJ (2013) Classifying the tails of loss distributions. CAS E-Forum, vol 2

  • Hoeffding W (1940) Scale-invariant correlation theory. In: Fisher NI, Sen PK (eds) The collected works of Wassily Hoeffding. Springer, New York, pp 57–107

  • Ibragimov R, Lentzas G (2017) Copulas and long memory. Probab Surv 14:289–327. https://doi.org/10.1214/14-PS233

  • Iliopoulou T, Papalexiou SM, Markonis Y, Koutsoyiannis D (2016) Revisiting long-range dependence in annual precipitation. J Hydrol. https://doi.org/10.1016/j.jhydrol.2016.04.015

  • Jaynes ET (1957) Information theory and statistical mechanics. Phys Rev 106:620

  • Kang HS, Chester S, Meneveau C (2003) Decaying turbulence in an active-grid-generated flow and comparisons with large-eddy simulation. J Fluid Mech 480:129–160

  • Khan MS, King R, Hudson IL (2016) Transmuted Kumaraswamy distribution. Stat Transit New Ser 17(2):1–28

  • Klugman SA, Panjer HH, Willmot GE (1998) Loss models: from data to decisions. Wiley, New York

  • Kolmogorov AN (1931) Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math Ann 104:415–458 (English translation: On analytical methods in probability theory. In: Shiryayev AN (ed) (1992) Selected works of A. N. Kolmogorov, vol 2: Probability theory and mathematical statistics. Kluwer, Dordrecht, pp 62–108)

  • Kolmogorov AN (1933) Grundbegriffe der Wahrscheinlichkeitsrechnung. Ergebnisse der Mathematik, Berlin (2nd English edn (1956): Foundations of the theory of probability, 84 pp. Chelsea, New York)

  • Kolmogorov AN (1941a) The local structure of turbulence in incompressible viscous fluid for very large Reynolds number. Dokl Akad Nauk SSSR 30:299–303

  • Kolmogorov AN (1941b) On the decay of isotropic turbulence in an incompressible viscous flow. Dokl Akad Nauk SSSR 31:538–540

  • Kolmogorov AN (1941c) Dissipation energy in locally isotropic turbulence. Dokl Akad Nauk SSSR 32:16–18

  • Koutsoyiannis D (2000) A generalized mathematical framework for stochastic simulation and forecast of hydrologic time series. Water Resour Res 36(6):1519–1533

  • Koutsoyiannis D (2002) The Hurst phenomenon and fractional Gaussian noise made easy. Hydrol Sci J 47(4):573–595

  • Koutsoyiannis D (2003) Climate change, the Hurst phenomenon, and hydrological statistics. Hydrol Sci J 48(1):3–24

  • Koutsoyiannis D (2004a) Statistics of extremes and estimation of extreme rainfall, 1, Theoretical investigation. Hydrol Sci J 49(4):575–590

  • Koutsoyiannis D (2004b) Statistics of extremes and estimation of extreme rainfall, 2, Empirical investigation of long rainfall records. Hydrol Sci J 49(4):591–610

  • Koutsoyiannis D (2005) Uncertainty, entropy, scaling and hydrological stochastics, 1, Marginal distributional properties of hydrological processes and state scaling. Hydrol Sci J 50(3):381–404

  • Koutsoyiannis D (2010) HESS Opinions "A random walk on water". Hydrol Earth Syst Sci 14:585–601

  • Koutsoyiannis D (2011) Hurst–Kolmogorov dynamics as a result of extremal entropy production. Phys A 390(8):1424–1432

  • Koutsoyiannis D (2014) Entropy: from thermodynamics to hydrology. Entropy 16(3):1287–1314. https://doi.org/10.3390/e16031287

  • Koutsoyiannis D (2016) Generic and parsimonious stochastic modelling for hydrology and beyond. Hydrol Sci J 61(2):225–244

  • Koutsoyiannis D (2017) Entropy production in stochastics. Entropy 19(11):581

  • Koutsoyiannis D, Manetas A (1996) Simple disaggregation by accurate adjusting procedures. Water Resour Res 32(7):2105–2117. https://doi.org/10.1029/96WR00488

  • Koutsoyiannis D, Montanari A (2015) Negligent killing of scientific concepts: the stationarity case. Hydrol Sci J 60(7–8):1174–1183

  • Koutsoyiannis D, Onof C, Wheater HS (2003) Multivariate rainfall disaggregation at a fine timescale. Water Resour Res 39(7):1173. https://doi.org/10.1029/2002wr001600

  • Koutsoyiannis D, Yao H, Georgakakos A (2008) Medium-range flow prediction for the Nile: a comparison of stochastic and deterministic methods. Hydrol Sci J 53(1):142–164

  • Koutsoyiannis D, Dimitriadis P, Lombardo F, Stevens S (2018) From fractals to stochastics: seeking theoretical consistency in analysis of geophysical data. In: Tsonis A (ed) Advances in nonlinear geosciences. Springer, New York. https://doi.org/10.1007/978-3-319-58895-7

  • Kraichnan RH (1959) The structure of isotropic turbulence at very high Reynolds numbers. J Fluid Mech 5:497–543

  • Krajewski WF, Kruger A, Nespor V (1998) Experimental and numerical studies of small-scale rainfall measurements and variability. Water Sci Technol 37:131–138

  • Kumaraswamy P (1980) A generalized probability density function for double-bounded random processes. J Hydrol 46(1–2):79–88

  • Langousis A, Koutsoyiannis D (2006) A stochastic methodology for generation of seasonal time series reproducing overyear scaling behaviour. J Hydrol 322:138–154

  • Lavergnat J (2016) On the generation of colored non-Gaussian time sequences. HAL preprint hal-01399446

  • Lo Brano V, Orioli A, Ciulla G, Culotta S (2011) Quality of wind speed fitting distributions for the urban area of Palermo, Italy. Renew Energy 36(3):1026–1039

  • Lombardo F, Volpi E, Koutsoyiannis D (2012) Rainfall downscaling in time: theoretical and empirical comparison between multifractal and Hurst–Kolmogorov discrete random cascades. Hydrol Sci J 57:1052–1066

  • Lombardo F, Volpi E, Koutsoyiannis D, Papalexiou SM (2014) Just two moments! A cautionary note against use of high-order moments in multifractal models in hydrology. Hydrol Earth Syst Sci 18:243–255

  • Lombardo F, Volpi E, Koutsoyiannis D, Serinaldi F (2017) A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall. Water Resour Res. https://doi.org/10.1002/2017WR020529

  • Markonis Y, Koutsoyiannis D (2013) Climatic variability over time scales spanning nine orders of magnitude: connecting Milankovitch cycles with Hurst–Kolmogorov dynamics. Surv Geophys 34(2):181–207

  • Nataf A (1962) Statistique mathématique: détermination des distributions de probabilités dont les marges sont données. C R Acad Sci Paris 255:42–43

  • Nelsen RB (2006) An introduction to copulas, 2nd edn. Springer Series in Statistics, Springer, New York

  • Nespor V, Sevruk B (1999) Estimation of wind-induced error of rainfall gauge measurements using a numerical simulation. J Atmos Ocean Technol 16:450–464

  • O'Connell PE, Koutsoyiannis D, Lins HF, Markonis Y, Montanari A, Cohn T (2016) The scientific legacy of Harold Edwin Hurst (1880–1978). Hydrol Sci J 61:1571–1590

  • Papadopoulos V, Giovanis DG (2018) Stochastic finite element methods: an introduction. Springer, 138 pp

  • Papoulis A (1991) Probability, random variables, and stochastic processes, 3rd edn. McGraw-Hill, New York

  • Pearson K (1930) On a new theory of progressive evolution. Ann Eugen 4(1–2):1–40

  • Pope SB (2000) Turbulent flows. Cambridge University Press, Cambridge

  • Serinaldi F, Lombardo F (2017a) General simulation algorithm for autocorrelated binary processes. Phys Rev E 95:023312

  • Serinaldi F, Lombardo F (2017b) BetaBit: a fast generator of autocorrelated binary processes for geophysical research. EPL 118(3):30007

  • She ZS, Leveque E (1994) Universal scaling laws in fully developed turbulence. Phys Rev Lett 72:336

  • Sklar A (1959) Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris 8:229–231

  • Taylor GI (1938) The spectrum of turbulence. Proc R Soc Lond A 164:476

  • Tsekouras G, Koutsoyiannis D (2014) Stochastic analysis and simulation of hydrometeorological processes associated with wind and solar energy. Renew Energy 63:624–633

  • Tsoukalas I, Efstratiadis A, Makropoulos C (2018) Stochastic periodic autoregressive to anything (SPARTA): modelling and simulation of cyclostationary processes with arbitrary marginal distributions. Water Resour Res 54(1):161–185. https://doi.org/10.1002/2017WR021394

  • Tyralis H, Koutsoyiannis D (2011) Simultaneous estimation of the parameters of the Hurst–Kolmogorov stochastic process. Stoch Environ Res Risk Assess 25:21–33

  • Tyralis H, Koutsoyiannis D, Kozanis S (2013) An algorithm to construct Monte Carlo confidence intervals for an arbitrary function of probability distribution parameters. Comput Stat 28(4):1501–1527

  • Villarini G, Mandapaka PV, Krajewski WF, Moore RJ (2008) Rainfall and sampling uncertainties: a rain gauge perspective. J Geophys Res Atmos 113:D11102. https://doi.org/10.1029/2007JD009214

  • Von Karman T (1948) The local structure of atmospheric turbulence. Dokl Akad Nauk SSSR 67:643

  • Wilczek M, Daitche A, Friedrich R (2011) On the velocity distribution in homogeneous isotropic turbulence: correlations and deviations from Gaussianity. J Fluid Mech 676:191–217

Acknowledgements

The authors are grateful to the Editor, George Christakos, the anonymous Associate Editor and the three Reviewers, as well as to T. Iliopoulou, for their efforts, useful comments and suggestions that helped us improve the paper.

Author information

Corresponding author

Correspondence to Panayiotis Dimitriadis.

Appendices

Appendix A

Here, we describe how the SMA scheme can preserve an approximation of the marginal distribution of a process through the preservation of high-order moments. Although the scheme can preserve any number of moments, here we give the analytical solution for moments up to the fourth (corresponding to kurtosis), and also present the fifth raw moment for illustration. Assuming \(\mathrm{E}\left[\underline{x}_{i}\right] = \mathrm{E}\left[\underline{v}\right] = 0\), the raw moments are identical to the corresponding central moments; the pth moment can then be expressed through the SMA scheme as:

$$\mathrm{E}\left[\underline{x}_{i}^{p}\right] = \mathrm{E}\left[\left(\sum_{j=-l}^{l} a_{|j|}\,\underline{v}_{i+j}\right)^{p}\right]$$
(24)

Therefore, assuming also that \(\mathrm{E}\left[\underline{v}^{2}\right] = 1\), the second and third raw moments can be expressed as (Koutsoyiannis 2000):

$$\mathrm{E}\left[\underline{x}^{2}\right] = a_{0}^{2} + 2\sum_{j=1}^{l} a_{j}^{2}$$
(25)
$$\mathrm{E}\left[\underline{x}^{3}\right] = \left(a_{0}^{3} + 2\sum_{j=1}^{l} a_{j}^{3}\right)\mathrm{E}\left[\underline{v}^{3}\right]$$
(26)

For a raw moment of order p we use the multinomial theorem:

$$\mathrm{E}\left[\underline{x}_{i}^{p}\right] = \mathrm{E}\left[\left(\sum_{j=-l}^{l} a_{|j|}\,\underline{v}_{i+j}\right)^{p}\right] = \sum_{k_{-l}+k_{1-l}+\cdots+k_{l}=p} \binom{p}{k_{-l},k_{1-l},\ldots,k_{l}}\,\mathrm{E}\left[\prod_{-l\le j\le l}\left(a_{|j|}\,\underline{v}_{i+j}\right)^{k_{j}}\right]$$
(27)

where \(\binom{p}{k_{-l},k_{1-l},\ldots,k_{l}} = \frac{p!}{k_{-l}!\,k_{1-l}!\cdots k_{l}!}\) is a multinomial coefficient.

We notice that all terms containing some \(k_{j} = 1\) vanish (since \(\mathrm{E}\left[\underline{v}\right] = 0\)) and thus, after algebraic manipulations, we obtain for p = 4:

$$\mathrm{E}\left[\underline{x}^{4}\right] = \mathrm{E}\left[\underline{v}^{4}\right]\sum_{j=-l}^{l} a_{|j|}^{4} + 6\sum_{j=-l}^{l-1}\;\sum_{k=j+1}^{l} a_{|j|}^{2}\,a_{|k|}^{2}$$
(28)

After typical algebraic manipulations we derive the expressions for the coefficients of skewness and kurtosis shown in Eqs. (10) and (11), respectively. Also, Eq. (11) can be further simplified for faster calculation to:

$$C_{\mathrm{k},v} = \frac{C_{\mathrm{k},x}\left(a_{0}^{2} + 2\sum_{j=1}^{l} a_{j}^{2}\right)^{2} - 6\sum_{j=1}^{l} a_{j}^{4} - 12\,a_{0}^{2}\sum_{j=1}^{l} a_{j}^{2} - 24\sum_{j=1}^{l-1}\left(a_{j}^{2}\sum_{k=j+1}^{l} a_{k}^{2}\right)}{a_{0}^{4} + 2\sum_{j=1}^{l} a_{j}^{4}}$$
(29)

with l > 1, while for l = 1 the last term of the double sum is zero.
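
To make these relations concrete, a minimal numerical sketch follows (Python with NumPy is assumed; the function names are ours, not part of the paper). It computes the skewness and kurtosis that the white-noise process v must have, via Eqs. (26) and (29), so that the SMA scheme of Eq. (24) reproduces target values for the process x, and then performs the synthesis itself:

import numpy as np

def white_noise_moments(a, Cs_x, Ck_x):
    """Skewness and kurtosis required for the white noise v (with E[v] = 0 and
    E[v^2] = 1) so that the SMA process x attains skewness Cs_x and kurtosis Ck_x;
    a = [a_0, a_1, ..., a_l] are the SMA weights (Eqs. 25, 26 and 29)."""
    a = np.asarray(a, dtype=float)
    a0, aj = a[0], a[1:]
    var_x = a0**2 + 2.0 * np.sum(aj**2)                        # Eq. (25)
    Cs_v = Cs_x * var_x**1.5 / (a0**3 + 2.0 * np.sum(aj**3))   # from Eq. (26)
    cross = sum(aj[j]**2 * np.sum(aj[j + 1:]**2) for j in range(len(aj) - 1))
    Ck_v = (Ck_x * var_x**2 - 6.0 * np.sum(aj**4) - 12.0 * a0**2 * np.sum(aj**2)
            - 24.0 * cross) / (a0**4 + 2.0 * np.sum(aj**4))    # Eq. (29)
    return Cs_v, Ck_v

def sma_synthesis(a, v):
    """Symmetric moving average of Eq. (24): x_i = sum_{j=-l..l} a_|j| v_{i+j}."""
    a = np.asarray(a, dtype=float)
    kernel = np.concatenate([a[::-1], a[1:]])                  # a_l ... a_1 a_0 a_1 ... a_l
    return np.convolve(v, kernel, mode='valid')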

For illustration, we also present the fifth raw moment, as estimated from Eqs. (24) and (27); above that order the computational burden increases sharply, since products of more than two coefficients appear:

$$\mathrm{E}\left[\underline{x}^{5}\right] = \mathrm{E}\left[\underline{v}^{5}\right]\sum_{j=-l}^{l} a_{|j|}^{5} + 10\,\mathrm{E}\left[\underline{v}^{3}\right]\sum_{j=-l}^{l}\;\sum_{k=-l;\,k\ne j}^{l} a_{|j|}^{2}\,a_{|k|}^{3}$$
(30)

However, the extension beyond the fourth moment is not required for the applications of this paper since, as illustrated in Sect. 4 through the estimation of the ME distribution, the contribution of the fourth moment is already small and that of higher moments (we test up to the sixth) is even smaller.

Appendix B

Here, we describe how we can use selected distributions as a means to preserve the desired statistical central moments through the SMA model. For random number generation from thin-tailed distributions we adopt an extended standardized version of the Kumaraswamy (1980) distribution (abbreviated ESK), with distribution function:

$$F\left(x;\boldsymbol{p}\right) := 1 - \left(1 - \left(\frac{x-c}{d}\right)^{a}\right)^{b}$$
(31)

where \(x \in [c,\, c+d]\) and \(\boldsymbol{p} = [a,b,c,d]\) are the parameters of the distribution (see also Tables 2 and 3), with \(c,d \in \mathbb{R}\) the location and scale parameters, respectively (with the same units as x), and \(a,b > 0\) the dimensionless shape parameters.

Table 2 Mean, variance, and coefficients of skewness and kurtosis for the ESK and NIG distributions. Note that \(B_{i} = b\,\mathrm{B}\left(1+i/a,\,b\right)\), where \(\mathrm{B}(x,y)\) is the beta function and i an integer
Table 3 Parameters of the ESK and NIG distributions in terms of the mean, standard deviation, and coefficients of skewness and kurtosis (see also Fig. 14)

Below, we estimate several statistical characteristics of the ESK distribution, such as the mean, variance, and coefficients of skewness and kurtosis, as well as the minimum and maximum kurtosis as a function of skewness. A detailed analysis of the general expansion of the Kumaraswamy distribution can be found in Cordeiro and de Castro (2011) and Khan et al. (2016). The ESK distribution has simple, closed-form analytical expressions for its central moments. Notably, we find through numerical investigation that the ESK has a lower kurtosis bound as a function of its skewness, approximately expressed by \(C_{\mathrm{k}} \ge C_{\mathrm{s}}^{2} + 1\), which is also the mathematical bound for the sample skewness and kurtosis (Pearson 1930).

The central moments of the ESK distribution can be expressed as:

$$\mathrm{E}\left[\left(\underline{x}-\mu\right)^{p}\right] = d^{p}\sum_{\xi=1}^{p+1}\left((-1)^{p+1-\xi}\binom{p}{\xi-1} B_{1}^{p+1-\xi} B_{\xi-1}\right)$$
(32)

for p > 1, where \(\mu = c + dB_{1}\), \(\binom{p}{\xi-1}\) is the binomial coefficient and \(B_{\xi} = b\,\mathrm{B}\left(1+\xi/a,\,b\right)\), with \(\mathrm{B}\) the beta function.

Thus, the coefficients of variation, skewness and kurtosis can be expressed as

$$C_{\mathrm{v}} = \frac{B_{2} - B_{1}^{2}}{\left(B_{1} + c/d\right)^{2}},\quad C_{\mathrm{s}} = \frac{2B_{1}^{3} - 3B_{1}B_{2} + B_{3}}{\left(B_{2} - B_{1}^{2}\right)^{3/2}},\quad C_{\mathrm{k}} = \frac{-3B_{1}^{4} + 6B_{1}^{2}B_{2} - 4B_{1}B_{3} + B_{4}}{\left(B_{2} - B_{1}^{2}\right)^{2}}$$
(33)

respectively. After the numerical estimation of a and b, the parameters c and d can be calculated analytically as:

$$d = \frac{\sigma}{\sqrt{b\,\mathrm{B}\left(1+\frac{2}{a},\,b\right) - b^{2}\,\mathrm{B}^{2}\left(1+\frac{1}{a},\,b\right)}},\quad c = \mu - b\,d\,\mathrm{B}\left(1+\frac{1}{a},\,b\right)$$
(34)

Therefore, we can use the ESK distribution to approximate a variety of thin-tailed distributions based on the estimation of the parameters a, b, c and d from data. Note that if we wish to extend the SMA model to preserve additional moments, we could similarly extend the ESK distribution to simulate two (or more) additional moments, i.e. \(F'\left(x;\boldsymbol{p},a',b'\right) := 1 - \left(1 - F\left(x;\boldsymbol{p}\right)^{a'}\right)^{b'}\), with a′ and b′ two extra parameters.
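
As a hedged illustration of how the ESK can be fitted and sampled in practice, the following sketch (Python; SciPy's beta function and root finder are assumed; the initial guess and function names are our choices) solves Eq. (33) numerically for a and b, obtains c and d from Eq. (34), and draws random numbers by inverting Eq. (31):

import numpy as np
from scipy.special import beta as B
from scipy.optimize import fsolve

def esk_shape_from_moments(Cs, Ck):
    """Solve Eq. (33) numerically for the shape parameters a, b of the ESK, given
    target skewness Cs and kurtosis Ck (feasible only above the Pearson bound
    Ck >= Cs**2 + 1)."""
    def residuals(p):
        a, b = np.exp(p)                                   # enforce a, b > 0
        Bi = lambda i: b * B(1.0 + i / a, b)
        m2 = Bi(2) - Bi(1)**2
        skew = (2.0 * Bi(1)**3 - 3.0 * Bi(1) * Bi(2) + Bi(3)) / m2**1.5
        kurt = (-3.0 * Bi(1)**4 + 6.0 * Bi(1)**2 * Bi(2) - 4.0 * Bi(1) * Bi(3) + Bi(4)) / m2**2
        return [skew - Cs, kurt - Ck]
    a, b = np.exp(fsolve(residuals, x0=[0.5, 0.5]))
    return a, b

def esk_sample(n, mu, sigma, a, b, rng=None):
    """Location and scale from Eq. (34), then inverse-CDF sampling of Eq. (31)."""
    rng = np.random.default_rng() if rng is None else rng
    d = sigma / np.sqrt(b * B(1.0 + 2.0 / a, b) - (b * B(1.0 + 1.0 / a, b))**2)
    c = mu - b * d * B(1.0 + 1.0 / a, b)
    u = rng.uniform(size=n)
    return c + d * (1.0 - (1.0 - u)**(1.0 / b))**(1.0 / a)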

For heavy-tailed distributions we use the standardized version of the Normal-Inverse-Gaussian (abbreviated as NIG) distribution with probability density function (cf., Barndorff-Nielsen 1978):

$$f\left(x;\boldsymbol{p}\right) := \frac{\sqrt{a^{2}+b^{2}}\;\mathrm{e}^{\,b + \frac{a(x-c)}{d}}}{\pi d\sqrt{1+\left(\frac{x-c}{d}\right)^{2}}}\,K_{1}\!\left(\sqrt{a^{2}+b^{2}}\,\sqrt{1+\left(\frac{x-c}{d}\right)^{2}}\right)$$
(35)

where \(x \in \mathbb{R}\) and \(\boldsymbol{p} = [a,b,c,d]\) are the parameters of the distribution, with \(c \in \mathbb{R}\), \(a \ne 0\) and \(b, d > 0\) (see also Tables 2 and 3); again, c and d are the location and scale parameters, respectively (with the same units as x), while a and b are dimensionless shape parameters; \(K_{1}\) denotes the modified Bessel function of the second kind.

The NIG distribution has similar advantages to the ESK, such as closed expressions for the first four central moments. Also, it enables a large variety of skewness-kurtosis combinations and its random numbers can be generated almost as fast as the ESK ones through the normal variance-mean mixture:

$$x = c + \frac{a}{d}\,z + \sqrt{z}\;g$$
(36)

where

$$g \sim N(0,1),\qquad z \sim f\left(y;b,d\right) = \frac{d}{\sqrt{2\pi y^{3}}}\,\mathrm{e}^{-\frac{b^{2}\left(y/d - d/b\right)^{2}}{2y}}$$
(37)

The latter is the inverse Gaussian distribution, which can be generated easily and fast (e.g., Chhikara and Folks 1989, Sect. 4.5).
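
A minimal sketch of this mixture generator follows (Python/NumPy assumed). Reading the density of Eq. (37) as an inverse Gaussian with mean d^2/b and shape parameter d^2 (our interpretation of that expression) allows the use of NumPy's Wald generator:

import numpy as np

def nig_sample(n, a, b, c, d, rng=None):
    """Normal variance-mean mixture of Eqs. (36)-(37): z is drawn as an inverse
    Gaussian with mean d**2/b and shape d**2 (NumPy's 'wald' generator), g is
    standard normal, and x = c + (a/d) z + sqrt(z) g."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.wald(d**2 / b, d**2, size=n)
    return c + (a / d) * z + np.sqrt(z) * rng.standard_normal(n)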

Below, we estimate the statistical characteristics of the NIG and justify its use as a heavy-tailed distribution. Note that the central moments of the NIG are not easily derived directly from its density function; instead, we can obtain them through the NIG characteristic function (cf. Barndorff-Nielsen 1978):

$$\varphi_{X}(t) = \mathrm{E}\left[\mathrm{e}^{itX}\right] = \mathrm{e}^{\,ict + b - \sqrt{b^{2} - 2iadt + d^{2}t^{2}}}$$
(38)

where the pth raw moment corresponds to:

$$\mathrm{E}\left[X^{p}\right] = (-i)^{p}\lim_{t \to 0}\frac{\mathrm{d}^{p}\varphi_{X}(t)}{\mathrm{d}t^{p}}$$
(39)

In particular, the first moment and the subsequent three central moments are given by:

$$\mu = c + ad/b$$
(40)
$$\mathrm{E}\left[\left(\underline{x}-\mu\right)^{2}\right] = \left(a^{2}+b^{2}\right)d^{2}/b^{3}$$
(41)
$$\mathrm{E}\left[\left(\underline{x}-\mu\right)^{3}\right] = \frac{3a\left(\left(a^{2}+b^{2}\right)d^{2}/b^{3}\right)^{3/2}}{\sqrt{b\left(a^{2}+b^{2}\right)}}$$
(42)
$$\mathrm{E}\left[\left(\underline{x}-\mu\right)^{4}\right] = \frac{3\left(\left(a^{2}+b^{2}\right)d^{2}/b^{3}\right)^{2}}{b}\left(1 + \frac{4}{1+\left(b/a\right)^{2}}\right) + 3\left(\left(a^{2}+b^{2}\right)d^{2}/b^{3}\right)^{2}$$
(43)

After algebraic manipulations, the coefficients of variation, skewness and kurtosis can be expressed as

$$C_{\mathrm{v}} = \frac{a^{2}+b^{2}}{b\left(a + bc/d\right)^{2}},\quad C_{\mathrm{s}} = \frac{3a}{\sqrt{b\left(a^{2}+b^{2}\right)}},\quad C_{\mathrm{k}} = \frac{3}{b}\left(1 + \frac{4}{1+\left(b/a\right)^{2}}\right) + 3$$
(44)

respectively. The NIG parameters can then be calculated from these equations as:

$$d = \frac{3\sigma\sqrt{3C_{\mathrm{k}} - 5C_{\mathrm{s}}^{2} - 9}}{3C_{\mathrm{k}} - 4C_{\mathrm{s}}^{2} - 9},\quad b = \frac{d}{\sigma}\sqrt{\frac{3}{C_{\mathrm{k}} - \frac{5}{3}C_{\mathrm{s}}^{2} - 3}},\quad a = \frac{b^{2}C_{\mathrm{s}}\,\sigma}{3d},\quad c = \mu - ad/b$$
(45)

Also, we can derive theoretically the minimum kurtosis of the NIG for a given skewness:

$$C_{\mathrm{k}} \ge \frac{5}{3}C_{\mathrm{s}}^{2} + 3$$
(46)

with the equality holding only for the limit where the NIG tends to the normal distribution.
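
The parameter estimation of Eq. (45) is equally direct to code; the sketch below (again Python, names ours) combines it with the mixture sampler above and is valid only for target pairs satisfying Eq. (46):

import numpy as np

def nig_params_from_moments(mu, sigma, Cs, Ck):
    """NIG parameters from the mean, standard deviation, skewness and kurtosis,
    Eq. (45); requires Ck > (5/3) Cs**2 + 3, i.e. Eq. (46)."""
    d = 3.0 * sigma * np.sqrt(3.0 * Ck - 5.0 * Cs**2 - 9.0) / (3.0 * Ck - 4.0 * Cs**2 - 9.0)
    b = (d / sigma) * np.sqrt(3.0 / (Ck - (5.0 / 3.0) * Cs**2 - 3.0))
    a = b**2 * Cs * sigma / (3.0 * d)
    c = mu - a * d / b
    return a, b, c, d

# Example use with the mixture sampler above (illustrative target moments):
# a, b, c, d = nig_params_from_moments(mu=1.0, sigma=0.5, Cs=1.0, Ck=6.0)
# x = nig_sample(10_000, a, b, c, d)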

For the classification of tails we use the test based on the functions proposed by Klugman et al. (1998, Sect. 3.4.3; see also Halliwell 2013), here defined as:

$$\tau_{\mathrm{r}} := -\lim_{x \to \infty}\frac{\mathrm{d}f\left(x;\boldsymbol{p}\right)}{f\left(x;\boldsymbol{p}\right)\mathrm{d}x},\qquad \tau_{\mathrm{l}} := \lim_{x \to -\infty}\frac{\mathrm{d}f\left(x;\boldsymbol{p}\right)}{f\left(x;\boldsymbol{p}\right)\mathrm{d}x}$$
(47)

After calculations we get

$$\tau_{\mathrm{r}} = \frac{\sqrt{a^{2}+b^{2}} - a}{d} \ge 0,\qquad \tau_{\mathrm{l}} = \frac{\sqrt{a^{2}+b^{2}} + a}{d} \ge 0$$
(48)

and hence, the NIG is expected to represent a large variety of heavy-tailed distributions.

Note again that, if we wish to extend the SMA model to preserve additional moments through the NIG distribution, we could similarly expand the normal variance-mean mixture to simulate two additional moments, i.e.:

$$x = c + \frac{a}{d}\,z' + a'\sqrt{z'}\;g$$
(49)

with a′ an extra parameter and z′ following the so-called generalized inverse Gaussian distribution.

In Figs. 13 and 14, we observe that the smallest possible kurtosis of the ESK distribution for a given skewness coincides with the theoretical limit defined by Pearson (1930), while its largest kurtosis covers a variety of sub-Gaussian and thin-tailed distributions. In contrast, the smallest kurtosis of the NIG distribution lies very close to the largest one of the ESK, and thus the NIG can cover (as shown above) a variety of heavy-tailed distributions. Note that these two distributions are chosen to simulate processes that are unbounded (NIG) or bounded between two real values (ESK); similar distributions could easily be found for upper- (or lower-) bounded processes, for example the generalized Kumaraswamy or hypergeometric ones.

Fig. 13

Combinations of skewness and kurtosis coefficients for various two-parameter (Weibull, GEV, lognormal, generalized normal I, skew-exponential-power —SEP— and gamma), three-parameter (generalized normal II and skew normal) and the four-parameter Pareto-Burr-Fuller (PBF, further described in Sect. 4) distribution functions along with the thin-heavy tailed separation based on the ESK and NIG functions, respectively

Fig. 14

Isopleths of the estimated coefficients of skewness and kurtosis for specified values of the parameters a and b of the ESK and NIG distributions

In case additional moments need to be preserved, a more general methodology is to use the ME distribution, which can be applied for any type of distribution (see Eqs. 12 and 13):

$$f\left(x;\boldsymbol{\lambda}\right) := \frac{1}{\lambda_{0}}\,\mathrm{e}^{-\left(\frac{x}{\lambda_{1}} + \mathrm{sign}\left(\lambda_{2}\right)\left(\frac{x}{\lambda_{2}}\right)^{2} + \left(\frac{x}{\lambda_{3}}\right)^{3} + \mathrm{sign}\left(\lambda_{4}\right)\left(\frac{x}{\lambda_{4}}\right)^{4} + \left(\frac{x}{\lambda_{5}}\right)^{5} + \mathrm{sign}\left(\lambda_{6}\right)\left(\frac{x}{\lambda_{6}}\right)^{6} + \cdots + \left(\frac{x}{\lambda_{m}}\right)^{m}\right)}$$
(50)

where \(\boldsymbol{\lambda} = \left[\lambda_{0},\lambda_{1},\lambda_{2},\ldots,\lambda_{m}\right]\) (with m even), with \(\lambda_{0},\lambda_{1},\ldots,\lambda_{m}\) having the same units as x, subject to the m + 1 constraints below, where m is the number of moments we wish to preserve:

$$\int_{-\infty}^{\infty} x^{r} f\left(x;\boldsymbol{\lambda}\right)\mathrm{d}x = \mathrm{E}\left[\underline{x}^{r}\right],\quad \text{for } r = 0,\ldots,m$$
(51)

To generate random numbers from the above distribution, we may use the generator described in the following steps. After estimating the λ parameters of the ME distribution (MED), we can rewrite the above density as:

$$f'\left(x;\boldsymbol{\lambda}'\right) := \lambda_{0}'\,\mathrm{e}^{-\left(\left(\lambda_{1}'x + \lambda_{2}'\right)^{2} + \left(\lambda_{3}'x + \lambda_{4}'\right)^{4} + \left(\lambda_{5}'x + \lambda_{6}'\right)^{6} + \cdots\right)}$$
(52)

with exactly the same number of unknown parameters (note that an exact solution for λ′ always exists).

After estimating the new parameters λ′, we can approximate the above distribution with an auxiliary density function (for even m):

$$g\left(x;\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d}\right) = \begin{cases} c_{1}\,\mathrm{e}^{-\left(a_{1}x+b_{1}\right)^{2}}, & d_{1} < x < d_{2} \\ c_{2}\,\mathrm{e}^{-\left(a_{2}x+b_{2}\right)^{4}}, & d_{3} < x \le d_{1},\; d_{2} \le x < d_{4} \\ c_{3}\,\mathrm{e}^{-\left(a_{3}x+b_{3}\right)^{6}}, & d_{5} < x \le d_{3},\; d_{4} \le x < d_{6} \\ \;\vdots & \end{cases}$$
(53)

where \(\boldsymbol{a} = [a_{1}, a_{2}, a_{3}, \ldots]\), \(\boldsymbol{b} = [b_{1}, b_{2}, b_{3}, \ldots]\), \(\boldsymbol{c} = [c_{1}, c_{2}, c_{3}, \ldots]\) and \(\boldsymbol{d} = [d_{1}, d_{2}, d_{3}, \ldots]\), with the last two d parameters equal to −∞ and +∞, respectively. The above function is subject to the constraint \(\int_{-\infty}^{\infty} g\left(x;\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d}\right)\mathrm{d}x = 1\) and to continuity across all its branches.

After estimating all the parameters through optimization techniques (so that g is as close as possible to f′), we can use the rejection method (Papoulis 1991, pp. 261–263) to generate random numbers from the MED. Note that for each branch of g we can use the random number generator of the powered-exponential distribution, obtained through the gamma distribution generator (the latter can also be generated using the rejection method, as described in Koutsoyiannis and Manetas 1996).
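
A rough sketch of such a rejection sampler follows (Python/NumPy). For brevity it replaces the piecewise envelope g of Eq. (53) with a plain Gaussian proposal whose envelope constant is found numerically on a grid, so it is only a simplified stand-in for the procedure described above and assumes the proposal dominates the target density in the tails:

import numpy as np

def me_density(x, lam):
    """Unnormalized maximum-entropy density of Eq. (50); lam = [lam_1, ..., lam_m],
    with sign(lam_k) applied to the even-order terms as in the paper (the constant
    lam_0 cancels in the acceptance ratio below)."""
    x = np.asarray(x, dtype=float)
    s = np.zeros_like(x)
    for k, l in enumerate(lam, start=1):
        s += (np.sign(l) if k % 2 == 0 else 1.0) * (x / l)**k
    return np.exp(-s)

def me_rejection_sample(n, lam, scale=3.0, rng=None):
    """Rejection method (Papoulis 1991) with an N(0, scale^2) proposal."""
    rng = np.random.default_rng() if rng is None else rng
    norm_pdf = lambda x: np.exp(-0.5 * (x / scale)**2) / (scale * np.sqrt(2.0 * np.pi))
    grid = np.linspace(-10.0 * scale, 10.0 * scale, 4001)
    M = 1.05 * np.max(me_density(grid, lam) / norm_pdf(grid))   # envelope constant
    out = np.empty(0)
    while out.size < n:
        cand = rng.normal(0.0, scale, size=n)
        accept = rng.uniform(size=n) < me_density(cand, lam) / (M * norm_pdf(cand))
        out = np.concatenate([out, cand[accept]])
    return out[:n]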

Appendix C

Here we describe how the SMA model can be used to cope with non-stationary processes. The general idea is to convert a non-stationary process to a stationary one, so that eventually the simulation is performed for a stationary process (Dimitriadis 2017). This conversion is achieved by appropriate transformations or by separating the process into segments, as for example in the case of cyclostationary processes. While in the recent literature there is no shortage of publications seeking or assuming non-stationarity, this may just reflect an incomplete understanding of what stationarity is (Koutsoyiannis and Montanari 2015). A common confusion is that non-stationarity is regarded as a property of the natural process, while in fact it is a property of a mathematical (stochastic) process. In a non-stationary process some of the statistical properties change in time in a deterministic manner. The deterministic function describing the change in the statistical properties is rarely known in advance and, in studies claiming non-stationarity, is typically inferred from the data. However, it is impractical or even impossible to properly fit a non-stationary mathematical process to a time series, as in nature only one time series of observations of a certain process is available, while the definition of stationarity or non-stationarity relies on the notion of an ensemble of time series.

A simple example of how we can deal with a non-stationary process through a stationary one follows. We consider an HK process (denoted as x) with H = 0.8, μ = 0 and σ = 1, and by aggregation we also obtain the cumulative process (denoted as y, i.e. \(y_i = y_{i-1} + x_i\)). Figure 15 shows a time series generated from x and the corresponding time series of y. Clearly, x is stationary and y is non-stationary (the so-called fractional Brownian noise). If we have the information about the theoretical basis of the two processes, then it is trivial to model them correctly (Koutsoyiannis 2016). In particular, we will know that the mean of the process y is constant (zero, not a function of time) while its variance is an increasing function of time (a power law of i). Otherwise, if the only available information is the time series of y, then we may be tempted to assume a linear trend for the mean of y and express the mean of the process as a linear function of time, \(\mu_i = a i + b\) (with a and b the slope and intercept of a regression line fitted to the time series). This, however, would be plainly wrong, as in fact (by construction) the mean of y is zero for any time i. In addition, the introduction of the two extra parameters (i.e. a and b) has negative implications in terms of the overall uncertainty of the model, which would cease to be parsimonious. But again, even with this wrong assumption, the next step would be to construct a stationary model, i.e. \(z_i = y_i - a i - b\), and use that model in simulations. The correct approach for this case would be to construct the time series of x by differencing y (i.e. \(x_i = y_i - y_{i-1}\)), which is stationary, and use the stationary process x for stochastic simulation; a synthetic time series of the non-stationary process y can then be constructed from a time series of x. Thus, in all cases, whether with correct or incorrect assumptions, the stochastic simulation is always done for a stationary process.

Fig. 15

Time series with length 1000 from the example processes x and y
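
The differencing and re-aggregation steps of this example are one-liners in practice; a minimal sketch follows (Python/NumPy), using a simple AR(1) series as a stand-in for the HK process of Fig. 15, since any stationary generator serves to illustrate the point:

import numpy as np

rng = np.random.default_rng(1)

# Stand-in stationary series (an AR(1), purely for illustration; the example in the
# text uses an HK process with H = 0.8, mu = 0 and sigma = 1).
n, phi = 1000, 0.7
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()

y = np.cumsum(x)                   # cumulative, non-stationary process: y_i = y_{i-1} + x_i
x_back = np.diff(y)                # differencing recovers the stationary increments
assert np.allclose(x_back, x[1:])  # so one can simulate x and rebuild y by cumulative summation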

Appendix D

The first generic schemes applied for stochastic synthesis are the implicit ones, i.e. those approximating the distribution and dependence structure of a process through non-linear transformations. These non-linear transformations (otherwise known as copulas; Hoeffding 1940; Frechet 1951; Sklar 1959; Nelsen 2006, and references therein) are often based on the autocovariance function and can employ any distribution function: the uniform is usually preferred for reasons of simplicity, whereas for reasons of flexibility the Gaussian distribution (the so-called Gaussian copula; Lebrun and Dutfoy 2009) can also be used (for the bivariate Gaussian copula see Nataf 1962; Serinaldi and Lombardo 2017a; Tsoukalas et al. 2018; Papadopoulos and Giovanis 2018; and references therein). This scheme is also known as the Nataf transformation, but here we propose the name Hoeffding–Frechet–Sklar–Nataf (HFSN) transformation, since Nataf wrote a half-page conference paper (presented by Frechet) that merely mentioned, without further analysis, a specific case of the general methodology earlier discussed by Hoeffding, Frechet and Sklar. The general HFSN transformation can be written as:

$$\rho_{x_{i}x_{j}}\sigma^{2} + \mu^{2} = \mathrm{E}\left[\underline{x}_{i}\,\underline{x}_{j}\right] = \mathrm{E}\left[T\left(\underline{y}_{i}\right)T\left(\underline{y}_{j}\right)\right] = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} T\left(y_{i}\right)T\left(y_{j}\right)f\left(y_{i},y_{j};\rho_{y_{i}y_{j}}\right)\mathrm{d}y_{i}\,\mathrm{d}y_{j}$$
(54)

where \(\rho_{x_{i}x_{j}}\) and \(\rho_{y_{i}y_{j}}\) are the correlations between \(x_{i}\) and \(x_{j}\), and between \(y_{i}\) and \(y_{j}\), respectively; μ and σ are the process mean and standard deviation; \(f\left(y_{i},y_{j};\rho_{y_{i}y_{j}}\right)\) is the joint distribution of \(y_{i}\) and \(y_{j}\); and \(T(y_{i})\), \(T(y_{j})\) are the transformations from the selected distribution of \(y_{i}\), \(y_{j}\) to the original known distribution of \(x_{i}\), \(x_{j}\), respectively (see below for an example of such a transformation). In case \(\underline{y}\) is, for example, N(0,1)-distributed, the bivariate standard normal density is used, i.e. \(f\left(y_{i},y_{j};\rho_{y_{i}y_{j}}\right) = \mathrm{e}^{-\frac{1}{2}\left(y_{i}^{2}+y_{j}^{2}-2\rho_{y_{i}y_{j}}y_{i}y_{j}\right)/\left(1-\rho_{y_{i}y_{j}}^{2}\right)}\big/\left(2\pi\left(1-\rho_{y_{i}y_{j}}^{2}\right)^{1/2}\right)\).
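
A numerical sketch of this correlation mapping follows (Python with NumPy/SciPy; the quadrature order, root-finding bracket and function names are our choices). It evaluates the double integral of Eq. (54) by Gauss-Hermite quadrature for a trial \(\rho_{y_i y_j}\) and inverts the relation with a root finder to match a target \(\rho_{x_i x_j}\):

import numpy as np
from scipy.optimize import brentq

def transformed_product_moment(T, rho_y, order=64):
    """Right-hand side of Eq. (54): E[T(y_i) T(y_j)] under a bivariate standard
    normal with correlation rho_y, evaluated by Gauss-Hermite quadrature."""
    u, w = np.polynomial.hermite.hermgauss(order)
    y1 = np.sqrt(2.0) * u[:, None]
    y2 = np.sqrt(2.0) * (rho_y * u[:, None] + np.sqrt(1.0 - rho_y**2) * u[None, :])
    return np.sum(np.outer(w, w) * T(y1) * T(y2)) / np.pi

def gaussian_rho_for_target(T, rho_x, mu, sigma):
    """Solve Eq. (54) for rho_{y_i y_j} given a target rho_{x_i x_j}; assumes the
    mapping is monotone and the target is attainable within (-0.999, 0.999)."""
    lhs = rho_x * sigma**2 + mu**2
    return brentq(lambda r: transformed_product_moment(T, r) - lhs, -0.999, 0.999)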

Similar implicit schemes have been developed based on the power spectrum (Cugar 1968; Lavergnat 2016 and references therein). This implicit scheme can also be introduced through the climacogram, i.e.:

$$\gamma_{x}(k) + \mu^{2} = \mathrm{E}\left[\left(\underline{x}^{(k)}\right)^{2}\right] = \mathrm{E}\left[\left(T\left(\underline{y}\right)^{(k)}\right)^{2}\right] = \frac{1}{k^{2}}\int_{-\infty}^{+\infty}\left(\int_{0}^{k} T\left(y(t)\right)\mathrm{d}t\right)^{2} f^{(k)}\left(y;\gamma_{y}(k)\right)\mathrm{d}y$$
(55)

where \(\underline{x}^{(k)} := \frac{1}{k}\int_{0}^{k}\underline{x}(t)\,\mathrm{d}t\) is the averaged process of the continuous-time process \(\underline{x}(t)\), μ is the process mean, and \(T(\underline{y})\) is a transformation function from the original process \(\underline{y}\) (with the selected uniform, Gaussian, etc. distribution function and an unknown dependence structure \(\gamma_{y}\)) to the desired one \(\underline{x}\) (with known density f(x) and dependence structure \(\gamma_{x}\), adjusted for bias). For example, a \(y \sim N(0,1)\) can easily be transformed to an \(x \sim F(x) = 1 - (a/x)^{b}\) by \(\underline{x}(t) = T\left(\underline{y}(t)\right) = a\left(1 - \tfrac{1}{2}\left(1 + \mathrm{erf}\left(\underline{y}(t)/\sqrt{2}\right)\right)\right)^{-1/b}\).
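
For instance, the Pareto transformation just mentioned reads, in code (SciPy's normal survival function is used for 1 − Φ(y); the parameter values are illustrative only), and can be passed as T to the quadrature sketch above:

import numpy as np
from scipy.stats import norm

a_par, b_par = 10.0, 10.0      # illustrative Pareto parameters, F(x) = 1 - (a/x)**b

def T_pareto(y):
    """Transform y ~ N(0,1) to x with F(x) = 1 - (a/x)**b, i.e. x = a (1 - Phi(y))**(-1/b);
    norm.sf(y) = 1 - Phi(y) avoids cancellation for large y."""
    return a_par * norm.sf(y)**(-1.0 / b_par)

# Pareto moments for this F (valid for b > 2): mean a b/(b-1), std a sqrt(b/(b-2))/(b-1).
mu_x = a_par * b_par / (b_par - 1.0)
sigma_x = a_par * np.sqrt(b_par / (b_par - 2.0)) / (b_par - 1.0)
# rho_y = gaussian_rho_for_target(T_pareto, rho_x=0.5, mu=mu_x, sigma=sigma_x)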

The climacogram-implicit scheme has been applied to several (stationary and single/double cyclostationary) processes, such as solar radiation (Koudouris et al. 2017), wave height and the wind process for renewable energy production (Moschos et al. 2017), and wind speed using a special case of the PBF distribution (Deligiannis et al. 2016), as well as with a generalized non-linear transformation (equivalent to a distribution function) based on the maximization of entropy when the distribution function is unknown (Dimitriadis and Koutsoyiannis 2015b). Note that in all the above applications the same dependence structure is used for the original and the transformed process, since only a small deviation between them is observed and therefore additional iterations are considered unnecessary.

A difficulty with the implicit schemes is that they involve non-linear transformations and double integration, both of which may greatly increase the numerical burden (even though fast algorithms have been discussed by Serinaldi and Lombardo 2017b). Several exact solutions of the implicit scheme may exist, although an exact solution may not be possible for some processes (especially under a very strong correlation structure, as is the case at the small scales related to the fractal behaviour). Furthermore, there is no guarantee that the resulting autocorrelation structure of the transformed process will be symmetric positive definite (Lebrun and Dutfoy 2009). In addition, the transformation cannot be invariant with respect to time lag or time scale, while the fractal and HK behaviour cannot be handled easily, since the transformation is invariant only with respect to the zero and infinite time scales. Some of these limitations can be dealt with through a cautiously constructed binary scheme, a multivariate Gaussian (or other distribution, such as the uniform) copula scheme, a Monte-Carlo approach to identify the unknown dependence structure, or a properly handled disaggregation scheme for generating events of the process; or, more generally, by adjusting any desired stochastic properties (dependence structure and distribution function) at each scale (Lombardo et al. 2017).

However, three of these problems with the implicit schemes can easily be dealt with by the proposed explicit scheme. Namely, these are (a) the inability to simulate the effect of the fractal behaviour of a process at small scales, where the correlation structure is very strong, (b) the difficulty in preserving long-term persistence, and in particular its variability, and (c) the effect of the statistical bias (Dimitriadis 2017, Sect. 2.4.5). More specifically, the implicit schemes simulate the fractal and HK behaviour and the bias of the non-linearly transformed process T(y), i.e. \(f\left(T(y_{i}), T(y_{j}); \rho_{T(y_{i})T(y_{j})}\right)\), which, due to discretization and finite record length, are not equal to those of the infinite-length continuous-time process, i.e. \(f\left(y_{i}, y_{j}; \rho_{y_{i}y_{j}}\right)\). Because of the theoretical origins of these three limitations, the above implicit schemes are more theoretically valid for processes with short-term persistence (low bias effect) and no fractal behaviour (i.e. H = M = 0.5), and can be used for long-range dependent processes with fractal behaviour only as a rough approximation.

For illustration, we present a simple example to highlight the above problems of the implicit schemes, such as the HFSN transformation. In particular, we generate (through the SMA scheme) data from an N(0,1) distribution with an HHK dependence structure (q = 10, M = 1/3, H = 5/6) and transform them to a Pareto II distribution (a = b = 10) through its inverse distribution function. Subsequently, we estimate (through Monte-Carlo techniques) the expected climacogram of the transformed process and we perform separately a sensitivity analysis (as in Dimitriadis et al. 2016a) for the original (Gaussian-HHK) and transformed Pareto-HHK (q = 6.538, M = 0.431, H = 0.832) processes, both adjusted for bias (Fig. 16). Furthermore, we simulate the same transformed process but now using the explicit scheme proposed in this paper with just four moments (Fig. 16). Finally, by comparing the two methods, we see that, while the marginal distribution is well preserved by the implicit scheme, the fractal and HK behaviour are not exactly preserved. A complication of this is that, for example, the variances of the sample variance of the two schemes are very different (although their mean values coincide), with the implicit scheme overestimating it (by a factor of 10). Note that the true variance of the sample variance corresponds to that of the explicit scheme, since the latter explicitly preserves both the climacogram and the coefficient of kurtosis and can thus approximate all the arising moments through the SMA scheme, i.e. the ordinary moments (\(\mathrm{E}[X]\), \(\mathrm{E}[X^{2}]\) and \(\mathrm{E}[X^{4}]\)) as well as the joint moments (\(\mathrm{E}[X_{i}X_{j}^{2}]\) and \(\mathrm{E}[X_{i}^{2}X_{j}^{2}]\)), which are all functions of a combination of the SMA weight coefficients and the marginal moments of the white-noise process (both of which are exactly preserved in the explicit scheme). We believe that the reason why this is not identified in some of the recent literature is probably that all applications of the second-order implicit schemes are based solely on the preservation of the expected (mean) value of the dependence structure, and not additionally on its variance (or its distribution in general) at each scale. This can be dealt with by higher-order (more than second-order) copula schemes (or by selecting other distributions, such as the uniform one, for the transformation), but some difficulties may still remain (see above for three major ones).

Fig. 16

Mean and 5–95% quantiles (left) and variance (right) of the sample climacogram for the implicit and the explicit (preserving four moments) schemes, for a Pareto II (a = b = 10) and HHK (q = 6.538, M = 0.431, H = 0.832) process

Cite this article

Dimitriadis, P., Koutsoyiannis, D. Stochastic synthesis approximating any process dependence and distribution. Stoch Environ Res Risk Assess 32, 1493–1515 (2018). https://doi.org/10.1007/s00477-018-1540-2