Quantification of Long-Range Persistence in Geophysical Time Series: Conventional and Benchmark-Based Improvement Techniques
DOI: 10.1007/s10712-012-9217-8
Abstract
Time series in the Earth Sciences are often characterized as self-affine long-range persistent, where the power spectral density, S, exhibits a power-law dependence on frequency, f, S(f) ~ f^{−β}, with β the persistence strength. For modelling purposes, it is important to determine the strength of self-affine long-range persistence β as precisely as possible and to quantify the uncertainty of this estimate. After an extensive review and discussion of asymptotic long-range persistence and the more specific case of self-affine long-range persistence, we compare four common analysis techniques for quantifying self-affine long-range persistence: (a) rescaled range (R/S) analysis, (b) semivariogram analysis, (c) detrended fluctuation analysis, and (d) power spectral analysis. To evaluate these methods, we construct ensembles of synthetic self-affine noises and motions with different (1) time series lengths N = 64, 128, 256, …, 131,072, (2) modelled persistence strengths β_{model} = −1.0, −0.8, −0.6, …, 4.0, and (3) one-point probability distributions (Gaussian; lognormal: coefficient of variation c_{v} = 0.0 to 2.0; Levy: tail parameter a = 1.0 to 2.0) and evaluate the four techniques by statistically comparing their performance. Over 17,000 sets of parameters are produced, each characterizing a given process; for each process type, 100 realizations are created. The four techniques give the following results in terms of systematic error (bias = average performance test result for β over 100 realizations minus modelled β) and random error (standard deviation of measured β over 100 realizations): (1) Hurst rescaled range (R/S) analysis is not recommended for use due to large systematic errors. (2) Semivariogram analysis shows no systematic errors but large random errors for self-affine noises with 1.2 ≤ β ≤ 2.8. (3) Detrended fluctuation analysis is well suited for time series with thin-tailed probability distributions and for persistence strengths of β ≥ 0.0. 
(4) Spectral techniques perform the best of all four: for self-affine noises with positive persistence (β ≥ 0.0) and symmetric one-point distributions, they have no systematic errors and, compared to the other three techniques, small random errors; for antipersistent self-affine noises (β < 0.0) and asymmetric one-point probability distributions, spectral techniques have small systematic and random errors. For quantifying the strength of long-range persistence of a time series, benchmark-based improvements to the estimators are proposed, predicated on the performance for self-affine noises with the same time series length and one-point probability distribution. This scheme adjusts for the systematic errors of the considered technique and results in realistic 95 % confidence intervals for the estimated strength of persistence. We finish this paper by quantifying the long-range persistence (and corresponding uncertainties) of three geophysical time series—palaeotemperature, river discharge, and Auroral electrojet index—with the three representing three different types of probability distribution—Gaussian, lognormal, and Levy, respectively.
Keywords
Fractional noises and motions · Self-affine time series · Long-range persistence · Hurst rescaled range (R/S) analysis · Semivariogram analysis · Detrended fluctuation analysis · Power spectral analysis · Random and systematic errors · Root-mean-squared error · Confidence intervals · Benchmark-based improvements · Geophysical time series
1 Introduction
Time series can be found in many areas of the Earth Sciences and other disciplines. After obvious periodicities and trends have been removed from a time series, the stochastic component remains. This can be broadly broken up into two parts: (1) the statistical frequency-size distribution of values (how many values at a given size) and (2) the correlations between those values (how successive values cluster together, or the memory in the time series). In this paper, and because of their importance and use in the broad Earth Sciences, we will compare the strengths and weaknesses of commonly used measures for quantifying a frequently encountered type of memory, long-range persistence, also known as long memory or long-range correlations.
This paper is organized as follows. In this introduction section we introduce long-range persistence and its importance in the Earth Sciences. We then provide in Sect. 2 a brief background to processes and time series and in Sect. 3 a more detailed background to long-range persistence. Section 4 describes the synthetic time series construction and presentation of the synthetic noises (with normal, lognormal, and Levy one-point probability distributions) that we will use for evaluating the strength of long-range persistence. This is followed in Sect. 5 (time domain techniques) and Sect. 6 (frequency-domain techniques) with a description of several prominent techniques (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, and power spectral analysis) for measuring the strength of long-range persistence. Section 7 presents the results of the performance analyses of the techniques, with in Sect. 8 a discussion of the results. In Sect. 9, benchmark-based improvements to the estimators for long-range dependence that are based on the techniques described in Sects. 5 and 6 are introduced. Section 10 is devoted to applying these tools to characterize the long-range persistence of three geophysical time series. These three time series—palaeotemperature, river discharge, and Auroral electrojet index—represent three different types of one-point probability distribution—Gaussian, lognormal, and Levy, respectively. Finally, Sect. 11 gives an overall summary and discussion.
After the paper’s main text, five appendices give details of the construction of the synthetic noises used in this paper and of the fitting of power laws to data. Additionally, four sets of electronic supplementary material accompany this paper: (1) 1,260 synthetic fractional noise examples and an R program for creating them, (2) an R program for the user to run the five types of long-range persistence analyses described in this paper, (3) an Excel spreadsheet which includes detailed summary results of the performance tests applied here to 6,500 different sets of time series parameters, along with a calibration spreadsheet/graph for the user to apply the benchmark-based improvement techniques, and (4) a PDF file with the 41 figures from this paper at high resolution.
We now introduce the idea of long-range persistence in the context of the Earth Sciences, with many of these ideas explored in more depth in later sections. Many time series in the Earth Sciences exhibit persistence (memory) where successive values are positively correlated; big values tend to follow big and small values follow small. The correlations are the statistical dependence of directly and distantly neighboured values in the time series. Besides correlations caused by periodic components, two types of correlations are often considered in the statistical modelling of time series: short-range (Priestley 1981; Box et al. 1994) and long-range (Beran 1994; Taqqu and Samorodnitsky 1992). Short-range correlations (persistence) are characterized by a decay in the autocorrelation function that is bounded by an exponential decay for large lags; in other words, a fixed number of preceding values influence the next value in the time series. In contrast, long-range correlated time series (of which a specific subclass is sometimes referred to as fractional noises or 1/f noises) are such that any given value is influenced by ‘all’ preceding values of the time series and are characterized by a power-law decay (exact or asymptotic) of the correlation between values as a function of the temporal distance (or lag) between them.
This power-law decay of values can be better understood in the context of self-similarity and self-affinity. Mandelbrot (1967) introduced the idea of self-similarity (and subsequently fractals) in the context of the coast of Great Britain, where the same approximate coastal shape is found at multiple scales. He found a power-law relationship between the total length of the coast and the length of the measuring segment, with the power-law exponent defining the fractal dimension. The concept of fractals to describe spatial objects has become widely used in the Earth Sciences (in addition to other disciplines). Mandelbrot and van Ness (1968) extended the idea of self-similarity in spatial objects to time series, calling the latter a self-affine fractal or a self-affine time series when appropriately rescaling the two axes produces a time series that is statistically similar to the original.
In a self-affine time series, the strength of the variations at a given frequency varies as a power-law function of that frequency; thus, variations occur over a large range of frequencies. In other words, any given value in a time series is influenced by all other values preceding it, with the values themselves forming a self-similar pattern and the self-affine time series exhibiting, by definition, long-range persistence. The strength of long-range correlations can be related to the fractal dimension (Voss 1985; Klinkenberg 1994) and influences the efficacy and appropriateness of the long-range persistence algorithms chosen.
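The defining relation S(f) ~ f^{−β} also suggests a direct construction method: rescale the Fourier coefficients of a white noise so that the power follows the desired power law, then transform back (the inverse Fourier filtering used later in this paper for the synthetic noises). A minimal sketch in Python (the paper's supplementary programs are in R; the function name and interface here are our own):

```python
import numpy as np

def fractional_noise(beta, n, seed=None):
    """Generate a length-n Gaussian noise whose power spectral density scales
    as S(f) ~ f**(-beta), by rescaling the Fourier coefficients of a white
    noise (inverse Fourier filtering).  A sketch, not the paper's R code."""
    rng = np.random.default_rng(seed)
    coeffs = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0)
    scale = np.zeros_like(freqs)
    # Power ~ f**(-beta) means the amplitude scales as f**(-beta/2);
    # the zero-frequency (mean) component is set to zero.
    scale[1:] = freqs[1:] ** (-beta / 2.0)
    x = np.fft.irfft(coeffs * scale, n)
    return (x - x.mean()) / x.std()   # normalize to zero mean, unit std

x = fractional_noise(beta=0.7, n=4096, seed=42)
```

Negative β gives antipersistent noises and β = 0 recovers a white noise, so the same sketch covers the full range of persistence strengths considered below.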
Self-affine time series (long-range persistence) have been discussed and documented for many processes in the Earth Sciences. Examples include river runoff and precipitation (Hurst 1951; Mandelbrot and van Ness 1968; Montanari et al. 1996; Kantelhardt et al. 2003; Mudelsee 2007; Khaliq et al. 2009), atmospheric variability (Govindan et al. 2002), temperatures over short to very long time scales (Pelletier and Turcotte 1999; Fraedrich and Blender 2003), fluctuations of the North Atlantic Oscillation index (Collette and Ausloos 2004), surface wind speeds (Govindan and Kantz 2004), the geomagnetic auroral electrojet index (Chapman et al. 2005), geomagnetic variability (Anh et al. 2007), and ozone records (Kiss et al. 2007).
Although long-range persistence has been shown to be a part of many geophysical records, physical explanations for this type of behaviour and geophysical models that describe this property appropriately are less common. In one example, Pelletier and Turcotte (1997) modelled long-range persistence found in climatological and hydrological time series with an advection–diffusion model of heat and water vapour in the atmosphere. In another example, Blender and Fraedrich (2003) modelled long-range persistent surface temperatures by coupled atmosphere–ocean models and found different persistence strengths for ocean and coastal areas. In a third example, Mudelsee (2007) proposed a hydrological model, where a superposition of short-range dependent processes with different model parameters results in a long-range persistent process; he modelled river discharge as the spatial aggregation of mutually independent reservoirs (which he assumed to be first-order autoregressive processes).
Long-range persistent behaviour also occurs in a few (but not all) models of self-organized criticality (Bak et al. 1987; Turcotte 1999; Hergarten 2002; Kwapień and Drożdż 2012); as an example, the Bak–Sneppen model (Bak and Sneppen 1993; Daerden and Vanderzande 1996) is a simple model of coevolution between interacting species and has been used to describe evolutionary biological processes. The Bak–Sneppen model has also been extended to solar and geophysical phenomena, such as X-ray bursts at the Sun’s surface (Bershadskii and Sreenivasan 2003), solar flares (Meirelles et al. 2010), and Earth’s magnetic field reversals (Papa et al. 2012). Nagler and Claussen (2005) found that cellular automata models (i.e. grid-based models with simple nearest-neighbour rules of interaction) can also generate long-range persistent behaviour.
Physical explanations and models for long-range persistence are certainly a strong step forward in the published literature, compared with ‘just’ documentation of persistence (based on the statistical properties of measured data) itself. However, these physical explanations are often confounded by the following: (1) confusion over whether asymptotic long-range persistence or the more specific case of self-affine long-range persistence is being explored; (2) in the case of some models, such as ‘toy’ cellular automata models and some ‘philosophical’ models, a lack of sensitivity in the model itself, so that any output tends towards some sort of universal behaviour; and (3) sometimes non-rigorous and purely visual comparison of the model output (which is itself based on a simplification of the physical explanations) with ‘reality’. As such, these physical explanations and models are welcome, but are often met with a degree of scepticism by peers in any given community (e.g., see Frigg 2003).
Long-range correlations are also generic to many chaotic systems (Manneville 1980; Procaccia and Schuster 1983; Geisel et al. 1985, 1987), for which a large class of models in the geosciences has been designed. Furthermore, over the last decade it has become clear that long-range correlations are not only important for describing the clustering of the time series values (i.e. big or small values clustering together), but are also one of the key parameters for describing the return times of, and correlations between, values in a series of extremes over a given threshold (Altmann and Kantz 2005; Bunde et al. 2005; Blender et al. 2008) and for characterizing the scaling of linear trends in short segments of the considered time series (Bunde and Lennartz 2012).
Most empirical studies of self-affinity and long-range persistence compare different techniques or discuss the minimal length of the time series needed to ensure reliable estimates of the strength of long-range dependence. There are few systematic studies (e.g., Malamud and Turcotte 1999a; Velasco 2000) on the influence of one-point probability distributions (e.g., normal vs. other distributions) on the performance of the estimators. As many time series in the geosciences have a one-point probability density that is heavily non-Gaussian, we will in this paper systematically examine different synthetic time series with varying strengths of long-range persistence and different statistical distributions. By doing so, we will repeat and review parts of what has been found previously, confirming and/or highlighting major issues, but also systematically examine non-Gaussian time series in a manner not previously done, particularly with respect to heavy-tailed probability distributions. We will thus establish the degree of utility of common techniques used in the Earth Sciences for examining the presence or absence, and strength, of long-range persistence, by using synthetic time series with probability distributions and numbers of data values similar to those commonly found in the geosciences.
2 Time Series
Notation and abbreviations
Symbol 
Description 

‰ 
Parts per mil (parts per thousand) 
| 
The vertical bar means ‘given’. For example, \( P\left( \boldsymbol{\beta}_{\mathbf{measured}} \mid \beta_{\text{model}} \right) \) would mean the distribution of measured values \( \boldsymbol{\beta}_{\mathbf{measured}} \) given \( \beta_{\text{model}} \) 
α 
Power-law exponent of the fluctuation function 
β 
Power-law exponent of the power spectral density; in other words, the strength of the long-range persistence 
β _{Hu} , β _{Ha} , β _{DFA} , β _{PS} 
Strength of long-range persistence, measured by using the following analyses (indicated by a subscript): Hu (rescaled range (R/S)), Ha (semivariogram), DFA (detrended fluctuation), and PS (power spectral) 
\( \beta_{\text{Hu}}^{*} \), \( \beta_{\text{Ha}}^{*} \), \( \beta_{\text{DFA}}^{*} \), \( \beta_{\text{PS}}^{*} \) 
Benchmark-based improvement of the long-range persistence estimate, where a given time series’ strength of long-range persistence is measured, and then compared to Monte Carlo simulations for the respective one-point probability distribution and length of the time series 
β _{measured} 
Estimators of long-range persistence calculated using different techniques, β _{measured} = \( \beta_{[\text{Hu},\,\text{Ha},\,\text{DFA},\,\text{PS}]} \), where Hu, Ha, DFA, and PS represent the technique applied 
β _{model} 
The modelled strength of long-range persistence of a constructed self-affine time series (fractional noises and motions) using inverse Fourier filtering 
γ(τ) 
Semivariogram depending on the time lag 
Δ 
Sampling interval (including units) 
ε 
White noise 
κ 
Constant 
μ 
Mean value (parameter of normal and lognormal distributions) 
\(\sigma \) 
Standard deviation (parameter of normal and lognormal distributions) 
\( \sigma_{x} \) 
Sample standard deviation of the time series x _{1}, x _{2}, …, x _{ N } 
τ 
Time lag 
ϕ _{1} 
First coefficient of an autoregressive (AR) process 
\( \psi \left( t \right)\) 
Mother wavelet function 
a 
Tail parameter (exponent) of the Levy distribution 
Bias 
\( \bar{\beta}_{\text{measured}} - \beta_{\text{model}} \) 
C(τ) 
Autocorrelation function depending on the time lag 
c _{v} 
Coefficient of variation (parameter of the lognormal distribution) 
c _{1} , c _{2} 
Constants 
D 
Fractal dimension 
f 
Frequency 
F 
Fluctuation function 
F _{DFA}(l) 
Fluctuation function for detrended fluctuation analysis depending on the segment length l 
Ha 
Hausdorff exponent (power-law exponent of the semivariogram) 
Hu 
Hurst exponent (power-law exponent of the rescaled range (R/S)) 
i 
Imaginary number, i ^{2} = –1 
k 
Frequency index, 1 ≤ k ≤ N/2 
l 
Segment length 
L 
Likelihood function 
L 
Time series length for the construction of fractional noises and motions (see Appendices 1–4) 
m 
Segment index 
n _{ t } 
tth element of a white noise, n _{1}, n _{2}, …, n _{ N } 
N 
Number of values in a time series 
P 
Non-cumulative probability density 
P _{Gaussian} , P _{lognormal} , P _{Levy} 
One-point probability density distributions, with the subscript giving the family of distributions 
P(A|B) 
Conditional probability of A given B 
R/S 
Rescaled range (R/S); R = range, S = standard deviation 
RMSE 
Root-mean-squared error (Eq. 30) 
S(f) 
Power spectral density depending on the frequency f 
S _{ k } 
Periodogram (estimator of the power spectral density) depending on the index k 
s _{ t } 
Aggregated time series 
t 
Time, where time is used as an index of the data points, 1 ≤ t ≤ N 
w _{ t } 
Coefficients of the window function, w _{1}, w _{2}, …, w _{ N } 
W 
Normalized window function (Eq. 26b) 
x _{ t } 
Time series, x _{1}, x _{2}, …, x _{ N } 
\( \bar{x} \) 
Sample mean of the time series, x _{1}, x _{2}, …, x _{ N } 
X _{ k } 
Fourier transform coefficients X _{ k } (k = 1, 2, …, N/2) of the time series, x _{ t } (t = 1, 2, …, N) 
Abbreviation or acronym 
Description 

ACF 
Autocorrelation function 
AE 
Auroral Electrojet 
AR 
Autoregressive 
ARFIMA 
Autoregressive fractional integrated moving average 
ARMA 
Autoregressive moving average 
CRB 
Cramér–Rao bound 
DARFIMA 
Discontinuous ARFIMA 
DFA 
Detrended fluctuation analysis 
DFAk 
DFA with polynomials of order k applied to the profile. For example, DFA1 is a linear fit to the profile. 
DMA 
Detrended moving average 
FARIMA 
Fractional autoregressive integrated moving average 
FGN 
Fractional Gaussian noise 
FLevyN 
Fractional Levy noise 
FLNN_{a} 
Fractional lognormal noise, constructed by Box–Cox transform 
FLNN_{b} 
Fractional lognormal noise, constructed by the Schreiber–Schmitz algorithm 
FFT 
Fast Fourier transform 
GISP2 
Greenland Ice Sheet Project Two 
h 
Hour 
MA 
Moving average 
max 
Maximum 
min 
Minute (units) 
min 
Minimum 
MLE 
Maximum likelihood estimator or maximum likelihood estimation 
PSA 
Power spectral analysis 
Std Dev 
Standard deviation 
We can also have other processes which are not described by a simple set of equations, for example, geo-processes (e.g., climate dynamics, plate tectonics) or a large experimental setup where the results of the experiment are data; the process in the latter case is the physical or computational interactions in the experiment. In the geosciences, often just a single or very few realizations of a process are available (e.g., temperature records, recordings of seismicity), unless one does extensive model simulations, where hundreds to thousands of realizations of a given process might be created. Each realization of a process is called a time series. In the geosciences, with (often) just one time series, which is itself one realization of a process, we then attempt to infer from that single realization (the time series) the properties of the process. The process can be considered to be the ‘underlying’ physical mechanism, equation, or theory for a given system.
For each of the three time series in Fig. 1a, b, d, the data in time (left) and their respective probability densities and underlying probability distributions (right) are given. Each time series is equally spaced in time, with respective temporal spacing as follows: palaeotemperature Δ = 20 years, river discharge Δ = 1 day, and AE index Δ = 1 min (minute). However, the visual appearance of the three time series, when compared, is different. These ‘time impressions’ rely on the statistical frequency-size distribution of values (how many values at a given size) and the correlation between those values (how successive values cluster together, or the memory in the time series).
Visual examination of the probability distributions (Fig. 1, right) of the three time series confirms that they capture what we see in the time series (left) and provides some insight into their statistical character. The distribution of values in the time series x _{temp} (Fig. 1a) is broadly symmetric—with a mean value at about −34.8 [per mil] and with few extremes lower than −36 [per mil] or greater than −34 [per mil]. We see an underlying probability distribution that is symmetric, and most likely Gaussian.
The river discharge series shown in Fig. 1b consists of positive values 0 ≤ x _{discharge} ≤ 2,656 m^{3} s^{−1}. Note that two values are larger than 1,500 m^{3} s^{−1} and not shown on the graph. Its underlying probability distribution shown to the right is highly asymmetric; in other words, there are very few very large values (x _{discharge} > 500 m^{3} s^{−1}) and many smaller values, a distribution with a long tail of larger values on the righthand side. This distribution can be approximated by a lognormal distribution.
The differenced AE index Δx _{AE} series presented in Fig. 1d has values between −120 and 140 [W min^{−2}] and is approximately symmetric around zero. Despite its symmetry, its underlying probability distribution is different from the Gaussian-like distributed palaeotemperature series x _{temp} presented in Fig. 1a. Here, the fraction of values in the centre and at the very tails of the distribution is larger than for a Gaussian, showing double-sided power-law behaviour of the probability distribution (Pinto et al. 2012). These probability densities can be approximated by a Levy probability distribution.
While correlations within each of the three types of geophysical time series given in Fig. 1 (left) are more difficult to compare visually, all three time series exhibit some persistence: large values tend to follow large ones, and small values tend to follow small ones. The relative ordering of small, medium, and large values creates clusters (or lack of clusters) which we can make some attempts to observe visually. The palaeotemperature series (Fig. 1a) appears to have small clusters, contrasting with the discharge series (Fig. 1b) and the differenced AE index series (Fig. 1d), which appear to have larger clusters. One might argue, although it is difficult to do this visually, that the latter two time series therefore exhibit a higher ‘strength’ of persistence. Measures for quantifying persistence strength will be introduced formally in Sect. 3.1. We can also look at the roughness or ‘noisiness’ of the time series. The palaeotemperature series (Fig. 1a) appears to have the most scatter followed by the river discharge (Fig. 1b) and the differenced AE index (Fig. 1d), although, again, it is difficult to compare these visually, between clearly very different types of time series. These considerations show that it is sometimes difficult to grasp the strength of persistence visually from the time series itself.
3 Long-Range Persistence
In this section we first introduce a general quantitative description of correlations in the context of the autocorrelation function and with examples from short-range persistent models (Sect. 3.1). We then give a formal definition of long-range persistence along with a discussion of stationarity (Sect. 3.2), examples of long-range persistent time series and processes from the social and physical sciences (Sect. 3.3), a discussion of asymptotic long-range persistence versus self-affinity (Sect. 3.4), and a brief theoretical overview of white noise and Brownian motion (Sect. 3.5), and conclude with a discussion and overview of fractional noises and motions (Sect. 3.6).
3.1 Correlations
 1.
Short-range correlations, where values are correlated with other values in a close temporal neighbourhood, that is, values are correlated with one another at short lags in time (Priestley 1981; Box et al. 1994).
 2.
Long-range correlations, where all or almost all values are correlated with one another, that is, values are correlated with one another at very long lags in time (Beran 1994; Taqqu and Samorodnitsky 1992).
Persistence is where large values tend to follow large ones, and small values tend to follow small ones, on average more of the time than if the time series were uncorrelated. This contrasts with antipersistence, where large values tend to follow small ones and small values large ones. For both persistence and antipersistence, one can have a strength that varies from weak to very strong. We will consider in this paper models (processes) for both persistence and antipersistence.
For zero lag (τ = 0 in Eq. 3), and using the definition for variance (Eq. 1), the autocorrelation function is C(0) = 1.0. For the processes considered in this paper, we find that as the lag, τ, increases, τ = 1, 2, …, (N − 1), the autocorrelation function C(τ) decreases and the correlation between x _{ t+τ } and x _{ t } decreases. Positive values of C(τ) indicate persistence, negative values indicate antipersistence, and zero values indicate no correlation. Various statistical tests exist (e.g., the Q _{ K } statistic, Box and Pierce 1970) that take into account the sample size of the time series, and the values of C(τ) for those τ calculated, to determine the significance with which the hypothesis that the time series is uncorrelated can be rejected. A plot of C(τ) versus τ is known as a correlogram. A rapid decay of the correlogram indicates short-range correlations, and a slow decay indicates long-range correlations.
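The sample autocorrelation function underlying a correlogram can be computed directly; a minimal sketch in Python (our own illustration, using the common biased estimator; names are not from the paper):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function C(tau) for tau = 0..max_lag,
    using the biased estimator (normalization by N at every lag)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    var = np.dot(x, x) / n
    return np.array([np.dot(x[: n - tau], x[tau:]) / (n * var)
                     for tau in range(max_lag + 1)])

# Correlogram of a white noise: C(0) = 1 and C(tau) near zero for tau > 0
c = acf(np.random.default_rng(0).standard_normal(10_000), max_lag=5)
```

Plotting `c` against the lag gives the correlogram described above; for a persistent series the values at small lags would sit well above zero.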
Other examples of empirical models for short-range persistence in time series include the moving average (MA) model and the combination of the AR and MA models to create the ARMA model. Reviews of many of these models are given in Box et al. (1994) and Chatfield (1996). There are many applications of short-range persistent models in the social and physical sciences, ranging from river flows (e.g., Salas 1993) and ecology (e.g., Ives et al. 2010) to telecommunication networks (e.g., Adas 1997).
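For a first-order autoregressive (AR(1)) process, x_t = φ₁ x_{t−1} + ε_t, the autocorrelation decays exponentially with lag, C(τ) = φ₁^τ, the hallmark of short-range persistence. A quick numerical check in Python (our own illustration; the parameter values are arbitrary):

```python
import numpy as np

def ar1(phi, n, seed=None):
    """Simulate an AR(1) process x_t = phi * x_{t-1} + eps_t,
    short-range persistent for 0 < phi < 1 (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

x = ar1(phi=0.6, n=50_000, seed=1)
xc = x - x.mean()
# Sample lag-1 and lag-2 autocorrelations, which should lie close to the
# theoretical exponential decay C(tau) = phi**tau, i.e. 0.6 and 0.36
c1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)
c2 = np.dot(xc[:-2], xc[2:]) / np.dot(xc, xc)
```

The exponential decay contrasts with the power-law decay of the long-range persistent processes defined in the next subsection.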
3.2 Formal Definition of Long-Range Persistence
One important aspect of a time series is the stationarity of its underlying process (Witt et al. 1998). A process is said to be strictly stationary if all moments (e.g., mean value, \( \bar{x} \); variance, \( \sigma_{x}^{2} \); kurtosis), taken over multiple time series realizations, do not change with time t and, in particular, do not depend on the length of the considered time series. Second-order or weak stationarity (Chatfield 1996) requires only that the means and standard deviations for different sections of a time series—again taken over multiple realizations (i.e. the process) and for different section lengths—remain approximately the same, and that the autocorrelation function depends only on the lag.
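This distinction can be probed numerically. A rough illustration (our own, not from the paper): split a single realization into segments and compare the segment standard deviations. For a weakly stationary white noise they are nearly constant, whereas for its running sum (a non-stationary motion) they vary strongly with position in the series:

```python
import numpy as np

rng = np.random.default_rng(7)
noise = rng.standard_normal(2 ** 16)   # weakly stationary white noise
motion = np.cumsum(noise)              # running sum: non-stationary

def segment_std_spread(x, n_seg=8):
    """Ratio of the largest to the smallest segment standard deviation."""
    stds = np.array([seg.std() for seg in np.array_split(x, n_seg)])
    return stds.max() / stds.min()

spread_noise = segment_std_spread(noise)    # close to 1
spread_motion = segment_std_spread(motion)  # substantially larger
```

Such simple segment statistics are, of course, no substitute for the formal definition, but they make the practical consequence of non-stationarity visible in a single realization.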
3.3 Long-Range Persistence in the Physical and Social Sciences

The 1/f behaviour of voltage and current amplitude fluctuations in electronic systems modelled as a superposition of thermal noises (Schottky 1918; Johnson 1925; van der Ziel 1950).

Trajectories of tracer particles in hydrodynamic flows (Solomon et al. 1993) and in granular material (Weeks et al. 2000).

Condensed matter physics (Kogan 2008).

Neurosciences (LinkenkaerHansen et al. 2001; Bédard et al. 2006).

Econophysics (Mantegna and Stanley 2000).

Receptor systems (Bahar et al. 2001).

Human gait (Hausdorff et al. 1996; Delignieres and Torre 2009).

Human sensory motor control system (Cabrera and Milton 2002; Patzelt et al. 2007) and human eye movements during spoken language comprehension (Stephen et al. 2009).

Heart beat intervals (Kobayashi and Musha 1982; Peng et al. 1993a; Goldberger et al. 2002).

Swimming behaviour of parasites (Uppaluri et al. 2011).
However, with the widespread identification of long-range persistence in physical and social systems has come a concern by those (Rangarajan and Ding 2000; Maraun et al. 2004; Gao et al. 2006; Rust et al. 2008) who believe that long-range persistence has often been incorrectly identified in time series, and who believe instead that many time series are in fact short-range persistent. One part of the confusion surrounding the issue of short-range versus long-range persistence is a frequent lack of knowledge as to the process that drives the persistence. This can take the form of a lack of knowledge of the underlying driving equations or physical process, or even a lack of understanding of the variables in the system being studied.
Another major issue, which we explore in more detail in the following section, is the semantics of what we call long-range persistence. There are at least two ways of thinking about long-range persistence, which we will call asymptotic long-range persistence and self-affine long-range persistence. Both are simply called ‘long-range persistence’ in much of the literature and are interchanged without the reader knowing which is being addressed.
3.4 Asymptotic Long-Range Persistence Versus Self-Affinity
β = −2.0: violet, purple 
β = −1.0: blue^{†} 
β = 0.0: white^{†} 
β = 1.0: pink^{†}, flicker^{†} 
β = 2.0: brown^{†}, red^{†} 
β > 2.0: black 
For the general asymptotic case (scaling in the limit f → 0), a value of β = 0 stands for short-range persistence (Beran 1994). This type of persistence is typical for such linear stochastic processes as moving average (MA) or autoregressive (AR) processes (Priestley 1981) and is also known under the names of blue, pink, or red noise (Hasselmann 1976; Kurths and Herzel 1987; Box et al. 1994). However, different authors in the literature use the colour names differently for the specific type of short-range persistence being referred to. In addition, colours like ‘pink’ and ‘red’ have one meaning for short-range persistence (e.g., any increase in power in the lower frequencies) and another for long-range persistence (a strength of long-range persistence of β = 1 and 2, for pink and red, respectively). This has caused some confusion between different groups of researchers, in terms of false assumptions as to the specific kind of process (e.g., short-range vs. long-range) being explored, based on the terminology used. We now discuss white noises and Brownian motions.
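In practice, β is estimated from data; the simplest frequency-domain estimator fits a straight line to the log-log periodogram. A bare-bones sketch in Python (our own simplification, not the paper's full power spectral procedure: no windowing, and only the lowest quarter of the frequencies is fitted, to limit high-frequency bias):

```python
import numpy as np

def beta_power_spectral(x):
    """Estimate the persistence strength beta as the negative least-squares
    slope of log S_k versus log f_k (a bare-bones illustrative sketch)."""
    n = len(x)
    power = np.abs(np.fft.rfft(x - np.mean(x))[1:]) ** 2   # periodogram S_k
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]
    m = len(freqs) // 4          # fit only the lowest quarter of frequencies
    slope, _ = np.polyfit(np.log(freqs[:m]), np.log(power[:m]), 1)
    return -slope

rng = np.random.default_rng(3)
white = rng.standard_normal(2 ** 14)
beta_white = beta_power_spectral(white)             # should be near 0
beta_brown = beta_power_spectral(np.cumsum(white))  # should be near 2
```

The two checks at the end recover the expected β for a white noise and for its running sum, the Brownian motion discussed next.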
3.5 White Noises and Brownian Motions
The classic example of a non-stationary process is a Brownian motion (Brown 1828; Wang and Uhlenbeck 1945), which is obtained by summing a Gaussian white noise with zero mean. Einstein (1905) showed that, for the motion of a molecule in a gas which follows a Brownian motion, the mean square displacement grows linearly with the time of observation. This corresponds to a scaling parameter of the fluctuation function (Eq. 9) of α = 0.5 and consequently to a strength of long-range persistence of β = 2. Therefore, the value β = 2 corresponds to Brownian motion and the theory of random walks (Brown 1828; Einstein 1905; Chandrasekhar 1943) and describes ‘ordinary’ diffusion. A Brownian motion is an example of a self-affine long-range persistent process that has a strength of persistence that is very strong. Persistence strength β with β ≠ 2 characterizes ‘anomalous’ diffusion, with 1 < β < 2 related to subdiffusion and β > 2 to superdiffusion (Metzler and Klafter 2000; Klafter and Sokolov 2005).
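Einstein's linear growth of the mean square displacement is easy to verify numerically; a quick check (our own illustration) over an ensemble of random walks:

```python
import numpy as np

# Ensemble of Brownian motions, each the running sum of a Gaussian white noise
rng = np.random.default_rng(11)
steps = rng.standard_normal((500, 1000))   # 500 realizations, 1000 steps each
walks = np.cumsum(steps, axis=1)
msd = np.mean(walks ** 2, axis=0)          # ensemble mean square displacement
# Linear growth: a tenfold longer observation time should give a roughly
# tenfold larger mean square displacement
ratio = msd[999] / msd[99]
```

Repeating the experiment with a fractional noise in place of the white noise would give anomalous (non-linear) growth of the mean square displacement, in line with the sub- and superdiffusion regimes cited above.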
3.6 Fractional Noises and Fractional Motions
In the last section we considered white noises and Brownian motions. Here, we consider fractional noises and fractional motions. Applying our definition of (weak) stationarity given in Sect. 3.2, an asymptotic long-range persistent noise (scaling in the limit f → 0) is a (weakly) stationary time series if the strength of persistence β < 1 (Malamud and Turcotte 1999b). We will refer to these long-range persistent, weakly stationary (β < 1) time series as fractional noises. For stronger long-range persistence (β > 1), the mean and standard deviation are no longer defined, since they then depend on the length of the series and on the location within the time series. We will refer to these long-range persistent, nonstationary (β > 1) time series as fractional motions. The value β = 1 represents a crossover between (weakly) stationary and nonstationary processes, and between fractional noises and motions; this value is sometimes considered a fractional noise or a fractional motion, depending on the context. For very small strengths of long-range persistence (β < −1), the corresponding processes are unstable (Hosking 1981); these processes cannot be represented as AR models (generalizations of the process in Eq. 2 to processes that incorporate more lags). In Sect. 4.2 we will construct and give examples of both fractional noises and motions; intuitively, as the value of β increases, the contribution of the high-frequency (short-period) terms is reduced.
Just as previously we summed a Gaussian white noise with β = 0.0 to give a Brownian motion with β = 2.0 (Fig. 7), one can also sum fractional Gaussian noises (e.g., β = 0.7) to give fractional Brownian motions (e.g., β = 2.7): the running sum results in a time series with β shifted by +2.0 (Malamud and Turcotte 1999a). This relationship holds for any long-range persistent time series with a symmetric frequency-size distribution (e.g., the Gaussian). Analogous results hold for differencing a long-range persistent process (e.g., the first difference of a fractional motion with β = 1.5 will have a value of β = −0.5). However, for self-affine processes, aggregation and differencing result in processes that are asymptotic long-range persistent but not self-affine (Beran 1994), although our studies show that they are almost self-affine.
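The β shift of +2.0 under summation can be illustrated numerically. The Python sketch below (our illustration; the simple log-log periodogram regression is one of several possible estimators) compares spectral slopes of a Gaussian white noise (β ≈ 0) and its running sum. Note that the discrete-time spectrum of a running sum flattens slightly near the Nyquist frequency, so the estimate for the motion typically comes out a little below 2.0:

```python
import numpy as np

def spectral_slope(x):
    """Estimate beta as minus the log-log slope of the periodogram."""
    f = np.fft.rfftfreq(len(x))[1:]                  # drop the zero frequency
    S = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2   # periodogram
    return -np.polyfit(np.log(f), np.log(S), 1)[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)    # white noise, beta ~ 0.0
motion = np.cumsum(noise)            # running sum, beta shifted by ~ +2.0

b_noise, b_motion = spectral_slope(noise), spectral_slope(motion)
print(b_noise, b_motion)
```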
Another way of constructing long-range persistent processes is the superposition of short-memory processes with suitably distributed autocorrelation parameters (Granger 1980). This approach has been used to give a physical explanation of the Hurst phenomenon of long memory in river runoff (Mudelsee 2007). Eliazar and Klafter (2009) have applied two similar approaches, the stationary superposition model and the dissipative superposition model, to describe the dynamics of systems carrying heavy information traffic. The resultant processes are Levy distributed and long-range persistent.
Both the general case of asymptotic long-range persistence (e.g., temperature records, Eichner et al. 2003; see also Sects. 3.3 and 3.4 of this paper) and the more specific case of self-affine long-range persistence (many examples will be given in subsequent sections) are commonly identified in the Earth Sciences. Because self-affine time series are commonly found in the Earth Sciences and many other disciplines, and are widely examined using a variety of techniques, we will restrict our analyses to them in this paper.
We have above classified fractional noises as processes that are asymptotic long-range persistent with β < 1, and fractional motions as those with β > 1. However, in the literature the term fractional noises (or simply noises) is often used more generically, referring to an asymptotic long-range persistent time series with any value of β. We will try to take care in this paper to distinguish between fractional noises (β < 1) and motions (β > 1), but will occasionally use the more generic term ‘noises’ (or even sometimes ‘fractional noises’) to indicate the more general case (all β).
Several techniques, and their associated estimators or measures, have been proposed for evaluating long-range persistence in a time series. Most of them exploit the properties of long-range dependent time series described in this section (in particular Eqs. 6, 7, 9). However, these techniques often do not perform hypothesis tests for or against long-range persistence (see Davies and Harte 1987 for an example where hypothesis tests are performed). Rather, all the techniques discussed in this paper assume that the considered time series is long-range persistent and then proceed to determine the strength of persistence. In this paper, we aim to provide a more rigorous grounding for the quantification of self-affine long-range persistence in time series, using both existing ‘conventional’ techniques and benchmark-based improvement techniques.
In examining some of the different techniques and measures for quantifying long-range persistence, we will distinguish between techniques in the time domain (Sect. 5) and the frequency domain (Sect. 6). Five techniques will be discussed in detail: (1) (time domain) Hurst rescaled range (R/S) analysis, semivariogram analysis, and detrended fluctuation analysis; and (2) (spectral domain) power spectral analysis using both log-linear regression and maximum likelihood. To measure the performance of these techniques, we will apply them to a suite of synthetic fractional noise time series, the construction of which we now describe (Sect. 4).
4 Synthetic Fractional Noises and Motions
In this section we first describe techniques for the construction of fractional noises and motions that are commonly found in the literature (Sect. 4.1), and then introduce the extensive sets of fractional noises and motions that we use in this paper (Sect. 4.2). We conclude with a brief presentation of the fractional noises and motions that we include in the supplementary material, both as text files and R programs (Sect. 4.3). Accompanying this section are Appendices 1–4, which give more detailed specifics as to the construction of our synthetic fractional noises and motions.
4.1 Common Techniques for Constructing Fractional Noises and Motions
There are different approaches for creating long-range dependent time series, with and without short-range correlations, and also with and without distinct periodic components. In each case, however, the time series come from a model or process with known properties and defined strengths of persistence. We will use the subscript ‘model’ (e.g., β_{model}) to indicate that the process has given properties, and thus, the realizations of this process can be used as ‘benchmark’ time series.
 (1)
Self-affine fractional noises and motions (Schottky 1918; Dutta and Horn 1981; Geisel et al. 1987; Bak et al. 1987). These are popular in the physical sciences community and are constructed to have an exact power-law scaling of the power spectral density (i.e. Eq. (7) holds for all f). They are constructed by inverse Fourier filtering of a white noise (briefly explained in Sect. 4.2). In Appendices 1–4, we give a detailed description of how to create realizations of this model, as used in this paper. For this type of construction, the autocorrelation and fluctuation functions are not self-affine, and instead scale asymptotically (Eqs. (6) and (9) hold asymptotically for τ → ∞ and l → ∞, respectively).
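A minimal Python sketch of this Fourier-filtering construction (our illustration; the paper’s exact procedure is given in its Appendices 1–4): a Gaussian white noise is transformed to the frequency domain, the Fourier amplitudes are multiplied by f^{−β/2} so that the power spectral density scales as f^{−β}, and the result is transformed back.

```python
import numpy as np

def fractional_noise(N, beta, rng):
    """Fourier-filter a Gaussian white noise so that S(f) ~ f**(-beta)."""
    W = np.fft.rfft(rng.standard_normal(N))
    f = np.fft.rfftfreq(N)
    f[0] = 1.0                    # placeholder to avoid division by zero
    W *= f ** (-beta / 2.0)       # amplitudes scale as f**(-beta/2)
    W[0] = 0.0                    # remove the zero-frequency (mean) component
    x = np.fft.irfft(W, n=N)
    return x / np.std(x)          # normalize to unit variance

rng = np.random.default_rng(1)
x = fractional_noise(4096, beta=0.8, rng=rng)

# Check: the periodogram's log-log slope should be close to -beta.
f = np.fft.rfftfreq(x.size)[1:]
S = np.abs(np.fft.rfft(x))[1:] ** 2
beta_hat = -np.polyfit(np.log(f), np.log(S), 1)[0]
print(f"estimated beta: {beta_hat:.2f}")
```

Because the power-law filter is exact at all frequencies, the recovered slope matches the modelled β up to the regression’s small random error.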
 (2)
Self-similar processes (Mandelbrot and van Ness 1968; Embrechts and Maejima 2002). These constructed noises exhibit an exact power-law scaling of the fluctuation function for Gaussian one-point probability distributions, so that Eq. (9) holds for all l. They exhibit an asymptotic scaling of the power spectral density (i.e. Eq. (7) holds asymptotically for f → 0) and have an autocorrelation function that scales asymptotically with a power law (Eq. (6) holds for τ → ∞).
 (3)
Fractionally differenced noises (Granger and Joyeux 1980; Hosking 1981). These are commonly used in the stochastic time series analysis community and are based on infinite-order moving average processes whose coefficients can be represented as binomial coefficients of fractional numbers. These fractional noises have an autocorrelation function, power spectral density, and fluctuation function which scale asymptotically with a power law (i.e. Eq. (6) as τ → ∞, Eq. (7) as f → 0, Eq. (9) as l → ∞).
 (4)
Models which capture short- and long-range correlations (ARFIMA or FARIMA) (Granger and Joyeux 1980; Hosking 1981; Beran 1994; Taqqu 2003). These can be constructed as finite-order moving average (MA) or autoregressive (AR) processes with a fractional noise as input.
 (5)
Models for time series which exhibit long-range persistence and ‘seasonality’ (i.e. cyclicity) (Porter-Hudak 1990) or ‘periodicity’ (Montanari et al. 1999). These are based on fractional differencing of noise elements which are lagged by multiples of the assumed seasonal period.
 (6)
Generalized long-memory time series models (e.g., Brockwell 2005), where the stochastic processes have time-dependent parameters and these parameters are long-range dependent.
 (7)
Models for long-memory processes with asymmetric (e.g., lognormal) one-point probability distributions. Two examples of such models that describe long-range persistence are for (1) glacial varve data (Palma and Zevallos 2011) and (2) solar flare activity (Stanislavsky et al. 2009).
 (8)
Models for deterministic nonlinear systems at the edge between regularity and chaos (onset of chaos, Schuster and Just 2005; intermittency, Manneville 1980), and dynamics in Hamiltonian systems (Geisel et al. 1987). In this model class it is very difficult to find examples with a broad variety and continuity of strengths of long-range dependence, and the long-range persistence holds only for certain values of the parameters.
 (9)
Multifractals (Hentschel and Procaccia 1983; Halsey et al. 1986; Chhabra and Jensen 1989), which depend on a continuum of parameters.
 (10)
Alternative constructs of stochastic fractals, such as cartoon Brownian motion (Mandelbrot 1999) and Weierstrass–Mandelbrot functions (Mandelbrot 1977; Berry and Lewis 1980). These have three properties that make them unsuitable for the performance tests applied in our paper (Sects. 5 and 6): (1) a complicated one-point probability distribution, (2) non-equally spaced time series, and (3) multifractality.
 (11)
Alternative approaches for constructing time series which are approximately self-similar, discussed by Koutsoyiannis (2002): multiple time scale fluctuations, symmetric moving averages, and disaggregation.
For this paper, the only models of long-range persistence considered are self-affine fractional noises and motions. These processes are constructed to model a given (1) strength of long-range dependence and (2) one-point probability distribution. As previously mentioned, these types of processes are discussed in detail in Schepers et al. (1992), Gallant et al. (1994), Bassingthwaighte and Raymond (1995), Mehrabi et al. (1997), Wen and Sinding-Larsen (1997), Pilgram and Kaplan (1998), Malamud and Turcotte (1999a), Heneghan and McDarby (2000), Weron (2001), Eke et al. (2002), Xu et al. (2005), and Franzke et al. (2012). Two techniques are used to impose non-Gaussian one-point probability distributions on these noises and motions:
 (1)
The Box–Cox transformation (Box and Cox 1964), which is applied to each element of the fractional Gaussian or Levy noise/motion; that is, one transforms x_t to f(x_t), t = 1, 2, …, N (for details, see Appendix 3).
 (2)
The Schreiber–Schmitz algorithm (Schreiber and Schmitz 1996), an iterative procedure applied to the entire data series (for details, see Appendix 4).
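As a hedged sketch of the first approach: a lognormal one-point distribution can be imposed by elementwise exponentiation of a Gaussian noise (the λ = 0 limit of the inverse Box–Cox transform). The σ value below is an illustrative choice, and the paper’s exact transform is given in its Appendix 3:

```python
import numpy as np

rng = np.random.default_rng(2)
gauss = rng.standard_normal(4096)   # stand-in for a fractional Gaussian noise
sigma = 0.6                          # illustrative: sets the coefficient of variation
logn = np.exp(sigma * gauss)         # elementwise transform to a lognormal distribution

# The sample coefficient of variation should match the lognormal formula
# c_v = sqrt(exp(sigma^2) - 1).
cv = logn.std() / logn.mean()
cv_theory = np.sqrt(np.exp(sigma ** 2) - 1.0)
print(f"c_v sample: {cv:.2f}, theory: {cv_theory:.2f}")
```

Note that such a static, elementwise transform preserves the rank ordering of the values but distorts the power spectrum somewhat; the Schreiber–Schmitz algorithm iteratively corrects for exactly this distortion.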
4.2 Sets of Synthetic Fractional Noises and Motions Used in this Paper
Table of one-point probability distributions and their properties used for the construction of fractional noises and motions:

Gaussian
- One-point probability distribution: \( P(x) = \frac{1}{\sqrt{2\pi \sigma^{2}}}e^{-\frac{\left( x - \mu \right)^{2}}{2\sigma^{2}}} \)
- Range: −∞ < x < ∞
- Parameters: μ: mean value (−∞ < μ < ∞); σ: standard deviation (σ > 0)
- Mean value: \( \bar{x} = \mu \)
- Standard deviation: \( \sigma_{x} = \sigma \)
- Symmetry properties: symmetric with respect to x = μ
- Tail properties: thin-tailed

Levy
- One-point probability distribution: closed expressions are known only for (Zolotarev 1986): a = 1.0 (Cauchy distribution), a = 1.5 (Holtsmark distribution), a = 2.0 (Gaussian distribution, see above)
- Range: −∞ < x < ∞
- Parameters: a: exponent (1 ≤ a ≤ 2)
- Mean value: \( \bar{x} = 0 \)
- Standard deviation: not defined
- Symmetry properties: symmetric with respect to x = 0
- Tail properties: heavy-tailed, \( P(x) \sim \frac{1}{\left| x \right|^{a+1}} \) for x → ∞; the smaller the exponent a, the heavier the tail
- Comments: collapses to a Gaussian for a → 2

Lognormal
- One-point probability distribution: \( P(x) = \frac{1}{x\sqrt{2\pi \sigma^{2}}}e^{-\frac{\left( \ln x - \mu \right)^{2}}{2\sigma^{2}}} \)
- Range: 0 ≤ x < ∞
- Parameters: μ: mean value of the logarithm of the time series values x_t, 1 ≤ t ≤ N (−∞ < μ < ∞); σ: standard deviation of the logarithm of the time series values (σ > 0)
- Mean value: \( \bar{x} = e^{\mu + \frac{1}{2}\sigma^{2}} \)
- Standard deviation: \( \sigma_{x} = e^{\mu + \frac{1}{2}\sigma^{2}} \sqrt{e^{\sigma^{2}} - 1} \)
- Symmetry properties: asymmetric, with coefficient of variation \( c_{\text{v}} = \frac{\sigma_{x}}{\bar{x}} = \sqrt{e^{\sigma^{2}} - 1} \)
- Tail properties: thin-tailed
- Comments: collapses to a Gaussian for c_v → 0
 (1)
Gaussian distributions are symmetric, thin-tailed, and the most commonly used basis for synthetic fractional noises in the literature; they are also the basis for the derivation of fractional noises with other thin-tailed probability distributions.
 (2)
Lognormal distributions are asymmetric and thin-tailed, and, like many natural time series (e.g., river flow, sediment varve thicknesses), take only positive values.
 (3)
Levy distributions are symmetric and heavy-tailed (i.e. the one-point probability distribution approaches a power law for large negative and positive values). Such heavy-tailed distributions are good approximations for the frequency-size statistics of a number of natural hazards (Malamud 2004). These include asteroid impacts (Chapman and Morrison 1994; Chapman 2004), earthquakes (Gutenberg and Richter 1954), forest fires (Malamud et al. 1998, 2005), landslides (Guzzetti et al. 2002; Malamud et al. 2004; Rossi et al. 2010), and volcanic eruptions (Pyle 2000). Floods (e.g., Malamud et al. 1996; Malamud and Turcotte 2006) have also been shown in many cases to follow power-law distributions.
The sets of synthetic fractional noises and motions used in this paper were constructed with the following attributes:
 (1)
One-point probability distributions: Gaussian, lognormal (coefficient of variation, \( c_{\text{v}} = \sigma_{x}/\bar{x} = 0.0, 0.2,\, \ldots,\, 2.0 \)), and (symmetric and centred) Levy distributions (exponent a = 1.0, 1.1, …, 2.0). The lognormal and Levy distributions reduce to Gaussian for c_v = 0 and a = 2, respectively. The lognormal distributions were constructed using two different techniques, the Box–Cox transform and the Schreiber–Schmitz algorithm. The parameter c_v is a measure of the skewness of an asymmetric distribution, such as the lognormal; c_v values should only be compared between distributions of the same underlying statistical family.
 (2)
Strengths of long-range persistence: −1.0 ≤ β_{model} ≤ 4.0, with a step size of 0.2 (i.e. 26 successive values of β_{model}).
 (3)
Length of time series: The time series were realized 100 times for a given β_{model} and constructed with N = 4,096, and then subdivided to also give N = 2,048, 1,024, and 512. These four time series lengths are focussed on in the main body of this paper. However, a further eight noise and motion lengths (N = 64, 128, 256, 8,192, 16,384, 32,768, 65,536, and 131,072) were also constructed, with results presented in the supplementary material.
In Figs. 10, 11, 12, 13, each figure represents a different one-point probability distribution, and β (the strength of long-range persistence) increases from −1.0 to 2.5, reducing the contribution of the high-frequency (short-period) terms. For β < 0 (antipersistence), the high-frequency contributions dominate over the low-frequency ones; adjacent values are thus anticorrelated relative to a white noise (β = 0). For these realizations of antipersistent processes, a value larger than the mean tends to be followed by a value smaller than the mean. For β = 0 (white noise), high-frequency and low-frequency contributions are equal, resulting in an uncorrelated time series; adjacent values have no correlation with one another, and there is equal likelihood of a small or large value (relative to the mean) occurring. For β > 0, and as β gets larger, the low-frequency contributions increasingly dominate over the high-frequency ones; adjacent values become more strongly correlated, and the time series profiles become increasingly smooth. The strength of persistence increases, and a value larger than the mean tends to be followed by another value larger than the mean. As the persistence increases, the tendency for large values to be followed by large values (and small by small) becomes greater, manifesting itself in a clustering of large values and a clustering of small values. In Sect. 5 we explore different techniques for measuring the strength of long-range persistence.
4.3 Fractional Noises and Motions: Description of Supplementary Material
 (1)
Sample fractional noises and motions in tab-delimited text files. A zipped file which contains three folders:
- FGaussianNoise contains fractional Gaussian noises.
- FLogNormalNoise contains fractional lognormal noises constructed using the Box–Cox transform.
- FLevyNoise contains fractional Levy noises.
 (2)
R program. We give a commented R program that we use to create the synthetic noises and motions in this paper.
5 Time Domain Techniques for Measuring the Strength of Long-Range Persistence
There are a variety of time domain techniques for quantifying the strength of long-range persistence in self-affine time series. Here, we first discuss two broad frameworks within which these techniques are based (this introduction). We then discuss three commonly used techniques, each based on the scaling behaviour of the dispersion of values in the time domain as a function of different time-segment lengths: (1) Hurst rescaled range (R/S) analysis (Sect. 5.1); (2) semivariogram analysis (Sect. 5.2); and (3) detrended fluctuation analysis (DFA) (Sect. 5.3). After this, we discuss other time domain techniques (Sect. 5.4).
 (A)
Autocorrelation function ^{□} and (semi)variogram analysis ^{□}. These evaluate the average dependence of lagged time series elements.
 (B1)
Methods which rely on the scaling of the variance of fractional noises and motions. These are called variable bandwidth methods, scaled windowed variance methods, or fluctuation analysis. The most common techniques in this class are Hurst rescaled range analysis (R/S) ^{†} (Hurst 1951) and detrended fluctuation analysis (DFA) ^{†} (Peng et al. 1994; Kantelhardt et al. 2001). We mention here three other, less commonly used, techniques:
- The roughness-length technique ^{□}, originally developed for use in the Earth Sciences (Malinverno 1990), is identical to DFA where linear fits are applied to the profile (called DFA1). In the roughness-length technique, the ‘roughness’ is defined as the root-mean-squared residual about a linear trend over the length of a given segment; since it is based on a ‘topographic’ profile, aggregating the time series is not needed.
- Detrended scaled windowed variance analysis ^{†} (Cannon et al. 1997) is similar to DFA1; the absolute values of the data from the aggregated time series are used in place of the variance, and the corresponding dependence on the segment length is studied.
- Higuchi’s method ^{□} (Higuchi 1988) evaluates the scaling relationship between the mean normalized curve length of the coarse-grained time series (i.e. values x_{kt} are considered for a fixed value of k and t = 1, 2, …, N/k) and the chosen sampling step (here k).
 (B2)
Dispersional analysis ^{□} (Bassingthwaighte and Raymond 1995) analyses the scaling of the variance of a time series that is coarse-grained (averages of segments of equal length are considered) as a function of the segment length. It is very similar to relative dispersion analysis ^{□} (Schepers et al. 1992), which describes the scaling of the standard deviation divided by the mean.
 (B3)
Average extreme value analysis ^{□} (Malamud and Turcotte 1999a) examines the mean value of the extremes (minimum, maximum) as a function of segment length.
5.1 Hurst Rescaled Range (R/S) Analysis
Historically, the first approach to the quantification of long-range persistence in a time series was developed by Hurst (1951), who spent his life studying the hydrology of the Nile River, in particular the record of floods and droughts. He considered river flow as a time series and determined the storage limits in an idealized reservoir. To better understand his empirical data, he introduced rescaled range (R/S) analysis. The concept was developed at a time (1) when computers were in their early stages, so that calculations had to be done manually, and (2) before fractional noises or motions were introduced. Much of Hurst’s work inspired later studies by Mandelbrot and others into self-affine time series (e.g., Mandelbrot and Van Ness 1968; Mandelbrot and Wallis 1968, 1969a, b, c). Hurst (R/S) analysis (and variations of it) is still popular and often applied (e.g., human coordination, Chen et al. 1997; neural spike trains, Teich et al. 1997; plasma edge fluctuations, Carreras et al. 1998; earthquakes, Yebang and Burton 2006; rainfall, Salomão et al. 2009).
In this paper, the Hurst exponent Hu is derived by computing the rescaled range for segment lengths l = 8, 9, 10, 11, 12, 13, 14, 15, [2^{4.0}], [2^{4.1}], [2^{4.2}], [2^{4.3}], …, [N/4], where the square brackets [ ] denote rounding down to the closest integer and N is the length of the time series. The power-law exponent Hu from Eq. (15) is estimated by linear regression of log(R_l/S_l) versus log(l/2). The errors here (fluctuations around the best-fit line) are multiplicative; therefore, we use linear regression of the log-transformed data (vs. ordinary nonlinear regression of the data itself) as an unbiased estimate of the power-law exponent. In Appendix 5 we discuss the choice of fitting technique used, along with simulations of the resultant bias when different techniques are considered. In addition to Hurst (R/S) analysis, for three other techniques used in this paper (semivariogram, detrended fluctuation, and power spectral analyses), we estimate the best-fit power law to a given set of measured data by using a linear regression of the log-transformed data.
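A compact Python sketch of the R/S procedure (our illustration; segment handling and the segment-length set may differ in detail from the paper’s implementation). Note that R/S analysis has a known small-sample bias, so even for a white noise (Hu = 0.5 in theory) the estimate tends to come out slightly high:

```python
import numpy as np

def rescaled_range(x, lengths):
    """Average R/S over non-overlapping segments of each length l."""
    rs = []
    for l in lengths:
        vals = []
        for start in range(0, len(x) - l + 1, l):
            seg = x[start:start + l]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations from the mean
            R = dev.max() - dev.min()           # range of the cumulative deviations
            S = seg.std()                       # segment standard deviation
            if S > 0:
                vals.append(R / S)
        rs.append(np.mean(vals))
    return np.array(rs)

rng = np.random.default_rng(3)
x = rng.standard_normal(8192)                   # white noise: expect Hu ~ 0.5
lengths = np.unique((2 ** np.arange(3.0, 11.1, 0.25)).astype(int))
rs = rescaled_range(x, lengths)
Hu = np.polyfit(np.log(lengths / 2.0), np.log(rs), 1)[0]
print(f"Hu = {Hu:.2f}, beta = {2 * Hu - 1:.2f}")
```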
Several corrections to Hurst rescaled range (R/S) analysis have been proposed, including:
 (1)
The Anis–Lloyd correction (Anis and Lloyd 1976), a correction term for Hu (see Eq. 15) that compensates for the bias caused by small values of the time series length N. It is optimized for white noises (β = 0).
 (2)
Lo’s correction (Lo 1991), which incorporates the autocovariance.
 (3)
Detrending (Caccia et al. 1997).
 (4)
Bias correction (Mielniczuk and Wojdyłło 2007).
We will quantify the bias of rescaled range analysis, under a variety of conditions, in our results (Sect. 7).
5.2 Semivariogram Analysis
In Sect. 3 we discussed that, in the case of a stationary fractional noise (−1 < β < 1), there is a power-law dependence of the autocorrelation function on lag, C(τ) ~ τ^{−ν} (Eq. 6), with power-law exponent ν = 1 − β. However, it is difficult to use the autocorrelation function for estimating the strength of long-range dependence β. This is because the autocorrelation function C takes a considerable number of negative values, and therefore, a linear regression of the logarithm of C(τ) versus the logarithm of the lag τ is not possible. Finding the best-fit power-law function for C(τ) as a function of τ comes with technical difficulties (particularly compared to linear regression), such as choosing good initial values for ν and choosing appropriate weights and convergence criteria for the nonlinear regression. Because our focus is on less technical methods, we did not use the autocorrelation function to gain information about β.
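The semivariogram avoids these difficulties, since γ(l) is non-negative by construction. A minimal Python sketch (our illustration, using the scaling γ(l) ~ l^{2Ha} with β = 2Ha + 1, as summarized in the table of scaling exponents below), applied to a synthetic Brownian motion (β = 2, hence Ha = 0.5):

```python
import numpy as np

def semivariogram(x, lags):
    """gamma(l) = mean of (x_{t+l} - x_t)^2 / 2."""
    return np.array([0.5 * np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

rng = np.random.default_rng(4)
motion = np.cumsum(rng.standard_normal(8192))   # Brownian motion, beta = 2
lags = np.unique(np.logspace(0, 2.5, 15).astype(int))
gamma = semivariogram(motion, lags)

# Ha is half the log-log slope; beta = 2 * Ha + 1.
Ha = 0.5 * np.polyfit(np.log(lags), np.log(gamma), 1)[0]
print(f"Ha = {Ha:.2f}, beta = {2 * Ha + 1:.2f}")
```

For a Brownian motion, γ(l) grows linearly with lag in expectation, so the fitted Ha should be close to 0.5.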
5.3 Detrended Fluctuation Analysis (DFA)
Several authors have discussed potential limitations of detrended fluctuation analysis when applied to observational data that have attributes additional to those of a ‘pure’ fractional noise or motion with a superimposed polynomial trend. For example, Hu et al. (2001) showed that an underlying linear, periodic, or power-law trend in the signal leads to a crossover behaviour (i.e. two scaling regimes with different exponents) in the scaling of the fluctuation function. Chen et al. (2002) discussed properties of detrended fluctuation analysis for different types of nonstationarity. In a later study, Chen et al. (2005) examined the effects on detrended fluctuation analysis of nonlinear filtering of the time series.
Guerrero and Smith (2005) have proposed a maximum likelihood estimator that provides confidence intervals for the estimated strength of long-range persistence. Marković and Koch (2005) demonstrated that periodic trend removal is an important prerequisite for detrended fluctuation analysis studies. Gao et al. (2006) and Maraun et al. (2004) have discussed the misinterpretation of detrended fluctuation analysis results and how to avoid pitfalls in the assessment of long-range persistence. Kantelhardt et al. (2003) have generalized the concept of detrended fluctuation analysis such that multifractal properties of time series can be studied. Detrended moving average (DMA) analysis is very similar to detrended fluctuation analysis, but the underlying trends are not assumed to be polynomial.
Within this paper, we restrict our studies to DFA2; in other words, quadratic trends are removed. Further, we have applied the same set of segment lengths as for Hurst rescaled range analysis (R/S): l = 8, 9, 10, 11, 12, 13, 14, 15, [2^{4.0}], [2^{4.1}], [2^{4.2}], [2^{4.3}], …, [N/4], where [ ] denotes rounding down to the closest integer and N is the length of the time series. This set of segment lengths was chosen carefully and optimized for DFA2, by balancing the number of segment lengths to be (1) as large as possible, to give a precise estimate for β_{DFA}, and (2) as small as possible, to keep computational costs low. To further explore the segment length set chosen, we contrasted analyses using our chosen set (l = 8, 9, 10, 11, 12, 13, 14, 15, [2^{4.0}], [2^{4.1}], [2^{4.2}], [2^{4.3}], …, [N/4]) versus a ‘complete’ set (l = 3, 4, 5, …, N/4). We applied DFA2, using these two sets of segment lengths, to a fractional noise with strength of long-range persistence β = 0.5 and time series lengths N = 512, 1,024, 2,048, or 4,096. We found that the random error of the results from DFA2 using the chosen segment length set was as small as for the complete set of segment lengths. In our final analyses, ordinary linear regression (see Appendix 5) has been applied to the associated values of log(F²) versus log(l), and the slope of the best-fit linear model gives α, from which we obtain the strength of long-range persistence (β = 2α − 1).
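A minimal DFA2 sketch in Python (our illustration; it uses non-overlapping segments and a simple logarithmic set of segment lengths rather than the paper’s exact set):

```python
import numpy as np

def dfa2(x, lengths):
    """Second-order DFA: quadratic detrending of the profile in each segment."""
    profile = np.cumsum(x - np.mean(x))          # integrate the mean-removed series
    F2 = []
    for l in lengths:
        n_seg = len(profile) // l
        t = np.arange(l)
        res = []
        for i in range(n_seg):
            seg = profile[i * l:(i + 1) * l]
            coef = np.polyfit(t, seg, 2)         # fit and remove a quadratic trend
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F2.append(np.mean(res))                  # mean squared fluctuation F^2(l)
    return np.array(F2)

rng = np.random.default_rng(5)
x = rng.standard_normal(4096)                    # white noise: alpha ~ 0.5, beta ~ 0
lengths = np.unique((2 ** np.arange(4.0, 10.1, 0.25)).astype(int))
F2 = dfa2(x, lengths)

# F^2(l) ~ l^(2*alpha), so alpha is half the log-log slope; beta = 2*alpha - 1.
alpha = 0.5 * np.polyfit(np.log(lengths), np.log(F2), 1)[0]
print(f"alpha = {alpha:.2f}, beta = {2 * alpha - 1:.2f}")
```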
5.4 Other Time Domain Techniques for Examining Long-Range Persistence
 (1)
First-return and multi-return probability methods. The timings of threshold crossings are another feature sensitive to the strength of long-range dependence. The first-return probability method (Hansen et al. 1994) considers a given ‘height’ on the y-axis, which we will call h. It is based on the probability, conditional on starting at h, of exceeding h after a time τ (with no other crossing between t and t + τ). This probability scales with τ as a power law. Alternatively, a multi-return probability (Schmittbuhl et al. 1995) can be studied (crossings between t and t + τ are allowed), which also results in a power-law scaling of the dependence on τ. Both power-law exponents are related to the strength of long-range persistence, β. These return probability methods work for the stationary case, that is, −1 < β < 1, and for thin-tailed one-point probability distributions. For heavy-tailed one-point probability distributions, the power-law exponent also depends on the tail parameter.
 (2)
Fractal geometry methods. These techniques are based on describing the fractal geometry (fractal dimension) of the graph of a fractional noise or motion. By definition, a self-affine, long-range persistent time series (fractional noise or motion) has a self-affine fractal geometry, with fractal dimension constrained between D = 1.0 (a straight line) and 2.0 (a space-filling time series) (Mandelbrot 1985). The oldest of the fractal geometry methods is the divider/ruler method (Mandelbrot 1967; Cox and Wang 1993), which measures the length of the graph of a fractal curve either at different resolutions or by walking a stick of given length along the curve. The evaluated curve length depends on the resolution/stick length: the shorter the stick used, the longer the curve. The resultant power-law relationship of curve length as a function of stick length gives a power-law exponent, which is the fractal dimension D or the strength of persistence β, respectively. However, appropriate care must be taken, as the vertical and horizontal coordinates can scale differently (e.g., they may have different types of units); see Voss (1985) and Malamud and Turcotte (1999a) for discussion. After appropriately adjusting the vertical and horizontal coordinates of the time series, other fractal dimensions that can be determined directly using geometric methods include the box-counting dimension, the correlation dimension (Grassberger and Procaccia 1983; Osborne and Provenzale 1989), and the Kaplan–Yorke dimension (Kaplan and Yorke 1979; Wolf et al. 1985). Note that the application of different types of fractal dimensions to a time series leads to quantitatively different results: for instance, for a fractional motion (1 < β < 3), the divider/ruler dimension is D_{divider/ruler} = (5 − β)/2 (Brown 1987; De Santis 1997), while the correlation dimension is D_{corr} = 2/(β − 1) (Theiler 1991), so one must be careful about ‘which’ dimension is being referred to.
It might be necessary to embed the time series into a higher-dimensional space (Takens 1981) in order to extract the dimension of the time series, which in this context is the dimension of the attractor of the system from which the time series was measured. A number of the fractal dimension estimation techniques discussed in this paragraph require very long and stationary time series.
Table of scaling exponents (name and variable of each scaling exponent, the function that exhibits power-law scaling, and the functional relationship to β):

- ν: autocorrelation function, \( \left| C(\tau) \right| \sim \tau^{-\nu} \); β = 1 − ν
- Strength of long-range persistence, β: power spectral density, S(f) ~ f^{−β}; β = β_{PS}
- Hurst exponent, Hu: rescaled range, (R/S) ~ (l/2)^{Hu}; β = 2Hu − 1
- Hausdorff exponent, Ha: semivariogram, γ(l) ~ l^{2Ha}; β = 2Ha + 1
- α: fluctuation function, \( F_{\text{DFA}}^{2}(l) \sim l^{2\alpha} \); β = 2α − 1
6 Frequency-Domain Techniques for Measuring the Strength of Long-Range Persistence: Power Spectral Analysis
It is common in the Earth Sciences and other disciplines to examine the strength of long-range persistence in self-affine time series by first transforming the data from the time domain into the frequency (spectral) domain, using techniques such as the Fourier, Hilbert, or wavelet transforms. Here we will use the Fourier transform, with two methods of estimation.
6.1 The Fourier Transform and Power Spectral Density
6.2 Detrending and Windowing
The discrete Fourier transform as defined in Eq. (21) is designed for ‘circular’ time series (i.e. the last and first values in the time series ‘follow’ one another) (Percival and Walden 1993). In order to reduce undesirable effects on the Fourier coefficients caused by a large absolute difference between the first and last time series elements, x_N − x_1, which typically occurs for nonstationary time series and in particular for fractional motions (β > 1), detrending and windowing can be carried out. One example of these undesirable effects is spectral domain leakage (for a comprehensive discussion, see Priestley 1981; Percival and Walden 1993). Leakage describes power associated with frequencies that are non-integer k in Eq. (22) becoming distributed not only to their own bin, but also ‘leaking’ into other bins. The resultant leakage can seriously bias the estimated power spectral density. To reduce this leakage, we both detrend and window the original time series before doing a Fourier analysis.
Many statistical packages and books recommend removing the trend (detrending) and removing the mean of a time series before performing a Fourier analysis. The mean of a time series can be set equal to 0 and the variance normalized to 1; this will not affect the resulting Fourier coefficients. However, detrending is controversial and, therefore, care should be taken. One way of detrending (which we use here before applying Fourier analysis) is to take the best-fit straight line to the time series and subtract it from all the values. Another way of detrending is to connect a line from the first point to the last point and subtract this line from the time series, forcing x _{1} = x _{ N }. If a time series shows a clear linear trend, where the series appears to be closely scattered around a straight line, the trend can be safely removed without affecting any but the lowest frequencies in the power spectrum. However, if there is no clear trend, detrending can change the statistics of the periodogram (in particular the slope).
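As an illustration, the two pre-processing steps can be sketched as follows (a minimal sketch, assuming a Hann taper as the window; the function name is our own, not the paper's implementation):

```python
import numpy as np

def detrend_and_window(x):
    """Subtract the best-fit straight line, then taper with a Hann window.

    A minimal sketch of the pre-processing described in the text;
    the Hann taper is an assumed (common) window choice.
    """
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    # Best-fit straight line (least squares), subtracted from all values
    slope, intercept = np.polyfit(t, x, 1)
    detrended = x - (slope * t + intercept)
    # The Hann taper forces both end points towards zero, reducing leakage
    return detrended * np.hanning(len(x))
```

A purely linear series is reduced to zeros by the detrending step, and the taper guarantees that the first and last values of the output agree, so the ‘circularity’ assumption of the discrete Fourier transform is no longer violated.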
In the next two sections, we describe two techniques commonly found in the time series analysis literature for finding a best-fit power law to the power spectral density (in our case, the strength of long-range persistence β in Eq. 23), and we present the results of power spectral analysis applied to the windowed and unwindowed time series examples discussed above.
6.3 Estimators Based on Log-Regression of the Power Spectral Densities
The strength of long-range persistence can be directly measured as a power-law decay of the power spectral density (Geweke and Porter-Hudak 1983). Robinson (1994, 1995) showed that the performance of this technique is similar for non-Gaussian and Gaussian distributed data series. However, in the case of non-Gaussian one-point probability distributions, the uncertainty of the estimate may become larger (depending on the distribution) compared to Gaussian distributions.
If the power spectral density S (Eqs. 22, 26a) is expected to scale over the entire frequency range (and not just for frequencies f → 0) with a power law, \( S(f) \sim f^{-\beta} \), then the power-law coefficient, β, can be derived by (non-weighted) linear regression of the logarithm of the power spectral density, log(S), versus the logarithm of the frequency, log(f). Although this estimator appears simplistic (at least in comparison with the MLE estimator presented in the next section), it nevertheless has small biases in estimating β, along with tight confidence intervals, and is broadly applicable to time series with asymmetrical one-point probability distributions (Velasco 2000). In Appendix 5 we discuss in detail the use of ordinary linear regression of the log-transformed data versus non-linear least-squares regression of the non-transformed data. Power spectral analysis, using linear regression of the log-transformed data, is illustrated for a fractional lognormal noise with β _{model} = 1.0 in Fig. 14d; the corresponding estimator is called β _{PS(bestfit)}.
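A minimal sketch of this log-log regression estimator (the function name and periodogram normalization are our own choices; the normalization shifts log(S) by a constant and does not affect the fitted slope):

```python
import numpy as np

def beta_ps_bestfit(x):
    """Estimate persistence strength beta by linear regression of
    log(S) versus log(f), a sketch of the log-regression estimator
    described in the text."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    coeffs = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(N)[1:]          # positive frequencies only
    S = np.abs(coeffs[1:]) ** 2 / N     # periodogram, zero frequency excluded
    slope, _ = np.polyfit(np.log(f), np.log(S), 1)
    return -slope                       # since S(f) ~ f**(-beta)
```

Applied to a white noise realization the estimate is close to β = 0, and applied to a running sum of white noise (a Brownian-motion-like series) it is close to β = 2, consistent with the scaling relations summarized earlier.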
We return to the effect of windowing on spectral analysis: Fig. 15c shows the results of power spectral analysis applied to a realization of an original lognormal fractional motion (c _{v} = 0.5, β _{model} = 2.5), and Fig. 15d shows the analysis applied to the windowed version of this realization (time series). The power spectral analysis of the unwindowed time series results in a best-fit power-law exponent (using linear regression of log(S) vs. log(f)) of β _{PS} = 1.86, and for the windowed time series β _{PS} = 2.43. Power spectral analysis of the windowed time series is thus significantly less biased than that of the unwindowed time series.
Above, we use detrending and windowing to reduce the leakage in the Fourier domain. For the purposes of this paper, we are interested in finding the estimator for a ‘single’ realization of the process, that is, producing the power spectral densities for a given realization and finding the best estimator for these (we will discuss this in Sect. 6.4). If one is more interested in the spectral densities of the process (i.e. the average over an ensemble of realizations), then other techniques are more appropriate. For example, some authors take a single realization, break it up into smaller segments, compute the power spectral densities for each segment, and average over them, resulting in less scatter of the densities but not covering the same frequency range as for the single realization considered as a whole (see for instance Pelletier and Turcotte 1999). Other versions break the single realization up not into orthogonal segments, but into non-orthogonal (overlapping) segments (e.g., Welch’s Overlapped Segment Averaging technique, Mudelsee 2010). Another method takes a single realization of a process and bins the frequency range into octave-like frequency bands, where linear regression is done for the mean of the logarithm of the power (per octave) versus the mean logarithm of the frequency in that band. Taqqu et al. (1995), however, have shown that this binning-based regression dramatically increases the uncertainty (random error) of the estimate of β.
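The non-overlapping segment-averaging approach described above can be sketched as follows (the default segment count is an arbitrary illustrative choice, not a value from the text):

```python
import numpy as np

def segment_averaged_periodogram(x, n_segments=8):
    """Average periodograms over non-overlapping segments of a single
    realization, a sketch of the segment-averaging approach described
    in the text.  Averaging reduces the scatter of the densities, but
    the lowest resolvable frequency rises from 1/N to n_segments/N."""
    x = np.asarray(x, dtype=float)
    seg_len = len(x) // n_segments
    spectra = []
    for i in range(n_segments):
        seg = x[i * seg_len:(i + 1) * seg_len]
        seg = seg - seg.mean()          # remove each segment's mean
        spectra.append(np.abs(np.fft.rfft(seg)[1:]) ** 2 / seg_len)
    f = np.fft.rfftfreq(seg_len)[1:]
    return f, np.mean(spectra, axis=0)
```

The trade-off noted in the text is visible directly: for N = 1,024 and 8 segments, the lowest frequency returned is 1/128 rather than 1/1,024.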
6.4 Maximum Likelihood Estimators
Maximum likelihood estimators (MLEs) (Fisher 1912) have been developed for parametric models of the power spectral density or autocorrelation function (Fox and Taqqu 1986; Beran 1994). For Eq. (23), an MLE equation that depends on the parameters of the power spectral density is required, with maximum likelihood giving the best-fit estimators. These techniques assume Gaussian or Levy-distributed time series and, in particular, a one-point probability distribution that is symmetrical. Compared with log-periodogram regression, maximum likelihood estimators have the advantage of providing not only an estimate of the strength of long-range persistence, but also a confidence interval based on the Fisher information (the expected value of the observed information) of the estimated parameter. The Whittle estimator (Whittle 1952) is a maximum likelihood estimator for deriving the strength of long-range persistence from the power spectral density. It requires:
 (1)
The power spectral density, S _{ k } (Eqs. 22, 26a), versus the frequency f _{ k } (k = 1, 2, …, N/2) of the original time series x _{ t } (t = 1, 2, …, N).
 (2)
The MLE model chosen; here, \( \tilde{S}_{c,\beta}(f) = c\,f^{-\beta} \) is used as a model for the power spectral density S _{ k } (k = 1, 2, …, N/2) and has two parameters: the strength of long-range persistence, β, and a scale factor c, both of which are evaluated by the MLE.
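A minimal sketch of a Whittle-type estimator for this two-parameter model (the grid search and its resolution are our own implementation choices; for a fixed β, the scale factor c can be profiled out analytically):

```python
import numpy as np

def beta_whittle(x, betas=np.arange(-1.0, 4.01, 0.01)):
    """Whittle-type estimate of beta for the model S(f) = c * f**(-beta).

    For each candidate beta, the scale factor c is profiled out
    analytically, and the Whittle log-likelihood
    sum(log S_model + I_k / S_model) is minimized over the grid.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    I = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2 / N   # periodogram
    f = np.fft.rfftfreq(N)[1:]
    best_beta, best_ll = betas[0], np.inf
    for beta in betas:
        shape = f ** (-beta)
        c = np.mean(I / shape)          # profile maximum likelihood for c
        ll = np.sum(np.log(c * shape) + I / (c * shape))
        if ll < best_ll:
            best_beta, best_ll = beta, ll
    return float(best_beta)
```

Setting the derivative of the likelihood with respect to c to zero gives c = mean(I _{ k }/f _{ k }^{−β}), which is why only β needs to be searched over.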
7 Results of Performance Tests
We are interested in how accurately the considered techniques measure the strength of long-range persistence in a time series. We have applied these techniques to many realizations of fractional noises and motions with well-defined properties, and after discussing systematic and random errors in the context of a specific example (Sect. 7.1) and confidence intervals (Sect. 7.2), we will present the overall results of our performance tests and the results of other studies (Sect. 7.3), along with reference to the supplementary material, which contains all of our results. We will then give a brief summary of the results of each performance test: Hurst rescaled range (R/S) analysis (Sect. 7.4), semivariogram analysis (Sect. 7.5), detrended fluctuation analysis (Sect. 7.6), and power spectral analysis (Sect. 7.7).
7.1 Systematic and Random Error
The performance of a technique is further described by its random error. In our DFA example (Fig. 17) we used the standard deviation σ _{ x }(β _{DFA}) of the sample values around the mean to quantify the fluctuations of β _{DFA}. In this paper we will measure the random error of a technique by the standard deviation σ _{ x }(β _{[Hu, Ha, DFA, PS]}), called in the statistics literature the standard error of the estimator (Mudelsee 2010). The random error can be determined from many realizations of a process modelled to have a set of given parameters. If, however, just a single realization of the process is given, the random error σ _{ x }(β _{[Hu, Ha, DFA, PS]}) can be derived in various ways, such as bootstrapping and jackknifing (Efron and Tibshirani 1993; Mudelsee 2010), or, in the case of a maximum likelihood estimator, from the Cramér–Rao bound (Rao 1945; Cramér 1946). In this paper we will, in most cases, calculate the random error from an ensemble of model realizations, but we will also consider Cramér–Rao bounds (Sect. 6.4) and apply a benchmark-based improvement technique (Sect. 9).
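The two error measures, together with the RMSE reported in our supplementary material, can be computed from an ensemble of measured persistence strengths as follows (the function name is our own; the definitions follow the text):

```python
import numpy as np

def performance_summary(beta_measured, beta_model):
    """Systematic error (bias), random error (standard deviation of the
    estimates over the ensemble), and the resulting root-mean-squared
    error for an ensemble of measured persistence strengths."""
    b = np.asarray(beta_measured, dtype=float)
    bias = b.mean() - beta_model            # systematic error
    random_error = b.std(ddof=1)            # standard error of the estimator
    rmse = np.hypot(bias, random_error)     # (bias**2 + random_error**2)**0.5
    return bias, random_error, rmse
```

For example, an ensemble of measured values all equal to 1.0 against β _{model} = 0.8 has bias 0.2, random error 0.0, and RMSE 0.2.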
Realizations of a process created to have a given strength of long-range persistence and one-point probability distribution can be contrasted with the underlying behaviour of the process itself, whose parameter is β _{model}, in other words the desired β for the process. The realizations (the time series) of this process will have a distribution of ‘true’ β values because of the finite-size effect (Peng et al. 1993b). We then measure these with a given technique, which itself has its own error, giving β _{[Hu, Ha, DFA, PS]}. The systematic error discussed here assumes that the realizations have a Gaussian distribution and that we can get some handle on their ‘true’ distribution; we also assume that the techniques we use reflect this, in addition to the bias in the techniques themselves. We will never know (except theoretically, if we have closed-form equations) the true value of β for each realization of the process, only the parameter for which it was designed (i.e. β _{model}); only in the limit of infinite length would the realizations asymptote to the true value of β. In other words, there will always be a finite-size effect on individual realizations. Given this finite-size effect, we can never know the exact true β for each realization; what we measure instead combines the error of the technique with the finite-size effect of going from process to realization (i.e. the synthetic noises and motions we have created). We will now discuss confidence intervals within the framework of our DFA example.
7.2 Confidence Intervals
Returning to Fig. 17, with our example of DFA applied to a lognormal noise (c _{v} = 0.5, N = 1,024, β _{model} = 0.8), we find that approximately 95 % of the values of β _{DFA} lie in the interval \( \left[ {\bar{\beta }_{\text{DFA}} - 1.96\;\sigma_{x} \left( {\beta_{\text{DFA}} } \right),\,\bar{\beta }_{\text{DFA}} + 1.96\;\sigma_{x} \left( {\beta_{\text{DFA}} } \right)} \right] \), in other words, the 95 % confidence interval. In general, constructing a confidence interval requires a sufficient number of values to make a valid estimate of the boundaries within which 95 % of the values lie. Some authors take this as 1,000 values or more (Efron and Tibshirani 1993). However, if the values follow a Gaussian distribution, the confidence interval boundaries can be computed directly from \( \bar{\beta }_{\text{measured}} \pm 1.96\;\sigma_{x} \left( {\beta_{\text{measured}} } \right) \). Efron and Tibshirani (1993) have determined that, for Gaussian-distributed values, confidence intervals can be constructed from just 100 realizations. We note that there are a number of different ways of constructing confidence intervals for β _{measured}, both theoretical (e.g., based on knowledge of the one-point probability distribution) and empirical (e.g., actually examining how many values for a given set of realizations of a process lie in a given interval, such as 95 %). The latter is known as the empirical coverage and is discussed in detail, along with various methods for the construction of confidence intervals, by Mudelsee (2010), who also discusses the use of empirical coverage studies in the wider literature. Here we do not determine the empirical coverage, but rather take the approach of first evaluating the normality of a given set of realizations of β _{measured} (relative to a given β _{model}), and then using this assumed normality to calculate the theoretical confidence interval.
Visually, we see that for normal and lognormal noises (Figs. 18a, b, 19a, b, 20a, b), the realizations are reasonably close to a Gaussian distribution. The Levy realization results (Figs. 18c, 19c, 20c) are only approximately Gaussian, although reasonably symmetric. Figures 18d, 19d and 20d give the skewness for each of the distributions from panels (a) to (c) in each figure. For the normal and lognormal results, and the four lengths of time series considered, the skewness g is small (DFA: g < 0.10; R/S: g < 0.15); for the Levy results, there are strong outliers in Fig. 19c (DFA) and Fig. 20c (R/S), resulting in large skewness (DFA: g < 3; R/S: g < 0.8), although this is not the case for Fig. 18c (PS(best-fit)), where in Fig. 18d g < 0.15. A Shapiro–Wilk test of normality (Shapiro and Wilk 1965) on the different sets of realizations shows that for the smaller values of skewness, in many cases, a Gaussian distribution cannot be rejected at the 0.05 level, whereas for the larger values of skewness (FLevyN using DFA and R/S) it is rejected. Although we recognize that some of our results are only approximately Gaussian, we will use 100 total realizations for each process created and technique applied to calculate confidence intervals based on \( \bar{\beta }_{\text{measured}} \pm 1.96\;\sigma_{x} \left( {\beta_{\text{measured}} } \right) \). The size of the 95 % confidence interval of a technique is thus 3.92 times its standard deviation (random error).
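The procedure just described (check approximate normality via skewness and a Shapiro–Wilk test, then form the Gaussian confidence interval) can be sketched as follows (the function name and the convention of returning None when normality is rejected are our own):

```python
import numpy as np
from scipy import stats

def gaussian_ci(beta_measured, alpha=0.05):
    """Evaluate approximate normality (skewness plus a Shapiro-Wilk
    test) and, if normality is not rejected, return the Gaussian 95 %
    confidence interval mean +/- 1.96 * standard deviation."""
    b = np.asarray(beta_measured, dtype=float)
    g = stats.skew(b)                  # sample skewness
    _, p = stats.shapiro(b)            # Shapiro-Wilk normality test
    if p < alpha:
        return g, None                 # normality rejected at level alpha
    half_width = 1.96 * b.std(ddof=1)
    return g, (b.mean() - half_width, b.mean() + half_width)
```

Applied to a strongly skewed sample (e.g., a heavy-tailed lognormal), normality is rejected and no interval is returned; for an approximately Gaussian sample, the interval width is 3.92 times the sample standard deviation.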
7.3 Summary of Our Performance Test Results and Those of Other Studies
The benchmarks we carried out are extensive, as they are based on fractional noises and motions that differ in length, one-point probability distribution, and modelled strength of persistence. The performance of the different techniques has been studied here for its dependence on the modelled persistence strength (26 different parameter values, β _{model} = −1.0 to 4.0, step size 0.2), the noise and motion lengths (4 different parameters, N = 512, 1,024, 2,048, and 4,096), and the type of the one-point probability distribution (three different types: Gaussian, lognormal [two different types of construction], and Levy). These will be presented graphically in this section, with a further eight noise and motion lengths (N = 64, 128, 256, 8,192, 16,384, 32,768, 65,536, and 131,072) presented in the supplementary material (discussed further below in this section). Furthermore, in this section we present results for fixed values of the long-range dependence β _{model} while varying the parameters that characterize the corresponding distributions (11 values of the exponent of the Levy distribution, a = 1.0 to 2.0, step size 0.1; 21 different coefficients of variation for two different lognormal distribution construction types, c _{v} = 0.0 to 2.0, step size 0.1). Overall, we have studied fractional noises and motions with about 17,000 different sets of characterizing parameters, of which the results for a subset (6,500 different sets of parameters) have been included in the supplementary material. For each set of parameters, 100 realizations have been created, and their persistence strength has been evaluated by the five techniques described above.
Table 4 Performance^{a} of five techniques^{b} that evaluate long-range persistence for self-affine fractional noises (i.e. −1.0 < β _{model} < 1.0) with N = 4,096 elements and different one-point probability distributions

Technique^{b}    Distribution of the noise                   Systematic error (bias),   Random error,
                                                             mean β_measured − β_model  σ_x(β_measured)
β_Hu             Gaussian                                    −0.49 to 0.02              0.02 to 0.05
                 lognormal (Box–Cox), c_v = 0.5              −0.80 to 0.02              0.03 to 0.05
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −0.54 to 0.02              0.02 to 0.04
                 Levy, a = 1.5                               −0.85 to 0.02              0.03 to 0.05
β_Ha             Gaussian                                    −2.00 to −0.16             0.00 to 0.04
                 lognormal (Box–Cox), c_v = 0.5              −2.00 to −0.15             0.00 to 0.04
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −2.00 to −0.15             0.00 to 0.02
                 Levy, a = 1.5                               −2.00 to −0.16             0.03 to 0.05
β_DFA            Gaussian                                    −0.26 to 0.03^{c}          0.01 to 0.06
                 lognormal (Box–Cox), c_v = 0.5              −0.60 to 0.05^{c}          0.04 to 0.07
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −0.27 to 0.03^{c}          0.02 to 0.06
                 Levy, a = 1.5                               −0.27 to 0.02^{c}          0.07 to 0.09
β_PS(best-fit)   Gaussian                                    0.00 to 0.01^{d}           0.03 to 0.03
                 lognormal (Box–Cox), c_v = 0.5              −0.37 to 0.03^{d}          0.03 to 0.04
                 lognormal (Schreiber–Schmitz), c_v = 0.5    0.00 to 0.01^{d}           0.03 to 0.04
                 Levy, a = 1.5                               0.00 to 0.00^{d}           0.02 to 0.03
β_PS(Whittle)    Gaussian                                    0.00 to 0.01^{d}           0.03 to 0.03
                 lognormal (Box–Cox), c_v = 0.5              −0.37 to 0.03^{d}          0.02 to 0.03
                 lognormal (Schreiber–Schmitz), c_v = 0.5    0.00 to 0.00^{d}           0.02 to 0.03
                 Levy, a = 1.5                               0.00 to 0.00^{d}           0.02 to 0.02
Table 5 Performance^{a} of five techniques^{b} that evaluate long-range persistence for self-affine fractional motions (i.e. 1.0 < β _{model} < 3.0) with N = 4,096 elements and different one-point probability distributions

Technique^{b}    Distribution of the noise                   Systematic error (bias),   Random error,
                                                             mean β_measured − β_model  σ_x(β_measured)
β_Hu             Gaussian                                    0.20 to 1.98               0.01 to 0.05
                 lognormal (Box–Cox), c_v = 0.5              0.21 to 1.98               0.01 to 0.05
                 lognormal (Schreiber–Schmitz), c_v = 0.5    0.11 to 1.78               0.00 to 0.04
                 Levy, a = 1.5                               0.20 to 1.99               0.01 to 0.05
β_Ha             Gaussian                                    −0.16 to 0.14              0.04 to 0.15
                 lognormal (Box–Cox), c_v = 0.5              −0.14 to 0.15              0.04 to 0.15
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −0.92 to 0.04              0.02 to 0.26
                 Levy, a = 1.5                               −0.16 to 0.06              0.05 to 0.22
β_DFA            Gaussian                                    0.03 to 0.03^{c}           0.06 to 0.09
                 lognormal (Box–Cox), c_v = 0.5              0.03 to 0.04^{c}           0.07 to 0.09
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −0.79 to 0.01^{c}          0.06 to 0.52
                 Levy, a = 1.5                               0.01 to 0.02^{c}           0.09 to 0.10
β_PS(best-fit)   Gaussian                                    0.00 to 0.01^{d}           0.03 to 0.03
                 lognormal (Box–Cox), c_v = 0.5              0.01 to 0.02^{d}           0.03 to 0.04
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −1.05 to 0.00^{d}          0.03 to 0.48
                 Levy, a = 1.5                               0.00 to 0.00^{d}           0.03 to 0.03
β_PS(Whittle)    Gaussian                                    0.00 to 0.01^{d}           0.03 to 0.03
                 lognormal (Box–Cox), c_v = 0.5              0.00 to 0.02^{d}           0.03 to 0.03
                 lognormal (Schreiber–Schmitz), c_v = 0.5    −0.95 to 0.00^{d}          0.03 to 0.48
                 Levy, a = 1.5                               0.00 to 0.00^{d}           0.02 to 0.03
A first inspection of Figs. 21, 22, 23, 24, 25, 26, 27, and Tables 4 and 5 shows that different techniques perform very differently. These differences will be summarized, for each technique, in Sects. 7.4–7.7.
 (1)
An Excel spreadsheet with a subset of our results for all of our different analyses. For each set of 100 realizations of fractional noises or motions, characterized by the parameters for which the process was designed (one-point probability distribution type, number of elements N, β _{model}) and the technique applied, we give the mean \( \bar{\beta }_{[\text{Hu, Ha, DFA, PS}]} \), systematic error (bias = \( \bar{\beta }_{[\text{Hu, Ha, DFA, PS}]} - \beta_{\text{model}} \)), random error (standard deviation \( \sigma_{x} (\beta_{[\text{Hu, Ha, DFA, PS}]}) \)), and root-mean-squared error, \( \text{RMSE} = \left( (\text{systematic error})^{2} + (\text{random error})^{2} \right)^{0.5} \). In addition, for each set of 100 realizations, we give the minimum, 25 %, mode, 75 %, and maximum \( \beta_{[\text{Hu, Ha, DFA, PS}]} \). The analyses applied include those discussed in this paper (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, power spectral analysis [best-fit], and power spectral analysis [Whittle]) and the discrete wavelet transform (DWT; results not discussed in this paper, but presented in the supplementary material; see Appendix 6 for a discussion of the DWT applied). These analysis results are provided for 6,500 parameter combinations (out of the 17,000 examined for this paper). See also Sect. 9.5, where the supplementary Excel spreadsheet is described in more detail in the context of benchmark-based improved estimators for long-range persistence.
 (2)
R programs. We give the set of R programs that we use to perform the tests.
Review of selected studies that simulate long-range persistent time series to examine the performance of techniques that quantify long-range dependence. For each study we give the noises used ((i) probability distribution, (ii) technique to create, (iii) number of values, N), the techniques used^{a}, and comments^{a}.

Schepers et al. (1992). Noises: (i) Gaussian distributed; (ii) self-affine noises, successive random additions; (iii) N = 2^{9}, 2^{13}, 2^{15}. Techniques^{a}: ACF, PSA, R/S, relative dispersional analysis. Comments^{a}: best-performing technique: power spectral analysis.

Gallant et al. (1994). Noises: (i) Gaussian distributed; (ii) self-affine noises, successive random additions, Weierstrass–Mandelbrot functions; (iii) N = 2^{10}. Techniques^{a}: PSA (standard and maximum entropy power spectrum), roughness length, semivariogram. Comments^{a}: best-performing technique: maximum entropy power spectrum.

Bassingthwaighte and Raymond (1995). Noises: (i) Gaussian distributed; (ii) self-affine noises, successive random additions; (iii) N = 2^{6}, …, 2^{20}. Techniques^{a}: dispersional analysis. Comments^{a}: dispersional analysis is biased for short time series.

Mehrabi et al. (1997). Noises: (i) Gaussian distributed; (ii) self-affine noises, successive random additions, Weierstrass–Mandelbrot functions; (iii) N = 3 × 10^{2}, 3 × 10^{3}, 3 × 10^{4}, 3 × 10^{5}. Techniques^{a}: PSA (including MLE), roughness length, R/S, covariance analysis, wavelet analysis, Levy method.

Wen and Sinding-Larsen (1997). Noises: (i) Gaussian distributed; (ii) self-affine noises, successive random additions, superposition of self-affine time series, superposition of self-affine time series and white noise; (iii) N = 2^{10}. Techniques^{a}: PSA (averaged over equally sized windows of the time series), variogram.

Pilgram and Kaplan (1998). Noises: (i) Gaussian distributed; (ii) self-affine noises; (iii) N = 2^{8}, …, 2^{13}. Techniques^{a}: R/S, PSA (standard and for power averages in logarithmically spaced frequency bands), MLE of the ACF, DFA. Comments^{a}: best-performing technique: DFA.

Malamud and Turcotte (1999a). Noises: (i) Gaussian and lognormal distributed; (ii) self-affine noises; (iii) N = 2^{12}. Techniques^{a}: PSA, wavelet analysis, semivariogram, R/S, average extreme value analysis.

Heneghan and McDarby (2000). Noises: (i) Gaussian distributed; (ii) self-affine noises; (iii) N = 2^{15}. Techniques^{a}: DFA, PSA. Comments^{a}: discusses how to distinguish between fractional Gaussian noise and motion in physiological time series.

Weron (2001). Noises: (i) Gaussian distributed; (ii) white noise; (iii) N = 2^{8}, …, 2^{16}. Techniques^{a}: R/S (standard and Anis–Lloyd corrected), DFA, PSA. Comments^{a}: construction of confidence intervals.

Eke et al. (2002). Noises: (i) Gaussian distributed; (ii) self-affine noises and their aggregated series, method of Davies and Harte (1987); (iii) N = 2^{8}, …, 2^{18}. Techniques^{a}: R/S (standard and Anis–Lloyd corrected), autocorrelation analysis, scaled windowed variance analysis, dispersional analysis, PSA, DFA, fractal wavelet analysis. Comments^{a}: best-performing techniques: PSA, dispersional analysis, scaled windowed variance analysis.

Xu et al. (2005). Noises: (i) Gaussian distributed; (ii) self-affine noises; (iii) N = 2^{20}. Techniques^{a}: DFA, DMA. Comments^{a}: best-performing technique: DFA for time series with β _{model} > 0.0, DMA for time series with β _{model} ≤ 0.0.

Witt and Malamud (2013) (this paper). Noises: (i) Gaussian, lognormal (c _{v} = 0.0 to 2.0), and Levy (a = 1.0 to 2.0) distributed; (ii) self-affine noises and motions; (iii) N = 2^{6}, …, 2^{17}. Techniques^{a}: R/S, semivariogram analysis, DFA, PSA (Whittle estimator and log-periodogram regression). Comments^{a}: best-performing technique: Whittle estimator for the power spectral density.
Review of selected papers that simulate asymptotic long-range persistent time series to examine the performance of techniques that quantify long-range dependence. For each study we give the noises used^{a} ((i) probability distribution, (ii) technique to create, (iii) number of values, N), the techniques used^{a}, and comments^{a}.

Taqqu et al. (1995). Noises: (i) Gaussian distributed; (ii) self-similar noises, FARIMA^{b}; (iii) N = 10^{5}. Techniques^{a}: aggregated variance, differenced variance, absolute values of the aggregated series, Higuchi’s method, DFA, R/S, PSA (standard, modified, Whittle estimator). Comments^{a}: best-performing techniques: Whittle estimator, DFA.

Caccia et al. (1997). Noises: (i) Gaussian distributed; (ii) self-similar noises and their aggregated series; (iii) N = 2^{6}, …, 2^{17}. Techniques^{a}: several types of dispersional analysis, R/S (standard and detrended). Comments^{a}: dispersional analysis outperforms R/S.

Cannon et al. (1997). Noises: (i) Gaussian distributed; (ii) self-similar noises and their aggregated series; (iii) N = 2^{6}, …, 2^{17}. Techniques^{a}: scaled windowed variance methods, R/S. Comments^{a}: scaled windowed variance is the same as DFA; the minimal time series length to get confidence intervals of size smaller than 0.2 is N = 2^{15}.

Taqqu and Teverovsky (1998). Noises: (i) Gaussian and Levy distributed; FARIMA based on exponential, lognormal, Levy, and Pareto-distributed noises; (ii) self-similar noises, FARIMA^{b}; (iii) N = 10^{5}. Techniques^{a}: aggregated variance, differenced variance, absolute values of the aggregated series, Higuchi’s method, DFA, R/S, PSA (standard, modified, Whittle estimator). Comments^{a}: DFA and absolute values of the aggregated series are sensitive to the distributions; short-term correlations can strongly bias the results.

Velasco (2000). Noises: (i) Gaussian distributed and ARFIMA based on lognormal, uniform, exponential, and t5-distributed noises; (ii) ARFIMA^{b}; (iii) N = 2^{9}. Techniques^{a}: several types of PSA (including MLE).

Audit et al. (2002). Noises: (i) Gaussian distributed; (ii) FARIMA^{c}; (iii) N = 2^{6}, …, 2^{14}. Techniques^{a}: DFA, several types of wavelet analysis. Comments^{a}: WTMM outperforms the other estimators.

Whitcher (2004). Noises: (i) Gaussian distributed; (ii) seasonal long-memory processes; (iii) N = 2^{7}, 2^{9}, 2^{10}. Techniques^{a}: several types of wavelet analysis (including MLE).

Delignieres et al. (2006). Noises: (i) Gaussian distributed; (ii) self-similar noises and their aggregated series, with and without added white noises; (iii) N = 2^{6}, …, 2^{11}. Techniques^{a}: R/S, PSA, DFA, dispersional analysis, MLE of the ACF, scaled windowed variance. Comments^{a}: best-performing technique: MLE of the ACF; the paper focuses on short time series.

Stadnytska and Werner (2006). Noises: (i) Gaussian distributed; (ii) ARIMA, ARFIMA^{d}; (iii) N = 100, 200, 300, …, 2,500. Techniques^{a}: PSA (exact maximum likelihood technique), conditional sum of squares for estimating short- and long-range parameters. Comments^{a}: both techniques are comparable.

Boutahar et al. (2007). Noises: (i) Gaussian distributed; (ii) self-similar noises, ARFIMA^{b,d}; (iii) N = 3 × 10^{1}, 2 × 10^{2}, 10^{3}, 10^{4}. Techniques^{a}: R/S, Higuchi’s method, several PSA including Whittle (for ARFIMA). Comments^{a}: best-performing technique: Whittle estimator.

Mielniczuk and Wojdyłło (2007). Noises: (i) Gaussian distributed; (ii) self-similar noises, FARIMA^{b}; (iii) N = 2^{9}, …, 2^{15}. Techniques^{a}: DFA, R/S (standard and adjusted), wavelet analysis, PSA (Whittle estimator).

Boutahar (2009). Noises: (i) Gaussian distributed; (ii) AR, ARMA, ARFIMA^{b}; (iii) N = 1 × 10^{2}, 5 × 10^{2}, 10^{3}. Techniques^{a}: R/S (standard and modified), several PSA including Whittle (for ARFIMA). Comments^{a}: for short time series, a modified R/S statistic performs best; PSA is recommended for longer time series.

Faÿ et al. (2009). Noises: (i) Gaussian distributed, Box–Cox transforms of ARFIMA time series; (ii) ARFIMA^{f}, DARFIMA^{f,g}; (iii) N = 2^{9}, 2^{12}. Techniques^{a}: several types of PSA and wavelet analysis, including MLE. Comments^{a}: Fourier and wavelet techniques are found to be comparable.

Stroe-Kunold et al. (2009). Noises: (i) Gaussian distributed; (ii) ARFIMA^{b}; (iii) N = 2^{8}, …, 2^{11}. Techniques^{a}: several types of PSA including MLE, R/S, DFA, Higuchi’s method. Comments^{a}: best-performing technique: Whittle estimator.
7.4 Hurst Rescaled Range Analysis Results (β _{Hu})
 (a)
Range of theoretical applicability: As Hurst rescaled range analysis can be applied to stationary time series only, it is theoretically appropriate only for fractional noises, −1.0 < β _{model} < 1.0.
 (b)
Dependence on β _{model}: The results of the Hurst rescaled range analysis are given in Fig. 21, where we see that the performance test results β _{Hu} deviate strongly from the dashed diagonal line (β _{model} = β _{Hu}), and that only over (approximately) the range 0.0 < β _{model} < 1.0 do the largest 95 % confidence intervals (for N = 512) intersect with some part of the bias-free case (β _{model} = β _{Hu}); as the number of elements N increases, the 95 % confidence intervals for β _{Hu} decrease in size, and therefore there are fewer cases where they overlap with β _{model}. In terms of the bias, unbiased results are found only for fractional noises with a strength of persistence of β _{model} ≈ 0.5. For less persistent noises, β _{model} < 0.5, the strength of persistence is overestimated, and for more persistent noises, β _{model} > 0.5, it is underestimated. Despite the poor general performance, the random error (confidence intervals) of β _{Hu} is rather small (Tables 4, 5).
 (c)
Dependence on the one-point probability distribution: In Fig. 26a we see that at β _{model} = 0.8 the systematic error (bias) increases with the asymmetry (c _{v} = 0.0 to 2.0) of the one-point probability distribution, while the random error (which is proportional to the 95 % confidence interval size) stays constant. In contrast (Fig. 27a), at β _{model} = 0.8, both the systematic error (bias) and random error (confidence interval sizes) are very robust (they do not vary much) to changes of the tail parameter (a = 1.0 to 2.0) of the fractional noise.
 (d)
Discussion: Our results presented in Figs. 21 and 26a show that the systematic error (bias) becomes smaller as the time series length N grows from 512 to 4,096. If we consider a broader range of time series lengths (supplementary material), this can be seen more clearly. For example, for an FGN with β _{model} = −0.8, our simulations result in \( \bar{\beta }_{\text{Hu}} \) = −0.42 (N = 4,096), −0.45 (N = 8,192), −0.47 (N = 16,384), −0.49 (N = 32,768), −0.51 (N = 65,536), and −0.53 (N = 131,072); thus, the value of β _{model} = −0.8 is approached only very slowly. The bias of Hurst rescaled range analysis is a finite-size effect; Bassingthwaighte and Raymond (1995) and Mehrabi et al. (1997) have shown for fractional Gaussian noises and motions that, for very long sequences, β _{Hu} approaches the correct value of β _{model}.
 (e)
Rescaled range (R/S) analysis brief conclusions: For most cases, it is inappropriate to use Hurst rescaled range (R/S) analysis for the types of self-affine fractional noises and motions (i.e. Gaussian, lognormal, and Levy distributed) considered in this paper, and correspondingly for many of the time series found in the Earth Sciences.
7.5 Semivariogram Analysis Results (β _{Ha})
 (a)
Range of theoretical applicability: β _{Ha} can take values only in the interval 1.0 < β _{model} < 3.0, so semivariogram analysis is appropriate for fractional motions only.
 (b)
Dependence on β _{model}: Figure 22a, b, c and Tables 4 and 5 demonstrate that for fractional Gaussian noises (FGN), fractional Levy noises (FLevyN), and fractional lognormal noises constructed with the Box–Cox transform (FLNN_{a}), unbiased results are found over much (but not all) of the interval 1.0 < β _{model} < 3.0, with larger values of the bias at the interval borders; larger biases also occur for short time series. For persistence strength β _{model} > 2.0 (more persistent than Brownian motion), semivariograms applied to realizations of lognormal noises and motions based on the Schreiber–Schmitz algorithm (Fig. 22d, FLNN_{b}) result in values of β _{Ha} ≈ 2.0, reflecting a failure of this algorithm for this particular setting of the parameters. Our simulations indicate that the Schreiber–Schmitz algorithm does not work for constructing noises that are asymmetric and non-stationary; thus, we cannot discuss the corresponding performance.
 (c)
Dependence on the one-point probability distribution: For FGN, FLevyN, and FLNN_{a} (Fig. 22), the confidence interval sizes depend on the strength of long-range persistence: they are small around β _{model} ≈ 1.0, increase up to β _{model} ≈ 2.5, and then decrease for larger persistence strengths. It appears plausible to extend the range of applicability of semivariogram analysis to fractional noises (−1.0 < β _{model} < 1.0) by analysing their aggregated series, but only if the original series has a symmetric (or near-symmetric) probability distribution. In Fig. 27b, we see that at β _{model} = 0.8 changes of the heavy-tail parameter of fractional Levy noises over the range a = 1.0 to 2.0 impact the systematic error (bias) in a complex way, while the random error remains almost constant and very large.
 (d)
Discussion: Gallant et al. (1994), Wen and Sinding-Larsen (1997), and Malamud and Turcotte (1999a) have discussed the bias of Ha for time series and came to very similar conclusions. Wen and Sinding-Larsen (1997) pointed out (1) that longer lags τ lead to more accurate estimates of Ha (consequently, we have used long lags, up to N/4, here) and (2) that semivariogram analysis is applicable to incomplete (i.e. gap-containing) measurement data. For time series that are incomplete (i.e. values in an otherwise equally spaced time series are missing), only lagged pairs of values unaffected by the gaps are considered in the summation of Eq. 16.
 (e)
Semivariogram analysis brief conclusions: Semivariogram analysis is appropriate for 1.0 < β < 3.0 and introduces little bias, but the resulting estimates are rather uncertain. It is appropriate for time series with asymmetric one-point probability distributions, but should not be applied if that distribution is heavy-tailed.
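A minimal sketch of semivariogram analysis follows (illustrative only, not our benchmark implementation). The lag range up to N/4 follows the choice discussed above; the gap handling mirrors the incomplete-data remark; the power law γ(τ) ~ τ^{2Ha} with β = 2Ha + 1 is the standard conversion for motions.

```python
import numpy as np

def semivariogram_beta(x, max_lag_frac=0.25):
    """Semivariogram analysis, minimal sketch.

    gamma(tau) = mean of (x[i+tau] - x[i])^2 / 2, ignoring pairs
    that contain NaN gaps; the power law gamma(tau) ~ tau^(2*Ha)
    gives beta = 2*Ha + 1 (appropriate for motions, 1 < beta < 3).
    Lags up to N/4 follow the choice in the text.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    lags = np.arange(1, max(2, int(N * max_lag_frac)))
    gam = np.empty(len(lags))
    for k, tau in enumerate(lags):
        d = x[tau:] - x[:-tau]
        d = d[~np.isnan(d)]          # gap-containing pairs are skipped
        gam[k] = 0.5 * np.mean(d ** 2)
    slope, _ = np.polyfit(np.log(lags), np.log(gam), 1)
    Ha = slope / 2.0
    return 2.0 * Ha + 1.0

# Ordinary Brownian motion (running sum of white Gaussian noise):
# the estimate should fall near beta = 2.0, with the sizeable
# random error noted in the text for this part of the range.
rng = np.random.default_rng(1)
motion = np.cumsum(rng.standard_normal(8192))
beta = semivariogram_beta(motion)
```

For a fractional noise (β < 1.0), the series would first have to be aggregated (running sum, Sects. 3.5 and 3.6) before this estimator is applicable.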
7.6 Detrended Fluctuation Analysis Results (β _{DFA})
 (a)
Range of theoretical applicability: Detrended fluctuation analysis (here performed with the quadratic trend removed, i.e. DFA2) can be applied to all persistence strengths considered in our synthetic fractional noises and motions (Sect. 4.2).
 (b)
Dependence on β _{model}: For fractional Gaussian, Levy, and lognormal noises and motions, detrended fluctuation analysis is only slightly biased (Fig. 23; Tables 4, 5). It shows a weak overestimation for the strongly antipersistent noises (−1.0 < β _{model} < −0.7), in particular for the very short time series (N = 512, N = 1,024). For fractional lognormal noises and motions created by Box–Cox transforms (FLNN_{a}), β _{DFA} overestimates the strength of persistence for antipersistent noises (β _{model} < 0.0) and slightly underestimates it for fractional noises and motions with 0.5 < β _{model} < 1.5 (Fig. 23c). For fractional lognormal noises and motions created by the Schreiber–Schmitz algorithm (FLNN_{b}, Fig. 23d), our simulations show large biases for β _{model} ≥ 2.0. This bias is a consequence of the construction of the FLNN_{b} rather than a limitation of detrended fluctuation analysis.
The random error (which is proportional to the 95 % confidence interval size) of detrended fluctuation analysis (Fig. 23) depends on the correlations of the investigated time series: for fractional noises and motions of all considered one-point probability distributions, the sizes of the confidence intervals increase with the persistence strength. For thin-tailed fractional noises and motions (i.e. Gaussian and lognormal), the confidence intervals for fractional Brownian motions (β _{model} = 2.0) are twice as big as for white noises (β _{model} = 0.0) (Fig. 23; Tables 4, 5). Thus, the stronger the persistence in a time series, the more uncertain the result of detrended fluctuation analysis will be.
 (c)
Dependence on the one-point probability distribution: For fractional lognormal noises (constructed by Box–Cox transform), the negative bias and the random error (proportional to the confidence interval size) increase gradually with increasing coefficient of variation (Fig. 26b, FLNN_{a}). If the fractional lognormal noises are created by the Schreiber–Schmitz algorithm (Fig. 26b, FLNN_{b}) and have positive persistence and a moderate asymmetry (0.0 < c _{v} ≤ 1.0), β _{DFA} is unbiased. However, for fractional noises and motions with strongly asymmetric one-point probability distributions (1.0 < c _{v} < 2.0) and data sets with few values, detrended fluctuation analysis underestimates β _{model} (Fig. 26b). The corresponding 95 % confidence intervals grow with increasing asymmetry; they are bigger than those of β _{DFA} for fractional lognormal noises constructed by the Box–Cox transform (Fig. 26b, Table 4). Detrended fluctuation analysis is unbiased for fractional Levy noises with positive persistence strength and different tail exponents, a (Fig. 27c). The corresponding confidence intervals grow with decreasing tail exponent, a.
 (d)
Discussion: It is important to note that the random error of β _{DFA}, which arises from considering different realizations of fractional noises and motions, is different from (and, in the case of positive persistence, β _{model} > 0.0, much larger than) the regression error of β _{DFA} obtained by linear regression of the log(fluctuation function) versus log(segment length). The regression error is very small because the deviations of the fluctuation function of a particular noise from the average (over many realizations of the noise) fluctuation function are statistically dependent across segment lengths. As a consequence, the regression error should not be used to describe the uncertainty of the measured strength of persistence.
In the case of fractional Levy noises with very heavy tails (a ≪ 2), we do not recommend the use of detrended fluctuation analysis, as the error bars become very large with decreasing a (Fig. 27c). In this case, the modified version of detrended fluctuation analysis suggested by Kiyani et al. (2006), which has not been ‘benchmarked’ in our paper, might be an option.
The performance of detrended fluctuation analysis (DFA) has been studied extensively (Taqqu et al. 1995; Cannon et al. 1997; Pilgram and Kaplan 1998; Taqqu and Teverovsky 1998; Heneghan and McDarby 2000; Weron 2001; Audit et al. 2002; Xu et al. 2005; Delignieres et al. 2006; Mielniczuk and Wojdyłło 2007; StroeKunold et al. 2009) for different types of fractional noises and motions and asymptotic longrange persistent time series (Tables 6, 7). In some of these studies (Taqqu et al. 1995; Pilgram and Kaplan 1998; Xu et al. 2005), it was demonstrated to be the bestperforming technique. In other studies, DFA has been found to have low systematic error (bias) and low random error (confidence intervals) but was slightly outperformed by maximum likelihood techniques (Taqqu and Teverovsky 1998; Audit et al. 2002; Delignieres et al. 2006; StroeKunold et al. 2009).
 (e)
Detrended fluctuation analysis brief conclusions: Detrended fluctuation analysis is almost unbiased for fractional noises and motions, and the random errors (proportional to the confidence interval sizes) are small for fractional noises. It is inappropriate for time series whose one-point probability distributions have very heavy tails.
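A minimal sketch of DFA2 follows (illustrative only; the dyadic segment sizes and the conversion β = 2α − 1 from the fluctuation exponent α are assumptions of the sketch, not necessarily the exact choices of our benchmark implementation).

```python
import numpy as np

def dfa2_beta(x, min_seg=16):
    """Detrended fluctuation analysis (DFA2), minimal sketch.

    Integrates the series into a profile, removes a quadratic
    trend from each non-overlapping segment of length s, and
    records the RMS fluctuation F(s); the slope alpha of
    log F(s) vs log s converts to beta = 2*alpha - 1.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())               # profile
    N = len(y)
    sizes, fluct = [], []
    s = min_seg
    while s <= N // 4:
        ms = []
        t = np.arange(s)
        for start in range(0, N - s + 1, s):
            seg = y[start:start + s]
            coeff = np.polyfit(t, seg, 2)     # quadratic detrend (DFA2)
            resid = seg - np.polyval(coeff, t)
            ms.append(np.mean(resid ** 2))
        sizes.append(s)
        fluct.append(np.sqrt(np.mean(ms)))    # RMS fluctuation F(s)
        s *= 2
    alpha, _ = np.polyfit(np.log(sizes), np.log(fluct), 1)
    return 2.0 * alpha - 1.0

# White Gaussian noise: alpha should be near 0.5, i.e. beta near 0.0.
rng = np.random.default_rng(2)
beta = dfa2_beta(rng.standard_normal(4096))
```

Using overlapping segments or other polynomial orders (DFA1, DFA3, …) changes the small-scale crossover behaviour but not the asymptotic slope.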
7.7 Power Spectral Analyses Results β _{PS(bestfit)} and β _{PS(Whittle)}
 (a)
Range of theoretical applicability: Power spectral-based techniques β _{PS(bestfit)} and β _{PS(Whittle)} can be applied to all persistence strengths considered in our fractional noises and motions (Sect. 4.2).
 (b)
Dependence on β _{model}: For symmetrically distributed fractional noises and motions (i.e. Gaussian and Levy distributed), power spectral-based techniques for evaluating the strength of long-range persistence perform very well (Figs. 24, 25; Tables 4, 5). They are (1) unbiased (\( \bar{\beta }_{\text{PS}} = \beta_{\text{model}} \)), and (2) the confidence interval size of β _{PS} depends on the length of the fractional noise or motion but not on the strength of long-range persistence, β _{model}. For fractional Levy noises, power spectral techniques are very precise, as the related confidence intervals are very tight. For fractional Levy motions with β _{model} ≥ 3.0, β _{PS} becomes slightly biased; the strength of persistence is overestimated, in particular for the shorter time series. Looking specifically at fractional Levy noises with tails of different heaviness (Fig. 27d), we find (1) an essentially unbiased performance of β _{PS} and (2) that heavier tails cause an even smaller systematic error.
 (c)
Dependence on the one-point probability distribution: For the fractional noises and motions with asymmetric distributions, namely the two types of fractional lognormal noises, the performance depends on how these noises and motions are created (Figs. 24c,d, 25c,d, 26c, 27d; Tables 4, 5). If they are constructed by applying a Box–Cox transform to a fractional Gaussian noise (Figs. 24c, 25c; Tables 4, 5), then for the antipersistent noises considered here (−1.0 < β _{model} < 0.0) the strength of long-range persistence, β _{PS}, is overestimated, while for 0.0 < β _{model} < 1.0 it is underestimated. Because the systematic (bias) and random errors are very small compared to β _{model}, the underestimation is somewhat hard to see in the figures themselves but becomes much more apparent in the supplementary material. This under- and overestimation of β _{model} is stronger for fractional lognormal noises with a more asymmetric one-point probability distribution (larger coefficient of variation, c _{v}). One can also see (Fig. 26c) that, for fractional lognormal noises and motions, the confidence interval size gradually grows with increasing asymmetry (increasing c _{v}).
If the fractional lognormal noises are constructed by the Schreiber–Schmitz algorithm (Figs. 24d, 25d), then power spectral techniques perform convincingly in the range −1.0 < β _{model} < 1.8. For persistence strengths β _{model} > 2.0 (more persistent than Brownian motion), spectral techniques result in values of β _{PS} ≈ 2.0, reflecting a failure of the Schreiber–Schmitz algorithm for this particular setting of the parameters. The confidence intervals are of equal size over the entire considered range of persistence strength, but they are approximately 10 % larger than the confidence intervals for fractional Gaussian noises (Figs. 24a, 25a). For a fixed β _{model}, the error bar sizes rise with growing asymmetry (larger coefficient of variation, c _{v}) (Fig. 26c). For highly asymmetric noises (c _{v} > 1.0), the strength of long-range persistence is underestimated.
For the fractional Levy noises, we find that the performance does not depend on the heavy-tail parameter. Figure 27d presents the performance test results for a persistence strength of β _{model} = 0.8; the power spectral technique is unbiased, and the random error (proportional to the confidence intervals) is about the same across all considered values of the exponent a.
 (d)
Discussion: If the performance of the maximum likelihood estimator, β _{PS(Whittle)}, is compared to that of the log-periodogram regression, β _{PS(bestfit)}, we find that both techniques perform very similarly, except that β _{PS(Whittle)} is a slightly more precise estimator (Tables 4, 5). The real advantage, however, is that the Whittle estimator also gives the random error, \(\sigma \)(β _{PS(Whittle)}), for any single time series considered.
In Fig. 28a we give the random error (standard deviation of the Whittle estimator, \(\sigma \)(β _{PS(Whittle)}), also called the standard error of the estimator, see Sect. 7.1) as a function of the long-range persistence of 100 realizations (each) of FGN processes created to have −1.0 ≤ β _{model} ≤ 4.0 and four time series lengths N = 256, 1,024, 4,096, and 16,384. In Fig. 28b we give \(\sigma \)(β _{PS(Whittle)}) of 100 realizations (each) of four probability distributions (FGN, FLNN c _{v} = 0.5, FLNN c _{v} = 1.0, FLevyN a = 1.5) with β _{model} = 0.5, as a function of time series length N = 64 to 65,536. For both panels and each set of process parameters in Fig. 28, we also give the corresponding lower bound on the estimator variance, the Cramér–Rao bound (CRB) (Sect. 6.4, Eq. 28), for each set of 100 realizations. Both y-axes in Fig. 28 are logarithmic, as is the x-axis of Fig. 28b.
In Fig. 28a we observe that the random error of the Whittle estimator, \(\sigma \)(β _{PS(Whittle)}), slightly increases as a function of persistence strength, β _{model}, for −1.0 < β _{model} < 2.8. In contrast, the CRB slightly increases as a function of β _{model} over the range −1.0 < β _{model} < 0.0, then decreases by an order of magnitude over the range 0.0 < β _{model} < 2.0, after which it remains constant. The general shapes of the four curves for the CRB and the four curves for \(\sigma \)(β _{PS(Whittle)}) do not depend on the length of the time series, N. The CRB is systematically smaller than the random error, \(\sigma \)(β _{PS(Whittle)}), and the ratio CRB/\(\sigma \)(β _{PS(Whittle)}) changes significantly for different ranges of β _{model}. Therefore, knowing only the CRB value gives no knowledge of the magnitude of the random error, and we do not recommend using the CRB as an estimate of the random error.
All eight curves in Fig. 28b show a power-law dependence on the time series length N (scaling as N ^{−0.5}). The Cramér–Rao bound is a lower bound for the random error and depends very little on the one-point probability distribution of the fractional noise or motion. We see here that the Cramér–Rao bounds are systematically smaller than the standard errors, in other words the standard deviations of β _{PS(Whittle)} calculated over many realizations, \(\sigma \)(β _{PS(Whittle)}). The mean standard error is smallest for the fractional Levy noises and largest for the fractional lognormal noises, with the largest \(\sigma \)(β _{PS(Whittle)}) for the higher coefficient of variation. The ratio CRB/\(\sigma \)(β _{PS(Whittle)}) changes with the one-point probability distribution but not with the time series length N.
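To illustrate how a Whittle-type estimate can be computed for a single time series, the following sketch assumes a pure power-law spectrum S(f) = C f^{−β}, profiles out the scale C analytically, and minimises the concentrated Whittle objective over a grid of β values. This is a simplification for illustration; our actual implementation and the CRB computation follow Sect. 6.4.

```python
import numpy as np

def whittle_beta(x, betas=np.arange(-1.0, 4.01, 0.01)):
    """Simplified Whittle estimator, minimal sketch.

    Assumes S(f) = C * f**(-beta). Profiling out C gives the
    concentrated objective
        R(beta) = log(mean(I_j * f_j**beta)) - beta * mean(log f_j),
    which is minimised over a grid of candidate beta values.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / N   # periodogram
    f = np.fft.rfftfreq(N)
    I, f = I[1:], f[1:]                              # drop zero frequency
    logf = np.log(f)
    best_beta, best_obj = None, np.inf
    for b in betas:
        obj = np.log(np.mean(I * f ** b)) - b * logf.mean()
        if obj < best_obj:
            best_obj, best_beta = obj, b
    return best_beta

# White Gaussian noise has a flat spectrum, so beta should be near 0,
# with random error of roughly 2/sqrt(N) as discussed in Sect. 8.2.
rng = np.random.default_rng(3)
beta = whittle_beta(rng.standard_normal(4096))
```

In a full implementation, the curvature of the objective at its minimum also yields the per-series standard error \(\sigma \)(β _{PS(Whittle)}) discussed above.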
If the performance of these power spectral techniques is considered for time series with N = 4,096 elements, we find (Tables 4, 5): (1)
Power spectral techniques are free of bias for fractional noises and motions with symmetric distributions, but they exhibit a significant bias for time series with strongly asymmetric probability distributions.
 (2)
The random error (proportional to the confidence interval sizes) is rather small: in the case of symmetrically distributed time series, 95 % of the β _{PS} values occupy an interval of length 0.2 or smaller.
For fractional noises and motions with an asymmetric probability distribution, power spectral techniques are less certain: the more asymmetric the time series, the more uncertain the estimated strength of long-range persistence. Spectral techniques that estimate the strength of long-range persistence are common in statistical time series analysis, particularly in the econometrics and physics communities, and their performance has been intensively investigated (Schepers et al. 1992; Gallant et al. 1994; Taqqu et al. 1995; Mehrabi et al. 1997; Wen and Sinding-Larsen 1997; Pilgram and Kaplan 1998; Taqqu and Teverovsky 1998; Heneghan and McDarby 2000; Velasco 2000; Weron 2001; Eke et al. 2002; Delignieres et al. 2006; Stadnytska and Werner 2006; Boutahar et al. 2007; Mielniczuk and Wojdyłło 2007; Boutahar 2009; Faÿ et al. 2009; Stroe-Kunold et al. 2009; see also Tables 6 and 7). The most common approach in the literature is to fit models by maximum likelihood estimation (MLE) to time series that are characterized by short- and long-range dependence. In most cases, the considered time series have a Gaussian one-point probability distribution.
 (e)
Power spectral analysis brief conclusions: Power spectral techniques have small biases and small random errors (tight confidence intervals).
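A minimal sketch of the log-periodogram (‘best-fit’) regression underlying β _{PS(bestfit)} follows. It is illustrative only; any binning or windowing choices of the actual implementation are omitted.

```python
import numpy as np

def bestfit_beta(x):
    """Log-periodogram ('best-fit') regression, minimal sketch.

    Computes the periodogram and fits a straight line to
    log S(f) versus log f; the negative slope is the estimated
    persistence strength beta.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    S = np.abs(np.fft.rfft(x - x.mean())) ** 2 / N   # periodogram
    f = np.fft.rfftfreq(N)
    S, f = S[1:], f[1:]                              # exclude f = 0
    slope, _ = np.polyfit(np.log(f), np.log(S), 1)
    return -slope

# White Gaussian noise: the fitted beta should be close to 0.
rng = np.random.default_rng(4)
beta = bestfit_beta(rng.standard_normal(4096))
```

Because the log of a periodogram ordinate fluctuates with a constant variance, the slope estimate is essentially unbiased for a power-law spectrum, while its random error shrinks with growing N.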
8 Discussion of Overall Performance Test Results
8.1 Overall Interpretation of Performance Test Results
A comparison of the systematic error (bias) of the five techniques (Fig. 29) shows that DFA (Fig. 29c) and the spectral techniques (Fig. 29d,e) have small biases (green cells in the panels) over most of the range of β _{model} considered, that is, for most fractional noises and motions. Large biases for DFA and the spectral techniques (red or purple cells in Fig. 29c,d,e) indicate over- or underestimation of the persistence strengths and occur only for antipersistent fractional lognormal noises (FLNN_{a}, β _{model} < −0.2) and for a minority of highly persistent fractional Levy motions (FLevyN, 1.0 < a < 1.2). In contrast, Hurst rescaled range analysis (Fig. 29a) leads to results with small biases only for fractional noises with 0.0 < β _{model} < 0.8, and semivariogram analysis (Fig. 29b) has small biases only if the persistence strength is in the range 1.2 < β _{model} < 2.8 and the one-point probability distribution does not have too heavy a tail (i.e. FLevyN with a > 1.2). Overall, when examining the five panels in Fig. 29, one can see (green cells) that DFA and the spectral analysis techniques are generally applicable for all β _{model}, whereas rescaled range analysis (with limitations) is appropriate for −1.0 < β _{model} < 1.0, and semivariogram analysis (again, with limitations) is appropriate for 1.0 < β _{model} < 3.0.
If the random errors (σ _{ x }(\({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\))) of the five techniques are compared (Fig. 30), the smallest overall random errors (horizontal bars that are very thin or zero) are found for rescaled range analysis (Fig. 30a), followed by the spectral techniques (Fig. 30d,e), with the Whittle estimator having slightly smaller overall random errors. DFA (Fig. 30c) has overall the largest random error when considering all strengths of persistence (β _{model}) and the variety of probability distributions, and its random error increases gradually as β _{model} increases. In contrast, semivariogram analysis (Fig. 30b) shows the largest variation of random errors of all the techniques, with particularly large values for 1.0 < β _{model} < 3.0.
The overall performance of the techniques is given by the rootmeansquared error, RMSE = ((systematic error [Fig. 29])^{2} + (random error [Fig. 30])^{2})^{0.5} (Eq. 30) which is displayed graphically in Fig. 31. In this figure, the length of the horizontal bar in each panel cell represents RMSE on a scale of 0.0 to 3.0, where (as above) each of the 546 cells in the panel is a combination of process parameters (−1.0 < β _{model} < 4.0; 21 different onepoint probability distribution parameter combinations) for which 100 realizations were produced. To highlight different magnitudes of RMSE, each cell has been coloured, such that green represents ‘low’ values of RMSE (0.0 to 0.1), yellow ‘medium’ values of RMSE (0.1 to 0.5), and red ‘high’ values of RMSE (0.5 to 3.0).
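The RMSE of Eq. 30 can be computed from an ensemble of measured persistence strengths in a few lines (a straightforward sketch; the use of the population standard deviation for the random error is an assumption):

```python
import numpy as np

def rmse(beta_measured, beta_model):
    """Root-mean-squared error of an ensemble of estimates (Eq. 30):
    RMSE = sqrt(bias**2 + random_error**2), where bias is the mean
    deviation of the estimates from beta_model and random_error is
    the ensemble (population) standard deviation."""
    b = np.asarray(beta_measured, dtype=float)
    bias = b.mean() - beta_model
    random_error = b.std()
    return np.sqrt(bias ** 2 + random_error ** 2)
```

With the population standard deviation, this RMSE equals the root of the mean squared deviation of the estimates from β _{model}, which is why it combines the colour-coded systematic (Fig. 29) and random (Fig. 30) errors into the single measure shown in Fig. 31.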
Figure 31 illustrates that the best-fit and Whittle spectral techniques (Fig. 31d,e) generally perform the best (compared to the other three techniques) across a large range of β _{model} and one-point probability types (FLevyN, FGN, and FLNN_{a}), as evidenced by the large ‘green’ regions (i.e. 0.0 ≤ RMSE ≤ 0.1). However, one can also observe for these spectral techniques (Fig. 31d,e, yellow [0.1 < RMSE ≤ 0.5] and red [RMSE > 0.5] cells) that care should be taken for very heavy-tailed fractional noises with large persistence values (FLevyN, 1.0 ≤ a ≤ 1.3, and β _{model} > 2.0), and for fractional lognormal noises (FLNN_{a}) that are antipersistent (β _{model} < 0.0) or weakly persistent (0.0 < β _{model} < 1.0) with c _{v} > 0.8. DFA (Fig. 31c), although applicable in general over all β _{model}, does not perform as well as the spectral analysis techniques (Fig. 31d,e), as evidenced by a large number of yellow cells (0.1 < RMSE ≤ 0.5) and a few red cells (RMSE > 0.5), particularly for FLevyN across most β _{model}. Semivariogram analysis (Fig. 31b) has large RMSE (red cells) for β _{model} ≤ 0.4 and β _{model} ≥ 3.6 (across FLevyN, FGN, and FLNN_{a}), whereas rescaled range analysis (Fig. 31a) has large RMSE (red cells) for β _{model} ≤ −0.6 and β _{model} ≥ 1.6. The other cells for both semivariogram (Fig. 31b) and rescaled range analysis (Fig. 31a) mostly exhibit medium RMSE (yellow cells), except for narrow bands of 0.2 < β _{model} < 0.6 (rescaled range analysis) and 1.2 < β _{model} < 1.6 (semivariogram analysis) where the cells exhibit low RMSE (green cells).
We believe, based on the results shown in Figs. 29, 30, and 31, that power spectral analysis techniques (best-fit and Whittle) are acceptable for most practical applications, as they are almost unbiased and give tight confidence intervals. Furthermore, based on these figures, detrended fluctuation analysis is appropriate for fractional noises and motions with positive persistence and with non-heavy-tailed and near-symmetric one-point probability distributions; it is not appropriate for asymmetric or heavy-tailed distributions. Semivariogram analysis is unbiased for 1.2 < β _{model} < 2.8 and might be used for double-checking results, if needed, for an aggregated series, but the large random errors over parts of the range where results are unbiased need to be considered. We do not recommend the use of Hurst rescaled range analysis, as it is only appropriate either for very long sequences (with more than 10^{5} data points) (Bassingthwaighte and Raymond 1994) or for fractional noises with a strength of long-range persistence close to β _{model} ≈ 0.5.
- [0.12, 0.24] (Gaussian distribution)
- [0.16, 0.27] (lognormal distribution with moderate asymmetry, c _{v} = 0.6, constructed by Box–Cox transform)
- [0.10, 0.34] (Levy distribution with a = 1.5)
For antipersistent noises (β < 0.0), we find a systematic overestimation of the modelled strength of long-range persistence. Rangarajan and Ding (2000) showed that a Box–Cox transform of an antipersistent noise with a symmetric one-point probability distribution does not just change the distribution (to an asymmetric one); the Box–Cox transform also effectively superimposes a white noise on the antipersistent noise, which weakens the antipersistence (i.e. β becomes larger). This implies that, in applications, if antipersistence or weak persistence is identified for an asymmetrically distributed time series, more negative values of long-range persistence might be needed to appropriately model the original time series. In this situation, we recommend applying a complementary Box–Cox transform to force the original time series to be symmetrically distributed. Then, one should consider the strength of long-range persistence for both the original time series and the transformed time series, discussing both in the results. If a given time series (or realization of a process) has a symmetric one-point probability distribution, one can always aggregate the series and analyse the result (see Sects. 3.5 and 3.6).
With regard to lognormally distributed noises and motions, the results of our performance tests are sensitive to the construction technique used (Box–Cox vs. Schreiber–Schmitz). In this sense, our ‘benchmarks’ seem to test the construction of the noises or motions rather than the techniques used to estimate the strength of long-range dependence. Nevertheless, both ways of constructing fractional lognormal noises and motions are commonly used. If a lognormally distributed natural process like river runoff is measured, either the original data (in linear coordinates) can be examined, or the logarithm of the data can be taken. Our simulations show that the strength of long-range dependence can change when going from the original to log-transformed values and vice versa. The Schreiber–Schmitz algorithm creates lognormal noises and motions that have a given power-law dependence of the power spectral density on frequency, whereas the Box–Cox transform creates lognormal noises and motions based on realizations of fractional Gaussian noises and motions with a given β _{model}. The Box–Cox transform will slightly change the power-law dependence (for the FGN) of the power spectral densities on frequency, leading to values of β _{PS} that are systematically (slightly) different from β _{model}.
8.2 The Use of Confidence Interval Ranges in Determining Long-Range Persistence
From an applied point of view, it is important to discuss the size of the uncertainties (both systematic and random errors) of the estimated strength of long-range persistence. If a Gaussian-distributed time series with N data points is given that is expected to be self-affine, then the power spectral techniques have a negligible systematic error (bias) and a random error (σ _{ x }(β _{PS})) of approximately 2 N ^{−0.5}. If we take as an actual example power spectral analysis (best-fit) applied to 100 realizations of a fractional Gaussian noise with β _{model} = 0.2 and three lengths N = 32,768, 4,096, and 256, the average result (supplementary material) of the applied technique is, respectively, \( \bar{\beta }_{{\rm PS}(\text{bestfit})} = 0.201,\,\,0.192,\;0.204 \), giving biases of 0.001, 0.008, and 0.004. The random errors for β _{PS(bestfit)} at N = 32,768, 4,096, and 256 are, respectively, σ _{ x }(β _{PS(bestfit)}) = 0.011, 0.030, and 0.139, compared to the theoretical random errors of 2 N ^{−0.5} = 0.011, 0.031, and 0.125. The actual and theoretical random errors are closer as N gets larger: for N = 32,768 the difference is negligible, for N = 4,096 it is 3 %, and for N = 256 it is 11 %. For power spectral analysis (Whittle), this same behaviour of the random error (2 N ^{−0.5}) can be seen in Fig. 28b, where there is a power-law dependence of σ _{ x }(β _{PS}) on time series length N (dashed lines, blue triangle).
Confidence intervals (Sect. 7.2) are constructed as \( \bar{\beta }_{\text{PS}} \pm 1.96\;\sigma_{x} \left( {\beta_{\text{PS}} } \right) \). Therefore, if we take the example given above for 100 realizations of a FGN constructed to have β _{model} = 0.2 and N = 32,768, the 95 % confidence interval is \( \bar{\beta }_{{\text{PS}}({\text{bestfit}})} \pm 1.96\;\sigma_{x}(\beta_{{\text{PS}}({\text{bestfit}})}) = 0.201 \pm (1.96 \times 0.011),\) giving (within the 95 % confidence interval) 0.179 < β _{PS(bestfit)} < 0.223. If we do the same for the two other lengths, then for N = 4,096, 0.132 < β _{PS(bestfit)} < 0.252, and for N = 256, −0.074 < β _{PS(bestfit)} < 0.482. The confidence interval sizes grow rapidly as the number of elements N decreases, such that, for N = 256, we are unable to confirm (within the 95 % confidence interval) that long-range persistence is in fact present: the confidence interval contains the value β _{PS} = 0.0. Values of β _{PS} that are close to or at zero are likely to occur for short-term persistent and white (uncorrelated) noises. Thus, if we want to use this analysis technique to show that a time series with N = 256 elements is long-range persistent (and not β = 0.0), the confidence interval must not contain zero, requiring either β _{PS} > 0.25 or β _{PS} < −0.25, where we have used 1.96 × (2 N ^{−0.5}) to derive these limits. In the case of non-symmetric one-point probability distributions, the larger systematic errors (biases) shift the confidence intervals for β _{PS} even more, leading to other (sometimes larger) thresholds for identifying long-range persistence.
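The confidence interval arithmetic above can be summarised in a few lines (a sketch using the approximate random error 2 N^{−0.5} quoted for Gaussian-distributed, self-affine series; for asymmetric distributions the thresholds shift, as noted):

```python
def persistence_ci(beta_ps, N):
    """Approximate 95% confidence interval for a power spectral
    estimate, using the random error sigma ~ 2 * N**-0.5 quoted in
    the text for Gaussian-distributed, self-affine time series."""
    half = 1.96 * 2.0 / (N ** 0.5)
    return beta_ps - half, beta_ps + half

def detects_persistence(beta_ps, N):
    """True if the 95% confidence interval excludes beta = 0
    (i.e. the estimate is distinguishable from white noise)."""
    lo, hi = persistence_ci(beta_ps, N)
    return lo > 0.0 or hi < 0.0
```

For example, with N = 256 the half-width is 1.96 × 2/16 ≈ 0.245, so an estimate of β _{PS} = 0.2 cannot be distinguished from white noise, whereas the same estimate with N = 4,096 can.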
Similar considerations can be made for the other three techniques (\({\beta }_{[{\text{Hu}},\,{\text{Ha}},\,{\text{DFA}}]}\)). Since these techniques are less reliable, the resulting thresholds will be larger, and the two thresholds will not be symmetric with respect to zero because of biases. In such cases, long-range persistence can only be identified if β _{model} has a very high or very low value. In summary, it can become difficult to identify long-range persistence for non-Gaussian, rather short, or imperfect fractional noises or motions.
Another important aspect of our analysis is stationarity, in other words deciding whether a given time series can be appropriately modelled as a fractional noise (β < 1.0) or a fractional motion (β > 1.0). The value β = 1.0 is the persistence strength dividing (weakly) stationary noises from non-stationary motions. For this decision, essentially the same technique can be applied as described above for inferring whether a time series is long-range persistent (β > 0.0) or antipersistent (β < 0.0). However, the analysis is now restricted to the confidence intervals for β _{DFA}, β _{PS(bestfit)}, and β _{PS(Whittle)}; Hurst rescaled range (R/S) and semivariogram analysis cannot be applied because the critical value β = 1.0 is at the edge of applicability for both techniques. For investigating whether a time series is a fractional noise (stationary) or motion (non-stationary), one can check whether any of the three confidence intervals contains β = 1.0 between its lower and upper bounds. If this is the case, the only inference one can make is that the time series is either a noise or a motion, but not specifically one or the other. If all three confidence intervals have an upper bound that is less than β = 1.0, then one can infer that the time series is a fractional noise (and not a motion).
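The noise-versus-motion decision rule just described can be sketched as follows (an illustrative helper; the interval inputs would come from the three techniques named above, i.e. DFA and the two spectral estimators):

```python
def classify_noise_or_motion(intervals):
    """Classify a time series as fractional noise (beta < 1.0) or
    fractional motion (beta > 1.0) from a list of 95% confidence
    intervals (lo, hi). Returns 'noise', 'motion', or 'undecided'
    if any interval straddles the critical value beta = 1.0."""
    if any(lo <= 1.0 <= hi for lo, hi in intervals):
        return 'undecided'
    if all(hi < 1.0 for lo, hi in intervals):
        return 'noise'
    if all(lo > 1.0 for lo, hi in intervals):
        return 'motion'
    return 'undecided'   # the intervals disagree about beta = 1.0
```

As with the persistence threshold above, short or imperfect series widen the intervals and push the outcome towards ‘undecided’.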
9 Benchmark-Based Improvements to the Measured Persistence Strength of a Given Time Series
9.1 Motivation
In the previous sections, we have studied how the different techniques that measure long-range persistence perform for benchmark time series. These time series are realizations of processes modelled to have a given strength of persistence (β _{model}), a prescribed one-point probability distribution, and a fixed number of values N. Our studies have shown that the measured strength of long-range persistence of a given time series realization can deviate from the persistence strength of the processes underlying the benchmark fractional noises and motions due to systematic and random errors of the techniques. Therefore, using these benchmark self-affine time series, we can have a good idea (based on their β _{model}, one-point probability distribution, and N) about the resultant distribution of \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\) for each technique, including any systematic errors (biases) and random errors. To aid a more intuitive discussion in the rest of this section, we will use the subscript ‘measured’ for the estimators of long-range persistence calculated using the different techniques, β _{measured} = \({\beta }_{{[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}]}}\), where, as before, Hu, Ha, DFA, and PS represent the technique applied.
In practice, we are often confronted with a single time series and want to state whether or not this time series is long-range persistent and, if so, how strong this persistence is and how accurately this strength has been measured. As we have seen already, different techniques can be applied for analysing this single time series, with each technique having its own set of systematic and random errors. Thus, the inverse of the problem discussed in the preceding two sections must be solved: the strength of long-range persistence of what would be the best-modelled fractional noise or motion, β _{model}, is sought, based on the time series length N, its one-point probability distribution, and the β _{measured} persistence strength results of the technique applied. From this, assuming that the time series is self-affine, we would like to infer the ‘true’ strength of persistence β _{model} (and corresponding confidence intervals). To explore this further, we will use in Sect. 10 the data sets presented in Fig. 1 as case examples. If they are analysed to derive parameters for models, then the 95 % confidence intervals of the persistence strength β _{model} have to be obtained from the computed β _{measured} and from other parameters of the time series, such as the one-point probability density and the time series length.
As discussed in Sect. 7.1, the variable β _{model} is a measure of the process that we have designed to have a given strength of long-range persistence (and one-point probability distribution); the time series (our benchmarks) are realizations of that process. These benchmark time series have a distribution of β _{measured}, with systematic and random errors within that ensemble of time series, due to (1) finite-size effects of the time series length N and (2) inherent biases in the construction process itself (e.g., for strongly asymmetric one-point probability distributions). These construction biases are difficult to document, as most research to date addresses biases in the techniques used to estimate long-range persistence, not in the construction. For symmetric one-point probability distributions (Gaussian, Lévy), each realization of the process, if N were very large (i.e. approaching infinity), would have a strength of long-range persistence equal to β _{model}, in other words equal to the value for which the process was designed (e.g., Samorodnitsky and Taqqu 1994; Chechkin and Gonchar 2000; Enriquez 2004).
One can never know the ‘true’ strength of long-range persistence β of a realization of a process. Therefore, an estimate of β is introduced based on a given technique, which itself has a set of systematic and random errors. The result of each technique applied to a synthetic or a real time series is β _{measured}, which therefore includes any systematic errors of both the realizations and the technique itself. Given a time series with a given length N and one-point probability distribution, we can apply a given technique, which gives β _{measured}. If we believe that long-range persistence is present, we can improve on our estimate of β _{measured} by using (1) the ensemble of benchmark time series performance results from Sect. 7 of this paper and (2) our knowledge of the number of values N and the one-point probability distribution of the given time series. This benchmark-based improvement, which we now explore, uses the results of our performance tests, all of which are based on ensembles of time series that are realizations of processes designed to have a given β _{model}. The rest of this section is organized as follows. We first provide an analytical framework for our benchmark-based improvement of an estimator (Sect. 9.2), followed by a derivation of the conditional probability distribution for β _{model} given β _{measured} (Sect. 9.3). This is followed by some of the practical issues to consider when calculating benchmark-based improved estimators (Sect. 9.4) and a description of supplementary material for the user to do their own benchmark-based improved estimations (Sect. 9.5). We conclude by giving benchmark-based improved estimators for some example time series (Sect. 9.6).
9.2 Benchmark-Based Improvement of Estimators
In order to solve the inverse problem described in Sect. 9.1, we apply a technique from Bayesian statistics (see Gelman et al. 1995). This technique incorporates the performance, that is, the systematic and random errors, of the particular technique, as discussed in Sect. 7 (see Figs. 21, 22, 23, 24, 25).
In practice, performing the procedure as schematically illustrated in Fig. 34 (i.e. with a two-dimensional histogram) is doable, but requires a sufficiently small bin size for β _{model} and many realizations, such that an interpolation can be made in both directions. Therefore, we would like to derive an equation for \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} \mid \beta_{\text{measured}} } \right) \) and, from this, derive \( \beta_{\text{measured}}^{*} \), a benchmark-based improvement to a given β _{measured}. We do this in the next section.
9.3 Deriving the Conditional Probability Distribution for \( {\boldsymbol{\beta}}_{{\mathbf{model}}} \) Given β _{measured}

This derivation rests on two assumptions:
 (1)
For fixed β _{model}, the distribution \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{measured}}} \mid \beta_{\text{model}} } \right) \) can be approximated by a Gaussian distribution.
 (2)
The mean value of \( P\left( {{\boldsymbol{\beta}}_{{\mathbf{measured}}} \mid \beta_{\text{model}} } \right) \) grows monotonically as a function of β _{model}.
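The resulting posterior can be sketched numerically. The sketch below is illustrative Python (the paper's supplementary material uses R and Excel), assumes a flat prior over β _{model}, and uses invented bias and spread curves in place of the paper's tabulated benchmark results:

```python
import numpy as np

# Illustrative benchmark curves (assumptions, not the paper's tabulated
# values): a mildly biased mean and a slowly varying standard deviation of
# beta_measured as a function of beta_model, on a fine grid.
d = 0.01
beta_model = np.arange(-1.0, 4.0 + d / 2, d)
mean_measured = 0.95 * beta_model + 0.02          # assumed bias curve
sd_measured = 0.15 + 0.02 * np.abs(beta_model)    # assumed spread curve

beta_measured = 0.75  # a user-measured persistence strength

# Gaussian likelihood P(beta_measured | beta_model), then normalize over the
# beta_model grid (flat prior) to obtain P(beta_model | beta_measured)
like = np.exp(-0.5 * ((beta_measured - mean_measured) / sd_measured) ** 2) / sd_measured
posterior = like / (like.sum() * d)

# Benchmark-based improved estimator (posterior mean) and 95 % interval
# from the 2.5 and 97.5 percentiles of the posterior
beta_star = (beta_model * posterior).sum() * d
cdf = np.cumsum(posterior) * d
lo, hi = beta_model[np.searchsorted(cdf, [0.025, 0.975])]
```

Because the assumed mean curve is slightly biased, the posterior mean lands a little above the measured 0.75; with the paper's real benchmark tables, the same normalization step applies unchanged.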
9.4 Practical Issues When Calculating the Benchmark-Based Improved Estimator \( \beta_{\text{measured}}^{*} \)
 (A)
For the time series of interest, determine its one-point probability distribution and note its time series length, N.
 (B)
Measure the strength of long-range dependence of the time series, β _{measured}, using a specific technique [Hu, Ha, DFA, PS].
 (C)
Construct benchmark fractional noises and motions which are realizations of processes with different strengths of long-range persistence, β _{model}, but with length N and one-point probability distributions equal to those of the analysed time series. We have provided (supplementary material) files with fractional noises and motions drawn from 126 sets of parameters, along with an R program to create these and other synthetic noises and motions (see Sect. 4.3 for further description).
 (D)
Use the fractional noises and motions constructed in (C) and the technique used in (B) to determine numerically \( \bar{\boldsymbol{\beta} }_{{\mathbf{measured}} \mid {\mathbf{model}}} \) and \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{{\mathbf{measured}} \mid {\mathbf{model}}} }}^{2} \) for a range of β _{model} from β _{min} to β _{max}, with a step size for successive β _{model} small enough that \( \bar{\boldsymbol{\beta} }_{{\mathbf{measured}} \mid {\mathbf{model}}} \) and \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{{\mathbf{measured}} \mid {\mathbf{model}}} }}^{2} \) are sufficiently smooth. Interpolation within the step size chosen (e.g., linear, spline) might be necessary. We have provided these performance measures (supplementary material) for fractional noises and motions with about 6,500 different sets of parameters (see Sect. 7.3 for further description).
 (E)
Apply Eq. 36 to determine the ‘posterior’ of the long-range persistence strength, \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} \mid \beta_{\text{measured}} } \right) \).
 (F)
Determine the benchmark-based improved estimator for the time series, \( \beta_{\text{measured}}^{*} \), from the mean of the distribution obtained in (E), and its 95 % confidence intervals from the 2.5 and 97.5 percentiles of that distribution.
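Steps (C) and (D) can be sketched in Python for the Gaussian case, with a simple Fourier-filtering generator and a log-periodogram (best-fit) estimator standing in for the paper's Schreiber–Schmitz construction; the function names and the deliberately small ensemble are ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def fractional_noise(beta, n, rng):
    """Gaussian self-affine series via Fourier filtering: S(f) ~ f**(-beta)."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)          # zero out the f = 0 component
    z = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    return np.fft.irfft(amp * z, n=n)

def beta_ps(x):
    """Best-fit (log-periodogram regression) estimate of beta."""
    f = np.fft.rfftfreq(x.size)[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    return -np.polyfit(np.log(f), np.log(p), 1)[0]

# Steps (C)-(D): for each beta_model, measure an ensemble of realizations and
# tabulate the mean and standard deviation of beta_measured (50 realizations
# of length 512 here for illustration; the paper uses 100 realizations).
n, n_real = 512, 50
table = {}
for bm in np.arange(0.0, 2.1, 0.5):
    est = [beta_ps(fractional_noise(bm, n, rng)) for _ in range(n_real)]
    table[round(bm, 1)] = (np.mean(est), np.std(est))
```

The resulting `table` plays the role of \( \bar{\beta}_{\text{measured}\mid\text{model}} \) and its variance on a coarse β _{model} grid; for non-Gaussian one-point distributions the generator would have to be replaced by the Schreiber–Schmitz algorithm, as in the supplementary R program.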
In the case of unbiased techniques, we find \( \bar{\boldsymbol{\beta} }_{{\mathbf{measured}} \mid {\mathbf{model}}}={\boldsymbol{\beta}}_{\mathbf{model}}.\) If, in addition, the variance \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{{\mathbf{measured}} \mid {\mathbf{model}}} }}^{2} \) does not depend on β _{model}, then \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{{\mathbf{measured}} \mid {\mathbf{model}}} }}^{2} \) = σ ^{2}, where σ ^{2} is now a constant. An example of an unbiased technique whose variance does not depend on β _{model} is power spectral analysis applied to time series with symmetric one-point probability distributions. For this case, the distribution defined in Eq. (36) simplifies to a Gaussian distribution in β _{model} with mean β _{measured} and variance σ ^{2}, giving \( P\left({\boldsymbol{\beta}}_{{\mathbf{model}}} \mid \beta_{\text{measured}}\right)\sim{\text{Gaussian}}\left(\beta_{\text{measured}} ,{\sigma}^{2}\right).\) This implies, for this case (Eq. 37), that the benchmark-based improved estimator \( {\boldsymbol{\beta}}_{\mathbf{measured}}^{*} = \beta_{\text{measured}} . \) In contrast, for power spectral analysis applied to time series with asymmetric one-point probability distributions, and for the three other techniques considered in this paper applied to time series with either symmetric or asymmetric one-point probability distributions, either the technique is biased or the variance \( \boldsymbol{\sigma}_{{{\boldsymbol{\beta}}_{{\mathbf{measured}} \mid {\mathbf{model}}}}}^{2} \) changes as a function of β _{model}. In these cases, the corresponding distributions \( P\left( {{\boldsymbol{\beta}}_{{{\mathbf{model}}}} \mid \beta_{\text{measured}} } \right) \), as defined in Eq. (36), are asymmetric, and any confidence intervals (2.5 and 97.5 % of the probability distribution) are likewise asymmetric with respect to the mean of the probability distribution, \( \beta_{\text{measured}}^{*} \).
9.5 Benchmark-Based Improved Estimators: Supplementary Material Description
We have provided (supplementary material) an Excel spreadsheet which allows a user to determine conditional probability distributions based on a user-measured β _{measured} for a time series and the benchmark performance results discussed in this paper. In Fig. 35 we show three example screenshots of the supplementary material Excel spreadsheet.
The second sheet ‘InterpolSheet’ (Fig. 35b) allows the user to input in the yellow box the user-measured β _{measured} for their specific time series and then, based on the closest match of their time series to the ‘PerfTestResults’ sheet parameters (one-point probability distribution type, number of values N, and technique used), to input the mean and standard deviation of the benchmark results for −1.0 < β _{model} < 4.0. In this example, it is assumed the user has a time series with the parameters given for Fig. 35a (FLNN_{a}, c _{v} = 0.8, N = 512), has applied power spectral analysis (best-fit), and has a user-measured value of β _{measured} = 0.75. The spreadsheet automatically refines the performance test results from step size Δβ _{model} = 0.2 to Δβ _{model} = 0.01 using linear interpolation, and then calculates β _{measured}^{*}, the benchmark-based improvement to the user-measured value, along with the 97.5 and 2.5 percentiles (i.e. the 95 % confidence intervals).
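The ‘InterpolSheet’ refinement step amounts to ordinary linear interpolation of the benchmark mean and standard deviation from the coarse Δβ _{model} = 0.2 grid to a 0.01 grid. A Python sketch, with placeholder coarse-grid values standing in for the ‘PerfTestResults’ numbers:

```python
import numpy as np

# Hypothetical benchmark results on the coarse grid (step 0.2, as in the
# 'PerfTestResults' sheet); the values below are illustrative placeholders.
coarse = np.round(np.arange(-1.0, 4.0 + 1e-9, 0.2), 1)
coarse_mean = 0.95 * coarse + 0.02        # assumed mean of beta_measured
coarse_sd = np.full(coarse.size, 0.20)    # assumed standard deviation

# 'InterpolSheet' step: refine the grid from step 0.2 to step 0.01
fine = np.round(np.arange(-1.0, 4.0 + 1e-9, 0.01), 2)
fine_mean = np.interp(fine, coarse, coarse_mean)   # linear interpolation
fine_sd = np.interp(fine, coarse, coarse_sd)
```

The posterior calculation for β _{measured}^{*} and its percentiles then proceeds on the refined `fine_mean` and `fine_sd` grids.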
The third sheet ‘CalibratedProbChart’ (Fig. 35c) shows the calibrated probability distribution of β _{model} conditioned on the user-measured value of β (the measured strength of long-range persistence) and the benchmark time series, \( P\left( {{\boldsymbol{\beta} }_{{\mathbf{model}}} \mid \beta_{\text{measured}} = 0.75} \right) \), showing graphically the mean of the distribution (this gives the value for β _{measured}^{*}) and the 97.5 and 2.5 percentiles of that distribution.
9.6 Benchmark-Based Improved Estimators for Example Time Series
Now we come back to the example of fractional lognormal noises discussed in Sect. 5, presented and pre-analysed in Fig. 14, and the properties of the corresponding \( \beta_{\text{measured}} = \beta_{\left[{\text{Hu,}}\,{\text{Ha,}}\,{\text{DFA,}}\,{\text{PS}}({\text{bestfit}}),\,{\text{PS}}({\text {Whittle}}) \right]} \) presented in Figs. 21, 22, 23, 24, 25 and Tables 4, 5. Take, for example, a time series with N = 1,024 data points whose one-point probability distribution is lognormal with a coefficient of variation of c _{v} = 0.5, created to have β _{model} = 1.0. The four functions—rescaled range, detrended fluctuation function, semivariogram, and power spectral density—each exhibit a power-law dependence on the segment length, lag, or frequency. In other words, the analyses indicate long-range persistence. The corresponding power-law exponents are related to the strength of long-range persistence as described in Sects. 5 and 6 and given in Table 3. The measured strengths of long-range persistence are β _{Hu} = 0.78, β _{Ha} = 1.34, β _{DFA} = 0.99, β _{PS(bestfit)} = 0.99, and β _{PS(Whittle)} = 0.98. We now apply the scheme in Sect. 9.4 to obtain the five calibrated distributions, \( P\left( { {\boldsymbol{\beta}}_{{\mathbf{model}}}} \mid {\beta_{\text{measured}}} \right) \), conditioned on the five β _{measured} values for each technique (see Fig. 34 for an illustration).

We will now apply our benchmark-based improved estimators in the context of three geophysical examples.
10 Applications: Strength of Long-Range Persistence of Three Geophysical Records
We now return to the three data series presented in Fig. 1 and apply the techniques explored in this paper to investigate the long-range persistence properties of the underlying processes.
Table 8 Results of five long-range persistence techniques^{a} applied to the three environmental data series presented in Fig. 1. Shown are the computed persistence strengths for the five techniques and the corresponding benchmark-based improved estimates with 95 % confidence intervals
Variable name 
Proxy for palaeotemperature, δ^{18}O 
Elkhorn river discharge 
Auroral electrojet (AE) index 
Differenced AE index 

x _{temp} 
x _{discharge} 
x _{AE} 
Δx _{AE}  
Relevant figures 
Fig. 1a (time series) Fig. 2a (lagged plot) Fig. 37 (results) 
Fig. 1b (time series) Fig. 2b (lagged plot) Fig. 38 (results) 
Fig. 1c (time series) Fig. 2c (lagged plot) Fig. 39 (results) 
Fig. 1d (time series) Fig. 2d (lagged plot) 
Number of data points, N 
500 
26,662 
1,440 
1,440 
Sampling time, Δ 
20 years 
1 day 
1 min 
1 min 
One-point probability distribution (and its estimated parameters) 
Gaussian μ = –34.9 ‰ \(\sigma \) = 0.44 ‰ 
Lognormal μ = 38.7 m^{3} s^{−1} \(\sigma \) = 64.9 m^{3} s^{−1} c _{v} = 1.68 
Lévy a = 1.40  
Hurst rescaled range (R/S) analysis (β _{Hu}) 
β _{Hu} = 0.42 \( \beta_{\text{Hu}}^{*} \) = 0.37 0.08 < \( \beta_{\text{Hu}}^{*} \) < 0.65 
β _{Hu} = 0.66 \( \beta_{\text{Hu}}^{*} \) = 0.78 0.46 < \( \beta_{\text{Hu}}^{*} \) < 1.07 (1 year < l < 24 years) 
β _{Hu} = 1.02 \( \beta_{\text{Hu}}^{*} \) = 0.92 0.54 < \( \beta_{\text{Hu}}^{*} \) < 1.58 
β _{Hu} = 0.12 \( \beta_{\text{Hu}}^{*} \) = 0.83 −0.20 < \( \beta_{\text{Hu}}^{*} \) < 1.83 
Semivariogram analysis (β _{Ha}) 
β _{Ha} = 1.11 \( \beta_{\text{Ha}}^{*} \) = 0.66 0.31 < \( \beta_{\text{Ha}}^{*} \) < 1.17 
β _{Ha} = 1.03 \( \beta_{\text{Ha}}^{*} \) = 0.65 −0.76 < \( \beta_{\text{Ha}}^{*} \) < 1.46 (1 year < l < 24 years) 
β _{Ha} = 2.18 \( \beta_{\text{Ha}}^{*} \) = 2.17 1.82 < \( \beta_{\text{Ha}}^{*} \) < 2.53 (1 min < l < 60 min) 
β _{Ha} = 1.01 \( \beta_{\text{Ha}}^{*} \) = 0.11 −0.98 < \( \beta_{\text{Ha}}^{*} \) < 0.79 
Detrended fluctuation analysis (β _{DFA}) 
β _{DFA} = 0.43 \( \beta_{\text{DFA}}^{*} \) = 0.47 0.23 < \( \beta_{\text{DFA}}^{*} \) < 0.73 
β _{DFA} = 0.40 \( \beta_{\text{DFA}}^{*} \) = 0.40 0.01 < \( \beta_{\text{DFA}}^{*} \) < 0.84 (1 year < l < 24 years) 
β _{DFA} = 2.01 \( \beta_{\text{DFA}}^{*} \) = 2.04 1.78 < \( \beta_{\text{DFA}}^{*} \) < 2.28 
β _{DFA} = 0.13 \( \beta_{\text{DFA}}^{*} = 0.13\) −0.16 < \( \beta_{\text{DFA}}^{*} \) < 0.39 
Power spectral analysis^{b} (log-periodogram regression) (β _{PS(bestfit)}) 
\( \beta_{\text{PS(bestfit)}} \) = 0.46 \( \beta_{\text{PS(bestfit)}}^{*} \) = 0.46 0.26 < \( \beta_{\text{PS(bestfit)}}^{*} \) < 0.65 
\( \beta_{\text{PS(bestfit)}} \) = 0.60 \( \beta_{\text{PS(bestfit)}}^{*} \) = 0.69 0.26 < \( \beta_{\text{PS(bestfit)}}^{*} \) < 1.10 (f < 1 year^{−1}) 
\( \beta_{\text{PS(bestfit)}} \) = 1.92 \( \beta_{\text{PS(bestfit)}}^{*} \) = 1.92 1.82 < \( \beta_{\text{PS(bestfit)}}^{*} \) < 2.00 
\( \beta_{\text{PS(bestfit)}} \) = 0.11 \( \beta_{\text{PS(bestfit)}}^{*} \) = 0.12 0.01 < \( \beta_{\text{PS(bestfit)}}^{*} \) < 0.20 
Power spectral analysis^{b} (Whittle estimator) (β _{PS(Whittle)}) 
\( \beta_{\text{PS(Whittle)}} \) = 0.54 \( \beta_{\text{PS(Whittle)}}^{*} \) = 0.53 0.38 < \( \beta_{\text{PS(Whittle)}}^{*} \) < 0.68 
\( \beta_{\text{PS(Whittle)}} \) = 0.71 \( \beta_{\text{PS(Whittle)}}^{*} \) = 0.81 0.43 < \( \beta_{\text{PS(Whittle)}}^{*} \) < 1.16 (f < 1 year^{−1}) 
\( \beta_{\text{PS(Whittle)}} \) = 1.92 \( \beta_{\text{PS(Whittle)}}^{*} \) = 1.92 1.84 < \( \beta_{\text{PS(Whittle)}}^{*} \) < 1.99 
\( \beta_{\text{PS(Whittle)}} \) = 0.05 \( \beta_{\text{PS(Whittle)}}^{*} \) = 0.06 −0.03 < \( \beta_{\text{PS(Whittle)}}^{*} \) < 0.12 
The benchmark-based improved values of the remaining techniques (not considering confidence intervals) lie in the interval \( 0.37 < \beta_{[{\text{Hu}},\,{\text{Ha}},\,{\text{PS}}({\text{bestfit}}),\, {\text{PS}}({\text{Whittle}})]}^{*} < 0.47. \) The corresponding 95 % confidence intervals for each technique overlap, but they differ in total size, ranging from 0.30 for the Whittle estimator (95 % confidence intervals: \( 0.38 < \beta_{\text{PS(Whittle)}}^{*} < 0.68 \)) to 0.57 for rescaled range analysis (\( 0.08 < \beta_{\text{Hu}}^{*} < 0.65 \)). Since none of these confidence intervals contains β = 0.0, long-range persistence is qualitatively confirmed. Another important aspect of our analysis is stationarity, that is, whether our time series can be modelled as a fractional noise (β < 1.0) or a fractional motion (β > 1.0). As explained in Sect. 8.2, we have to determine whether the values in the confidence intervals just discussed are all smaller or all larger than β = 1.0. We find that these confidence intervals are covered by the interval [0.0, 1.0]. Therefore, we can conclude that the palaeotemperature series can be appropriately modelled by a fractional noise (i.e. β < 1.0).
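The interpretation logic used here (persistence if the 95 % confidence interval excludes β = 0.0; noise vs. motion decided against β = 1.0) can be captured in a small helper. A Python sketch, with the helper name ours:

```python
def interpret_ci(lo, hi):
    """Classify a 95 % confidence interval (lo, hi) for the persistence
    strength beta: long-range persistence is indicated when the interval
    excludes 0.0; noise vs. motion is decided against beta = 1.0."""
    persistent = not (lo <= 0.0 <= hi)
    if hi < 1.0:
        kind = "fractional noise (beta < 1)"
    elif lo > 1.0:
        kind = "fractional motion (beta > 1)"
    else:
        kind = "undecided (interval straddles beta = 1)"
    return persistent, kind

# Palaeotemperature example, Whittle estimator: 0.38 < beta* < 0.68
print(interpret_ci(0.38, 0.68))  # → (True, 'fractional noise (beta < 1)')
```

Applied to the differenced AE index interval (−0.03, 0.20), the same helper reports no statistically confirmed persistence, consistent with the discussion below.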
For quantifying the strength of self-affine long-range persistence, one interpretation would be to take the most certain estimator (based on the narrowest 95 % confidence interval) \( \beta_{{{\text{PS}}\left( {\text{Whittle}} \right)}}^{*} \), which says that with a probability of 95 % the persistence strength β ranges between 0.38 and 0.68. Another interpretation would be that, based on the results in this paper, the DFA, PS(bestfit), and PS(Whittle) techniques are much more robust (small systematic and random errors) for normally distributed noises and motions than rescaled range (R/S) analysis, and thus to state that this palaeotemperature series exhibits long-range persistence with a self-affine long-range persistence strength \( \beta_{\left[ {\text{DFA,PS}}({\text{bestfit}}),{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.46 and 0.53, with combined 95 % confidence intervals for \( \beta_{\left[ {\text{DFA,PS}}({\text{bestfit}}),{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.23 and 0.73. In other words, there is weak positive self-affine long-range persistence.
The persistence strengths for the low-frequency domain (Table 8) obtained by the benchmark-based improvement techniques (\( \beta_{{\left[ {\text{Hu,\,DFA,\,PS}} \right]}}^{*} \)) range between 0.65 and 0.81. The corresponding 95 % confidence intervals are very wide, ranging from the widest, 0.26 < \( \beta_{\text{PS(bestfit)}}^{*} \) < 1.10, to the ‘narrowest’, \( 0.46 < \beta_{\text{Hu}}^{*} < 1.07 \); however, all of them include a ‘common’ persistence strength interval, \( 0.46 < \beta_{{\left[ {\text{Hu,\,DFA,\,PS}} \right]}}^{*} < 0.84. \) These very uncertain results are caused both by the very asymmetric one-point probability density and by the consideration of very long segments (l > 1.0 year) or, equivalently, very low frequencies. Based on the performance results for realizations of lognormally distributed fractional noises (Sect. 7), we believe that the best estimators are PS(bestfit) and PS(Whittle). If we use the limits of both of these, then we can conclude that this discharge series exhibits self-affine long-range persistence with strength \( \beta_{\left[{\text{PS}}({\text{bestfit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.69 and 0.81, and 95 % confidence intervals for the two combined between 0.26 and 1.16. In other words, there is positive long-range persistence of weak to medium strength. As the 95 % confidence intervals contain the value \( \beta_{\left[ {\text{PS}}({\text{bestfit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) = 1.0, we cannot decide whether our time series is a fractional noise (β < 1.0) or a fractional motion (β > 1.0).
We have modelled both the palaeotemperature and discharge time series as showing positive long-range persistence. For these data types, both short-range and long-range persistent models have been applied by different authors; for example, for both data types, Granger (1980) and Mudelsee (2007) model the underlying processes as aggregations of short-memory processes with different strengths of short memory.
The third data set, the geomagnetic auroral electrojet (AE) index, sampled per minute for 01 February 1978 (Fig. 1c), contains N = 1,440 values. The differenced AE index (\( \Delta x_{\text{AE}} (t) = x_{\text{AE}} (t) - x_{\text{AE}} (t - 1) \)) is approximately Lévy distributed (double-sided power law) with an exponent of a = 1.40 (Fig. 1d). The four functions that characterize the strength of long-range dependence show a power-law scaling, and the corresponding estimated strengths of long-range dependence are, for the AE index (Table 8; Fig. 39): β _{Hu} = 1.02, β _{Ha} = 2.18, β _{DFA} = 2.01, β _{PS(bestfit)} = 1.92, and β _{PS(Whittle)} = 1.92, and for the differenced AE index (Table 8): β _{Hu} = 0.12, β _{Ha} = 1.01, β _{DFA} = 0.13, β _{PS(bestfit)} = 0.11, and β _{PS(Whittle)} = 0.05.
Based on the Sect. 7 performance results for realizations of Lévy-distributed fractional noises, we believe that the best estimators are PS(bestfit) and PS(Whittle). If we use the limits of both of these, then we conclude (Table 8) that the AE index is characterized by \( \beta_{\left[ {\text{PS}}({\text{bestfit}}), {\text{PS}}({\text{Whittle}}) \right]}^{*} = 1.92 \), with 95 % confidence intervals for the two combined between 1.82 and 2.00. In other words, there is strong positive long-range persistence, close to a Lévy Brownian motion. Watkins et al. (2005) have analysed longer series (recordings of an entire year) of the AE index and described it as a fractional Lévy motion with a persistence strength of β = 1.90 (standard error 0.02) and a Lévy distribution (a = 1.92). With respect to the strength of long-range persistence, our results for the AE index are very similar to those of Watkins et al. (2005), and our 95 % confidence intervals for β _{Ha}, β _{DFA}, and β _{PS} do not conflict with a value of β = 1.90.
In order to apply the benchmark-based improvement technique to the differenced AE index, performance tests were run for Lévy-distributed (a = 1.40) fractional noises with N = 1,440 data points. The results for \( \beta_{\left[{\text{Hu}},\,{\text{Ha}},\,{\text{DFA}},\,{\text{PS}}({\text{bestfit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) are given in Table 8. If we use the limits for both PS(bestfit) and PS(Whittle), then we conclude that the differenced AE index is characterized by \( \beta_{\left[ {\text{PS}}({\text{bestfit}}),\,{\text{PS}}({\text{Whittle}}) \right]}^{*} \) between 0.06 and 0.12, with 95 % confidence intervals for the two combined between −0.03 and 0.20. In other words, there is weak positive long-range persistence. This persistence strength is very close to β = 0, and so the differenced AE index can be considered close to a white Lévy noise. We concluded above that the AE index is characterized by \( \beta_{\text{PS}}^{*} = 1.92 \) [95 % confidence: 1.82 to 2.00] and here that the differenced AE index is characterized by \(\beta_{\text{PS}}^{*} = 0.06\, {\text{to}}\, 0.12 \) [95 % confidence: −0.03 to 0.20]. This is not unreasonable, as (Sect. 3.6) the long-range persistence strength of a symmetrically distributed fractional noise or motion is shifted by +2 under aggregation and by −2 under first differencing (this case). The difference between the two adjusted measured strengths of long-range persistence for the original and differenced AE index is slightly smaller than two. We believe that this is caused by nonlinear correlations in the data.
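The shift of the persistence strength by −2 under first differencing can be checked on synthetic data. The sketch below uses a Gaussian (not Lévy) Fourier-filtering generator for simplicity, with a log-periodogram fit restricted to low frequencies, where the differencing filter behaves as f^{2}; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(7)

def synth(beta, n, rng):
    """Gaussian self-affine series with S(f) ~ f**(-beta) (Fourier filtering)."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)
    z = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    return np.fft.irfft(amp * z, n=n)

def beta_ps(x, frac=0.25):
    """Log-periodogram estimate of beta over the lowest fraction of frequencies."""
    f = np.fft.rfftfreq(x.size)[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    k = max(8, int(frac * f.size))
    return -np.polyfit(np.log(f[:k]), np.log(p[:k]), 1)[0]

x = synth(1.9, 4096, rng)       # a motion-like series, beta_model = 1.9
dx = np.diff(x)                 # first difference, as for the AE index
b_x, b_dx = beta_ps(x), beta_ps(dx)
# b_x - b_dx is typically close to, and slightly below, 2: the differencing
# filter only approximates f**2 at finite frequencies.
```

For a Lévy-distributed series with nonlinear correlations, as in the AE index itself, the shift can deviate further from two, which is the behaviour noted above.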
We observe that, for DFA applied to the differenced AE index series, the size of the resultant 95 % confidence interval (\( -0.16 < \beta_{\text{DFA}}^{*} < 0.39 \)) is two to three times that of the spectral techniques \( (0.01 < \beta_{\text{PS(bestfit)}}^{*} < 0.20,\; -0.03 < \beta_{\text{PS(Whittle)}}^{*} < 0.12) \). This confirms the results we presented in Sect. 7 for the analysis of synthetic noises: in the case of fractional Lévy noises, DFA has larger random errors (proportional to the confidence interval sizes) than power spectral techniques.
The three geophysical time series considered here are all equally spaced in time. However, unequally spaced time series are common in geophysics (unequally spaced either through missing data or through events that do not occur at equal time intervals). For an example of a long-range persistence analysis of an unequally spaced time series (the Nile River), see Ghil et al. (2011).
We have considered three very different geophysical time series with different one-point probability distributions: a proxy for palaeotemperature (Gaussian), discharge (lognormal), and the AE index (Lévy). For each, we have shown that the estimated strength of long-range persistence can be more uncertain than one might usually assume. In each case, we have examined these time series with conventional methods commonly used in the literature (Hurst rescaled range analysis, semivariogram analysis, detrended fluctuation analysis, and power spectral analysis), and we have complemented these results with benchmark-based improved estimators, putting the results from each technique into perspective.
11 Summary and Discussion
Our benchmark results for the four analysis techniques can be summarized as follows:
 (1)
Hurst rescaled range analysis is not recommended.
 (2)
Semivariogram analysis is unbiased for 1.2 ≤ β ≤ 2.8, but has large random errors (standard deviations or confidence intervals).
 (3)
Detrended fluctuation analysis is well suited for time series with thin-tailed probability distributions and persistence strengths of β > 0.0.
 (4)
Spectral techniques overall perform the best of the techniques examined here: they have very small systematic errors (i.e. are unbiased) and small random errors (i.e. tight confidence intervals and small standard deviations) for positively persistent noises with a symmetric one-point distribution, and they are slightly biased for noises or motions with an asymmetric one-point probability distribution and for anti-persistent noises.
In order to quantify the most likely strength of persistence for a given time series length and one-point probability distribution, a calibration scheme based on benchmark-based improvement statistics has been proposed. The most useful result of our benchmark-based improvement is realistic confidence intervals for the strength of persistence with respect to the specific properties of the considered time series. These confidence intervals can be used to demonstrate long-range persistence in a time series: if the 95 % confidence interval for the persistence strength β does not contain the value β = 0.0, then the considered series can be interpreted (in a statistical sense) to be long-range persistent.
Another outcome of our investigation is that typical confidence intervals for the strength of long-range persistence are asymmetric with respect to the benchmark-based improved estimator, \( \beta_{\text{measured}}^{*} \). The only exception (i.e. symmetric confidence intervals) corresponds to spectral analysis of time series with symmetric one-point probability distributions.
In this context, we emphasize that for time-domain techniques the standard deviation of the persistence strength cannot be calculated as the regression error of the linear regression (e.g., for log(DFA) vs. log(segment length), log(R/S) vs. log(segment length), and log(semivariogram) vs. log(lag)). This would be possible only if the fluctuations around the averages of the measured functions, \( \overline{{\log(\text{DFA})}}\), \( \overline{{\log({R}/{S})}}\), and \( \overline{{\log(\text {semivariogram})}}\), were independent of the abscissa (log(length) or log(lag)). However, as we characterize highly persistent time series, these fluctuations are themselves persistent and the assumption of independence does not hold.
One aspect of our study revealed limitations of the Schreiber–Schmitz algorithm. It turned out that the Schreiber–Schmitz algorithm can construct fractional noises and motions with symmetric one-point probability distributions and persistence strengths −1.0 ≤ β ≤ 1.0. However, highly asymmetric probability distributions combined with large strengths of persistence (β > 1.0) can lead to resultant time series with a persistence strength that is systematically smaller than the one modelled.
In the literature, the performance of detrended fluctuation analysis and spectral analysis has been benchmarked using synthetic time series with known properties (e.g., Taqqu et al. 1995; Pilgram and Kaplan 1998; Malamud and Turcotte 1999a; Eke et al. 2002; Penzel et al. 2003; Maraun et al. 2004). Our investigations of quantifying long-range persistence in self-affine time series have shown that the systematic errors of the two techniques (DFA and spectral analysis) are comparable, while the random errors of spectral analysis are lower, so that the total root-mean-squared error (RMSE), which takes into account both the systematic and random errors, is also lower for spectral analysis over a broad range of persistence strengths and probability distribution types. However, as the analysed time series might have nonlinear correlations, both DFA and spectral analysis should be applied, as the nonlinear nature of the correlations (even if the time series is also self-affine) can strongly influence and give very different results for the two techniques (see Rangarajan and Ding 2000). Detrended fluctuation analysis is also subject to practical issues, such as the choice of trend function.
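The total RMSE referred to here combines the two error types in quadrature. A minimal Python sketch, with illustrative (assumed) performance numbers for a single β _{model}:

```python
import math

# Total root-mean-squared error combining the systematic error (bias) and
# the random error (standard deviation) of an estimator of beta.
def rmse(bias, sd):
    return math.sqrt(bias ** 2 + sd ** 2)

# Illustrative (assumed) numbers: comparable biases, but a larger random
# error for one technique, give a larger total RMSE for that technique.
spectral = rmse(0.05, 0.12)   # small bias, small spread
dfa = rmse(0.04, 0.25)        # similar bias, larger spread -> larger RMSE
```

This is why comparable systematic errors but smaller random errors translate into a smaller total RMSE for spectral analysis.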
We recommend investigating self-affine long-range persistence of a time series by applying both power spectral and detrended fluctuation analysis. In the case of time series with heavy-tailed or strongly asymmetric one-point probability distributions, benchmark-based improvement statistics for the strength of long-range persistence, based on a large range of model time series simulations, are required. If the considered time series are not robustly self-affine, but also have short-range correlations or periodic signals superimposed, then the proposed framework must be appropriately modified. To aid the reader, extensive supplementary material is provided, which includes (1) fractional noises with different strengths of persistence and one-point probability distributions, along with R programs for producing them, (2) the results of applying different long-range persistence techniques to realizations from over 6,500 different sets of process parameters, (3) an Excel spreadsheet to perform benchmark-based improvements on the measured persistence strength for a given time series, and (4) a PDF file of all figures from this paper in high resolution.
Many time series in the Earth Sciences exhibit long-range persistence. For modelling purposes, it is important to quantify the strength of persistence. In this paper, we have shown that techniques that quantify persistence can have systematic errors (biases) and random errors. Both types of errors depend on the measuring technique and on parameters of the considered time series, such as the one-point probability distribution, the length of the time series, and the strength of self-affine long-range persistence. We have proposed the application of benchmark-based improvement statistics in order to calibrate the measures for quantifying persistence with respect to the specific properties (length, probability distribution, and persistence strength) of the considered time series. Thus, the uncertainties (systematic and random errors) of the persistence measurements obtained can be better contextualized. We give three examples of ‘typical’ geophysical data series—temperature, discharge, and the AE index—and show that the estimated strength of long-range persistence is much more uncertain than might usually be assumed.
Acknowledgments
This research was supported by the European Commission Framework 6 Project 12975 (NEST) ‘Extreme events: Causes and consequences (E2-C2)’, by the Stifterverband für die Deutsche Wissenschaft, and by the Claussen-Simon-Stiftung. A.W. would like to thank Jan Nagler for valuable discussions about the analysis of the three environmental data sets and Maximilian Puelma Touzel for a reading of the manuscript (both from the Max Planck Institute for Dynamics and Self-Organization, Göttingen). We also thank two reviewers, Manfred Mudelsee (Climate Risk Analysis, Hannover, Germany) and Wolf-Gerrit Früh (Heriot-Watt University, Edinburgh, Scotland), both of whom provided in-depth reviews and comments which substantially improved this manuscript.
Supplementary material
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.