1 Introduction

Complex-valued time series arise in many scientific fields of interest, for example digital communication and signal processing (Curtis 1985; Martin 2004), environmental series (Gonella 1972; Lilly and Gascard 2006; Adali et al. 2011) and physiology (Rowe 2005). Modelling and analysis of such series in the complex domain is not only natural, but also convenient. In addition, complex-valued time series models are often able to represent more realistic behaviour in observed physical processes; see, for example, Mandic and Goh (2009) and Sykulski et al. (2017). A particular modelling aspect which has received recent attention is the property of impropriety or noncircularity, describing series whose statistics are not rotationally invariant in the complex plane [for a precise definition, the reader is directed to Sykulski and Percival (2016)]. Such models of improper processes have seen growing interest in the statistics community; see, for example, Schreier and Scharf (2003), Rubin-Delanchy and Walden (2008) and Mohammadi and Plataniotis (2015). Furthermore, complex-valued analysis of real-valued data has been shown to be beneficial in a number of settings; see, for example, Olhede and Walden (2005) and Hamilton et al. (2017). For a comprehensive introduction to complex-valued signals, we refer the reader to Schreier and Scharf (2010); see Adali et al. (2011) and Walden (2013) for recent advances in modelling complex-valued signals.

Recently, there has been increased interest in models for complex-valued stochastic processes exhibiting long-range dependent (i.e. persistent) behaviour. Real-valued modelling frameworks have been extended to complex-valued fractional Brownian motion (fBM) and Matérn processes (see, respectively, Coeurjolly and Porcu 2017; Lilly et al. 2017), as well as to (improper) fractional Gaussian noise (Sykulski and Percival 2016). For these constructions, just as for real-valued processes (Hurst 1951; Mandelbrot and Van Ness 1968), the degree of memory can still be quantified by means of a single parameter, the Hurst exponent (Amblard et al. 2012; Sykulski and Percival 2016). Accurate estimation of the Hurst parameter offers valuable insight into a multitude of modelling and analysis tasks, such as model calibration and prediction (Beran et al. 2013; Rehman and Siddiqi 2009; Knight et al. 2017).

Complex-valued processes, both proper (circular) and improper (noncircular), are relevant across fields such as oceanography and geophysics (Adali et al. 2011; Sykulski et al. 2017), where data are typically difficult to acquire and will frequently suffer from omissions/missingness or be irregularly sampled (see, e.g. Fig. 1). In the next section, we describe datasets arising in environmental science that feature missing observations, which can be examined for long memory with a complex-valued representation. However, we note here that data from other scientific areas may benefit from analysis with our proposed methodology; see Sect. 6 for further discussion.

1.1 Persistence in wind series

Our motivating data example in this article arises from climatology. More specifically, wind series have been analysed extensively in the literature for modelling local weather patterns and the spread of pollutants, as well as global climate dynamics. Long memory in wind series has been established by a number of authors; see, for example, Haslett and Raftery (1989), Chang et al. (2012) and Piacquadio and de la Barra (2014) and references therein. Specifically, Hurst exponent estimates for wind speed series on a range of sampling resolutions, including the 5 min scale considered here, have been shown to be in the range 0.7–0.9, indicating strong long-range dependence; see, for example, Fortuna et al. (2014). Accurate Hurst exponent estimation also underpins reliable forecasting of wind speed, for example to assess future power yields (Haslett and Raftery 1989; Bakker and van den Hurk 2012).

Wind speed analysis in the literature is predominantly performed using real-valued data, such as (magnitude) wind speed series. However, more recently, a number of authors have advocated modelling wind measurements as complex-valued, developing analysis tools which exploit both the speed and directional information of wind time series; see, for example, Goh et al. (2006) and Tanaka and Mandic (2007). These complex-valued modelling approaches have resulted in methodology for improved prediction for series such as those considered in this article (Mandic et al. 2009; Dowell et al. 2014). To our knowledge, however, long memory estimation for stationary time series has so far been performed exclusively on real-valued data. In this article, we analyse the degree of persistence (long memory intensity) exhibited by complex-valued wind measurements, i.e. series which carry both wind speed and direction, using the new complex-valued Hurst estimation methodology we propose here.

Fig. 1

a Real component of the Wind A data series; b imaginary component of the Wind A data series; c real component of the Wind B data series; d imaginary component of the Wind B data series. Red triangles indicate missing data locations. (Color figure online)

The wind series we consider in this article consist of two datasets measured at a 5 min resolution from the Iowa Department of Transportation’s Automated Weather Observing System (AWOS). The (speed and angular) measurements for both datasets are available at http://mesonet.agron.iastate.edu/AWOS/. We firstly analyse data obtained from the Atlantic Municipal Airport (AIO) monitoring site over a period from 15 April 2017 until 30 April 2017. Whilst the sampling interval for the measurements is reported as 5 min, due to a number of reasons, for example faulty recording devices, the data in fact feature missingness which results in a mix of sampling intervals; our first dataset has intervals ranging from 5 to 15 min.

Fig. 2

a Autocorrelation for the real component of the Wind A series from Fig. 1; b the imaginary component of the Wind A series; c the real component of the Wind B series from Fig. 1; and d the imaginary component of the Wind B series (all treated as regularly spaced). Both components of the two datasets show autocorrelation at large lags, indicating persistent behaviour

Since we have both speed and directional information for the dataset, we shall view the series using a complex-valued representation. The real and imaginary components of the series are shown in Fig. 1a, b, together with the locations of the missing data (depicted by triangles). The length of the first series is \(n=3131\) with an overall rate of missingness of 12%. Similar datasets from the Iowa monitoring system have been previously studied in the literature for the non-missing case but not in the context of Hurst estimation; see, for example, Tanaka and Mandic (2007) and Adali et al. (2011).

To explore the potential persistence in wind series, we examine the autocorrelation in the real and imaginary parts of the series, shown in Fig. 2a, b for the Wind A series. For these data, both components show highly significant autocorrelation over a range of lags, indicating long memory.

To further illustrate the potential benefits of a more considered analysis approach for such data, we also investigate a dataset from the same monitoring site but for a different time period, specifically 30 April 2017 until 14 May 2017. For this dataset, the majority of the data are observed at a spacing of 5 min, but a significant proportion have inter-measurement intervals of between 10 and 20 min, resulting from a missingness proportion of \(20\%\); the series is of length \(n=2942\). We have specifically chosen to examine this second time period due to its high degree of missingness. The two components of the complex-valued series can be seen in Fig. 1c, d (triangles indicate missing series values).

Similar observations about potential long memory characteristics can be made for the second complex-valued wind series. In particular, both real and imaginary components of the series show considerable autocorrelation over a large range of lags (Fig. 2c, d).

In addition, plotting the series in the complex plane, we see that both datasets exhibit a rotational behaviour, due to the angular component of the series (Fig. 3). The series are not symmetric, exhibiting clear noncircularity, suggesting a model which allows for impropriety is appropriate for analysis [for an in-depth discussion of these properties, the reader is directed to e.g. Sykulski and Percival (2016)]. This reflects similar observations on impropriety shown for other Iowa AWOS data in Adali et al. (2011), as well as other wind series (Mandic and Goh 2009).

Fig. 3

Scatter plot of real and imaginary series values for a the Wind A data and b the Wind B series shown in Fig. 1. Both series exhibit noncircular (improper) characteristics

1.2 Aim and structure of the paper

A feature of many geophysical series, such as those described in Sect. 1.1, is that both components of a bivariate signal need to be analysed jointly in order to reveal common behaviour. Due to the natural representation in the complex plane, one mathematical solution is to combine the two pieces of information into a single, complex-valued series and analyse its properties (Mandic and Goh 2009). Adopting this approach thus calls for analysis techniques capable of dealing with complex-valued data. Additionally, for many applications the process sampling structure is inherently irregular, as the two components may be measured at irregular times, or the data may be blighted by missingness due to measurement device failures. In the real-valued case, the common practice of preprocessing the data to deal with irregular or missing observations results in inaccuracies in long memory estimation by traditional methods. More specifically, there is now well-documented evidence that preprocessing by imputation or interpolation, as well as data aggregation, leads to overestimation of persistence; see, for example, Beran et al. (2013), Zhang et al. (2014) or Knight et al. (2017).

In practice, to the authors’ best knowledge, the only technique that permits Hurst exponent estimation for complex-valued processes is that of Coeurjolly and Porcu (2017) which tackles the setting of regularly sampled (proper) complex-valued fractional Brownian motion. Motivated by the serious implications of inaccurate estimation in the real-valued setting, in this work we propose the first methodological approach that answers the timely challenge of accurate assessment of long memory persistence for complex-valued processes featuring regular or irregular sampling (including missingness).

At the heart of our methodology is a second generation wavelet-based approach. The reasoning behind this choice is twofold: (1) (classical) wavelets have proved to be very successful in the context of regularly sampled (real-valued) time series with long memory and are considered the ‘right domain’ of analysis (Flandrin 1998), and (2) for irregularly sampled (real-valued) processes, or those featuring missingness, the wavelet lifting algorithm of Knight et al. (2017) has provided a first long memory estimation solution and was shown to yield competitive results even for regularly sampled data.

The main contributions of the work in this paper are as follows. We propose (1) a novel lifting algorithm designed to work on complex-valued data with a potentially irregular sampling structure and (2) a Hurst parameter estimator for complex-valued processes sampled with a regular or irregular structure. Our method will be shown to improve on real-valued Hurst estimation results, including for regularly spaced data.

The remainder of this article is organized as follows. We begin, in Sect. 2, by reviewing (complex-valued) long memory processes and giving an overview of wavelet lifting transforms. Section 3 introduces our novel complex-valued lifting transform, establishes its iterative bases construction and provides theoretical results on its decorrelation properties. Section 4 demonstrates how these properties can be exploited to design our proposed lifting-based Hurst exponent estimation procedure for complex-valued data sampled with irregularity/missingness. Section 5.1 contains a simulation study evaluating the performance of our new method using synthetic data. In Sect. 5.2, we consider the application of our approach to the wind series datasets introduced in Sect. 1.1, discussing the potential consequences of our analysis. Finally, Sect. 6 outlines some avenues of future work and discusses other potential applications.

2 Review of complex-valued processes, long-range dependence and wavelet lifting

2.1 Complex-valued processes

Let us denote a (complex-valued) second-order stationary time series by \(\{ X_t \}\) and its autocovariance function as \(\gamma _{X}(t_i-t_j) = {\mathbb {E}}(X_{t_i}\overline{X}_{t_j})\), under the assumption that \({\mathbb {E}}(X_t) = 0\) and denoting by \(\overline{\,\cdot \,}\) complex conjugation. As the autocovariance function \(\gamma _{X}\) does not completely characterize a complex-valued time series, we also make use of its complementary or pseudo-covariance, \(r_{X}(t_i-t_j)={\mathbb {E}}(X_{t_i}{X}_{t_j})\), again assuming \({\mathbb {E}}(X_t) = 0\). In general, both autocovariances are complex-valued and have the properties of Hermitian symmetry and symmetry, respectively [see, e.g. Sykulski and Percival (2016)].

In many applications, such as radar and communications, processes are assumed to have the property that \(r_{X}(\cdot )=0\) (Neeser and Massey 1993; Picinbono 1994; Adali et al. 2011); such processes are known as proper or circularly symmetric and are completely determined by their autocovariance \(\gamma _X\). In contrast, applications such as those described in Schreier and Scharf (2010), Adali et al. (2011) and Chandna and Walden (2017) deal with improper processes, whereby there exists a lag \(\tau \) such that \(r_{X}(\tau ) \ne 0\). Another often encountered property is that of time reversibility; for complex-valued processes, Didier and Pipiras (2011) have shown that time reversibility results in complex-valued processes with real-valued autocovariances, which is precisely the setting under which Sykulski and Percival (2016) develop their exact simulation method for improper stationary Gaussian processes.
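To make the two covariances concrete, the short R helper below computes their empirical (biased) analogues at a given lag for a zero-mean complex-valued series; the name `ccov` and the lag convention are illustrative choices only and do not correspond to any package referenced in this article.

```r
# Empirical autocovariance gamma(tau) = E( X_{t+tau} * Conj(X_t) ) and
# pseudo-covariance r(tau) = E( X_{t+tau} * X_t ) of a zero-mean complex series z.
# A pseudo-covariance far from zero across lags points towards an improper
# (noncircular) process.
ccov <- function(z, tau) {
  n   <- length(z)
  idx <- seq_len(n - tau)
  c(gamma = mean(z[idx + tau] * Conj(z[idx])),
    r     = mean(z[idx + tau] * z[idx]))
}

# Example: a proper complex white noise has r(tau) close to 0 at every lag.
z <- complex(real = rnorm(1000), imaginary = rnorm(1000)) / sqrt(2)
ccov(z, tau = 0); ccov(z, tau = 5)
```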

2.2 Long memory and its estimation

Classical literature on long-range behaviour of real-valued processes shows that persistence is often characterized by a single parameter, the Hurst exponent H, introduced by Hurst (1951) in hydrology; its estimation is treated across a large body of established literature, for example Beran et al. (2013). Mandelbrot and Van Ness (1968) introduced self-similar and related long memory processes, along with the associated statistical inference. Extensions of fractional Brownian motion, defined as a self-similar Gaussian process with stationary increments, to the complex-valued case are dealt with in, for example, Coeurjolly and Porcu (2017) and Lilly et al. (2017). Put simply, the property of self-similarity amounts to the preservation of the process’ statistical properties under rescaling, thus leading naturally to the definition of the Hurst exponent.

Just as in the real-valued case, a complex-valued self-similar process \(\{ X_t \}\) with parameter H satisfies \(X (at) \overset{d}{=} a^H X(t)\) for \(a>0\), \(H \in (0,1)\), where \(\overset{d}{=}\) means equality in distribution (Coeurjolly and Porcu 2017). Note that the self-similarity definition implies that both the real and imaginary strands of the complex-valued process \(\{ X_t \}\) evolve according to the same exponent H. The property of self-similarity results in the fBM spectrum behaving as \(f_{X}(\omega ) = A^2 |\omega |^{-2\delta }\) for frequencies \(\omega \), a constant A and \(\delta \in (1/2,3/2)\). The spectral slope parameter \(\delta \) is linked to the self-similarity exponent via \(H=\delta -1/2 \in (0,1)\) and also determines the degree of persistence in the differenced version of the process, the fractional Gaussian noise (Lilly et al. 2017). An example of such a process is the improper fractional Gaussian noise with pseudo-covariance proportional to the autocovariance (both real-valued), and both proportional to \(\tau ^{2\delta -3}\) (Sykulski and Percival 2016; Lilly et al. 2017).

Definition 1

(Lilly et al. 2017) A stationary (finite variance) complex-valued process \(\{ X_t \}\) with real-valued autocovariance \(\gamma _X\) is said to have long memory if \(\gamma _X(\tau ) \sim c_{\gamma } |\tau |^{-\beta }\) as \(|\tau | \rightarrow \infty \) with \(\beta \in (0,1)\), where \(\sim \) means asymptotic equality. In other words, the process autocovariance decays so slowly (hyperbolically) that it is not absolutely summable.

Equivalently, the autocovariance Fourier pair, namely the spectral density, has the property that \(f_{X}(\omega ) \sim c_f |\omega |^{-\alpha }\) for frequencies \(\omega \rightarrow 0\) and \(\alpha \in (0,1)\) with \(\alpha =1-\beta =2H-1\). In general, if \(0.5<H<1\) the process exhibits long memory, with higher H values indicating stronger dependence, whilst if \(0<H<0.5\) the process has short memory. An improper fractional Gaussian noise constructed as outlined above (Sykulski and Percival 2016) with \(1<\delta <3/2\) thus has long memory (\(-\beta =2\delta -3=2H-2\in (-1,0)\); hence, \(1/2<H<1\)).
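As a purely illustrative numerical check of these relationships (the value of \(\delta \) is arbitrary), take \(\delta =1.3\):

$$\begin{aligned} H=\delta -\tfrac{1}{2}=0.8, \qquad \beta =3-2\delta =0.4, \qquad \alpha =1-\beta =2H-1=0.6, \end{aligned}$$

so that \(\gamma _X(\tau ) \sim c_{\gamma }|\tau |^{-0.4}\), \(f_X(\omega ) \sim c_f|\omega |^{-0.6}\) near the origin and, since \(1/2<H=0.8<1\), the process has long memory.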

For real-valued time series, estimation of the Hurst exponent H traditionally takes place in the time domain (Mandelbrot and Taqqu 1979; Bhattacharya et al. 1983; Taqqu et al. 1995; Giraitis et al. 1999; Higuchi 1990; Peng et al. 1994) and/or in the frequency domain by means of connections to Fourier or wavelet spectrum decay, for example Lobato and Robinson (1996), McCoy and Walden (1996), Whitcher and Jensen (2000) and Abry et al. (2013). Recent works that deal with long memory estimation in various settings are Vidakovic et al. (2000), Shi et al. (2005), Hsu (2006), Jung et al. (2010) and Coeurjolly et al. (2014). Some authors have recently considered Hurst estimation using complex-valued wavelets in the regularly spaced real-valued image context; see Nelson and Kingsbury (2010), Jeon et al. (2014) and Nafornita et al. (2014). Reviews comparing several techniques for Hurst exponent estimation (for real-valued series) can be found in, for example, Taqqu et al. (1995). Even when only considering real-valued data, Knight et al. (2017) show that methods designed for regularly spaced data often fail to deliver a robust estimate if the time series is subject to missing observations or has been sampled irregularly, and in this context they propose a lifting-based approach for Hurst estimation. Whilst this approach serves well when the process is real-valued, it cannot cope with complex-valued processes. Coeurjolly and Porcu (2017) propose an estimation method in the setting of (circular) complex-valued fractional Brownian motion assuming a regular sampling structure, but their method cannot readily cope with sampling irregularity or measurement dropout/missingness.

2.3 Wavelet lifting paradigm for irregularly sampled real-valued data

The lifting algorithm, first introduced by Sweldens (1995), constructs ‘second-generation’ wavelets adapted to non-standard data settings, such as data observed on intervals or surfaces, as well as irregularly spaced observations. Lifting has since been used successfully for a variety of statistical problems dealing with real-valued signals, including nonparametric regression, spectral estimation and long memory estimation; see, for example, Trappe and Liu (2000), Nunes et al. (2006), Knight et al. (2012), Knight et al. (2017) and Hamilton et al. (2017). For a recent review of lifting, the reader is directed to Jansen and Oonincx (2005).

As our proposed lifting transform and subsequent long memory estimation method both build on the lifting one coefficient at a time (LOCAAT) transform of Jansen et al. (2001, 2009), we briefly introduce it next.

Suppose a real-valued function \(f(\cdot )\) is observed at a set of n, possibly irregular, locations or time points, \(\underline{x}=(x_{1},\, \ldots , \, x_{n})\) and is represented by \(\{(x_{i},f(x_i)=f_{i})\}_{i=1}^{n}\). The lifting algorithm of Jansen et al. (2001) begins with the \(\underline{f} = (f_{1},\, \ldots , \, f_{n})\) values, known as scaling function values, together with an interval associated with each location, \(x_i\), which represents the ‘span’ of that point. By performing LOCAAT, we aim to transform the initial \(\underline{f}\) into a set of, say, L coarser scaling coefficients and \((n-L)\) wavelet or detail coefficients, where L is a desired ‘primary resolution’ scale. This is achieved by repeating three steps: split, predict and update. In the algorithm of Jansen et al. (2001), the split step is performed by choosing a point to be removed (‘lifted’), \(j_n\), say. We denote this point by \((x_{j_{n}},f_{j_{n}})\) and identify its set of neighbouring observations, \({\mathscr {I}}_{n}\). The predict step estimates \(f_{j_{n}}\) by using regression over the neighbouring locations \({\mathscr {I}}_{n}\). The prediction error (the difference between the true and predicted function values), \(d_{j_{n}}\) or detail coefficient, is then computed by

$$\begin{aligned} d_{j_{n}}=f_{j_{n}}-\sum _{i\in {\mathscr {I}}_{n}}a^{n}_{i}f_{i}, \end{aligned}$$
(1)

where \((a^{n}_{i})_{i\in {\mathscr {I}}_{n}}\) are the weights resulting from the regression procedure. For points with only one neighbour, the prediction is simply \(d_{j_{n}}=f_{j_{n}}-f_{i}\). This prediction via regression can of course be carried out using a variety of weights. Notably, Hamilton et al. (2017) proposed to use two (rather than just one) prediction filters and combined the resulting detail information into complex-valued wavelet coefficients. Since more information is extracted from the signal, this approach was shown to improve results in nonparametric regression and spectral/coherence estimation settings, but it is nevertheless limited to real-valued signals. The update step consists of updating the f-values of the neighbours of \(j_n\) used in the predict step using a weighted proportion of the detail coefficient:

$$\begin{aligned} f_{i}^{\mathrm{(updated)}}:=f_{i}+b^{n}_{i}d_{j_{n}},\quad i\in {{\mathscr {I}}_{n}}, \end{aligned}$$
(2)

where the weights \((b^{n}_{i})_{i\in {\mathscr {I}}_{n}}\) are subject to the constraint that the algorithm preserves the signal mean value (Jansen et al. 2001, 2009). The interval lengths associated with the neighbouring points are also updated to account for the effect of the removal of \(j_n\). In effect, this attributes a portion of the interval associated with the removed point to each neighbour.

These split, predict and update steps are then repeated on the updated signal, and after each iteration a new wavelet coefficient is produced. Hence, after say \((n-L)\) removals, the original data are transformed into L scaling and \((n-L)\) wavelet coefficients. This is similar in spirit to the classical discrete wavelet transform (DWT) step which takes a signal vector of length \(2^\ell \) and through filtering operations produces \(2^{\ell -1}\) scaling and \(2^{\ell -1}\) wavelet coefficients.
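To fix ideas, the following minimal R sketch performs a single LOCAAT iteration of this kind, with linear interpolation over the two nearest neighbours playing the role of the prediction regression in (1) and the mean-preserving update of (2). The function name `locaat_step`, the neighbour choice and the removal rule (smallest span first) are illustrative simplifications, not the implementation of Jansen et al. (2001) or of any published package.

```r
# One split / predict / update step of a LOCAAT-style lifting transform for a
# real-valued signal f observed at irregular locations x with spans s.
# Illustrative sketch only (assumes at least three points remain).
locaat_step <- function(x, f, s) {
  jn  <- which.min(s)                              # split: lift point with smallest span
  nbr <- setdiff(order(abs(x - x[jn])), jn)[1:2]   # its two nearest remaining neighbours
  # predict: weights from linear interpolation at x[jn] between the two neighbours
  a <- c(x[nbr[2]] - x[jn], x[jn] - x[nbr[1]]) / (x[nbr[2]] - x[nbr[1]])
  d <- f[jn] - sum(a * f[nbr])                     # detail coefficient, Eq. (1)
  # update: redistribute the lifted point's span, then adjust the neighbours'
  # scaling values so that the signal mean is preserved, Eq. (2)
  s_new      <- s
  s_new[nbr] <- s[nbr] + a * s[jn]
  b          <- s[jn] * s_new[nbr] / sum(s_new[nbr]^2)
  f_new      <- f
  f_new[nbr] <- f[nbr] + b * d
  list(detail = d, scale = log2(s[jn]), lifted = jn,
       x = x[-jn], f = f_new[-jn], s = s_new[-jn])
}
```

Iterating such a step until L points remain yields the \((n-L)\) detail coefficients together with the continuous scale attached to each.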

An attractive feature of lifting schemes, including the LOCAAT algorithm, is that the transform can be inverted easily by reversing the split, predict and update steps.

The current scarcity of Hurst estimation techniques for complex-valued processes, in a uniform, but even more so in a non-uniform sampling setting, and the effectiveness of the lifting transform in representing irregularly sampled information, jointly motivate our proposed approach to tackle this analysis problem: firstly we propose a novel lifting transform able to cope with irregularly sampled complex-valued processes, and secondly we construct a long memory estimator using the corresponding complex-valued lifting coefficients. Notably, the proposed method is suitable for regularly or irregularly sampled processes, both real- and complex-valued; in particular, Hurst estimation is addressed for improper complex-valued processes that have real-valued covariances, as introduced in Sykulski and Percival (2016), as well as for proper complex-valued series, as described in Coeurjolly and Porcu (2017).

3 A new lifting algorithm for complex-valued signals and its properties

In this section, we introduce our proposed lifting algorithm for a complex-valued function and establish its decorrelation properties.

3.1 Proposed \(\mathbb {C}^2\)-LOCAAT algorithm for complex-valued signals

Suppose now a complex-valued function \(f(\cdot )\) is observed at a set of n, possibly irregular, locations or time points, \(\underline{x}=(x_{1},\, \ldots , \, x_{n})\) and is represented by \(\{(x_{i},f(x_i)=f_{i})\}_{i=1}^{n}\). Our proposed algorithm builds a redundant transform that starts with the complex-valued signal \(\underline{f} = (f_{1},\, \ldots , \, f_{n})\in \mathbb {C}^n\) and transforms it into a set of, say, R coarse (complex-valued) scaling coefficients and \(2 \times (n-R)\) (complex-valued) detail coefficients, where R is the desired primary resolution scale. As is usual in lifting, our algorithm reiterates the three steps—split, predict and update—in a modified version, as described below.

At the first stage (n) of the algorithm, denote the smooth coefficients as \(c_{n,k} = f_{k}\), the set of indices of smooth coefficients by \(S_n= \{1,\dots ,n\}\) and the set of indices of detail coefficients by \(D_n= \emptyset \). The sampling structure is accounted for using the distance between neighbouring observations, and at stage n we define the span of \(x_k\) as \(s_{n,k}=\frac{x_{k+1}-x_{k-1}}{2}\).

At the next stage (\(n-1\)), the proposed algorithm proceeds as follows:

Split Choose a point to be removed and denote its index by \(j_{n}\). Typically, points from the most densely sampled regions are removed first, but other predefined removal choices are also possible, as we shall discuss below. We shall often refer to the removal order as a trajectory, following Knight and Nason (2009).

Predict The set of neighbours (\(J_{n}\)) of the point \(j_n\) is identified. Note that the set of neighbours is indexed by n as the choice will depend on the removal stage (via the points remaining at that stage). The predict step estimates \(c_{n,j_n}=f_{j_{n}}\) by using regression over the neighbouring locations \(J_{n}\) and two prediction schemes, a strategy first suggested by Hamilton et al. (2017) for real-valued signals. Each prediction scheme is defined by its respective filter, \(\mathbf {L}\) and \(\mathbf {M}\), which are orthogonal to each other. The filter \(\mathbf {L}\) corresponds to the (possibly linear) regression choice usual in LOCAAT. The filter \(\mathbf {M}\) is linked to \(\mathbf {L}\) through a specific set of properties, discussed in detail in Hamilton et al. (2017) and described in step 2 of Algorithm 1. Both filters are constructed such that the corresponding wavelet coefficients of any constant polynomial are 0 (known in the wavelet literature as possessing (at least) one vanishing moment).

The prediction residuals following the use of each filter are given by

$$\begin{aligned} \lambda _{j_{n}}= & {} l_{j_{n}}^{n}c_{n,j_{n}} - \sum _{i \in J_{n}} l_{i}^{n}c_{n,i}, \end{aligned}$$
(3)
$$\begin{aligned} \mu _{j_{n}}= & {} m_{j_{n}}^{n}c_{n,j_{n}} - \sum _{i \in J_{n}} m_{i}^{n}c_{n,i}, \end{aligned}$$
(4)

where \(\{l^{n}_{i}\}_{i \in J_{n} \cup \{j_{n}\} }\) and \(\{m^{n}_{i}\}_{i \in J_{n} \cup \{j_{n}\}}\) are the prediction weights associated with filters \(\mathbf {L}\) and \(\mathbf {M}\); as is typical in LOCAAT, we take \(l_{j_{n}}^{n}=1\).

Our proposal is to obtain two complex-valued detail (wavelet) coefficients by combining the two prediction residuals as follows:

$$\begin{aligned} d^{(1)}_{j_{n}}= & {} \lambda _{j_{n}} + \mathrm {i}\,\mu _{j_{n}}, \end{aligned}$$
(5)
$$\begin{aligned} d^{(2)}_{j_{n}}= & {} \lambda _{j_{n}} - \mathrm {i}\,\mu _{j_{n}}. \end{aligned}$$
(6)

Note that if the original signal is real-valued, then \(\underline{d}^{(2)}=\overline{\underline{d}}^{(1)}\) and all we need is \(\underline{d}^{(1)}\). However, when the process is complex-valued as is the case here, \(\underline{d}^{(2)} \ne \overline{\underline{d}}^{(1)}\) and we need both \(\underline{d}^{(1)}\) and \(\underline{d}^{(2)}\). This is in contrast to Hamilton et al. (2017), where the information from the two prediction schemes is combined into just one complex-valued wavelet coefficient; although a naive application of that scheme to the real and imaginary process strands separately would yield two sets of complex-valued wavelet coefficients, it would not be obvious how best to combine their information.

Update In the update step, both the (complex-valued) smooth coefficients \(\{c_{n,i}\}\) and (real-valued) spans of the neighbours \(\{s_{n,i}\}\) are updated according to filter \(\mathbf {L}\):

$$\begin{aligned} c_{n-1,i}= & {} c_{n,i} + b_{i}^{n} \lambda _{j_{n}}, \nonumber \\ s_{n-1,i}= & {} s_{n,i} + l_{i}^{n}s_{n,j_{n}} \quad \forall i \in J_{n}, \end{aligned}$$
(7)

where \(b_{i}^{n} = (s_{n,j_{n}}s_{n-1,i})/(\sum _{k \in J_{n}}s_{n-1,k}^{2})\) are the update weights, again computed so that the mean of the signal is preserved (Jansen et al. 2009). Updating the neighbours’ spans accounts for the modification to the sampling grid induced by removing one of the observations, and using just one filter for the update [akin to the approach of Hamilton et al. (2017)] ensures the use of a common scale across both \(\underline{d}^{(1)}\) and \(\underline{d}^{(2)}\).

The observation \(j_{n}\) is then removed from the set of smooth coefficients; hence, after the first algorithm iteration, the index set of smooth coefficients is \(S_{n-1}= \{1,\ldots ,n\}\setminus \{j_n\}\) and the index set of detail coefficients is \(D_{n-1}= \{j_n\}\). The algorithm is then reiterated until the desired primary resolution level R has been achieved. In practice, the choice of the primary level R in LOCAAT lifting schemes is not crucial provided it is sufficiently low (Jansen et al. 2009), with \(R=2\) recommended by Nunes et al. (2006).

The three steps are then repeated on the updated signal, and each repetition yields two new wavelet coefficients. After points \(j_n,j_{n-1},\dots , j_{R+1}\) have been removed, the function can be represented as a set of \(2 \times (n-R)\) detail coefficients, \(\{d^{(1)}_{j_k} \}_{k \in D_{R}}\) and \(\{d^{(2)}_{j_k} \}_{k \in D_{R}}\), and R smooth coefficients, \(\{c_{R,i} \}_{i \in S_{R}}\), thus resulting in a redundant transform. An algorithmic description of \(\mathbb {C}^2\)-LOCAAT appears in Algorithm 1.

[Algorithm 1: the \(\mathbb {C}^2\)-LOCAAT transform (rendered as an image in the original)]
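Because Algorithm 1 appears as an image in the published version, the R sketch below illustrates one \(\mathbb {C}^2\)-LOCAAT iteration as described in this section. The \(\mathbf {L}\)-weights come from two-neighbour linear interpolation as before; the \(\mathbf {M}\)-weights shown are simply one convenient choice that is orthogonal to \(\mathbf {L}\) and annihilates constants, not the specific filter construction of Hamilton et al. (2017), and the function name `c2locaat_step` is illustrative.

```r
# One split / predict / update step of C2-LOCAAT for a complex-valued signal f
# observed at irregular locations x with spans s (assumes >= 3 points remain).
# Toy version: the M-filter below is orthogonal to L and has one vanishing
# moment, but is not the filter of Hamilton et al. (2017).
c2locaat_step <- function(x, f, s) {
  jn  <- which.min(s)                              # split
  nbr <- setdiff(order(abs(x - x[jn])), jn)[1:2]   # two nearest neighbours
  l   <- c(x[nbr[2]] - x[jn], x[jn] - x[nbr[1]]) / (x[nbr[2]] - x[nbr[1]])  # L-weights, l_jn = 1
  m   <- c(1 + l[2], -(1 + l[1]))                  # M-weights on the neighbours
  m_jn <- sum(m)                                   # ensures M annihilates constants
  lambda <- f[jn] - sum(l * f[nbr])                # Eq. (3)
  mu     <- m_jn * f[jn] - sum(m * f[nbr])         # Eq. (4)
  d1 <- lambda + 1i * mu                           # Eq. (5)
  d2 <- lambda - 1i * mu                           # Eq. (6)
  # update using the L-filter only, Eq. (7): spans first, then smooth coefficients
  s_new      <- s
  s_new[nbr] <- s[nbr] + l * s[jn]
  b          <- s[jn] * s_new[nbr] / sum(s_new[nbr]^2)
  f_new      <- f
  f_new[nbr] <- f[nbr] + b * lambda
  list(d1 = d1, d2 = d2, scale = log2(s[jn]), lifted = jn,
       x = x[-jn], f = f_new[-jn], s = s_new[-jn])
}
```

Iterating until R points remain yields the two coefficient sets \(\underline{d}^{(1)}\) and \(\underline{d}^{(2)}\), together with the continuous scales used to form the artificial levels discussed below.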

The proposed algorithm can then be easily inverted by recursively ‘undoing’ the update, predict and split steps described above for the first filter (\(\mathbf {L}\)). More specifically, the inverse transform can be performed by the steps

Undo Update \(c_{n,i} = c_{n-1,i} - b_{i}^{n} \lambda _{j_{n}}, \ \forall i \in J_{n}\)

Undo Predict

$$\begin{aligned} c_{n,j_{n}}= & {} \frac{\lambda _{j_{n}} + \sum _{i \in J_{n}} l_{i}^{n}c_{n,i}}{l_{j_{n}}^{n}} \qquad \text{ or } \end{aligned}$$
(8)
$$\begin{aligned} c_{n,j_{n}}= & {} \frac{\mu _{j_{n}} + \sum _{i \in J_{n}} m_{i}^{n}c_{n,i}}{m_{j_{n}}^{n}}. \end{aligned}$$
(9)

Undoing either predict step, (8) or (9), is sufficient for inversion.

A few remarks on our proposed \(\mathbb {C}^2\)-LOCAAT lifting algorithm are now in order.

Transform matrix representation As with any linear transform, the algorithm that determines one set of detail coefficients, say \(\underline{d}^{(1)}\), can also be represented as a matrix transform, i.e. \(\underline{d}^{(1)}=W^{(c)}\underline{f}\), where \(W^{(c)}\) is an \(n \times n\) matrix with complex-valued entries. In matrix form, our proposed \(\mathbb {C}^2\)-LOCAAT algorithm applied to a complex-valued process \(\underline{f}\) can be written as

$$\begin{aligned} \underline{d}= & {} \left( \begin{array}{c} W^{(c)}\\ \overline{W}^{(c)} \end{array} \right) \underline{f} \end{aligned}$$
(10)
$$\begin{aligned}= & {} \left( \begin{array}{c} \underline{d}^{(1)}\\ \underline{d}^{(2)} \end{array} \right) , \end{aligned}$$
(11)

with \(\underline{d}^{(1)}=W^{(c)}\underline{f}\) and \(\underline{d}^{(2)}=\overline{W}^{(c)}\underline{f}\).

Wavelet lifting scales and artificial levels The (\(\log _2\)) span associated with an observation at the last stage before its removal, say \(\log _2(s_{k,j_{k}})\) for the detail coefficient \(d_{j_k}\) obtained at stage k, is used as a (continuous) measure of scale; this indirectly stems from the fact that the wavelets are not dyadically scaled versions of a single mother wavelet. As the notion of scale of lifting wavelets is continuous, Jansen et al. (2009) group wavelet functions of similar (continuous) scales into ‘artificial’ levels, to mimic the dyadic levels of classical wavelets [see Jansen et al. (2001, 2009) for more details]. We also adopt this strategy to group the complex-valued wavelet coefficients produced using our \(\mathbb {C}^2\)-LOCAAT algorithm. An alternative is to group the coefficients via their interval lengths into ranges \((2^{j-1}\alpha _0,2^{j}\alpha _0]\), where \(j \ge 1\) and \(\alpha _0\) is the minimum scale; a sketch of this grouping appears below. This construction more closely resembles classical wavelet dyadic scales, but both produce similar results. Note that by construction, the \(\mathbb {C}^2\)-LOCAAT transform crucially uses a common scale for both real and imaginary parts, and it is this feature that ensures that information is obtained on the same scale at every step.
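A minimal R sketch of the dyadic-range grouping just described follows; it takes the interval lengths recorded when each coefficient was produced and returns an artificial level for each. The function name is illustrative, and the boundary handling (the minimum scale itself is placed in level 1) is one of several reasonable conventions.

```r
# Assign each lifting coefficient to an 'artificial' level: interval lengths in
# (2^(j-1) * alpha0, 2^j * alpha0] map to level j, where alpha0 is the minimum
# interval length observed across all coefficients.
artificial_levels <- function(interval_lengths) {
  alpha0 <- min(interval_lengths)
  pmax(1, ceiling(log2(interval_lengths / alpha0)))
}

artificial_levels(c(5, 7, 12, 26, 80))   # returns 1 1 2 3 4 (alpha0 = 5)
```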

Choice of removal order The lifting algorithms in Sects. 2.3 and 3.1 are inherently dependent on the order in which points are removed as the algorithm progresses. Jansen et al. (2009) remove points in order from the finest continuous scale to the coarsest, to mimic the DWT, which produces coefficients at the finest scale first, then at progressively coarser scales. However, in our proposed \(\mathbb {C}^2\)-LOCAAT scheme, we can choose to remove points according to a predefined path (or trajectory) \(T=(x_{o_{1}}, \, \ldots ,\,x_{o_{n}})\), where \((o_{1}, o_{2},\, \ldots ,\, o_{n})\) is a permutation of the set \(\{1, \, \ldots , \,n\}\). Knight and Nason (2009) introduced the nondecimated lifting transform, which examines the data using P bootstrapped paths drawn from the space of n! possible trajectories. Aggregating the information obtained via this approach typically reduces estimator variance and improves accuracy, not only in the long memory estimation context (Knight et al. 2017), but also, for example, in nonparametric regression (Knight and Nason 2009). This strategy will be embedded in our proposed methodology in Sect. 4.

3.2 Refinement equations for the scaling and wavelet functions under \(\mathbb {C}^2\)-LOCAAT

Although not explicitly apparent, the wavelet lifting construction induces a biorthogonal (second generation) wavelet basis construction; see, for example Sweldens (1995). In the real-valued lifting one coefficient at a time paradigm, as the algorithm progresses, scaling and wavelet functions decomposing the frequency content of the signal are built recursively according to the predict and update Eqs. (1) and (2) (Jansen et al. 2009). Also, the (dual) scaling functions are defined recursively as linear combinations of (dual) scaling functions at the previous stage.

Let us now investigate the basis decomposition afforded by our proposed \(\mathbb {C}^2\)-LOCAAT transform, as a result of performing the split, predict and update steps. As our construction involves two prediction filters, we decompose f on two biorthogonal bases. Our construction is reminiscent of the dual-tree complex wavelet transform (\(\mathbb {C}\)WT) (Kingsbury 2001; Selesnick et al. 2005) which employs two separate classical wavelet transforms, but fundamentally differs through the construction of linked orthogonal filters.

In our proposed construction, let us denote the two scaling function and wavelet biorthogonal bases by \(\left\{ \underline{\varphi }^{(1)}, \underline{\tilde{\varphi }}^{(1)},\underline{\psi }^{(1)}, \underline{\tilde{\psi }}^{(1)} \right\} \) and \(\left\{ \underline{\varphi }^{(2)}, \underline{\tilde{\varphi }}^{(2)},\underline{\psi }^{(2)}, \underline{\tilde{\psi }}^{(2)} \right\} \), respectively. We now explore their relationships and recursive construction.

At stage r, the complex-valued signal f can be decomposed on each basis as

$$\begin{aligned} f(x)=\sum _{\ell \in D_{r}} d^{(i)}_{\ell } \psi ^{(i)}_{\ell }(x)+ \sum _{k \in S_{r}} c^{(i)}_{r,k}\varphi ^{(i)}_{r,k}(x), \quad i =1,2, \end{aligned}$$
(12)

with \(d^{(i)}_{\ell }=\langle f,\tilde{\psi }^{(i)}_{\ell }\rangle \) and \(c^{(i)}_{r,k}=\langle f,\tilde{\varphi }^{(i)}_{r,k}\rangle \) for both bases \(i=1,2\), where the inner product is as usual defined on \(L^2(\mathbb {C})\). As the update step is the same for both bases, it follows that \(c^{(1)}_{r,k}=c^{(2)}_{r,k}\). Hence, denote \(c_{r,k}=\langle f,\tilde{\varphi }^{(1)}_{r,k}\rangle =\langle f,\tilde{\varphi }^{(2)}_{r,k}\rangle \) for all \(r,k\); thus the dual scaling functions coincide under both bases. In what follows, we shall denote these by \(\tilde{\varphi }_{r,k}\).

Proposition 1

Suppose we are at stage \(r-1\) of the \(\mathbb {C}^2\)-LOCAAT algorithm. The recursive construction of the primal scaling and wavelet functions corresponding to the coefficients \(\underline{d}^{(1)}\), in terms of the functions at the previous stage r, is given by

$$\begin{aligned}&\varphi ^{(1)}_{r-1,j}(x)=\varphi ^{(1)}_{r,j}(x)+ \tilde{a}^r_{j} \varphi ^{(1)}_{r,j_r}(x), \text{ if } j \in J_r, \end{aligned}$$
(13)
$$\begin{aligned}&\varphi ^{(1)}_{r-1,j}(x)=\varphi ^{(1)}_{r,j}(x), \text{ if } j \notin J_r, \end{aligned}$$
(14)
$$\begin{aligned}&\psi _{j_r}^{(1)}(x) = \frac{\overline{a}^r_{j_r}}{{\arrowvert }{a}^r_{j_r} {\arrowvert }^2}\varphi ^{(1)}_{r,j_r}(x) - \sum _{j \in J_r} b^r_j \varphi ^{(1)}_{r-1,j}(x), \end{aligned}$$
(15)

where \(a^r_j=l^r_{j} + \mathrm {i}\,m^r_{j}\) and \(\tilde{a}^r_{j}=\frac{\overline{a}^r_{j_r}a^r_{j}}{|a^r_{j_r}|^2}\).

Similarly, the recursive construction for the primal scaling and wavelet functions corresponding to the coefficients \(\underline{d}^{(2)}\), in terms of the functions at the previous stage r, is given by

$$\begin{aligned}&\varphi ^{(2)}_{r-1,j}(x)=\varphi ^{(2)}_{r,j}(x)+ \overline{\tilde{a}}^r_{j} \varphi ^{(2)}_{r,j_r}(x), \text{ if } j \in J_r, \end{aligned}$$
(16)
$$\begin{aligned}&\varphi ^{(2)}_{r-1,j}(x)=\varphi ^{(2)}_{r,j}(x), \text{ if } j \notin J_r, \end{aligned}$$
(17)
$$\begin{aligned}&\psi _{j_r}^{(2)}(x) = \frac{{a}^r_{j_r}}{{\arrowvert }{a}^r_{j_r} {\arrowvert }^2}\varphi ^{(2)}_{r,j_r}(x) - \sum _{j \in J_r} b^r_j \varphi ^{(2)}_{r-1,j}(x). \end{aligned}$$
(18)

For the corresponding dual bases, the recursive constructions are given by

$$\begin{aligned}&\tilde{\varphi }_{r-1,j}(x) = \tilde{\varphi }_{r,j}(x) + b^r_{j} \tilde{\psi }^{L}_{j_r}(x), \quad \forall j \in J_r, \end{aligned}$$
(19)
$$\begin{aligned}&\tilde{\varphi }_{r-1,j}(x) = \tilde{\varphi }_{r,j}(x), \quad \forall j \notin J_r, \end{aligned}$$
(20)
$$\begin{aligned}&\tilde{\psi }^{(1)}_{j_r}(x) = a^r_{j_r} \tilde{\varphi }_{r,j_r}(x) - \sum _{j \in J_r} a^r_{j} \tilde{\varphi }_{r,j}(x), \end{aligned}$$
(21)
$$\begin{aligned}&\tilde{\psi }^{(2)}_{j_r}(x)= \overline{a}^r_{j_r} \tilde{\varphi }_{r,j_r}(x) - \sum _{j \in J_r} \overline{a}^r_{j} \tilde{\varphi }_{r,j}(x), \end{aligned}$$
(22)

where \(\tilde{\psi }^{L}\) denotes the dual wavelet function corresponding to the \(\mathbf {L}\)-filter only.

The proof can be found in ‘Appendix A, Section A.1’.

Summarizing, the two bases can be represented as \(\{ \underline{\varphi }^{(1)}, \underline{\tilde{\varphi }}, \underline{\psi }^{(1)}, \underline{\tilde{\psi }}^{(1)} \}\) and \(\{ \overline{\underline{\varphi }}^{(1)}, \underline{\tilde{\varphi }}, \overline{\underline{\psi }}^{(1)}, \underline{\tilde{\psi }}^{(2)} \}\) and their recursive construction established above will be used in obtaining the formal properties required to justify our proposed long memory estimation approach.

3.3 Decorrelation properties of the \(\mathbb {C}^2\)-LOCAAT algorithm

Wavelet transforms are known to possess good decorrelation properties; see, for example, in the context of long memory processes, Abry et al. (2000), Jensen (1999) and Craigmile et al. (2001) for classical wavelets, and Knight et al. (2017) for lifting wavelets constructed by means of LOCAAT. The decorrelation property amounts to the (approximate) removal of the long memory in the wavelet domain, and thus estimation of the Hurst exponent can be carried out in this simplified context. Therefore, we next provide mathematical evidence for the decorrelation properties of the \(\mathbb {C}^2\)-LOCAAT algorithm, which will subsequently benefit our proposed long memory estimation procedure (see Sect. 4). Proposition 2 (next) aims to establish decorrelation results similar to earlier ones concerning classical wavelets [see, e.g. Abry et al. (2000, p. 51) for fractional Gaussian noise, Jensen (1999, Theorem 2) for fractionally integrated processes or Theorem 5.1 of Craigmile and Percival (2005) for fractionally differenced processes] and lifting wavelets [see Proposition 1 in Knight et al. (2017)]. In what follows, we establish the decorrelation properties of the proposed complex-valued lifting transform \(\mathbb {C}^2\)-LOCAAT in a more general data setting than previously considered for lifting wavelets, involving complex-valued stationary processes with real-valued autocovariances that may be proper or improper in nature.

Proposition 2

Let \(X = \{X_{t_i}\}_{i=0}^{N-1}\) denote a (zero-mean) stationary long memory complex-valued time series with Lipschitz continuous spectral density \(f_{X}\). Assume the process is observed at irregularly spaced times \(\{t_i\}_{i=0}^{N-1}\), and let \(\{ \{ c_{R,i}\}_{i\in \{0, \ldots , N-1\} \setminus \{j_{N-1},\ldots ,j_{R-1}\} } , \{ \underline{d}_{j_r} \}_{r=R-1}^{N-1} \}\) be the \(\mathbb {C}^2\)-LOCAAT transform of X, where \(\underline{d}_{j_r}=\left( {d}_{j_r}^{(1)} \quad {d}_{j_r}^{(2)}\right) ^T\). Then, both sets of detail coefficients \(\{ d^{(1)}_{j_r} \}_{r}\) and \(\{ d^{(2)}_{j_r} \}_{r}\) have autocorrelation and pseudo-autocorrelation whose magnitudes decay at a faster rate than for the original process.

The proof can be found in ‘Appendix A, Section A.2’ and uses similar arguments to the proof of Proposition 1 in Knight et al. (2017), adapted for the \(\mathbb {C}^2\)-LOCAAT algorithm and complex-valued setting we address here. Just as for LOCAAT (Knight et al. 2017), Proposition 2 above assumes no specific lifting wavelet and we conjecture that if smoother lifting wavelets were employed, it might be possible to obtain even better rates of decay.

4 Long memory parameter estimation using complex wavelet lifting (\(\mathbb {C}\)LoMPE)

As the wavelet domain constructed through \(\mathbb {C}^2\)-LOCAAT displays autocorrelations of small magnitude, we now focus on the wavelet coefficient variance and show that the \(\log _2\)-variance of each of the complex-valued lifting coefficient sets \(d^{(1)}\) and \(d^{(2)}\) is linearly related to its corresponding artificial scale level, a result paralleling those for classical and real-valued lifting wavelets. This result suggests a Hurst parameter estimation method for potentially irregularly sampled long memory processes that take values in the complex (\(\mathbb {C}\)) domain.

Proposition 3 next establishes a result similar to that in Proposition 2 of Knight et al. (2017) by taking into account the specific \(\mathbb {C}^2\)-LOCAAT construction and thus extends the scope of Hurst estimation methodology to irregularly sampled complex-valued processes.

Proposition 3

Let \(X=\{X_{t_i}\}_{i=0}^{N-1}\) denote a (zero-mean) complex-valued long memory stationary time series with finite variance and spectral density \(f_{X}(\omega ) \sim c_f |\omega |^{-\alpha }\) as \(\omega \rightarrow 0\), for some \(\alpha \in (0,1)\). Assume the series is observed at irregularly spaced times \(\{t_i\}_{i=0}^{N-1}\), and transform the observed data X into a collection of lifting coefficients, \(\{ {d}^{(1)}_{j_r} \}_r\) and \(\{ {d}^{(2)}_{j_r} \}_r\), via application of \(\mathbb {C}^2\)-LOCAAT from Sect. 3.1.

Let r denote the stage of \(\mathbb {C}^2\)-LOCAAT at which we obtain the wavelet coefficients \(d^{(\ell )}_{j_r}\) (with \(\ell =1,2\)), and let its corresponding artificial level be \(j^\star \). Then, denoting by \(|\cdot |\) the \(\mathbb {C}\)-modulus, we have for some constant K

$$\begin{aligned} (\sigma ^{(\ell )}_{j^{\star }})^2 = {\mathbb {E}}\left( {\arrowvert }{d}^{(\ell )}_{j_r} {\arrowvert }^2\right) \sim 2^{j^{\star }(\alpha - 1)} \times K. \end{aligned}$$
(23)

The proof can be found in ‘Appendix A, Section A.3’. This result suggests a long memory parameter estimation method for an irregularly sampled, complex-valued time series, described in Algorithm 2, which we shall refer to as \(\mathbb {C}\)LoMPE (Complex-valued Long Memory Parameter Estimation Algorithm). Section 5.1 will show that our proposed \(\mathbb {C}\)LoMPE methodology not only adds a much needed new tool for the estimation of long memory in complex-valued processes, but also improves Hurst exponent estimation for real-valued processes, sampled both regularly and irregularly.

[Algorithm 2: the \(\mathbb {C}\)LoMPE estimation procedure (rendered as an image in the original)]
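Since Algorithm 2 is also rendered as an image in the published version, the R sketch below shows the core estimation step it is built around, as implied by Eq. (23): regress the \(\log _2\) empirical variance (mean squared modulus) of the detail coefficients on artificial level and read off \(H\) from the slope, using \(\alpha =2H-1\). The weighting by level counts, the helper `artificial_levels` from Sect. 3.1 and the function name are illustrative choices; in \(\mathbb {C}\)LoMPE this step is repeated over the \(P\) trajectories and over both coefficient sets \(\underline{d}^{(1)}\) and \(\underline{d}^{(2)}\), and the resulting estimates are averaged.

```r
# Hurst estimate from one set of lifting detail coefficients and the interval
# lengths recorded at their removal.  Eq. (23) gives log2 E|d|^2 ~ (alpha - 1) j*,
# and alpha = 2H - 1, so H = slope/2 + 1 for a stationary long memory series.
hurst_from_details <- function(details, interval_lengths) {
  lev <- artificial_levels(interval_lengths)     # illustrative helper from Sect. 3.1
  v   <- tapply(Mod(details)^2, lev, mean)       # empirical variance per artificial level
  n_j <- as.numeric(tapply(details, lev, length))
  fit <- lm(log2(v) ~ as.numeric(names(v)), weights = n_j)
  unname(coef(fit)[2] / 2 + 1)                   # slope/2 + 1
}
```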

5 Simulated performance of \(\mathbb {C}\)LoMPE and real data analysis

5.1 Simulated performance of \(\mathbb {C}\)LoMPE

In what follows, we investigate the performance of our Hurst parameter estimation technique for complex-valued series. We simulated realizations of two types of long memory processes, namely circularly symmetric complex fractional Brownian motion, as introduced in Coeurjolly and Porcu (2018), and improper complex fractional Gaussian noise (with real-valued covariances) as described in Sykulski and Percival (2016), investigating series of lengths 256, 512 and 1024. These lengths were chosen to reflect realistic data collection scenarios: long enough for the Hurst parameter (a low-frequency asymptotic quantity) to be reasonably estimated, whilst reflecting lengths of datasets encountered in practice.

To investigate the effect of sampling irregularity on the performance of our method, we simulated datasets with different levels of random missingness (5–20%), which are representative of degrees of missingness reported in many application areas, for example in paleoclimatology and environmental series (Broersen 2007; Junger and Ponce de Leon 2015).

We compared results across the range of Hurst parameters \(H=0.6, \ldots , 0.9\). Each set of results is taken over \(K=100\) realizations and \(P=50\) lifting trajectories. Our \(\mathbb {C}\)LoMPE technique was implemented using modifications to the code from the liftLRD package (Knight and Nunes 2016) and CNLTreg package (Nunes and Knight 2017) for the R statistical programming language (R Core Team 2013), both available on CRAN. The measure we use to assess the performance of the methods is the mean squared error (MSE) defined by

$$\begin{aligned} {\text {MSE}} = K^{-1} \sum _{k=1}^{K} (H-\hat{H}^{k})^2. \end{aligned}$$
(25)

In the case of regularly spaced circularly symmetric fractional Brownian motion (i.e. 0% missingness), we compare our \(\mathbb {C}\)LoMPE estimation technique with the recent estimation method in Coeurjolly and Porcu (2017) (denoted ‘CP’).

Table 1 Mean squared error (\(\times 10^3\)) for fractional Brownian motion series featuring different degrees of missing observations for a range of Hurst parameters for the \(\mathbb {C}\)LoMPE estimation procedure. Boxed numbers indicate best result for the regularly spaced setting. Numbers in brackets are the estimation errors’ standard deviation

Table 1 reports the mean squared error for our \(\mathbb {C}\)LoMPE estimator on the complex-valued fractional Brownian motion series for different degrees of missingness (0% up to 20%). In the case of regularly spaced series, our estimation method works well when compared to the ‘CP’ method. This is pleasing since the ‘CP’ method is designed for regularly spaced series, whereas \(\mathbb {C}\)LoMPE is specifically designed for irregularly spaced series. The table also shows that the \(\mathbb {C}\)LoMPE technique is robust to the presence of missingness, attaining good performance even for high degrees of missingness (20%).

Table 2 Mean squared error (\(\times 10^3\)) for fractional Gaussian noise featuring different degrees of missing observations for a range of Hurst parameters for the \(\mathbb {C}\)LoMPE estimation procedure. Numbers in brackets are the estimation errors’ standard deviation

For the complex-valued fractional Gaussian noise, Table 2 demonstrates that our \(\mathbb {C}\)LoMPE estimation technique performs well for regular and irregular settings, with only a slight degradation in performance for increasing missingness.

We also studied the empirical bias of our estimator for both types of long memory process. For reasons of brevity, we do not report these results here, but these can be found in Appendix B in the supplementary material. As for the mean squared error results above, there is a small drop in performance with increasing missingness, and our estimator performs only slightly worse in terms of bias when compared to the ‘CP’ method.

Real-valued processes To assess whether our complex-valued approach achieves performance gains for real-valued processes, we repeated the simulation study from Knight et al. (2017) for a number of long memory processes. In particular, we studied the performance of our estimator for real-valued fractional Brownian motion, fractional Gaussian noise and fractionally integrated series, for a range of Hurst parameters and levels of missingness. The processes were simulated via the fArma add-on package (Wuertz et al. 2013). We compare our method with the real-valued lifting technique of Knight et al. (2017), shown to perform well in a number of settings. Again, for brevity, we do not report these results here, but they can be found in Appendix B in the supplementary material. The results show that our method is competitive with the real-valued estimation method in Knight et al. (2017), achieving better results (in terms of MSE and bias) in the majority of cases for fractional Gaussian noise and fractionally integrated series. For fractional Brownian motion, we observe that our method achieves gains in mean squared error, albeit at the cost of a decrease in bias performance. These results agree with other studies using complex-valued wavelet methodology, which has been shown to outperform its real-valued counterpart in a variety of applications, from denoising (Barber and Nason 2004) to Hurst estimation in the (real-valued) image context (Nelson and Kingsbury 2010; Jeon et al. 2014; Nafornita et al. 2014). This is due to the use of two filters rather than just one, thus eliciting more information from the signal under analysis.

5.2 Analysis of complex-valued wind series with \(\mathbb {C}\)LoMPE

In this section, we provide a more detailed long memory analysis of the complex-valued wind series described in Sect. 1.1. More specifically, we applied our \(\mathbb {C}\)LoMPE Hurst estimation method to the (detrended) irregularly sampled wind series to assess their persistence properties. The estimated Hurst parameter was \(\hat{H}_{\mathbb {C}}=0.86\) for the Wind A series and \(\hat{H}_{\mathbb {C}}=0.8\) for the Wind B series, based on \(P=50\) lifting trajectories. Both of these estimates indicate moderate long memory.

To highlight potential differences with other approaches, we also applied the LoMPE technique of Knight et al. (2017) to each of the real and imaginary components of the two series. In addition, we also estimated the Hurst exponent using the Knight et al. (2017) method for the two magnitude series, since such series (i.e. data without directional information) are most commonly analysed in the literature. The Hurst exponent estimates are denoted by \(\hat{H}_{{\mathbb {R}}}\) and \(\hat{H}_{{\mathscr {I}}}\) for the real and imaginary component series, and \(\hat{H}_{Mod}\) for the magnitude series. The estimates are summarized in Table 3.

Table 3 Hurst parameter estimates for the Wind A and Wind B data from complex-valued series using \(\mathbb {C}\)LoMPE and from real-valued component and magnitude series using LoMPE

For the Wind A dataset, our \(\mathbb {C}\)LoMPE technique estimates the persistence to lie between those of the real and imaginary components, and higher than that of the magnitude series. In contrast, for the Wind B dataset, the estimate from our complex-valued approach coincides with the result for the series derived from the \(\mathbb {C}\)-modulus. This analysis highlights that ignoring the dependence structure between the real and imaginary components of the series may result in misestimation. Hence, we recommend an approach that uses the complex-valued structure of the data, thus accounting for its intrinsic rotary structure and dependence, which are not visible when only using the traditional magnitude series or the individual real and imaginary strands.

Fig. 4

a Autocorrelation for the magnitude series of the Wind A data from Fig. 1 (treated as regularly spaced); b autocorrelation for the magnitude series of the Wind B data from Fig. 1 (treated as regularly spaced). The dependence structure is markedly different to that of the real and imaginary components shown in Fig. 2

It could also be argued that these differences in estimates are unsurprising, since the dependence structure for the magnitude series, shown in Fig. 4, is visibly different to that of the real and imaginary component series shown in Fig. 2. We argue that our estimation of the long memory parameter for these series is more reliable than that offered by existing approaches, as our proposed algorithm naturally encompasses both the complex-valued and improper features of wind series. A complex-valued analysis using our approach could hence provide more accurate long memory information, reducing miscalibration of predictive climate models. We further suggest that this precision would provide more certainty when assessing renewable energy resource potential, as discussed in, for example, Bakker and van den Hurk (2012).

6 Discussion

Hurst exponent estimation is a recurrent topic in many scientific applications, with significant implications for modelling and data analysis. One important aspect of real-world datasets is that their collection and monitoring are often not straightforward, leading to missingness, or to the use of proxies with naturally irregular sampling structures. In parallel, in many applications of interest there is a natural complex-valued representation of data. To this end, this article has proposed the first Hurst estimation technique for complex-valued processes subject to missingness or sampling irregularity, and in doing so it has also constructed a novel lifting algorithm able to work on irregularly sampled complex-valued data. Prior to the work in this article, Hurst estimation methods were not able to exploit the wealth of signal information in such data whilst also coping with irregular sampling regimes. Our \(\mathbb {C}\)LoMPE wavelet lifting methodology was shown to give accurate Hurst estimation for a variety of complex-valued fractional processes and is suitable for both proper and improper complex-valued processes. Simulations demonstrate that the technique is robust to significant degrees of missingness, as well as performing well in the non-missing (regular) setting.

We have demonstrated the use of our \(\mathbb {C}\)LoMPE technique in an application arising in environmental science. Through our analysis of wind speed data, we have shown that embedding directional wind information in the analysis can lead to significantly different Hurst exponent estimates when compared to only considering real-valued information, such as magnitude series. This highlights that not exploiting a complex-valued data representation in this setting can potentially result in misleading conclusions being drawn about wind persistence. This in turn has an impact on parameters in climate models and can lead to inefficiencies in resource management decisions.

Whilst the development of our proposed complex-valued Hurst estimator was motivated by an application in climatology, we believe that the work in this article has sufficient generality to have appeal in other settings. We thus conclude this article by outlining some example applications in which our methodology is potentially beneficial.

Data from neuroimaging studies Functional magnetic resonance imaging (fMRI) data continue to enjoy popularity in the neuroscience community due to their non-invasive acquisition and data richness; see, for example, Aston and Kirch (2012) for an accessible introduction to the area from the statistical perspective. In particular, fMRI studies often measure information on blood flow in the brain; these voxel-level data are used to investigate neuronal activity of participants during task-based experiments, and many authors have asserted that such time courses possess fractional noise structure; see, for example, Bullmore et al. (2003). Evaluation of the Hurst exponent in this context has been shown to be important in characterizing brain activity under a range of conditions, indicating different levels of cognitive effort (Park et al. 2010; Ciuciu et al. 2012; Churchill et al. 2016). Despite data collection being performed in a controlled set-up, recent work has highlighted the need for tailored statistical methodology to cope with both unbalanced designs and missingness, which can feature in fMRI data for a number of reasons (Lindquist 2008; Ferdowsi and Abolghasemi 2018). In actuality, fMRI scanners record both phase and magnitude information, though most studies only use the magnitude image for analysis. As a result, there has been a recent body of work dedicated to complex-valued analysis of fMRI data, most notably by Rowe and collaborators [see, e.g. Rowe (2005), Rowe (2009) and Adrian et al. (2018)]. Such an approach has shown improvements over real-valued methods for a range of analysis tasks; see also the work by Adali and collaborators (Calhoun et al. 2002; Li et al. 2011; Rodriguez et al. 2012). Thus, our methodology has the potential to take advantage of the full complex-valued image information whilst also coping with the inherent non-uniform sampling.

Ocean surface measurement devices There is a long-standing history of studying ocean circulation using GPS-tracked ocean buoy drifters; see, e.g. Osborne et al. (1989). Since these trajectories are measured in the longitude-latitude plane, they are often converted to complex-valued vector series; see, for example, Sykulski et al. (2017). It has long been observed that, due to the buffeting motion of ocean currents, positional drifter trajectories often exhibit fBM-like behaviour, whilst their velocity over time resembles fGn characteristics (Sanderson and Booth 1991; Summers 2002; Qu and Addison 2010; Lilly et al. 2017). In this context, accurate Hurst exponent estimation is useful in indicating the intensity of ocean turbulence, giving evidence towards particular theorized dynamical regimes (Osborne et al. 1989). These in turn can provide insight into the initial conditions and origin of ocean circulation. Moreover, the trajectories often display rotary characteristics (Elipot and Lumpkin 2008; Elipot et al. 2016). Due to the interrupted nature of satellite coverage and the possibility of measurements from multiple satellite orbits, the temporal sampling of the trajectories is typically highly non-uniform. As a result, the data are often interpolated prior to analysis (Elipot et al. 2016). One avenue of exploration in this setting would be to contrast Hurst estimation using our proposed methodology with and without data interpolation to investigate its effect, since previous work substantiates that such processing can produce bias (in the context of Hurst exponent estimation) for real-valued series (Knight et al. 2017). It would also be interesting to investigate modifications of our technique for parameter estimation of the Matérn processes discussed in Lilly et al. (2017).