
Mathematical and Computational Foundations of Recurrence Quantifications

Chapter
Part of the Understanding Complex Systems book series (UCS)

Abstract

Real-world systems possess deterministic trajectories, phase singularities and noise. Dynamic trajectories have been studied in temporal and frequency domains, but these are linear approaches. Basic to the field of nonlinear dynamics is the representation of trajectories in phase space. A variety of nonlinear tools such as the Lyapunov exponent, Kolmogorov–Sinai entropy, correlation dimension, etc. have successfully characterized trajectories in phase space, provided the systems studied were stationary in time. Ubiquitous in nature, however, are systems that are nonlinear and nonstationary and exist in noisy environments, all of which breaks the assumptions underlying otherwise powerful linear tools. What has been unfolding over the last quarter of a century, however, is the timely discovery and practical demonstration that the recurrences of system trajectories in phase space can provide important clues about the system designs from which they derive. In this chapter we will introduce the basics of recurrence plots (RP) and their quantification analysis (RQA). We will begin by summarizing the concept of phase space reconstruction. Then we will provide the mathematical underpinnings of recurrence plots, followed by the details of recurrence quantifications. Finally, we will discuss computational approaches that have been implemented to make recurrence strategies feasible and useful. As computers become faster and computer languages advance, younger generations of researchers will be stimulated and encouraged to capture nonlinear recurrence patterns and quantifications in ever better formats. This particular branch of nonlinear dynamics remains wide open for the definition of new recurrence variables and new applications untouched to date.

Keywords

Diagonal Line · Recurrence Plot · Unstable Periodic Orbit · Recurrence Quantification Analysis · Recurrence Point

1.1 Phase Space Trajectories

Systems in nature or engineering typically exist either in quasi-stationary states or in non-stationary states as they move or transition between states. These complicated processes derive mostly from complex systems (nonlinear, many coupled variables, polluted by noise, etc.) and defy meaningful analysis. Still, approximate investigations of these processes remain an important focus among numerous scientific disciplines (e.g. meteorology). To the extent that systems are deterministic (rule-driven), there remains the hope and challenge of describing dynamical system changes to such a degree that it becomes possible to predict future states of the system (e.g. make forecasts). Practically, the usual aim is to find mathematical models which can be adapted to the real processes (mimicry) and then used for solving given problems. The measuring of a state (which leads to observations of the state but not to the state itself) and the subsequent data analysis are the first steps toward the understanding of a process. Well-known and proven methods for data analysis are those based on linear concepts, such as estimations of moments, correlations, power spectra, or principal component analysis. In the last two decades this zoo of analytical methods has been enriched with methods from the theory of nonlinear dynamics. Some of these new methods are rooted in the topological analysis of the phase space of the underlying dynamics or of an appropriate reconstruction of it [1, 2].

The state of a system can be described by its d state variables
$$\displaystyle{ x_{1}(t),\,x_{2}(t),\ldots,\,x_{d}(t), }$$
(1.1)
for example the two state variables temperature and pressure in a thermodynamic system. The d state variables at time t form a vector \(\mathbf{x}(t)\) in a d-dimensional space which is called phase space . This vector moves in time and in the direction that is specified by its velocity vector
$$\displaystyle{ \dot{\mathbf{x}}(t) = \partial _{t}\mathbf{x}(t) =\mathbf{ F}(x). }$$
(1.2)
The temporal succession of the phase space vectors forms a trajectory (phase space trajectory, orbit). The velocity field \(\mathbf{F}(x)\) is tangent to this trajectory. For autonomous systems the trajectory cannot cross itself. The time evolution of the trajectory reflects the dynamics of the system, i.e., the attractor of the system. If \(\mathbf{F}(x)\) is known, the state at a given time can be determined by integrating the equation system [Eq. (1.2)]. However, a graphical visualization of the trajectory enables the determination of a state without integrating the equations. The shape of the trajectory gives hints about the system; periodic or chaotic systems have characteristic phase space portraits.

The observation of a real process usually does not yield all possible state variables. Either not all state variables are known or not all of them can be measured. Most often only one observation u(t) is available. Since measurements result in discrete time series, the observations will be written in the following as u i , where t = iΔt and Δt is the sampling interval of the measurement. [Henceforth, variables with a subscripted index are time discrete (e.g. \(\mathbf{x}_{i}\), R i, j ), whereas a braced t denotes continuous variables (e.g. \(\mathbf{x}(t)\), \(\mathbf{R}(t_{1},t_{2})\)).]

Couplings between the system’s components imply that each single component contains essential information about the dynamics of the whole system. Therefore, an equivalent phase space trajectory, which preserves the topological structures of the original phase space trajectory, can be reconstructed from a single observed time series [2, 3]. A method frequently used for reconstructing such a trajectory \(\hat{\mathbf{x}}(t)\) is the time delay method: \(\hat{\mathbf{x}}_{i} = (u_{i},\,u_{i+\tau },\,\ldots,u_{i+(m-1)\tau })^{T}\), where m is the embedding dimension and τ is the time delay (index based; the real time delay is τΔt). The preservation of the topological structures of the original trajectory is guaranteed if m ≥ 2d + 1, where d is the dimension of the attractor [2].
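As an illustration, the delay reconstruction \(\hat{\mathbf{x}}_{i} = (u_{i},\,u_{i+\tau },\,\ldots,u_{i+(m-1)\tau })^{T}\) can be sketched in a few lines of Python. This is a minimal sketch; NumPy and the function name `delay_embed` are our own choices, not part of the text.

```python
import numpy as np

def delay_embed(u, m, tau):
    """Time-delay embedding of a scalar series u:
    x_i = (u_i, u_{i+tau}, ..., u_{i+(m-1)tau})."""
    u = np.asarray(u)
    n = len(u) - (m - 1) * tau          # number of reconstructed vectors
    if n <= 0:
        raise ValueError("series too short for chosen m and tau")
    # column k holds the series shifted by k*tau
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

# example: embed a sine wave in m = 3 dimensions with delay tau = 5
u = np.sin(2 * np.pi * np.arange(200) / 50)
X = delay_embed(u, m=3, tau=5)
print(X.shape)  # (190, 3)
```

Each row of `X` is one reconstructed phase space vector \(\hat{\mathbf{x}}_{i}\); the trajectory loses (m − 1)τ points at its end.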

Both embedding parameters, the dimension m and the delay τ, have to be chosen appropriately. Different approaches are applicable for the determination of the smallest sufficient embedding dimension [1, 4]. Here we focus on an approach which uses the number of false nearest neighbours.

There are various methods that use false nearest neighbours in order to determine the embedding dimension. The basic idea is that by decreasing the dimension an increasing number of phase space points will be projected into the neighbourhood of any phase space point, even if they are not real neighbours. Such points are called false nearest neighbours (FNNs). The simplest method uses the number of these FNNs as a function of the embedding dimension in order to find the minimal embedding dimension [1]: the dimension is chosen as the one at which the FNNs vanish. Other methods use the ratios of the distances between the same neighbouring points for different dimensions [4, 5].
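The FNN count described above can be sketched as follows. This is a simplified variant of the criterion; the tolerance `rtol`, the delay τ = 6, and the noisy sine test signal are our own illustrative assumptions.

```python
import numpy as np

def delay_embed(u, m, tau):
    n = len(u) - (m - 1) * tau
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

def fnn_fraction(u, m, tau, rtol=15.0):
    """Fraction of false nearest neighbours when increasing the
    embedding dimension from m to m + 1 (simplified FNN criterion)."""
    Xm1 = delay_embed(u, m + 1, tau)
    n = len(Xm1)                        # vectors available in both embeddings
    Xm = delay_embed(u, m, tau)[:n]
    false = 0
    for i in range(n):
        d = np.linalg.norm(Xm - Xm[i], axis=1)
        d[i] = np.inf                   # exclude the point itself
        j = int(np.argmin(d))           # nearest neighbour in dimension m
        d1 = np.linalg.norm(Xm1[i] - Xm1[j])
        if d1 / max(d[j], 1e-12) > rtol:  # distance blows up -> false neighbour
            false += 1
    return false / n

rng = np.random.default_rng(0)
u = np.sin(2 * np.pi * np.arange(300) / 25) + 0.01 * rng.standard_normal(300)
f1 = fnn_fraction(u, m=1, tau=6)
f2 = fnn_fraction(u, m=2, tau=6)
```

For the noisy sine, many one-dimensional neighbours lie on opposite branches of the oscillation and separate strongly in two dimensions, so the FNN fraction drops once the embedding dimension is sufficient.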

Random errors and low measurement precision can lead to a linear dependence between the subsequent vectors \(\mathbf{x}_{i}\). Hence, the delay has to be chosen in such a way that such dependences vanish. One possible means of determining the delay is by using the autocovariance function \(C(\tau ) =\langle u_{i}\,u_{i-\tau }\rangle\) (using the assumption 〈u i 〉 = 0).

A delay may be appropriate when the autocovariance approaches zero. This minimizes the linear correlation between the components but does not have to mean they are independent. However, the converse is true: if two variables are independent they will be uncorrelated. Therefore, another well established possibility for determining the delay is the mutual information [6]
$$\displaystyle{ I(\tau ) =\sum _{\varphi,\,\psi }p_{\varphi,\,\psi }(\tau )\log \frac{p_{\varphi,\,\psi }(\tau )} {p_{\varphi }\,p_{\psi }}. }$$
(1.3)
Here \(p_{\varphi,\,\psi }(\tau )\) is the joint probability that \(u_{i} =\varphi\) and \(u_{i+\tau } =\psi\). \(p_{\varphi }\) and p ψ are the probabilities that u i has the value φ and ψ, respectively. In order to simplify the notation, we use \(p_{u_{i}} = p_{\varphi }\), \(p_{u_{i+\tau }} = p_{\psi }\) and \(p_{u_{i},\,u_{i+\tau }} = p_{\varphi,\,\psi }(\tau )\). The mutual information is not a function of the variables \(\varphi\) and ψ but of the joint probability \(p_{\varphi,\,\psi }(\tau )\). It is the average information about a value after a delay τ that can be gained from knowledge of the current value. The best choice for the delay is where I(τ) has its first local minimum. The advantage of the mutual information over the autocovariance function is that it captures nonlinear interrelations and, hence, determines a delay which fulfils the criterion of independence. Experience has shown that both the auto-correlation and the mutual information sometimes overestimate the delay.
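Equation (1.3) can be estimated from data with a two-dimensional histogram. This is a sketch: the bin count and the search for the first local minimum are our own illustrative choices, not prescribed by the text.

```python
import numpy as np

def mutual_information(u, tau, bins=16):
    """Histogram estimate of the mutual information I(tau), Eq. (1.3)."""
    a, b = u[:-tau], u[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()                 # joint probability p_{phi,psi}(tau)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# choose the delay at the first local minimum of I(tau)
u = np.sin(2 * np.pi * np.arange(1000) / 100)
I = [mutual_information(u, t) for t in range(1, 60)]
tau = next((t for t in range(1, len(I)) if I[t] > I[t - 1]), 1)
```

The histogram (plug-in) estimate is always non-negative, since it is a Kullback–Leibler divergence between the joint distribution and the product of its marginals.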

An alternative approach for finding optimal embedding parameters is to use recurrence plots [7]. First we create a recurrence plot (RP) with a high embedding dimension (e.g., m = 20–25). Then we progressively decrease the dimension until a significant change in the RP results. In particular, we are interested in an RP that is free of single points and in which linear structures dominate [8]. Since this change is due to a topological change of the phase space trajectory caused by the occurrence of FNNs, the current dimension plus a few dimensions should be sufficient for the embedding. However, this criterion has to be considered with the utmost caution, because at high embedding dimensions (already m = 10 can be enough) we can get spurious recurrences which can create an RP with a large number of diagonal lines even for stochastic data [9]. Non-optimal embedding parameters can cause many interruptions of diagonal lines in the RP, small blocks, or even diagonal lines perpendicular to the LOI (this corresponds to parallel trajectory segments running in opposite time directions).

A phase space reconstruction can be used in order to estimate characteristic properties of the dynamical system. For reviews on corresponding methods see for example [10, 11] or [12]. Besides, the phase space reconstruction is the starting point for the construction of a recurrence plot.

1.2 Recurrence Plots

1.2.1 Definition of Recurrence Plots

Natural processes can have a distinct recurrent behaviour, e.g., periodicities (such as seasonal or Milanković cycles), but also irregular cyclicities (such as the El Niño/Southern Oscillation). Moreover, the recurrence of states, in the sense that states become arbitrarily close after some time, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems [12, 13, 14].

Recurrences in the dynamics of a dynamical system can be visualised by the recurrence plot (RP), introduced by Eckmann et al. in 1987 [15]. The RP represents the times at which states \(\mathbf{x}_{i}\) in a phase space recur.

The original intention was to provide a tool which can easily provide insights into even high-dimensional dynamical systems, whose phase space trajectories are otherwise very difficult to visualise [15, 16]. An RP enables us to investigate the m-dimensional phase space trajectory through a two-dimensional representation of its recurrences (Fig. 1.1). Such a recurrence of a state at time i at a different time j is pictured within a two-dimensional square matrix R with dots, where both axes are time axes [9]:
$$\displaystyle{ R_{i,j}^{m,\,\varepsilon _{i} } =\varTheta \left (\varepsilon _{i} -\left \|\mathbf{x}_{i} -\mathbf{ x}_{j}\right \|\right ),\quad \mathbf{x}_{i} \in \mathbb{R}^{m},\quad i,j = 1\ldots N, }$$
(1.4)
where N is the number of considered states \(\mathbf{x}_{i}\); \(\varepsilon _{i}\) is a threshold distance, ∥ ⋅ ∥ a norm, and Θ(⋅ ) the Heaviside function.
Fig. 1.1

(a) Segment of the phase space trajectory of the Lorenz system, Eq. (1.34) (parameters r = 28, σ = 10, \(b = \frac{8} {3}\)) [17] by using its three components and (b) its corresponding recurrence plot. A point of the trajectory at j which falls into the neighbourhood [gray circle in (a)] of a given point at i is considered as a recurrence point [black point on the trajectory in (a)]. This is marked with a black point in the RP at the location (i, j). A point outside the neighbourhood [small circle in (a)] causes a white point in the RP. The radius of the neighbourhood for the RP is \(\varepsilon = 5\)

Since \(R_{i,i} = 1\ (i = 1\ldots N)\) by definition, the RP has a black main diagonal line, the line of identity (LOI), with an angle of π∕4. It has to be noted that a single recurrence point at (i, j) does not contain any information about the current states at the times i and j. However, from the totality of all recurrence points it is possible to reconstruct the phase space trajectory [9, 18, 19].
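A direct implementation of Eq. (1.4) with a fixed threshold and the Euclidean norm might look as follows. This is a minimal sketch; the twice-traversed circle used as a test trajectory is our own example of a periodic orbit.

```python
import numpy as np

def recurrence_matrix(X, eps):
    """Recurrence matrix of Eq. (1.4): R_ij = Theta(eps - ||x_i - x_j||),
    using a fixed threshold eps and the Euclidean (L2) norm."""
    # all pairwise distances ||x_i - x_j||
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return (D <= eps).astype(int)

# example: a twice-traversed circle, i.e., a periodic orbit
t = np.linspace(0, 4 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)])
R = recurrence_matrix(X, eps=0.1)
```

For this periodic orbit the matrix shows the LOI (R_ii = 1) and a long diagonal line offset by one period, since every state recurs exactly one revolution later.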

In practice it is not useful and largely impossible to find complete recurrences in the sense \(\mathbf{x}_{i} \equiv \mathbf{ x}_{j}\) (e.g. the state of a chaotic system would not recur exactly to the initial state but approaches it arbitrarily closely). Therefore, a recurrence is defined as a state \(\mathbf{x}_{j}\) that comes sufficiently close to \(\mathbf{x}_{i}\). This means that those states \(\mathbf{x}_{j}\) that fall into an m-dimensional neighbourhood of size \(\varepsilon _{i}\) centred at \(\mathbf{x}_{i}\) are recurrent. These \(\mathbf{x}_{j}\) are called recurrence points. In Eq. (1.4), this is simply expressed by the Heaviside function with the threshold \(\varepsilon _{i}\).

In the original definition of the RPs, the neighbourhood is a ball (i.e., the L2-norm is used) and its radius is chosen in such a way that it contains a fixed number of closest states \(\mathbf{x}_{j}\) [15]. With such a neighbourhood, the radius \(\varepsilon _{i}\) changes for each \(\mathbf{x}_{i}\) (\(i = 1\ldots N\)) and \(R_{i,j}\not =R_{j,i}\), because the neighbourhood of \(\mathbf{x}_{i}\) does not have to be the same as that of \(\mathbf{x}_{j}\). This property leads to an asymmetric RP, but all columns of the RP have the same recurrence density. We denote this neighbourhood as fixed amount of nearest neighbours (FAN). However, the most commonly used neighbourhood is that defined by a metric and a fixed radius \(\varepsilon _{i} =\varepsilon,\forall i\). A metric and a fixed radius ensure that \(R_{i,j} = R_{j,i}\), i.e., a symmetric RP. The type of neighbourhood that should be used depends on the application [9, 20]. For example, the FAN neighbourhood is useful for nonstationary data, for bivariate recurrence investigations using cross recurrence plots, or for the comparison of RPs of different systems, because it is not necessary to normalise the time series beforehand and it allows an investigation on the basis of comparable recurrence structures [9].
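The FAN neighbourhood can be sketched by selecting, for every state, its k closest states. This is an illustrative implementation; the function name and the random test trajectory are our own assumptions.

```python
import numpy as np

def recurrence_matrix_fan(X, k):
    """RP with a fixed amount of nearest neighbours (FAN): row i marks
    the k closest states to x_i, so every row has the same recurrence
    density. The resulting matrix is in general asymmetric."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)            # ignore the self-distance
    R = np.zeros(D.shape, dtype=int)
    nn = np.argsort(D, axis=1)[:, :k]      # indices of the k nearest states
    for i, js in enumerate(nn):
        R[i, js] = 1
    return R

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))           # arbitrary 2-d test trajectory
R = recurrence_matrix_fan(X, k=5)
```

Note that `R[i, j] = 1` does not imply `R[j, i] = 1`: x_j may be among the k nearest neighbours of x_i without the converse holding.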

The most commonly used norms are the L2-norm (Euclidean norm) and the L∞-norm (maximum or supremum norm). The L∞-norm is often used because it is independent of the phase space dimension, easier to calculate, and allows some analytical expressions [21, 22, 23]. However, this choice is more prone to noise or outliers, because the distance then depends on a single coordinate: a single extreme value produces an outlying distance and thus a misleading measure.

The recurrence threshold \(\varepsilon\) is a crucial parameter in RP analysis. Although several works have contributed to this discussion [9, 22, 24, 25], a general and systematic study of recurrence threshold selection remains an open task for future work. Nevertheless, threshold selection is a trade-off: the threshold should be as small as possible while still yielding a sufficient number of recurrences and recurrence structures. In general, the optimal choice of \(\varepsilon\) depends on the application and the experimental conditions (e.g., noise), but in all cases it is desirable to choose the smallest threshold possible.

A “rule of thumb” for the choice of the threshold \(\varepsilon\) is to select it as a few per cent (not larger than 10 %) of the maximum phase space diameter [26, 27, 28]. For classification purposes and signal detection, a better choice is to select \(\varepsilon\) between 20 and 40 % of the signal’s standard deviation σ [25].
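The rule of thumb can be turned into a small helper. This is a sketch; the 10 % fraction is the upper bound quoted above, and the unit-circle example is our own.

```python
import numpy as np

def threshold_from_diameter(X, fraction=0.10):
    """Choose eps as a fixed fraction (here 10%) of the maximum
    phase space diameter, the rule-of-thumb upper bound."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return fraction * D.max()

# unit circle: diameter 2, so the rule of thumb gives eps = 0.2
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)])
eps = threshold_from_diameter(X)
```

For the standard-deviation criterion one would instead compute, e.g., `0.2 * u.std()` directly from the signal.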

However, the influence of noise can necessitate a larger threshold, because noise can distort any existing structure in the RP; a higher threshold may preserve these structures [9]. Suggestions from the literature indicate that this threshold should be a few per cent of the maximum phase space diameter [26] and should not exceed 10 % of the mean or the maximum phase space diameter [27, 28]. The threshold can also be chosen from an analysis of the recurrence point density of the RP with respect to a changing threshold [7]: the threshold is then found by looking for a scaling region in the recurrence point density. However, this may not work for nonstationary data. For this case Zbilut et al. [7] have suggested choosing \(\varepsilon\) so that the recurrence point density is approximately 1 %. For noisy periodic processes, [24] have suggested using the diagonal structures within the RP in order to determine an optimal threshold; their criterion minimizes the fragmentation and thickness of the diagonal lines with respect to the threshold. Recent studies about RPs in our group have revealed a more exact criterion for choosing this threshold. This criterion takes into account that a measurement of a process is a composition of the real signal and some observational noise with a given standard deviation. In order to get similar results by using RPs, a threshold has to be chosen which is five times larger than the standard deviation of the observational noise [22]. This criterion holds for a wide class of processes.

For specific purposes (e.g., quantification of recurrences), it can be useful to exclude the LOI from the RP, as the trivial recurrence of a state with itself might not be of interest. Moreover, due to the finite threshold value \(\varepsilon\), further long diagonal lines can occur directly below and above the LOI for smooth or highly sampled data. These diagonal lines in a small corridor around the LOI correspond to the tangential motion of the phase space trajectory, not to different orbits. Thus, for quantification purposes it is better to exclude this entire predefined corridor and not only the LOI. This step corresponds to suggestions to exclude the tangential motion, as is done for the computation of the correlation dimension (known as the Theiler correction or Theiler window) [29] or for alternative estimators of Lyapunov exponents [30], in which only those phase space points are considered that fulfil the constraint \(\vert j - i\vert \geq w\). Theiler has suggested using the autocorrelation time as an appropriate value for w [29], and Gao et al. state that \(w = (m - 1)\tau\) would be a sufficient approach [30]. However, in a visual representation of an RP it is better to keep the LOI.
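Excluding the Theiler corridor from a given recurrence matrix amounts to zeroing a band around the LOI. A minimal sketch, assuming the matrix R has already been computed:

```python
import numpy as np

def apply_theiler_window(R, w):
    """Exclude the corridor |i - j| < w around the LOI (Theiler window),
    removing recurrences caused by tangential motion."""
    i, j = np.indices(R.shape)
    Rw = R.copy()
    Rw[np.abs(i - j) < w] = 0
    return Rw

# example: with w = 2, the LOI and its immediate side diagonals vanish
Rw = apply_theiler_window(np.ones((10, 10), dtype=int), w=2)
```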

For the definition of a recurrence, other metrics or criteria can be used. Some extensions of recurrence definitions have been proposed in order to improve the representation and quantification of the characteristic recurrence structure [9].

For example, the perpendicular recurrence plot has been suggested in order to reduce the effects of the tangential motion. The perpendicular RP is defined as
$$\displaystyle{ R_{i,j}^{m,\,\varepsilon } =\varTheta \left (\varepsilon -\left \|\mathbf{x}_{ i} -\mathbf{ x}_{j}\right \|\right ) \cdot \delta \left (\dot{\mathbf{x}}_{i} \cdot (\mathbf{x}_{i} -\mathbf{ x}_{j})\right ), }$$
(1.5)
where δ is the Delta function. This recurrence plot contains only those points \(\mathbf{x}_{j}\) that fall into the neighbourhood of \(\mathbf{x}_{i}\) and lie in the (m − 1)-dimensional subspace of \(\mathbb{R}^{m}\) that is perpendicular to the phase space trajectory at \(\mathbf{x}_{i}\). These points correspond locally to those lying on a Poincaré section.

1.2.2 Structures in Recurrence Plots

The fundamental purpose of RPs is the visualization of higher-dimensional phase space trajectories. Structural patterns in RPs reveal hints about the time evolution of these trajectories. Other distinct advantages of RPs are that they not only can operate on noisy data, but can also be applied to rather nonstationary as well as rather short data sets. RPs exhibit characteristic large scale and small scale patterns. The large scale appearance of RPs, their typology, can be classified as homogeneous, periodic, drift, and disrupted [9, 15]:
  • Homogeneous RPs typify stationary and autonomous systems in which relaxation times are short in comparison with the time spanned by the RP. An example of such an RP is that of a random time series (Fig. 1.2a).

  • RPs with diagonally oriented, periodic recurrent structures (diagonal lines, checkerboard structures) are hallmarks of oscillating systems. The illustration in Fig. 1.2b is a rather clear periodic system with two frequencies and a frequency ratio of four (the main diagonal lines are divided by four crossing short lines; irrational frequency ratios cause more complex periodic recurrent structures). However, even for those oscillating systems whose oscillations are not easily recognizable, the RPs can be used to find their oscillations [15].

  • Paling or darkening of recurrence points away from the LOI (drift) is caused by drifting systems with slowly varying parameters. Thus slow (adiabatic) changes in the dynamics over time brighten the RP’s upper-left and lower-right corners (Fig. 1.2c).

  • White areas or bands in RPs indicate abrupt changes in the dynamics as well as extreme events (Fig. 1.2d). In these cases, RPs can be used to find and assess extreme and rare events by scoring the frequency of their repeats.

Fig. 1.2

Characteristic typology of recurrence plots: (a) homogeneous (uniformly distributed noise), (b) periodic (super-positioned harmonic oscillations), (c) drift (logistic map \(x_{i+1} = 4x_{i}(1 - x_{i})\) corrupted with a linearly increasing term) and (d) disrupted (Brownian motion). These examples illustrate how different RPs can be. The used data have the length 400 (a, b, d) and 150 (c), respectively; no embeddings are used; the thresholds are \(\varepsilon = 0.2\sigma\) (a, c, d) and \(\varepsilon = 0.4\sigma\) (b)

Close inspection of RPs reveals small scale structures (the texture), which consist of combinations of isolated dots (chance recurrences), dots forming diagonal lines (deterministic structures), as well as vertical/horizontal lines or dots clustering to inscribe rectangular regions (laminar states, singularities). These small scale structures are the basis for the quantitative analysis of RPs.
  • Single, isolated recurrence points can occur if states are rare, if they do not persist for any time or if they fluctuate heavily. However, they are not a unique sign of chance or noise (for example in maps).

  • A diagonal line \(R_{i+k,j+k} = 1\) (for \(k = 1\ldots l\), where l is the length of the diagonal line) occurs when a segment of the trajectory runs parallel to another segment, i.e., the trajectory visits the same region of the phase space at different times. The length of this diagonal line is determined by the duration of such similar local evolution of the trajectory segments. The direction of these diagonal structures can differ. Diagonal lines parallel to the LOI (angle π∕4) represent the parallel running of trajectories with the same time evolution. Diagonal structures perpendicular to the LOI represent parallel running with contrary times (mirrored segments; this is often a hint of an inappropriate embedding). The lengths of diagonal lines in an RP are directly related to the degree of determinism or predictability immanent to the dynamics. If a system is predictable, similar situations (states), i.e., R i, j  = 1, will lead to a similar future, i.e., the probability of having \(R_{i+1,j+1} = 1\) is high. Perfectly predictable systems would thus have infinitely long diagonal lines in the RP. In contrast, the probability of \(R_{i+1,j+1} = 1\) is very low for stochastic systems, i.e., we find only single points or short lines. If the system is chaotic, close states will diverge exponentially in the future. The faster the divergence, i.e., the higher the Lyapunov exponent, the shorter the diagonals.

  • A vertical (horizontal) line \(R_{i,j+k} = 1\) (for \(k = 1\ldots v\), with v the length of the vertical line) marks a time interval in which a state does not change or changes very slowly; the state seems trapped for some time. This is typical behaviour of laminar states (intermittency) or of systems paused at singularities. Such structures can reveal discontinuities in the signal, which portend special states of the system.

These small scale structures are the base of a quantitative analysis of the RPs which is introduced in Sect. 1.3.

The examples in Fig. 1.2 illustrate the appearance of RPs for sundry dynamics. A large number of single points and vanishing line structures are caused by heavy fluctuations in the dynamics, such as seen in uncorrelated noise (Fig. 1.2a). The relationship between periodically recurrent structures and oscillators is obvious, as the exactly recurrent dynamics score as long diagonal lines separated by a fixed distance (Fig. 1.2b). The non-regular appearance of both short and long diagonal lines is characteristic for chaotic processes (Fig. 1.2c). The uneven occurrence of extended black clusters and extended white areas corresponds to non-regular behaviour in the system, such as found with correlated (red) noise (Fig. 1.2d).

The structures in an RP can also be used to estimate recurrence times, which have also been used to characterize the dynamics of a system [31, 32, 33, 34]. The distance between recurrence points in a column of the RP corresponds to the duration until a state recurs. There are several possibilities for estimating recurrence times: we can either measure the distance between the starting point of a recurrence structure and the starting point of the next recurrence structure [35] (Fig. 1.3a), or we can measure the distance between the end point of a recurrence structure and the starting point of the next recurrence structure, i.e., the length of the white vertical lines in an RP [34] (Fig. 1.3b). The more recurrence points are formed by the tangential motion (Fig. 1.4), the more extended the vertical recurrence structures are, leading to stronger differences between these two estimators. For such cases, recurrence times would be better estimated by using the (vertical) midpoints of the recurrence structures (Figs. 1.3c and 1.4c). The definition based on the distance between starting points gives an upper limit, and the estimator based on white vertical lines a lower limit, of the recurrence time.
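The white-vertical-line estimator (the second of the two definitions above) can be sketched per column of the recurrence matrix. This is an illustrative implementation; trailing white runs with no closing recurrence point are discarded by design.

```python
import numpy as np

def white_vertical_line_lengths(R):
    """Lengths of the white vertical lines in an RP: the gap between the
    end of one recurrence structure and the start of the next, per column
    (the lower-limit recurrence time estimator)."""
    lengths = []
    for col in R.T:
        run, started = 0, False
        for r in col:
            if r:
                if started and run > 0:
                    lengths.append(run)   # a white line just ended
                run, started = 0, True
            else:
                run += 1
        # a trailing white run (no recurrence point after it) is discarded
    return lengths

R = np.array([[1, 0, 0, 1, 1, 0, 1]]).T   # a single RP column
print(white_vertical_line_lengths(R))     # [2, 1]
```

The starting-point estimator of Fig. 1.3a would instead measure the distances between the first points of consecutive recurrence structures, yielding systematically larger values.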

Fig. 1.3

RP illustrating the estimators of recurrence times (black arrows), based on (a) distance between starting points, (b) white vertical lines, and (c) distance between the midpoints of the recurrence structures

Fig. 1.4

Recurrence points of a state \(\mathbf{x}_{i}\) illustrating the differences in the estimators for the recurrence time T in case of tangential motion (the preceding and subsequent recurrence points along the trajectory). (a) The estimator as given in Fig. 1.3a corresponds to the time distance between the first points falling into the neighborhood of state i (here i − 2 and \(i + T - 2\)). (b) The estimator as given in Fig. 1.3b corresponds to the time distance between the last and first points falling into the neighborhood of state i (here i + 2 and \(i + T + 2\)). (c) The Poincaré recurrence time T for the illustrated case

Summarizing the aforementioned points we can establish the following list of recurrent pattern structures and their corresponding qualitative interpretations:
  1. Homogeneity → the process is stationary

  2. Fading or darkening toward the upper left and lower right corners → the process is nonstationary or on a transient between states

  3. White banding disruptions → the process is nonstationary, some states are rare or far from the normal, or transitions may have occurred

  4. Periodic patterns → the process has characteristic cyclicities with periods corresponding to the time distance between the periodic structures

  5. Single isolated points → the process is dominated by heavy fluctuations or may even be stochastic

  6. Diagonal lines (parallel to the LOI) → the process is deterministic in the periodic sense (long diagonals) or the chaotic sense (short diagonals)

  7. Diagonal lines (orthogonal to the LOI) → the process is characterized by the evolution of palindromic (time-reversed) states

  8. Vertical and horizontal lines forming rectangles → some states do not change or change slowly for some time (laminar states), or the process is halted at a singularity in which the dynamics is stuck in paused states

The visual interpretation of RPs requires some experience. The study of RPs from paradigmatic systems gives a good introduction into characteristic typology and texture. However, the quantification of RPs offers a more objective way for evaluating the system under investigation. As we will see, quantitative recurrence variables are defined and extracted from recurrence plots in a non-biased fashion.

1.3 Recurrence Quantifications

Visually, RPs can provide useful insights into the dynamics of dynamical systems. However, they come with the disadvantage that users are forced to subjectively intuit and interpret the patterns and structures presented within the recurrence plot, especially when graphical displays have insufficient resolution to render them faithfully; and different observers see things differently. To overcome the subjectivity of the methodology, in the early 1990s Zbilut and Webber introduced definitions and procedures to quantify RP structures [28, 36, 37]. They defined a set of five recurrence variables that function as complexity measures based on the diagonal line structures in RPs, and coined the name recurrence quantification analysis (RQA).

1.3.1 Classical Recurrence Quantification Analysis

The first variable in RQA is percent recurrence (REC) or recurrence rate (RR)
$$\displaystyle{ \mathit{RR}(\varepsilon,N) = \frac{1} {N^{2} - N}\sum _{i\neq j=1}^{N}R_{ i,j}^{m,\,\varepsilon }, }$$
(1.6)
which simply enumerates or counts the black dots in the RP excluding the LOI. It is a measure of the relative density of recurrence points in the sparse matrix and is related to the definition of the correlation sum [38]. Large segments of data are required when RR is used as an estimator of the correlation sum. In the limit of long time series
$$\displaystyle{ P =\lim _{N\rightarrow \infty }\mathit{RR}(\varepsilon,N), }$$
(1.7)
is the probability of finding a recurrence point within the RP, i.e., the probability that states will recur. With knowledge of the probability density ρ(x) of a state x of a stochastic process, and for dimension m = 1 using the maximum norm, RR can be analytically computed via the convolution [23]
$$\displaystyle{ P_{o} =\rho (x) {\ast}\rho (x). }$$
(1.8)
This probability P o can be used to analytically describe the RQA measures for some systems [22, 23].
The following measures are based on line structures in the RP. First, we consider the histogram of the lengths of the diagonal structures in the RP,
$$\displaystyle{ H_{\text{D}}(l) =\sum _{ i,j=1}^{N}{\bigl (1 - R_{ i-1,j-1}\bigr )}{\bigl (1 - R_{i+l,j+l}\bigr )}\prod _{k=0}^{l-1}R_{ i+k,j+k}. }$$
(1.9)
The second variable in RQA is the percent determinism (DET), defined as the fraction of recurrence points that form diagonal lines
$$\displaystyle{ \mathit{DET} = \frac{\sum _{l=d_{\min }}^{N}l\,H_{\text{D}}(l)} {\sum _{i,j=1}^{N}R_{i,j}}. }$$
(1.10)
Systems possessing deterministic (rule-obeying) dynamics are characterized by diagonal lines, which indicate that segments of the trajectory recur repeatedly. For periodic signals the diagonal lines are long; for chaotic signals they are short; for stochastic signals they are absent, save for chance recurrences forming very short lines. DET can be interpreted as the predictability of the system, more so for periodic behaviors than for chaotic processes, but it must be realized that DET does not carry the strict mathematical meaning of the determinism of a process. The d min parameter sets the lower bound on the definition of lines in the RP and excludes the diagonal lines which are formed by the tangential motion of the phase space trajectory. Typically, d min is set to 2. For d min > 2, this parameter serves as a filter, excluding the shorter lines and decreasing DET, which is practically useful for the study of some dynamical systems. The choice of d min could be made in a similar way as the choice of the size of the Theiler window, but we have to take into account that a too large d min can impoverish the histogram H D(l) and thus the reliability of the measure DET.
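The diagonal line histogram of Eq. (1.9) and DET of Eq. (1.10) reduce to a run-length count along the diagonals of R. The sketch below excludes the LOI and exploits the symmetry of the RP; the sine series and ε are illustrative choices:

```python
import numpy as np

def diagonal_histogram(R):
    """H_D(l): number of diagonal lines of exact length l, Eq. (1.9), LOI excluded."""
    N = R.shape[0]
    H = np.zeros(N + 1, dtype=int)
    for k in range(1, N):                                # upper triangle only
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:   # sentinel flushes last run
            if v:
                run += 1
            elif run:
                H[run] += 1
                run = 0
    return 2 * H                                         # lower triangle mirrors the upper

def determinism(R, d_min=2):
    """DET per Eq. (1.10): fraction of recurrence points on diagonals of length >= d_min."""
    H = diagonal_histogram(R)
    l = np.arange(len(H))
    recurrent = R.sum() - np.trace(R)                    # off-LOI recurrence points
    return (l[d_min:] * H[d_min:]).sum() / recurrent

x = np.sin(np.linspace(0, 8 * np.pi, 200))
R = (np.abs(x[:, None] - x[None, :]) <= 0.1).astype(int)
det = determinism(R)
```

Note that with d min = 1 every recurrence point belongs to some run, so this implementation of Eq. (1.10) returns exactly 1, consistent with the denominator counting all recurrence points.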
The derived variable ratio (RATIO) has been defined as the ratio of DET to RR [36] and can be computed from the frequency distribution of the lengths of the diagonal lines
$$\displaystyle{ \mathit{RATIO} = N^{2}\, \frac{\sum _{l=d_{\min }}^{N}l\,H_{D}(l)} {\left (\sum _{l=1}^{N}l\,H_{D}(l)\right )^{2}}. }$$
(1.11)
A heuristic study of physiological systems has revealed that RATIO is useful in discovering dynamic transitions, since during certain types of transitions the RR decreases while DET remains unchanged, thereby increasing RATIO [36].
The third variable in RQA is the maximal line length in the diagonal direction (D max )
$$\displaystyle{ D_{\max } =\max \left \{\,l\,:\,H_{D}(l) > 0\,\right \} }$$
(1.12)
which is simply the length of the single longest diagonal within the entire RP. Since diagonal structures show the range in which a segment of the trajectory is rather close to another segment of the trajectory at a different time, these lines give a hint about the divergence of the trajectory segments. The smaller D max, the more divergent the trajectory segments. Based on this idea it is obvious that there is a relationship between the largest positive Lyapunov exponent (if there is one in the considered system) and D max. Indeed, the relationship can be found by considering the (cumulative) frequency distribution of the lengths of the diagonal lines and the K 2 entropy, which is a lower bound of the sum of the positive Lyapunov exponents [9].
Related to D max is the average diagonal line length
$$\displaystyle{ \langle D\rangle = \frac{\sum _{l=d_{\min }}^{N}l\,H_{D}(l)} {\sum _{l=d_{\min }}^{N}H_{D}(l)} }$$
(1.13)
which is the average time two segments of the trajectory are close to each other. In this case, 〈D〉 can be interpreted as the mean prediction time.
The fourth variable in RQA is the Shannon entropy of the frequency distribution of the diagonal line lengths (ENT)
$$\displaystyle{ \mathit{ENT} = -\sum _{l=d_{\min }}^{N}p(l)\ln p(l)\quad \mathrm{with}\quad p(l) = \frac{H_{D}(l)} {\sum _{l=d_{\min }}^{N}H_{D}(l)} }$$
(1.14)
which reflects the complexity of the deterministic structure in the system. The higher the ENT the more complex the dynamics; e.g., for uncorrelated noise or simple oscillations the value of ENT is rather small, indicating low complexity. (When the logarithm is taken to base 2, ENT is calibrated in bits/bin.) It should be noted that ENT depends sensitively on the bin number and, thus, may differ for different parameter choices of the same process (e.g. different thresholds \(\varepsilon\), different d min values, etc.), as well as for different data sets of course.
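Given a diagonal line-length histogram, the derived measures ⟨D⟩ (Eq. 1.13), ENT (Eq. 1.14) and D max (Eq. 1.12) reduce to elementary array operations. The histogram below is a hypothetical example, not taken from any real system, and the entropy uses the natural logarithm as in Eq. (1.14):

```python
import numpy as np

def line_measures(H, d_min=2):
    """<D> (Eq. 1.13), ENT (Eq. 1.14) and D_max (Eq. 1.12) from a histogram H_D."""
    l = np.arange(len(H))
    mask = (l >= d_min) & (H > 0)
    n_lines = H[mask].sum()
    mean_D = (l[mask] * H[mask]).sum() / n_lines     # average diagonal line length
    p = H[mask] / n_lines                            # p(l) of Eq. (1.14)
    ent = -(p * np.log(p)).sum()                     # Shannon entropy of line lengths
    d_max = l[H > 0].max()                           # longest diagonal line
    return mean_D, ent, d_max

# Hypothetical histogram: many short lines, a few longer ones
H = np.zeros(21, dtype=int)
H[2], H[5], H[20] = 60, 10, 1
mean_D, ent, d_max = line_measures(H)
```

A peaked histogram (one dominant line length, as for a periodic signal) drives ENT towards zero, while a broad distribution of line lengths raises it.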
The measures introduced up to now, RR, DET, D max, etc. can also be computed separately for each diagonal parallel to the LOI and with distance k to the LOI. For example, the recurrence point density along a diagonal of distance k from the LOI is
$$\displaystyle{ \mathit{RR}_{k} = \frac{1} {N - k}\sum _{j=1}^{N-k}R_{ j,j+k}. }$$
(1.15)
We denote such diagonalwise computed measures with a subscripted index or, in general, an asterisk, e.g., RR k or RR ∗. Diagonalwise calculated RQA measures are useful for studying the periodicity of a signal [39], for indicating periodic orbits [26, 40, 41], or for investigating the interrelationship between complex systems [42]. Moreover, RR k can be interpreted as the probability that a state of the system recurs after k time steps. This measure can be used to study phase-synchronization [43].
The fifth RQA measure is the trend (TND), which is a linear regression coefficient over the recurrence point density RR of the diagonals parallel to the LOI, Eq. (1.15), as a function of the time distance between these diagonals and the LOI
$$\displaystyle{ \mathit{TND} = \frac{\sum _{i=1}^{\tilde{N}}(i -\tilde{ N}/2)(\mathit{RR}_{i} -\langle \mathit{RR}_{i}\rangle )} {\sum _{i=1}^{\tilde{N}}(i -\tilde{ N}/2)^{2}}. }$$
(1.16)
The trend gives information about the stationarity versus nonstationarity of the process. Quasi-stationary dynamics will have TND values that hover near 0. Nonstationary dynamics will have TND values far from 0, revealing drift in the dynamics and possibly indicating that the system is en route between more stationary states. The TND computation excludes the edges of the RP (\(\tilde{N} < N\)) because the statistics there are unreliable due to the smaller number of recurrence points. The choice of \(\tilde{N}\) depends on the studied process. Whereas \(N -\tilde{ N} > 10\) should be sufficient for noise, this difference should be much larger for a process with some autocorrelation (ten times the order of magnitude of the autocorrelation time should always be enough). It should be noted that if time dependent RQA is used (measures computed in shifted windows, see Sect. 1.3.6), TND will depend strongly on the size of the windows and may reveal contrary results for different window sizes [20].
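A sketch of the diagonalwise recurrence rate RR k (Eq. 1.15) and the trend TND (Eq. 1.16). The two toy signals, a constant and a linear ramp, are illustrative only; the edge exclusion of 10 diagonals follows the rule of thumb for noise given above:

```python
import numpy as np

def trend(R, edge=10):
    """TND per Eq. (1.16): regression slope of RR_k over diagonal distance k."""
    N = R.shape[0]
    Nt = N - edge                                     # exclude the RP edges
    k = np.arange(1, Nt + 1)
    RRk = np.array([np.diagonal(R, offset=int(j)).mean() for j in k])  # Eq. (1.15)
    t = k - Nt / 2
    return (t * (RRk - RRk.mean())).sum() / (t**2).sum()

def rp(x, eps):
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

tnd_const = trend(rp(np.ones(100), 0.05))            # stationary signal: TND = 0
tnd_ramp = trend(rp(np.arange(100) / 100, 0.05))     # drifting signal: TND < 0
```

The monotone ramp pales out of its own neighbourhood with growing time distance, so RR k decays with k and the regression slope is negative, as expected for nonstationary dynamics.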

1.3.2 Extended Recurrence Quantification Analysis

The five RQA variables defined above are based largely on the lengths, number and distribution of diagonal lines in RPs. That is, they are sensitive to parallel trajectories along different segments of the time series; additional information about other geometrical structures is not included. But RPs contain not only diagonal lines but vertical and horizontal elements as well. From these vertical lines, additional recurrence quantifications have been posited by Marwan et al. [44].

From these considerations, the sixth RQA measure is the laminarity
$$\displaystyle{ \mathit{LAM} = \frac{\sum _{l=v_{\mathit{min}}}^{N}lH_{V }(l)} {\sum _{i,j=1}^{N}R_{i,j}}, }$$
(1.17)
with
$$\displaystyle{ H_{\text{V}}(l) =\sum _{ i,j=1}^{N}{\bigl (1 - R_{ i,j-1}\bigr )}{\bigl (1 - R_{i,j+l}\bigr )}\prod _{k=0}^{l-1}R_{ i,j+k} }$$
(1.18)
as the histogram of the lengths of vertical lines. LAM carries a definition analogous to that of DET: LAM reports the percentage of recurrence points in vertical structures, whereas DET reports the percentage of recurrence points in diagonal structures.

The computation of LAM is realized for those l that exceed a minimal length v min in order to decrease the influence of sojourn points. For iterated maps (as opposed to continuous flows) we typically set v min = 2. Since LAM quantifies the relative amount of vertical structuring over the entire RP, it also represents the frequency of occurrence of laminar states within the system. The length of the laminar phases in time is ignored, but LAM will decrease if the RP consists of more isolated recurrence points than points organized in vertical or diagonal structures.

Consequently, we define the seventh RQA measure as the average length of vertical structures
$$\displaystyle{ \mathit{TT} = \frac{\sum _{l=v_{\mathit{min}}}^{N}l\,H_{V }(l)} {\sum _{l=v_{\mathit{min}}}^{N}H_{V }(l)}, }$$
(1.19)
which we call trapping time TT. The computation also uses the minimal length v min as in LAM, Eq. (1.17). TT contains information about the amount and the length of the vertical structures in the RP by reporting the mean time the system will abide at a specific state (how long the state is trapped).
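The vertical-line histogram (Eq. 1.18) and the measures LAM (Eq. 1.17) and TT (Eq. 1.19) can be sketched analogously to the diagonal case. The 4×4 all-ones matrix below is a degenerate toy RP used only to check the bookkeeping, every column being one vertical line of length 4:

```python
import numpy as np

def vertical_histogram(R):
    """H_V(l): number of vertical lines of exact length l, Eq. (1.18)."""
    N = R.shape[0]
    H = np.zeros(N + 1, dtype=int)
    for j in range(N):
        run = 0
        for v in list(R[:, j]) + [0]:        # sentinel flushes the last run
            if v:
                run += 1
            elif run:
                H[run] += 1
                run = 0
    return H

def laminarity(R, v_min=2):
    """LAM per Eq. (1.17) and trapping time TT per Eq. (1.19)."""
    H = vertical_histogram(R)
    l = np.arange(len(H))
    weighted = (l[v_min:] * H[v_min:]).sum()
    return weighted / R.sum(), weighted / H[v_min:].sum()

lam, tt = laminarity(np.ones((4, 4), dtype=int))
```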
The eighth and last RQA measure is the maximal length of the vertical structures V max measuring the longest vertical line in the RP
$$\displaystyle{ V _{\mathit{max}} =\max \left \{\,l\,:\,H_{V }(l) > 0\,\right \} }$$
(1.20)
and is analogous to the diagonal measure D max, Eq. (1.12). The dynamical interpretation of V max has not been clearly delineated, but it can be related to singular states in which the system is stuck in a holding pattern, inscribing rectangles in the RP.

In contrast to the five basic RQA measures, these new measures are able to find chaos-chaos transitions [44]. Hence, they make the investigation of intermittency possible, even if it occurs only in rather short and nonstationary time series. Since these measures are zero for periodic dynamics, chaos-order transitions can also be identified.

1.3.3 Recurrence Time Based Measures

We can also use recurrence times for the definition of measures of complexity. Most of these measures make use of the probability distribution of the recurrence times T, i.e., p(T). There are measures like the recurrence period density entropy (the entropy of p(T)) [33], the mean recurrence time, or the number of the most probable recurrence time [45]. These measures make it possible to distinguish between different types of dynamics (like periodic, chaotic, or stochastic) or to detect the onset of dynamical transitions (e.g., from chaos to strange non-chaotic attractors).

1.3.4 Complex Network Based Quantification

A RP can be considered as the adjacency matrix of an undirected, unweighted complex network [46]. This allows the application of measures from complex network statistics, like the clustering coefficient, betweenness centrality, or average shortest path length [47, 48, 49]. In recent years this new field has developed rapidly and has shown that these new measures provide an additional, complementary view to the recurrence quantification analysis. In Chap. 4, a comprehensive overview of this topic will be given.

1.3.5 Advanced Quantification

Diagonal lines in a RP allow for the calculation of dynamical invariants, like the Rényi entropy of second order (correlation entropy). In the definition of the second order Rényi entropy the attractor is covered by \(M(\varepsilon )\) boxes of size \(\varepsilon\). Then, we measure the joint probability \(p(i_{1},\ldots,i_{l})\) that x(i) is in box i 1, x(i + 1) in box i 2, …, and, finally, x(i + l − 1) in box i l , and get the Rényi entropy [50] as
$$\displaystyle{ K_{2} = -\lim _{\varDelta t\rightarrow 0}\ \lim _{\varepsilon \rightarrow 0}\ \lim _{l\rightarrow \infty }\frac{1} {l\varDelta t}\ln \sum _{i_{1},\ldots,i_{l}}p^{2}(i_{1},\ldots,i_{l}), }$$
(1.21)
with Δ t the sampling time step. This measure is directly related to the number of possible trajectories that the system can take for l time steps in the future. If the system is perfectly deterministic in the classical sense, there will be only one possibility for the trajectory to evolve and, hence, K 2 = 0. In contrast, for purely stochastic systems the number of possible future trajectories increases to infinity so fast that K 2 → ∞. Chaotic systems are characterised by a finite value of K 2, as they belong to a category between purely deterministic and purely stochastic systems. The inverse of K 2 has units of time and can be interpreted as the mean prediction time of the system.
The sum of the probabilities \(p(i_{1},\ldots,i_{l})\) can be approximated by the probability p i (l) of finding a sequence of l points in boxes of size \(\varepsilon\) centred at the points \(x(i),\ldots,x(i + (l - 1))\), that can be estimated by the RP:
$$\displaystyle{ p_{i}(l) =\lim _{N\rightarrow \infty } \frac{1} {N}\sum _{s=1}^{N}\prod _{ k=0}^{l-1}\mathbf{R}_{ i+k,s+k}. }$$
(1.22)
Using this approximation, the second order Rényi entropy is [21, 23]
$$\displaystyle{ K_{2}(l) = -\frac{1} {l\,\varDelta t}\ln \left (p_{c}(l)\right ) = -\frac{1} {l\,\varDelta t}\ln \left ( \frac{1} {N^{2}}\sum _{t,s=1}^{N}\prod _{ k=0}^{l-1}\mathbf{R}_{ t+k,s+k}\right ), }$$
(1.23)
where \(p_{c}(l)\) is the probability to find a diagonal of at least length l in the RP.
On the other hand, the l-dimensional correlation sum can be used to define K 2 [51]. This definition of K 2 can also be expressed by means of RPs and yields the relationship [23]
$$\displaystyle{ p_{c}(l) \sim \varepsilon ^{D_{2} }e^{-l\,\varDelta t\,K_{2}}. }$$
(1.24)
D 2 is the correlation dimension of the system under consideration [52]. K 2 can be estimated from the slope of a logarithmic plot of p c (l) versus l, which corresponds to − K 2 Δ t for large l.
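The probability p c (l) of finding a diagonal of at least length l, and a resulting K 2 estimate, can be sketched directly from the recurrence matrix. This is a brute-force double loop, adequate only for short series; the sine series, ε, Δt and l are illustrative choices, and the slope is approximated from two successive values of p c rather than a full regression:

```python
import numpy as np

def p_c(R, l):
    """Probability of a diagonal of at least length l, cf. Eq. (1.23)."""
    N = R.shape[0]
    hits = 0
    for t in range(N - l + 1):
        for s in range(N - l + 1):
            if all(R[t + k, s + k] for k in range(l)):
                hits += 1
    return hits / N**2

x = np.sin(np.linspace(0, 8 * np.pi, 100))
R = (np.abs(x[:, None] - x[None, :]) <= 0.1).astype(int)

dt = 0.05                                  # assumed sampling time of the toy series
l = 5
K2_hat = -np.log(p_c(R, l + 1) / p_c(R, l)) / dt   # local slope of ln p_c(l)
```

For this periodic toy signal the estimate stays close to zero, consistent with K 2 = 0 for deterministic, non-chaotic dynamics.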

From the explanations above we have seen that the RP can be used to estimate probabilities, where the neighbourhood around a state \(\mathbf{x}_{i}\) corresponds to a certain binning. Based on these considerations, we can pursue this direction further and also calculate other measures which use probabilities, e.g., the generalised mutual information.

The mutual information quantifies the amount of information that the measurement of one variable provides about another. It has become a widely applied measure to quantify dependencies within or between time series (auto and cross mutual information). The time delayed generalised mutual information (redundancy) I q (τ) of a system \(\mathbf{x}_{i}\) is defined by [50]
$$\displaystyle{ I_{q}^{\mathbf{x}}(\tau ) = 2H_{ q} - H_{q}(\tau ). }$$
(1.25)
H q is the qth-order Rényi entropy of \(\mathbf{x}_{i}\) and H q (τ) is the qth-order joint Rényi entropy of \(\mathbf{x}_{i}\) and \(\mathbf{x}_{i+\tau }\):
$$\displaystyle{ H_{q} = -\ln \sum \limits _{i}p^{q}(i),\qquad H_{ q}(\tau ) = -\ln \sum \limits _{i,j}p^{q}(i,j;\tau ), }$$
(1.26)
where p(i) is the probability that the trajectory visits the ith box and p(i, j; τ) is the joint probability that the trajectory is first in box i and after some time τ in box j. Hence, for the case q = 2 we can use the RP to estimate H 2
$$\displaystyle{ H_{2} = -\ln \left ( \frac{1} {N^{2}}\sum _{i,j=1}^{N}\mathbf{R}_{ i,j}\right ) }$$
(1.27)
and H q (τ)
$$\displaystyle{ H_{2}(\tau ) = -\ln \left ( \frac{1} {N^{2}}\sum _{i,j=1}^{N}\mathbf{R}_{ i,j}\mathbf{R}_{i+\tau,j+\tau }\right ) = -\ln \left ( \frac{1} {N^{2}}\sum _{i,j=1}^{N}\mathbf{JR}_{ i,j}^{\mathbf{x},\mathbf{x}}(\tau )\right ), }$$
(1.28)
where \(\mathit{JR}_{i,j}(\tau )\) denotes the delayed joint recurrence matrix (see Sect. 1.4.2). Then, the second order generalised mutual information can be estimated from a RP by [23]
$$\displaystyle{ I_{2}^{\mathbf{x}}(\tau ) =\ln \left ( \frac{1} {N^{2}}\sum \limits _{i,j=1}^{N}\mathbf{JR}_{ i,j}^{\mathbf{x},\mathbf{x}}(\tau )\right ) - 2\ln \left ( \frac{1} {N^{2}}\sum \limits _{i,j=1}^{N}\mathbf{R}_{ i,j}\right ). }$$
(1.29)
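Equations (1.27)–(1.29) translate almost verbatim into code. The sketch below estimates H 2 and I 2(τ) from a single RP; the sine series and τ are illustrative. For τ = 0 the joint matrix collapses onto R itself, so I 2(0) = H 2:

```python
import numpy as np

def i2(R, tau):
    """Second-order generalised mutual information I_2(tau) from an RP, Eq. (1.29)."""
    N = R.shape[0]
    n = N - tau
    Rc = R[:n, :n]
    JR = Rc * R[tau:, tau:]                 # delayed joint recurrence matrix, Eq. (1.28)
    H2 = -np.log(Rc.sum() / n**2)           # Eq. (1.27)
    H2_tau = -np.log(JR.sum() / n**2)       # Eq. (1.28)
    return 2 * H2 - H2_tau                  # Eq. (1.25) with q = 2

x = np.sin(np.linspace(0, 8 * np.pi, 200))
R = (np.abs(x[:, None] - x[None, :]) <= 0.1).astype(int)
```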

A comprehensive description and explanation of how to estimate entropies from recurrence plots is given in Chap. 2.

1.3.6 Windowing Techniques

Fig. 1.5

Two possibilities of windowed RQA: (a) windowing of the time series and (b) windowing of the RP. The example is an auto-regressive process, Eq. (1.33); the RP is calculated using a constant number of neighbours (10 % of all points) and without embedding. The sub-RPs at the bottom clearly demonstrate the differences between the two approaches

RQA is powerful for the analysis of slight changes and transitions in the dynamics of a complex system. For this purpose we need a time-dependent RQA (an RQA series), which can be realised in two ways (Fig. 1.5):
  1. The RP is covered with small overlapping windows of size w spreading along the LOI, within which the RQA will be calculated, R i, j (\(i,j = k,\ldots,k + w - 1\)).
  2. The time series (or phase space trajectory) is divided into overlapping segments x i (\(i = k,\ldots,k + w - 1\)) from which RPs and the subsequent RQA will be calculated separately.

Such a time-dependent approach can be used to analyse the stationarity of the dynamical system or dynamical transitions, like period-chaos or chaos-chaos transitions.

Here we should note the following important points. The time scale of the RQA values depends on which point in the window is considered the corresponding time point. Selecting the first point k of the window as the time point of the RQA measures allows one to directly transfer the time scale of the time series to the RQA series. However, the window then reaches into the future of the current time point and, thus, the RQA measures represent a state which lies in the future; variations in the RQA measures could be misinterpreted as early signs of later state transitions (like a prediction). A better choice is therefore to select the centre of the window as the current time point of the RQA, so that the RQA considers states in the past and in the future. If strict causality is required (crucial when attempting to detect subtle changes in the dynamics just prior to the onset of dramatic state changes), it might even be useful to select the end point of the window as the current time point of the RQA (using embedding we have to add \((m - 1)\tau - 1\)). For most applications the centre point should be appropriate.

The two windowing methods (1) and (2) are only equivalent when we do not normalise the time series (or its pieces) from which the RP is calculated and when we choose a fixed threshold recurrence criterion. Both approaches (1) and (2) can be useful; the choice depends on the question at hand. If we know that the time series shows some nonstationarities or trends which are not of interest, then approach (2) can help to find transitions while neglecting these nonstationarities. But if we are interested in detecting overall changes (e.g., to test for nonstationarity), we should keep the numerical conditions constant for the entire available time and choose approach (1). In any case, for each RQA we should explicitly state how the windowing procedure has been performed.
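Approach (1), windowing of the RP itself, can be sketched as follows. The window size w and step are illustrative; each window returns its own RR, and any other RQA measure could be computed on the sub-RP in the same way:

```python
import numpy as np

def windowed_rr(R, w, step):
    """RR in windows of size w sliding along the LOI (windowing approach 1)."""
    N = R.shape[0]
    out = []
    for k in range(0, N - w + 1, step):
        sub = R[k:k + w, k:k + w]                     # sub-RP centred on the LOI
        out.append((sub.sum() - np.trace(sub)) / (w**2 - w))
    return np.array(out)

# Degenerate toy RP: all points recurrent, so every window has RR = 1
series = windowed_rr(np.ones((100, 100), dtype=int), w=20, step=10)
```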

For the choice of the window size we have to consider the following fact. Because the RQA measures are statistical measures derived from histograms, the window should be large enough to cover a sufficient number of recurrence lines or orbits. A too small window can feign strong fluctuations in the RQA measures simply through weak statistical significance (e.g., the RQA measure TND is very sensitive to the window size and can reveal even contrary results). Therefore, the chosen window size has to be checked carefully, and conclusions about nonstationarity or dynamical transitions have to be validated by significance tests [20, 53].

1.3.7 Remark on Significance

Variations in these measures of complexity are mostly of a relative nature. In order not to draw wrong conclusions from non-significant variations, a statistical test is strongly suggested. Two approaches have recently been suggested, both based on bootstrapping recurrence features (e.g., diagonal lines). The first approach estimates the local variation of a RQA measure within one window by bootstrapping the recurrence structures in the current window and calculating an empirical test distribution for the RQA measures for each sliding window separately [25]. This approach results in a confidence band around the varying RQA measure and can help when comparing the RQA variation of several studied time series. The second approach merges the local distributions, e.g., of the diagonal lines H D(i), Eq. (1.9), of the sliding windows i by \(\hat{H}_{\text{D}} =\sum _{i}H_{\text{D}}(i)\), and then bootstraps the recurrence structures from this “global” distribution [53]. The bootstrapped recurrence structures are used to calculate the RQA measures, finally providing one empirical test distribution of the current RQA measure for the entire time series. This can be used to estimate a confidence level for the variation of the RQA measure in order to check whether a transition is significant or not (see the confidence levels in Fig. 1.6).
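The second, "global" bootstrap approach can be sketched as follows: resample individual diagonal lines from the merged histogram, recompute DET for each resample, and read off an empirical confidence band. The histogram is a hypothetical example (isolated points plus lines of several lengths), and 200 resamples are used only to keep the sketch fast:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_det(H, d_min=2, n_boot=200):
    """Empirical test distribution of DET by resampling diagonal lines from H_D."""
    lengths = np.repeat(np.arange(len(H)), H)          # one entry per individual line
    det = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(lengths, size=len(lengths), replace=True)
        det[b] = sample[sample >= d_min].sum() / sample.sum()
    return det

# Hypothetical merged histogram: isolated points plus lines of several lengths
H = np.zeros(21, dtype=int)
H[1], H[2], H[5], H[20] = 40, 60, 10, 1
dets = bootstrap_det(H)
lo, hi = np.percentile(dets, [0.5, 99.5])              # 99 % confidence band
```

An observed DET value outside [lo, hi] would then be judged a significant deviation.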

Fig. 1.6

(a) Poincaré section of the y-component of the Rössler system at x = 0. A periodic window at time t ≈ 400, …, 600, corresponding to the control parameter range around c ∈ [36.5, 37.5], is clearly visible. An unstable periodic orbit (UPO) appears at c ≈ 41. The RQA measures are calculated within moving windows of size w = 1,500 with an overlap of 20 %. (b) The RQA measure DET reveals very high values for the entire time period due to the deterministic nature of the system. In the periodic window, (c) the mean diagonal line length \(\langle D\rangle\) has high values, whereas (d) the measure ENT decreases. Both 〈D〉 and ENT have increased values at the UPO. (e) T 2 reveals the longer recurrence times during periodic dynamics and (f) the transitivity coefficient \(\mathcal{T}\) measures regular dynamics, as present in the periodic window and the UPO. A surrogate test was applied to mark the 99 % confidence interval

1.3.8 Example: Rössler System with Regime Transitions

To illustrate the potential of recurrence quantification we consider the Rössler system, Eq. (1.35), [54], with \(a = b = 0.25\) and continuously changing parameter c. We integrate the equations with a fourth-order Runge–Kutta scheme over a time period of 2,200 s and with a sampling time of Δt = 0.05. We remove the first 1,000 values in order to discard transients, i.e., the resulting time series has a length of N = 43,001. With each time step, c increases by 0.004, resulting in a range of c ∈ [35.2, 43.8]. Within this interval, the system exhibits transitions from chaotic to periodic and back to chaotic states, e.g., a periodic window around c ∈ [36.5, 37.5] (Fig. 1.6a), and an unstable periodic orbit (UPO) at c ≈ 41.

The RQA measures are calculated from the x-component of the system, using an embedding with dimension m = 3 and delay τ = 6, a fixed recurrence rate of RR = 0.05, and a minimal line length of d min = 10.

In order to calculate the RQA measures within each window (of window size 1,500 and 20 % overlap), we had to calculate the local distributions of recurrence structures, e.g., of the diagonal lines H D(i). These local distributions are merged together and used for a bootstrapping based estimation of a test distribution of the RQA measures, using 1,000 samplings and a confidence level of 99 % [53].

From the x-component of the Rössler system we calculate the RQA measures DET, 〈D〉, ENT, as well as the mean recurrence time T 2 [35], and the transitivity coefficient \(\mathcal{T}\) [47]. The deterministic character of the system is well reflected by high values of DET over the complete time interval (Fig. 1.6b), also revealing that the RP consists mainly of diagonal lines. The confidence interval clearly indicates that the variation of DET does not differ significantly over time. Without such a statistical test we might be misled into interpreting the variations as real and, therefore, draw wrong conclusions about the transitions in the system. During the periodic regime the diagonal lines span the whole RP, leading to an increase of the mean diagonal line length 〈D〉 during this period (Fig. 1.6c). A UPO also causes many diagonal lines with increased length and a reduction of short lines, yielding increased 〈D〉. The distribution of the line lengths in the periodic window is less complex than during chaotic dynamics. Therefore, the entropy of the line length distribution ENT decreases in the periodic window, but increases at the UPO, because the UPO duration is smaller than the window length (Fig. 1.6d). The mean recurrence time also increases during the periodic regime and the UPO (Fig. 1.6e). The transitivity coefficient \(\mathcal{T}\) can distinguish regular from irregular dynamics [46, 55], and thus reveals the periodic window and the UPO due to their more regular dynamics compared with the other chaotic regimes.

1.4 Bivariate Extensions of Recurrence Analysis

Bivariate recurrence analysis allows the study of correlations, couplings, coupling directions, or synchronization between dynamical systems. Depending on the purpose and application, there are two major directions for such a bivariate extension: the cross recurrence plot and the joint recurrence plot.

1.4.1 Cross Recurrence Plot

The cross recurrence plot (CRP) is a bivariate extension of the RP and was introduced for the investigation of the simultaneous evolution of two different phase space trajectories, allowing the study of dependencies between two different systems [56, 57]. Suppose we have two dynamical systems, each one represented by the trajectories \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\) in the same d-dimensional phase space (Fig. 1.7a). We find the corresponding cross recurrence matrix (Fig. 1.7b) by computing the pairwise mutual distances between the phase vectors of the two systems:
$$\displaystyle{ \mathit{CR}_{i,j}^{\mathbf{x},\mathbf{y}}(\varepsilon ) =\varTheta \left (\varepsilon -\|\mathbf{x}_{ i} -\mathbf{ y}_{j}\|\right ),\qquad i = 1,\ldots,N,\ j = 1,\ldots,M, }$$
(1.30)
where the lengths of the trajectories of \(\mathbf{x}\) and \(\mathbf{y}\) are not required to be identical, hence, the matrix CR is not necessarily square. Note that both systems must be represented in the same phase space, because a CRP looks for those times when a state of the first system recurs to one of the other system. With experimental data it is sometimes difficult to reconstruct the phase space. If the embedding parameters are estimated from both time series but are not equal, the higher embedding dimension should be chosen. However, the data under consideration should be from the same (or a very comparable) process and, actually, should represent the same observable; the reconstructed phase space should therefore be the same. The components of \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\) are usually normalised before computing the cross recurrence matrix, in order to make both systems comparable.
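A minimal sketch of the cross recurrence matrix of Eq. (1.30) for two scalar series, with the z-score normalisation suggested above. The series, the phase shift between them, and ε are illustrative choices:

```python
import numpy as np

def crp(x, y, eps):
    """CR per Eq. (1.30); the matrix need not be square if len(x) != len(y)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = (x - x.mean()) / x.std()            # normalise both systems (see text)
    y = (y - y.mean()) / y.std()
    return (np.abs(x[:, None] - y[None, :]) <= eps).astype(int)

t = np.linspace(0, 8 * np.pi, 200)
CR = crp(np.sin(t), np.sin(t[:150] + 0.5), eps=0.1)   # second series shorter, shifted
```

For two identical series the main diagonal of CR is fully occupied; the phase shift above displaces this line away from the main diagonal, illustrating the LOS dislocation discussed below.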
Fig. 1.7

(a) Phase space trajectories of two coupled Rössler systems, Eqs. (1.36) and (1.37), with a = 0.15, b = 0.20, c = 10, ν = 0.015 and μ = 0.01, using their three components (black and grey lines correspond to the first and second oscillator). In (b) the corresponding CRP is shown (Euclidean norm and \(\varepsilon = 3\) are used). (a) If a phase space vector of the second Rössler system at j (grey point on the grey line) falls into the neighbourhood (grey circle) of a phase space vector of the first Rössler system at i, a black point occurs in the CRP (b) at the location (i, j)

Since the values of the main diagonal CR i, i (i = 1, …, N) are not necessarily one, there is usually no black main diagonal (Fig. 1.7b). Apart from that, the statements given in the subsection about the structures in RPs (Sect. 1.2.2) also hold for CRPs. The diagonally oriented lines are of major interest here too: they represent segments on both trajectories which run parallel for some time. The frequency and length of these lines are obviously related to a certain similarity between the dynamics of both systems. A measure based on the lengths of such lines can be used to find nonlinear interrelations between two systems which cannot be detected by the common cross-correlation function [57].

An important advantage of CRPs is that they reveal the local difference of the dynamical evolution of close trajectory segments, represented by bowed lines. A time dilatation or time compression of one of the trajectories causes a distortion of the diagonal lines [58]. A time shift between the trajectories causes a dislocation of the LOS. Hence, the LOS may lie rather far from the main diagonal of the CRP.

Fig. 1.8

(a, b) Phase space trajectories of two coupled Rössler systems, Eqs. (1.36) and (1.37), with a = 0.15, b = 0.20, c = 10, ν = 0.015 and μ = 0.01. In (c) the corresponding JRP is shown (L 2 norm and \(\varepsilon = 5\) are used for both systems). If two phase space vectors of the second Rössler system at i and j are neighbours [black points in (b)] and two phase space vectors of the first Rössler system at the same i and j are also neighbours [black points in (a)], a black point occurs in the JRP at the location (i, j)

1.4.2 Joint Recurrence Plot

If we ask whether two systems have a similar recurrence structure, i.e., whether their states recur in a simultaneous way, we will use the joint recurrence plot [59, 60]. Here we consider the recurrences of the trajectories of the two systems in their respective phase spaces separately and look for the times when both of them recur simultaneously, i.e., when a joint recurrence occurs. By means of this approach, the individual phase spaces of both systems can be used, which may even have different embedding dimensions. Furthermore, two different thresholds for each system, \(\varepsilon ^{\mathbf{x}}\) and \(\varepsilon ^{\mathbf{y}}\), can be considered, so that the criteria for choosing the threshold (Sect. 1.2.1) can be applied separately, respecting the natural measure of both systems. The joint recurrence matrix (Fig. 1.8) for two systems \(\mathbf{x}\) and \(\mathbf{y}\) is then the element-wise product of the single RPs
$$\displaystyle{ \mathit{JR}_{i,j}^{\mathbf{x},\mathbf{y}}(\varepsilon ^{\mathbf{x}},\varepsilon ^{\mathbf{y}}) =\varTheta \left (\varepsilon ^{\mathbf{x}} -\|\mathbf{ x}_{ i} -\mathbf{ x}_{j}\|\right )\varTheta \left (\varepsilon ^{\mathbf{y}} -\|\mathbf{ y}_{ i} -\mathbf{ y}_{j}\|\right ),\quad i,j = 1,\ldots,N. }$$
(1.31)
In this approach, a recurrence will take place if a point \(\mathbf{x}_{j}\) on the first trajectory returns to the neighbourhood of a former point \(\mathbf{x}_{i}\), and simultaneously a point \(\mathbf{y}_{j}\) on the second trajectory returns to the neighbourhood of a former point \(\mathbf{y}_{i}\). That means that the joint probability that both recurrences (or n recurrences, in the multidimensional case) happen simultaneously in their respective phase spaces is studied. In such a definition of a recurrence it is not necessary that the recurrences occur at the same states of the considered systems.

The JRP is invariant under permutation of the coordinates in one or both of the considered systems.

Moreover, a delayed version of the joint recurrence matrix can be introduced
$$\displaystyle{ \mathit{JR}_{i,j}^{\mathbf{x},\mathbf{y}}(\varepsilon ^{\mathbf{x}},\varepsilon ^{\mathbf{y}},\tau ) = R_{ i,j}^{\mathbf{x}}(\varepsilon ^{\mathbf{x}})R_{ i+\tau,j+\tau }^{\mathbf{y}}(\varepsilon ^{\mathbf{y}}),\qquad i,j = 1,\ldots,N-\tau, }$$
(1.32)
which is very useful for the analysis of interacting delayed systems (e. g. for lag synchronisation).
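The joint recurrence matrix of Eqs. (1.31) and (1.32) is just the element-wise product of the two individual RPs, optionally with a delay. In this sketch both RPs are built from the same sine series, and the delay is an illustrative choice:

```python
import numpy as np

def jrp(Rx, Ry, tau=0):
    """JR per Eq. (1.31); with tau > 0 the delayed version of Eq. (1.32)."""
    if tau == 0:
        return Rx * Ry
    n = Rx.shape[0] - tau
    return Rx[:n, :n] * Ry[tau:tau + n, tau:tau + n]

x = np.sin(np.linspace(0, 8 * np.pi, 200))
Rx = (np.abs(x[:, None] - x[None, :]) <= 0.1).astype(int)
JR = jrp(Rx, Rx)          # identical systems: joint recurrences = own recurrences
```

Because each individual RP is built only from distances within its own system, rotating one system leaves its RP, and hence the JRP, unchanged, in line with the invariance discussed in Sect. 1.4.3.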

The JRP can be used to estimate joint recurrence probabilities and even conditional recurrence probabilities [61, 62], which is useful for the study of coupling directions (Chap. 3).

1.4.3 Comparison Between CRPs and JRPs

In order to illustrate the difference between CRPs and JRPs, we consider the phase space trajectory of the Rössler system, Eq. (1.35), in three different situations: the original trajectory (Fig. 1.9a), the trajectory rotated about the z-axis (Fig. 1.9b) and the trajectory under a parabolic stretching/compression of the time scale (Fig. 1.9c). These three trajectories look very similar; one of them is rotated and the other one carries a different time parametrisation (but looks identical to the original trajectory in phase space).

At first, let us look at the RPs of these three trajectories. The RP of the original trajectory is identical to the RP of the rotated one, as expected (Fig. 1.10a, b). The RP of the stretched/compressed trajectory looks different from the RP of the original trajectory (Fig. 1.10c): it contains bowed lines, as the recurrent structures are shifted and stretched in time with respect to the original RP.

Fig. 1.9

(a) Phase space trajectory of the Rössler system [Eqs. (1.35), with a = 0.15, b = 0.2 and c = 10]. (b) Same trajectory as in (a) but rotated about the z-axis by \(\frac{3}{5}\pi\). (c) Same trajectory as in (a) but with the time scale transformed by \(\tilde{t} = t^{2}\)

Fig. 1.10

RPs of the (a) original trajectory of the Rössler system, (b) of the rotated trajectory and (c) of the stretched/compressed trajectory. (d) CRP and (e) JRP of the original and rotated trajectories and (f) CRP and (g) JRP of the original and stretched/compressed trajectories. The threshold for recurrence is \(\varepsilon = 1\)

Now we calculate the CRP between the original trajectory and the rotated one (Fig. 1.10d) and observe that it is rather different from the RP of the original trajectory (Fig. 1.10a). This is because in the CRP the difference between each pair of vectors is computed, and this difference is not invariant under rotation of one of the systems. Hence, a rotation of the reference frame of one trajectory changes the CRP. Therefore, the CRP cannot detect that both trajectories are identical up to a rotation. In contrast, the JRP of the original trajectory and the rotated one (Fig. 1.10e) is identical to the RP of the original trajectory (Fig. 1.10a). This is because the JRP considers joint recurrences, i.e. recurrences which occur simultaneously in both systems, and recurrences are invariant under rotations (and, more generally, under isometries).

The CRP between the original trajectory and the stretched/compressed one contains the bowed LOS, which reveals the functional shape of the parabolic transformation of the time scale (Fig. 1.10f). Note that the CRP represents the times at which both trajectories visit the same region of the phase space. On the other hand, the JRP of these trajectories is almost empty (Fig. 1.10g) because the recurrence structure of both systems is now different. Both trajectories have different time scales, and hence, there are almost no joint recurrences. Therefore, the JRP is not able to detect the time transformation applied to the trajectory, even though the shape of the phase space trajectories is very similar.
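The contrast between the two plots rests on a simple geometric fact: a rotation is an isometry, so it leaves all within-system distances (and hence the RP and JRP) unchanged, while the between-system distances entering the CRP do change. This is easy to verify numerically; the sketch below uses a hypothetical random-walk trajectory as a stand-in for the Rössler orbit:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=(200, 3)), axis=0)  # stand-in phase-space trajectory

# rotate about the z-axis by 3/5 * pi, as in Fig. 1.9b
a = 3 / 5 * np.pi
rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0,        0.0,       1.0]])
y = x @ rot.T

eps = 1.0
def rp(t):
    """Thresholded recurrence matrix with Euclidean norm."""
    d = np.linalg.norm(t[:, None, :] - t[None, :, :], axis=-1)
    return (d <= eps).astype(int)

rx, ry = rp(x), rp(y)
jrp = rx * ry                     # joint recurrence plot
crp = (np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) <= eps).astype(int)

print("JRP equals RP of original:", np.array_equal(jrp, rx))
print("CRP equals RP of original:", np.array_equal(crp, rx))
```

The first comparison comes out true (distances are preserved, so the rotated trajectory has the same RP and the elementwise product reproduces it), while the second comes out false, mirroring the difference between Fig. 1.10d and e.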

To conclude, we can state that CRPs are more appropriate for investigating relationships between parts of the same system which have been subjected to different physical or mechanical processes, e.g., two borehole cores in a lake subjected to different compression rates. On the other hand, JRPs are more appropriate for the investigation of two interacting systems which influence each other, and hence adapt to each other, e.g., in the framework of phase and generalised synchronisation or causal couplings (see Chap. 3).

1.5 Computational Foundations of Recurrence Quantification Analysis

1.5.1 Brief Historical Background

The mathematical concept of recurrences traces back to Poincaré (1890) [13] and Feller (1950) [63] and has direct application to dynamical systems. Eckmann et al. (1987) [15] incorporated these ideas into a qualitative tool, the recurrence plot. A few years later Zbilut and Webber [28, 36] quantified the recurrence plot and introduced the concept of recurrence quantifications by defining five recurrence variables: recurrence rate, determinism, max diagonal line, line entropy, and trend. A decade later Marwan [44] added three new recurrence variables: laminarity, max vertical line, and trapping time. The details of these measures of complexity have already been discussed in depth in Sect. 1.3 of this chapter. To make a long story short, from these foundational papers has grown a large literature across many fields, along with five international symposia on recurrence plots held every other year from 2005 through 2013 in Potsdam, Siena, Montreal, Hong Kong, and Chicago, respectively, a growth also reflected by the impressive list of recurrence papers at the webpage http://www.recurrence-plot.tk/.

The generation of recurrence plots and recurrence quantifications requires high capacity and high speed computers. As the speed of machines has increased dramatically over the last four decades [64], so too has the ease of computation of distance and recurrence matrices. The authors are very familiar with various programming languages and have implemented computational strategies in their doctoral dissertations [65, 66]. One of the authors (Webber) has programmed recurrence algorithms using the C language [67] for the disk operating system (DOS) [68] and the other (Marwan) has programmed the algorithms as a MATLAB Toolbox [69]. A third popular format has been devised for MS Windows-based operating systems by Kononov [70].

1.5.2 Computational Strategies

The purpose of this section is to provide a generalized flow chart for RQA calculations covering three principal categories: recurrence plots (RQD, KRQD, JRQD), recurrence quantifications (RQS, KRQS, JRQS), and recurrence windows (RQE, KRQE, JRQE). Each class of programs will be covered separately. The flow chart common to all these approaches is shown in Fig. 1.11.

Fig. 1.11

Generalized flow chart for all recurrence computations

First, the program starts (S) by receiving a single input data vector for auto-recurrences or dual input data vectors for cross-recurrences. These data are in standard ASCII format (numeric codes devoid of any alphabetic codes). Second, RQA parameters are entered either manually or automatically from a parameter file. Third, the ordering of the input data set within the recurrence window can either be retained or shuffled to destroy correlative coupling within the data stream (randomized control). Fourth, the data set can be left alone or normalized over the unit interval. Fifth, the computation enters a do/while loop in which the distance and thresholded recurrence matrices are computed. Here the distance matrix can be rescaled to the maximum distance or mean distance of the matrix, or left in absolute distance units. Sixth, the recurrence plot can be displayed or skipped prior to the calculation of RQA variables. Seventh, the RQA variables are reported to the screen and/or an output file in ASCII format. Eighth, the data can be updated by the selection of new parameters or by shifting the recurrence window to a new segment of the input data. Ninth, the looping halts (H) either by manual interruption or by automatic exit depending upon the input data length.
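Steps four through seven, the computational core of the flow chart, can be sketched compactly. The code below is an illustrative reimplementation rather than the published RQA programs; it assumes unit-interval normalization, the Euclidean norm, maximum-distance rescaling, and a radius given as a percentage of the maximum distance:

```python
import numpy as np

def embed(series, dim, delay):
    """Time-delay embedding of a scalar series into dim-dimensional phase space."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])

def run_lengths(binary):
    """Lengths of runs of consecutive ones in a 0/1 vector."""
    lengths, count = [], 0
    for v in binary:
        if v:
            count += 1
        elif count:
            lengths.append(count)
            count = 0
    if count:
        lengths.append(count)
    return lengths

def rqa_core(series, dim, delay, radius_pct, lmin=2):
    # Step 4: normalize the input data over the unit interval
    s = (series - series.min()) / (series.max() - series.min())
    # Step 5: distance matrix, rescaled to its maximum distance, then thresholded
    e = embed(s, dim, delay)
    d = np.linalg.norm(e[:, None, :] - e[None, :, :], axis=-1)
    r = (100.0 * d / d.max() <= radius_pct).astype(int)
    # Step 7: two RQA variables, %recurrence and %determinism (LOI excluded)
    n = len(r)
    off_loi = ~np.eye(n, dtype=bool)
    rec_pts = int(r[off_loi].sum())
    recurrence = 100.0 * rec_pts / off_loi.sum()
    det_pts = 2 * sum(L for k in range(1, n)        # upper triangle, doubled by symmetry
                      for L in run_lengths(np.diag(r, k)) if L >= lmin)
    determinism = 100.0 * det_pts / rec_pts if rec_pts else 0.0
    return recurrence, determinism

rec, det = rqa_core(np.sin(np.linspace(0, 8 * np.pi, 400)), dim=3, delay=10, radius_pct=30)
```

For a pure sine wave this yields a high %determinism, as expected for a deterministic signal.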

1.5.3 Example Program Runs

For demonstration purposes three brainwave signals from a single subject will be used to illustrate the operation of recurrence programs found within a larger suite of RQA programs [68].

Electroencephalographic (EEG) signals were digitized at 1,000 Hz in a patient using the standard 10–20 electrode system as diagrammed in Fig. 1.12 (e.g., see [71]). One-second traces of the waveforms from three sites (FZ, F3 and F7) are displayed in Fig. 1.13. Note that electrode FZ is closer to electrode F3 than to electrode F7. These physical separations become important when comparing electrical activations in pairs (kross and joint recurrences) (e.g. near pair FZ-F3 versus distant pair FZ-F7).

Fig. 1.12

Electrode placements in the 10–20 EEG system. Figure reproduced from Wikipedia (public source)

Fig. 1.13

One-second recordings from EEG electrode positions FZ, F3 and F7 in a normal, healthy and resting human subject

1.5.3.1 Programs RQD, KRQD and JRQD

Individual RPs of the three EEG signals are shown in Fig. 1.14. Parameter settings for each time series were identical: window of 500 points (500 ms), embedding dimension of 10, delay of 50 (50 ms), Euclidean norm, maximum distance rescaling, radius of 30 % of the maximum distance and line of 2. The distribution of recurrent points in each plot is not homogeneous, indicating non-stationarity in the signals. The recurrence density and deterministic structuring for each signal are unique: FZ (7.428 and 96.903 %); F3 (2.833 and 86.333 %); F7 (4.336 and 89.832 %).

Fig. 1.14

Recurrence plots of EEG signals at three brain sites, FZ, F3 and F7

To study how two signals are correlated in time, CRPs are shown in Fig. 1.15 (left). Program KRQD was run on paired signals using the same parameter settings as for the RPs. As can be seen, the recurrences and determinisms for the two pairs are rather similar for this single window in time: FZ:F3 (4.769 and 92.997 %); FZ:F7 (4.252 and 91.552 %). This means that the location of the signals on the skull, near (FZ to F3) or distant (FZ to F7), cannot be discriminated.

Lastly, to study how two signals share a similar recurring behavior, JRPs are shown in Fig. 1.15 (right). Program JRQD was run on paired signals using the same parameter settings as for the auto and cross recurrences. Recurrent points in JRPs signify shared recurrent points in the RPs of the individual signals. For this reason, JRPs are necessarily symmetrical about the LOI, as are the RPs. For both pairs, the recurrence densities are lower than for the auto recurrences and kross recurrences, but the deterministic structuring is still high: FZ:F3 (1.699 and 89.901 %); FZ:F7 (1.525 and 90.168 %). Again, the data at this time window do not discriminate on signal location.

Fig. 1.15

Cross recurrence plots (left) and joint recurrence plots (right) for paired EEG signals FZ:F3 and FZ:F7

1.5.3.2 Programs RQS, KRQS and JRQS

Implementation of RPs and RQA requires a familiarity with some characteristics of the dynamical system one is exploring. Some systems are best studied as flows (continuous smooth fluctuations in time) whereas other systems are best studied as maps (discontinuous jumps in time). For example, the electroencephalogram (EEG) is a flow, but the series of time intervals from one zero-crossing to the next is a map (literally a Poincaré section of the EEG flow). Whether dealing with flows or maps, it is critical to set the RQA parameters appropriately for each signal. There are seven key parameters: window size, embedding dimension, delay between embedded points, type of norm (min, max or Euclidean), method of rescaling the distance matrix (absolute, mean distance, maximum distance), radius threshold, and line parameter (defining d min and v min). The proper selection of these parameters is described earlier in this chapter as well as elsewhere [72].

The three programs of interest (RQS, KRQS and JRQS) are all scaling programs that conveniently increment four key RQA parameters and generate large matrices of recurrence quantifications. The user can select the range of points to be studied, the range of delays, the range of embedding dimensions and the range of thresholds, each of which is systematically incremented. The delay can be found using either the autocovariance function or minimal mutual information and held constant when running these scaling programs. For example, program RQS was run on EEG variable FZ on a single window of 500 points with a delay of 50 points while incrementing the embedding dimension from 1 to 40 and the radius threshold from 0 to 50. The output matrix contained sufficient data to plot three-dimensional surfaces for each of the 8 RQA variables as functions of embedding and radius. We show only the topology of the entropy variable in Fig. 1.16 over the embedding and radius parameter space. Three-dimensional graphs such as these for determinism, for example, can be used for visual selection of the radius threshold and embedding dimension [68].
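Such a scaling run reduces to two nested loops over the parameter grid. The sketch below is illustrative (not program RQS itself), scanning embedding dimension and radius at a fixed delay for a synthetic noisy sine wave:

```python
import numpy as np

def embed(series, dim, delay):
    """Time-delay embedding of a scalar series."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])

def recurrence_rate(series, dim, delay, radius_pct):
    """%recurrence with maximum-distance rescaling, LOI excluded."""
    e = embed(series, dim, delay)
    d = np.linalg.norm(e[:, None, :] - e[None, :, :], axis=-1)
    r = 100.0 * d / d.max() <= radius_pct
    off = ~np.eye(len(r), dtype=bool)
    return 100.0 * r[off].sum() / off.sum()

# delay held fixed (chosen beforehand, e.g. by minimal mutual information);
# embedding dimension and radius are incremented systematically
rng = np.random.default_rng(7)
signal = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.normal(size=400)
dims = list(range(1, 11))
radii = list(range(5, 55, 5))
surface = np.array([[recurrence_rate(signal, m, 10, rad) for rad in radii] for m in dims])
# surface[i, j] is %recurrence at embedding dims[i] and radius radii[j];
# plotted over the grid it yields a diagnostic sheet like Fig. 1.16
```

Because the recurrence criterion relaxes as the radius grows, each row of the surface is non-decreasing along the radius axis, a useful sanity check on any implementation.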

Fig. 1.16

Topology of RQA variable entropy in parameter space of radius and embedding dimension

1.5.3.3 Programs RQE, KRQE and JRQE

One of the most useful applications of recurrence quantifications is to examine long time series of data using a small moving window traversing the data. For example, in retrospective studies it is possible to study subtle shifts in dynamical properties just before a large event occurs. One can ask questions like [44, 72]: What RQA quantifications change just prior to a brain seizure or heart fibrillation? To show the dynamical richness of EEG signals, we used program RQE to compute recurrence variables with 838 sliding windows. Each window was 500 points (500 ms) with starting times offset by only 5 points (5 ms), giving exactly 99 % overlap between adjacent windows (Fig. 1.17). The other RQA parameters were selected as before for these same EEG signals.
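The windowing itself is straightforward: slide a fixed-length epoch along the series in small steps and quantify each epoch independently. A hedged sketch follows, with a synthetic signal standing in for the EEG record and a compact determinism routine in place of program RQE:

```python
import numpy as np

def embed(series, dim, delay):
    """Time-delay embedding of a scalar series."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])

def determinism(window, dim, delay, radius_pct, lmin=2):
    """%determinism: share of recurrence points on diagonals of length >= lmin."""
    e = embed(window, dim, delay)
    d = np.linalg.norm(e[:, None, :] - e[None, :, :], axis=-1)
    r = (d <= radius_pct / 100.0 * d.max()).astype(int)
    rec = det = 0
    for k in range(1, len(r)):                 # upper triangle; the RP is symmetric
        run = 0
        for v in np.append(np.diag(r, k), 0):  # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                rec += run
                if run >= lmin:
                    det += run
                run = 0
    return 100.0 * det / rec if rec else 0.0

# 500-point epochs shifted by 5 points: 99 % overlap between adjacent windows
sig = np.sin(np.linspace(0, 60 * np.pi, 3000)) + 0.2 * np.random.default_rng(3).normal(size=3000)
win, shift = 500, 5
det_series = [determinism(sig[s:s + win], dim=10, delay=50, radius_pct=30)
              for s in range(0, len(sig) - win + 1, shift)]
# det_series, plotted against window start time, gives a trace like Fig. 1.17
```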

Fig. 1.17

Sensitivity of RQA variable determinism within a 500-point (500 ms) sliding window for EEG signals recorded at sites FZ (red), F3 (green) and F7 (blue). The dynamical complexities shown correspond to unknown processes in place for this resting human subject

Fig. 1.18

Sensitivity of RQA variable determinism within a 500-point (500 ms) sliding window for paired EEG signals recorded at sites FZ, F3 and F7. The RQA kross correlations are performed for the FZ:F3 close pair (red) and FZ:F7 distant pair (blue). Both pairs show similar dynamical correlations within the first 2,000 ms, but soon after there is a departure or bifurcation in the dynamics in which the distant pairing loses deterministic coupling as compared to the close pairing of electrodes

Fig. 1.19

Sensitivity of RQA variable determinism within a 500-point (500 ms) sliding window for paired EEG signals recorded at sites FZ, F3 and F7. The RQA joint correlations are performed for the FZ:F3 close pair (red) and FZ:F7 distant pair (blue). Both electrode pairings show similar dynamical correlations for most of the epochs save in the windows around 1,250–1,750 ms and 2,750–3,000 ms. These departures reveal dynamical bifurcations of the complex system of resting brainwaves recorded at specific sites

This windowed process is also fully applicable to quantifications derived from CRPs and JRPs, as shown in Figs. 1.18 and 1.19. Two things are to be noted from these plots. First, running determinism values in cross and joint recurrences do not remain fixed or constant over time, but show complex rhythms. For example, the cross recurrence picks up a 1 Hz rhythm with repeating (recurring) nadirs or dips in determinism every 1,000 ms. Running determinism values in joint recurrences, however, reveal a slower rhythm of about 0.5 Hz with repeating (recurring) nadirs or dips in determinism every 2,000 ms. Second, running determinism values for the different pairs of electrodes representing close sites (FZ:F3) versus distant sites (FZ:F7) sometimes track together and sometimes diverge from one another. These results give important hints regarding diverging (and converging) dynamics occurring over time. Remember that these examples come from a resting human subject. Perhaps changes in attention or other state changes in this free-run mode are responsible for the dynamical bifurcations. The point is that windowed cross and joint recurrences give the investigator powerful tools to study complex brain activities. Thus one might make similar EEG recordings and recurrence analyses during the performance of specific tasks. One wonders whether this type of perspective could be applied to robotics, in which human EEG patterns are used to control artificial limbs [73].

1.5.4 Advanced Topics

There are several advanced recurrence topics that will just be mentioned here because they are not well explored in terms of dynamical performance. Just as there are frequency spectra (linear), so too there are recurrence spectra (nonlinear) [39, 74] that can be studied as auto-spectra, cross spectra and joint spectra using programs RQF, KRQF and JRQF respectively. Also, histogram distributions of recurrence intervals (inverses of frequencies) can be studied with programs RQI, KRQI and JRQI [72]. When comparing linear spectra with nonlinear spectra, the latter methodologies have higher resolution and sensitivity, picking up subtleties missed by the former.
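For the recurrence-interval histograms, the underlying computation is simple: collect, column by column in the RP, the waiting times between successive recurrence points. A minimal unembedded sketch follows (illustrative only; the function name is ours, not program RQI):

```python
import numpy as np

def recurrence_interval_histogram(series, eps):
    """Histogram of waiting times between successive recurrences,
    pooled over all columns of the (unembedded) recurrence plot."""
    x = np.asarray(series, dtype=float)
    r = np.abs(x[:, None] - x[None, :]) <= eps
    intervals = []
    for j in range(len(x)):
        times = np.flatnonzero(r[:, j])   # rows where column j recurs
        intervals.extend(np.diff(times))  # gaps between successive returns
    return np.bincount(np.asarray(intervals, dtype=int))

# a periodic signal recurs at multiples of its period (here 100 samples)
sig = np.sin(np.arange(1000) * np.pi / 50)
hist = recurrence_interval_histogram(sig, eps=0.05)
# hist[k] counts how often a waiting time of k samples occurred
```

Short intervals come from sojourn points inside a single neighbourhood visit; the longer intervals reflect the signal's return times, so for a periodic signal the histogram develops structure near the period.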

Windowed recurrences, whether auto, cross or joint, all generate large matrices of data. The typical approach has been to examine those RQA variables which best diagnose the system under study. Recurrence rate and determinism are two favorite variables, for example. However, instead of insisting that the investigator make the judgment call in this respect, the full matrix of data [N, 8], where N is the number of epoch rows and eight is the number of RQA variables, can be submitted to principal component analysis. Typically, the first three principal components account for 95 % or more of the variability in the data. So if one has 20 subjects in the study, 20 sets of three principal components will be produced. PC1, PC2 and PC3 can then be plotted as three-dimensional scatter plots to see if any bunching of points occurs. Cluster analysis can formalize the grouping of points, which may be diagnostic for different patient types.
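The principal component step needs nothing beyond a singular value decomposition of the standardized [N, 8] matrix. The sketch below uses a randomly generated stand-in for the windowed RQA output (in practice the rows would be the epoch-by-epoch quantifications):

```python
import numpy as np

# hypothetical windowed-RQA output: N = 200 epochs x 8 RQA variables
rng = np.random.default_rng(0)
rqa = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))  # correlated columns

# PCA via SVD of the column-standardized matrix
z = (rqa - rqa.mean(axis=0)) / rqa.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
scores = u * s                       # epoch coordinates on the principal components
explained = s**2 / np.sum(s**2)      # fraction of variance per component

print("variance captured by PC1-PC3:", explained[:3].sum())
# scores[:, :3] can be shown as a three-dimensional scatter plot and clustered
```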

1.6 Summary

Recurrence plots (RPs) and recurrence quantification analysis (RQA) are easily accessible tools for investigating the dynamics of complex systems. Since the introduction of the RP in 1987 as a visualization tool for revealing hidden rhythms, the methodology has not only been enriched by a heuristic quantification approach (RQA), but also advanced by a growing number of add-on applications. For example, RPs produce meaningful graphical displays relating theoretically founded measures of complexity (e.g., K2 entropy), complex network relationships, and even synchronized systems replete with coupling directions. The simplicity of implementation and wide applicability of RP and RQA technologies across diverse systems continue to attract and expand the utilization of these measures in a growing number of scientific fields. As more researchers apply recurrence strategies to their particular systems of interest, the limitations and pitfalls of these nonlinear techniques are becoming better appreciated, as are the resulting measures, their assessment and interpretation, and how they reveal dynamical structuring or topological properties of complex systems. In short, recurrence analysis is a statistical tool which works remarkably well on nonlinear, non-deterministic, non-stationary, noisy dynamical systems of short duration. Twenty-seven years have passed since the foundational paper of Eckmann, Kamphorst, and Ruelle [15], yet recurrence analysis remains an active field with open questions and promising new directions which the following chapters in this book will illustrate remarkably well.

References

1. H. Kantz, T. Schreiber, Nonlinear Time Series Analysis (University Press, Cambridge, 1997)
2. F. Takens, Detecting strange attractors in turbulence, in Dynamical Systems and Turbulence, ed. by D. Rand, L.-S. Young. Lecture Notes in Mathematics, vol. 898 (Springer, Berlin, 1981), pp. 366–381
3. N.H. Packard, J.P. Crutchfield, J.D. Farmer, R.S. Shaw, Geometry from a time series. Phys. Rev. Lett. 45(9), 712–716 (1980)
4. L. Cao, Practical method for determining the minimum embedding dimension of a scalar time series. Physica D 110(1–2), 43–50 (1997)
5. M.B. Kennel, R. Brown, H.D.I. Abarbanel, Determining embedding dimension for phase-space reconstruction using a geometrical construction. Phys. Rev. A 45(6), 3403–3411 (1992)
6. A.M. Fraser, H.L. Swinney, Independent coordinates for strange attractors from mutual information. Phys. Rev. A 33(2), 1134–1140 (1986)
7. J.P. Zbilut, J.-M. Zaldívar-Comenges, F. Strozzi, Recurrence quantification based Liapunov exponents for monitoring divergence in experimental data. Phys. Lett. A 297(3–4), 173–181 (2002)
8. F.M. Atay, Y. Altıntaş, Recovering smooth dynamics from time series with the aid of recurrence plots. Phys. Rev. E 59(6), 6593–6598 (1999)
9. N. Marwan, M.C. Romano, M. Thiel, J. Kurths, Recurrence plots for the analysis of complex systems. Phys. Rep. 438(5–6), 237–329 (2007)
10. J.-P. Eckmann, D. Ruelle, Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 57(3), 617–656 (1985)
11. H.D.I. Abarbanel, R. Brown, J.J. Sidorowich, L.S. Tsimring, The analysis of observed chaotic data in physical systems. Rev. Mod. Phys. 65(4), 1331–1392 (1993)
12. E. Ott, Chaos in Dynamical Systems (University Press, Cambridge, 1993)
13. H. Poincaré, Sur le problème des trois corps et les équations de la dynamique. Acta Math. 13, 1–271 (1890)
14. J.H. Argyris, G. Faust, M. Haase, An Exploration of Chaos (North Holland, Amsterdam, 1994)
15. J.-P. Eckmann, S. Oliffson Kamphorst, D. Ruelle, Recurrence plots of dynamical systems. Europhys. Lett. 4, 973–977 (1987)
16. N. Marwan, A historical review of recurrence plots. Eur. Phys. J. Spec. Top. 164(1), 3–12 (2008)
17. E.N. Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci. 20, 120–141 (1963)
18. G. Robinson, M. Thiel, Recurrences determine the dynamics. Chaos 19, 023104 (2009)
19. Y. Hirata, S. Horai, K. Aihara, Reproduction of distance matrices from recurrence plots and its applications. Eur. Phys. J. Spec. Top. 164(1), 13–22 (2008)
20. N. Marwan, How to avoid potential pitfalls in recurrence plot based data analysis. Int. J. Bifurcat. Chaos 21(4), 1003–1017 (2011)
21. P. Faure, H. Korn, A new method to estimate the Kolmogorov entropy from recurrence plots: its application to neuronal signals. Physica D 122(1–4), 265–279 (1998)
22. M. Thiel, M.C. Romano, J. Kurths, R. Meucci, E. Allaria, F.T. Arecchi, Influence of observational noise on the recurrence quantification analysis. Physica D 171(3), 138–152 (2002)
23. M. Thiel, M.C. Romano, J. Kurths, Analytical description of recurrence plots of white noise and chaotic processes. Appl. Nonlinear Dyn. 11(3), 20–30 (2003)
24. L. Matassini, H. Kantz, J.A. Hołyst, R. Hegger, Optimizing of recurrence plots for noise reduction. Phys. Rev. E 65(2), 021102 (2002)
25. S. Schinkel, O. Dimigen, N. Marwan, Selection of recurrence threshold for signal detection. Eur. Phys. J. Spec. Top. 164(1), 45–53 (2008)
26. G.M. Mindlin, R. Gilmore, Topological analysis and synthesis of chaotic time series. Physica D 58(1–4), 229–242 (1992)
27. M. Koebbe, G. Mayer-Kress, Use of recurrence plots in the analysis of time-series data, in Proceedings of SFI Studies in the Science of Complexity, vol. XXI, ed. by M. Casdagli, S. Eubank (Addison-Wesley, Redwood City, 1992), pp. 361–378
28. J.P. Zbilut, C.L. Webber Jr., Embeddings and delays as derived from quantification of recurrence plots. Phys. Lett. A 171(3–4), 199–203 (1992)
29. J. Theiler, Spurious dimension from correlation algorithms applied to limited time-series data. Phys. Rev. A 34(3), 2427–2432 (1986)
30. J. Gao, Z. Zheng, Direct dynamical test for deterministic chaos and optimal embedding of a chaotic time series. Phys. Rev. E 49, 3807–3814 (1994)
31. V. Balakrishnan, G. Nicolis, C. Nicolis, Recurrence time statistics in deterministic and stochastic dynamical systems in continuous time: A comparison. Phys. Rev. E 61(3), 2490–2499 (2000)
32. E.G. Altmann, E.C. da Silva, I.L. Caldas, Recurrence time statistics for finite size intervals. Chaos 14(4), 975–981 (2004)
33. L.M. Little, P. McSharry, S.J. Roberts, D.A.E. Costello, I.M. Moroz, Exploiting nonlinear recurrence and fractal scaling properties for voice disorder detection. BioMed. Eng. OnLine 6(23), 1–19 (2007)
34. E.J. Ngamga, D.V. Senthilkumar, A. Prasad, P. Parmananda, N. Marwan, J. Kurths, Distinguishing dynamics using recurrence-time statistics. Phys. Rev. E 85(2), 026217 (2012)
35. J.B. Gao, H.Q. Cai, On the structures and quantification of recurrence plots. Phys. Lett. A 270(1–2), 75–87 (2000)
36. C.L. Webber Jr., J.P. Zbilut, Dynamical assessment of physiological systems and states using recurrence plot strategies. J. Appl. Physiol. 76(2), 965–973 (1994)
37. J.P. Zbilut, C.L. Webber Jr., Recurrence quantification analysis: Introduction and historical context. Int. J. Bifurcat. Chaos 17(10), 3477–3481 (2007)
38. P. Grassberger, I. Procaccia, Characterization of strange attractors. Phys. Rev. Lett. 50(5), 346–349 (1983)
39. J.P. Zbilut, N. Marwan, The Wiener-Khinchin theorem and recurrence quantification. Phys. Lett. A 372(44), 6622–6626 (2008)
40. D.P. Lathrop, E.J. Kostelich, Characterization of an experimental strange attractor by periodic orbits. Phys. Rev. A 40(7), 4028–4031 (1989)
41. R. Gilmore, Topological analysis of chaotic dynamical systems. Rev. Mod. Phys. 70(4), 1455–1529 (1998)
42. N. Marwan, M. Thiel, N.R. Nowaczyk, Cross recurrence plot based synchronization of time series. Nonlinear Process. Geophys. 9(3/4), 325–331 (2002)
43. M.C. Romano, M. Thiel, J. Kurths, I.Z. Kiss, J. Hudson, Detection of synchronization for non-phase-coherent and non-stationary data. Europhys. Lett. 71(3), 466–472 (2005)
44. N. Marwan, N. Wessel, U. Meyerfeldt, A. Schirdewan, J. Kurths, Recurrence plot based measures of complexity and its application to heart rate variability data. Phys. Rev. E 66(2), 026702 (2002)
45. E.J. Ngamga, A. Nandi, R. Ramaswamy, M.C. Romano, M. Thiel, J. Kurths, Recurrence analysis of strange nonchaotic dynamics. Phys. Rev. E 75(3), 036222 (2007)
46. N. Marwan, J.F. Donges, Y. Zou, R.V. Donner, J. Kurths, Complex network approach for recurrence analysis of time series. Phys. Lett. A 373(46), 4246–4254 (2009)
47. R.V. Donner, Y. Zou, J.F. Donges, N. Marwan, J. Kurths, Recurrence networks – A novel paradigm for nonlinear time series analysis. New J. Phys. 12(3), 033025 (2010)
48. R.V. Donner, M. Small, J.F. Donges, N. Marwan, Y. Zou, R. Xiang, J. Kurths, Recurrence-based time series analysis by means of complex network methods. Int. J. Bifurcat. Chaos 21(4), 1019–1046 (2011)
49. R.V. Donner, J. Heitzig, J.F. Donges, Y. Zou, N. Marwan, J. Kurths, The geometry of chaotic dynamics – a complex network perspective. Eur. Phys. J. B 84, 653–672 (2011)
50. A. Rényi, Probability Theory (North-Holland, Amsterdam, 1970)
51. P. Grassberger, I. Procaccia, Estimation of the Kolmogorov entropy from a chaotic signal. Phys. Rev. A 28, 2591–2593 (1983)
52. P. Grassberger, I. Procaccia, Measuring the strangeness of strange attractors. Physica D 9(1–2), 189–208 (1983)
53. N. Marwan, S. Schinkel, J. Kurths, Recurrence plots 25 years later – gaining confidence in dynamical transitions. Europhys. Lett. 101, 20007 (2013)
54. O.E. Rössler, An equation for continuous chaos. Phys. Lett. A 57(5), 397–398 (1976)
55. Y. Zou, R.V. Donner, J.F. Donges, N. Marwan, J. Kurths, Identifying complex periodic windows in continuous-time dynamical systems using recurrence-based methods. Chaos 20(4), 043130 (2010)
56. J.P. Zbilut, A. Giuliani, C.L. Webber Jr., Detecting deterministic signals in exceptionally noisy environments using cross-recurrence quantification. Phys. Lett. A 246(1–2), 122–128 (1998)
57. N. Marwan, J. Kurths, Nonlinear analysis of bivariate data with cross recurrence plots. Phys. Lett. A 302(5–6), 299–307 (2002)
58. N. Marwan, J. Kurths, Line structures in recurrence plots. Phys. Lett. A 336(4–5), 349–357 (2005)
59. A. Porta, G. Baselli, N. Montano, T. Gnecchi-Ruscone, F. Lombardi, A. Malliani, S. Cerutti, Classification of coupling patterns among spontaneous rhythms and ventilation in the sympathetic discharge of decerebrate cats. Biol. Cybern. 75(2), 163–172 (1996)
60. M.C. Romano, M. Thiel, J. Kurths, W. von Bloh, Multivariate recurrence plots. Phys. Lett. A 330(3–4), 214–223 (2004)
61. Y. Zou, M.C. Romano, M. Thiel, N. Marwan, J. Kurths, Inferring indirect coupling by means of recurrences. Int. J. Bifurcat. Chaos 21(4), 1099–1111 (2011)
62. N. Marwan, Y. Zou, N. Wessel, M. Riedl, J. Kurths, Estimating coupling directions in the cardio-respiratory system using recurrence properties. Philos. Trans. R. Soc. A 371(1997), 20110624 (2013)
63. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1 (Wiley, New York, 1950)
64. D.C. Brock, Understanding Moore's Law: Four Decades of Innovation (Chemical Heritage Foundation, Philadelphia, 2006)
65. C.L. Webber Jr., Quantitative Analysis of Respiratory Cell Activity. PhD Dissertation, Loyola University Chicago, 1974
66. N. Marwan, Encounters With Neighbours – Current Developments of Concepts Based on Recurrence Plots and Their Applications. PhD thesis, University of Potsdam, 2003
67. B. Kernighan, D. Ritchie, The C Programming Language (Prentice Hall, Englewood Cliffs, 1978)
68. C.L. Webber Jr., Introduction to recurrence quantification analysis. RQA version 14.1 README.PDF, 2012
69. N. Marwan, CRP Toolbox 5.17, 2013, platform independent (for MATLAB)
70. E. Kononov, Visual Recurrence Analysis 4.9, 2009, only for Windows
71. N. Thomasson, T.J. Hoeppner, C.L. Webber Jr., J.P. Zbilut, Recurrence quantification in epileptic EEGs. Phys. Lett. A 279(1–2), 94–101 (2001)
72. C.L. Webber Jr., J.P. Zbilut, Recurrence Quantification Analysis of Nonlinear Dynamical Systems (National Science Foundation, Arlington, 2005), pp. 26–94
73. D.J. McFarland, W.A. Sarnacki, J.R. Wolpaw, Electroencephalographic (EEG) control of three-dimensional movement. J. Neural Eng. 7(3), 036007 (2010)
74. K. Shockley, M. Butwill, J.P. Zbilut, C.L. Webber Jr., Cross recurrence quantification of coupled oscillators. Phys. Lett. A 305(1–2), 59–69 (2002)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Potsdam Institute for Climate Impact Research, Potsdam, Germany
  2. Loyola University Chicago, Chicago, USA
