1 Introduction

In Szebehely (1982), Victor Szebehely underlines how dynamical models are approximations of real-world phenomena and how initial conditions and parameters can be known with a finite degree of accuracy. The approximation in the modelling of natural phenomena and the degree of accuracy in model parameters and initial conditions are all aspects of the uncertainty in dynamical systems. A complete understanding of the evolution of a dynamical system requires a quantification of the effects of this uncertainty. More specifically, the goal is to compute a measure of the uncertainty in a given quantity of interest. In dynamical systems, the quantity of interest is often a function of the state variables at a given time and the value of the state variables is a function of the uncertain quantities in the dynamical model.

In the past two decades, there has been a growing interest in developing methods for uncertainty quantification in dynamical systems. Broadly speaking, methods differ in their assumptions on the nature of the uncertainty, aleatory or epistemic, in the way uncertainty is propagated and in the way the quantity of interest is computed. A complete review of methods for uncertainty quantification in dynamical systems is beyond the scope of this paper. Here, we will focus on a rather large and popular class of these methods that uses polynomial expansions to model the dependency of the state variables, or directly of the quantity of interest, on the uncertain quantities. Among them, it is worth mentioning methods that propagate high-order Taylor polynomials (Massari et al. 2017; Pérez-Palau et al. 2015), polynomial chaos expansions (PCE) (Bhusal and Subbarao 2019; Ozen 2017; Gerritsma et al. 2010; Schick and Heuveline 2014) and Chebyshev polynomials (Vasile et al. 2019).

Often, the study of dynamical systems makes use of indicators to identify chaotic behaviours, diffusion phenomena and invariant and coherent structures (e.g. Froeschlé et al. 1997; Skokos 2009; Darriba et al. 2012; Lega et al. 2016). Among these indicators, the finite-time Lyapunov exponent (FTLE) (Shadden et al. 2005) was recently proposed as an attempt to generalise the concept of invariant manifolds to non-autonomous dynamical systems (Haller 2015), and to identify structures that separate qualitatively different dynamical regimes. Some applications can be found in Gawlik et al. (2009), Short and Howell (2014), Short et al. (2015) and Manzi and Topputo (2021). Other chaos indicators are the frequency map analysis (Laskar 1993), the Mean Exponential Growth factor of Nearby Orbits (MEGNO), the Smaller Alignment Index (SALI), the Fast Lyapunov Indicator (FLI), the Dynamical Spectra of stretching numbers with the corresponding Spectral Distance, and the Relative Lyapunov Indicator (RLI). A review of some of them can be found in Maffione et al. (2011). Another class of indicators is used to study dynamical systems driven by stochastic processes from time series, e.g. Steeb and Andrieu (2005), Grassberger and Procaccia (1983) and Tarnopolski (2018). However, to the best of our knowledge, there is no indicator that is designed to quantify the effect of uncertainty in the system dynamics. Commonly used chaos indicators, for example, would need to be recomputed for each realisation of the uncertain quantities, and statistics on their sensitivity to the variation of the uncertain quantities would need to be computed a posteriori from a Monte Carlo simulation. In this respect, it is worth mentioning the work on the computation of Lyapunov exponents of stochastically driven processes in Schomerus and Titov (2002) and Froyland and Aihara (2000).

In this paper, we propose three novel dynamical indicators that exploit the properties of polynomial expansions for uncertainty quantification. Two indicators generalise the concept of finite-time Lyapunov exponents to the case where the parameters of the dynamic model are uncertain. The third indicator directly relates the coefficients of the polynomial expansion to the rate at which an ensemble of trajectories, given by different realisations of the uncertain parameters, diffuses. All three indicators allow one to directly study the effect of uncertainty without the need to run a Monte Carlo simulation and recompute the chaos indicators multiple times. Unlike previous works that aimed at differentiating deterministic chaos from the effect of stochastic processes (Rosso et al. 2007; Poon and Barahona 2001; Turchetti and Panichi 2019) or at identifying particular types of motion from time series (Cincotta et al. 1999), in this paper we propose indicators that quantify the effect of parametric uncertainty in the dynamic model. Furthermore, the third indicator, called pseudo-diffusion exponent in the following, is shown to be computationally more advantageous as it does not require the derivation and propagation of the variational equations.

Three examples of known dynamical systems are used to illustrate the applicability of the three types of indicators to the construction of a cartography of the dynamics and the identification of regions, in the phase space, that are more or less sensitive to model uncertainty. It will be shown that the new indicators provide results that are consistent with the FTLE, when the uncertainty is only in the initial conditions. When the uncertainty is in the parameters of the dynamic model, the new indicators allow one to identify behaviours that manifest only due to the presence of a parametric uncertainty. At the same time, the new indicators, consistent with other chaos indicators in the literature, allow one to identify regions of regular and chaotic motion. However, unlike existing chaos indicators, the ones proposed in this paper provide additional information on these regions, including variance, skewness, and higher statistical moments, of the ensemble of trajectories induced by multiple realisations of the uncertain quantities.

In particular, we will show how the pseudo-diffusion exponent can be used to identify trajectories that are nearly insensitive to parametric uncertainty in the dynamics and others that, for the same initial conditions, manifest radically different behaviours for different realisations of the uncertain quantities.

The paper is structured as follows. After introducing the problem that this paper is addressing and a brief summary of the background material, the paper introduces the definition and derivation of the three indicators. Then, the indicators are applied to three known dynamical systems where a model parameter is affected by uncertainty. A discussion of the computational cost and significance of the three indicators follows. Finally, a section on the practical applicability of the indicators concludes the paper.

2 Problem statement

In this work, we consider a general dynamical system in the form:

$$\begin{aligned} \frac{\textrm{d} {\textbf {z}}}{\textrm{d} t} = \textbf{g}(t, \textbf{p}, \textbf{z}) \end{aligned}$$
(1)

with initial conditions:

$$\begin{aligned} \textbf{z}(t=t_0)=\textbf{z}_0 \end{aligned}$$
(2)

where t is the time, \({\textbf {z}}:[t_0,t_f]\rightarrow \mathbb {R}^{n}\) is the state of the system and \({\textbf {p}}\in \Omega \subset \mathbb {R}^{n_p}\) is a vector of uncertain model parameters. In the general case, both \(\textbf{p}\) and \(\textbf{z}_0\) are uncertain quantities and similar in nature. The vector function \(\textbf{g} : [t_0,t_f]\times \mathbb {R}^{n_p}\times \mathbb {R}^{n}\longrightarrow \mathbb {R}^{n}\) is the dynamic model.

The objective is to derive a scalar quantity \(\alpha (\textbf{z},t):\mathbb {R}^{n}\times [t_0,t_f]\rightarrow \mathbb {R}\) that measures the divergence of the trajectories of system (1), belonging to an ensemble \(\Phi (t,\textbf{p})=\{\textbf{z}(t,\textbf{p})|\forall \textbf{p}\in \Omega \wedge t\in [t_0,t_f]\}\) induced by multiple realisations of \(\textbf{p}\). We also want to quantify the uncertainty in the distance between a realisation \(\textbf{z}\) and the mean value of all the realisations in the ensemble at a given time \(t_f\). We can quantify this uncertainty by computing the integral:

$$\begin{aligned} \mathbb {E}(\delta (t_f)<\epsilon )=\int _{\Omega } I(\Vert \textbf{z}(t_f)-\hat{\textbf{z}}(t_f)\Vert <\epsilon )w(\textbf{p})\textrm{d}\textbf{p} \end{aligned}$$
(3)

where \(\delta =\Vert \textbf{z}(t_f)-\hat{\textbf{z}}(t_f)\Vert \), I is the indicator function, \(\epsilon \) is a threshold value and \(\hat{\textbf{z}}(t_f)\) is the mean value of the state variables at time \(t_f\), or:

$$\begin{aligned} \hat{\textbf{z}}(t_f)=\frac{\int _{\Omega } \textbf{z}(t_f)w(\textbf{p})\textrm{d}\textbf{p}}{\int _{\Omega } w(\textbf{p})\textrm{d}\textbf{p}} \end{aligned}$$
(4)

The function w can represent the distribution of \(\textbf{p}\) over \(\Omega \). In this case, (3) is a probability and (4) an expected value.

3 Background material

In this section, we recall some basic material that is required to derive the dynamical indicators proposed in this paper. In particular, we will focus on polynomial expansions to propagate the uncertainty in \(\textbf{p}\) through system (1). Thus, we will first briefly introduce both intrusive and non-intrusive polynomial chaos expansions.

Two of the indicators are derived from finite-time Lyapunov exponents; hence, a subsection will introduce the concept of FTLE. Finally, one dynamical indicator is based on the idea of anomalous diffusion in stochastic systems; therefore, the last subsection will present some basic concepts of anomalous diffusion.

3.1 Polynomial expansions

A popular technique to study the dependency of a dynamical system on a set of uncertain quantities is polynomial chaos expansions. The idea is to represent the state vector \(\textbf{z}\) as a truncated expansion in the orthogonal polynomials \(\Psi _i(\textbf{p})\) of the uncertain quantities \(\textbf{p}\):

$$\begin{aligned} \textbf{z}(t, \textbf{p}) \approx \sum _{i=0}^m \mathbf {c_i}(t) \Psi _i(\textbf{p}) \end{aligned}$$
(5)

where \(\textbf{c}_i(t)\) are time-dependent coefficients. The \(\Psi _i\) terms define a set of orthogonal polynomials up to degree m (Gautschi 2004). The orthogonality condition is formalised as follows:

$$\begin{aligned} \langle \Psi _j, \Psi _k \rangle = \int _{\Omega } \Psi _j(\textbf{p}) \Psi _k (\textbf{p}) w(\textbf{p}) \text {d}\textbf{p} = \mathbb {E}[\Psi _j \Psi _k] \ne 0 \Leftrightarrow j = k \end{aligned}$$
(6)

where \(\langle \cdot , \cdot \rangle \) is a shorthand for the inner product. As mentioned before, when w is a distribution, (6) defines the expectation operator associated with w. Because of the polynomial nature of the terms appearing in (6), it is straightforward to compute the nonzero terms. Then, given a particular weight function \(w(\textbf{p})\), one can use the following three-term recurrence relation, given in Gautschi (1968), to construct stabilised univariate orthogonal polynomials:

$$\begin{aligned} \Psi _{i+1}(p) = \Psi _{i}(p)(p-A_i) - \Psi _{i-1}(p)B_i, \ \ \ \ A_i = \frac{\mathbb {E}[p \Psi ^2_{i}]}{\mathbb {E}[\Psi ^2_{i}]}, \ \ \ \ B_i = \frac{\mathbb {E}[\Psi ^2_{i}]}{\mathbb {E}[\Psi ^2_{i-1}]} \end{aligned}$$
(7)

In the case in which more than one source of uncertainty is present, it is still possible to construct orthogonal multivariate polynomials via tensor product rules (Feinberg and Langtangen 2015). Note that, while the method proposed in this paper is applicable to any orthogonal basis constructed with (7), in all the examples in this paper Chebyshev basis functions of the second kind are used, together with the associated weight function \(w(\textbf{p})\).
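As a minimal sketch of the recurrence (7), the code below constructs a univariate orthogonal basis for the Chebyshev weight of the second kind, \(w(p)=\sqrt{1-p^2}\) on \([-1,1]\), approximating the expectations with a Gauss quadrature that is exact for this weight; the number of nodes and the maximum degree are illustrative choices.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Quadrature nodes/weights for integrals against the Chebyshev weight of the
# second kind, w(p) = sqrt(1 - p^2), on [-1, 1] (the weight used in the examples).
N = 20
j = np.arange(1, N + 1)
nodes = np.cos(j*np.pi/(N + 1))
weights = np.pi/(N + 1)*np.sin(j*np.pi/(N + 1))**2

def expect(vals):
    # E[f] = \int f(p) w(p) dp, evaluated by quadrature at the nodes above
    return np.sum(weights*vals)

# Three-term recurrence (7): Psi_{i+1}(p) = (p - A_i) Psi_i(p) - B_i Psi_{i-1}(p)
def orthogonal_basis(m):
    Psi = [np.array([1.0])]                      # Psi_0 = 1, power-basis coefficients
    for i in range(m):
        v_i = P.polyval(nodes, Psi[i])
        A = expect(nodes*v_i**2)/expect(v_i**2)
        B = expect(v_i**2)/expect(P.polyval(nodes, Psi[i - 1])**2) if i > 0 else 0.0
        nxt = P.polysub(P.polymul([-A, 1.0], Psi[i]),
                        B*Psi[i - 1] if i > 0 else [0.0])
        Psi.append(nxt)
    return Psi

basis = orthogonal_basis(4)
# Orthogonality check (6): the Gram matrix should be diagonal
G = [[expect(P.polyval(nodes, a)*P.polyval(nodes, b)) for b in basis] for a in basis]
print(np.round(G, 12))
```

Up to a scaling of each polynomial, the basis produced in this way coincides with the monic Chebyshev polynomials of the second kind.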

By substituting the approximation given by (5) in (1), one gets:

$$\begin{aligned} \frac{\textrm{d} {\textbf {z}}}{\text {d}t} = \frac{\textrm{d}}{\textrm{d} t} \sum \limits _{i=0}^m \mathbf {c_i}(t) \Psi _i(\textbf{p}) = \sum \limits _{i=0}^m \varvec{\dot{c}_i}(t) \Psi _i(\textbf{p}) = \textbf{g}(t, \textbf{p}, \textbf{z}) \end{aligned}$$
(8)

and by making use of the intrusive Galerkin method, one obtains the following:

$$\begin{aligned} \left\langle \sum \limits _{i=0}^m \varvec{\dot{c}_i}(t) \Psi _i(\textbf{p}), \Psi _k(\textbf{p}) \right\rangle = \left\langle \textbf{g}(t, \textbf{p}, \textbf{z}), \Psi _k(\textbf{p}) \right\rangle \end{aligned}$$
(9)

from which the time variation of the coefficients can be derived:

$$\begin{aligned} \dot{\textbf{c}}_k(t) = \frac{\left\langle \textbf{g}(t, \textbf{p}, \textbf{z}), \Psi _k(\textbf{p}) \right\rangle }{\left\langle \Psi _k(\textbf{p}), \Psi _k(\textbf{p}) \right\rangle } \end{aligned}$$
(10)

In the general case, the integrals at the numerator of the right-hand side of (10) need to be computed numerically, while the integrals at the denominator can be pre-computed analytically. Gauss quadrature rules (Feinberg and Langtangen 2015) can be used to approximate the integrals at the numerator, as follows:

$$\begin{aligned} \begin{array}{l} \left\langle \textbf{g}(t, \textbf{p}, \textbf{z}), \Psi _k(\textbf{p}) \right\rangle = \int _{\Omega } \textbf{g}(t, \textbf{p}, \textbf{z}(\textbf{p})) \Psi _k (\textbf{p}) w(\textbf{p}) \textrm{d} \textbf{p} \\ \quad \approx \sum \limits _{j_1=1}^N...\sum \limits _{j_i=1}^N...\sum \limits _{j_n=1}^N W_{j_1}...W_{j_i}...W_{j_n} \textbf{g}(t, \textbf{p}_{j_i}, \textbf{z}(\textbf{p}_{j_i})) \Psi _k (\textbf{p}_{j_i}) \end{array} \end{aligned}$$
(11)

where \(W_{j_i}\) and \(\textbf{p}_{j_i}\) are, respectively, the N quadrature weights and abscissa points along each dimension i. Sparse quadrature schemes (Smolyak 1963) can be used to reduce the computational complexity of the numerical integrals with the increase in the number of dimensions.

The initial value of the coefficients \(\textbf{c}_k(t=0)\) is found by projecting the initial conditions \(\textbf{z}_0\):

$$\begin{aligned} \textbf{c}_k(t=0) = \frac{\left\langle \mathbf {z_0}, \Psi _k(\textbf{p}) \right\rangle }{\langle \Psi _k(\textbf{p}), \Psi _k(\textbf{p}) \rangle } \end{aligned}$$
(12)

which greatly simplifies in the case in which the initial state is deterministic (i.e. none of the components of \(\mathbf {z_0}\) are components of \(\textbf{p}\)): the only nonzero coefficient is \(\mathbf {c_0}\), the one associated with the degree-zero polynomial of the orthogonal basis, whose value is that of the deterministic initial condition.
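A compact sketch of the intrusive route (8)–(12) for a toy scalar system \(\textrm{d}z/\textrm{d}t=-pz\) with a single uncertain parameter \(p\in [-1,1]\); the dynamics, the degree, the number of quadrature points and the use of a Chebyshev basis of the second kind are assumptions made purely for illustration.

```python
import numpy as np
from scipy.special import eval_chebyu
from scipy.integrate import solve_ivp

# Toy dynamics with one uncertain parameter p in [-1, 1] (illustrative)
def g(t, p, z):
    return -p*z

m, N = 4, 9
j = np.arange(1, N + 1)
p_nodes = np.cos(j*np.pi/(N + 1))                  # Gauss nodes for the U_k weight
W = np.pi/(N + 1)*np.sin(j*np.pi/(N + 1))**2       # ...and weights, used as in (11)
Psi = np.array([eval_chebyu(k, p_nodes) for k in range(m + 1)])   # Psi_k at the nodes
norms = Psi**2 @ W                                  # <Psi_k, Psi_k>, pre-computed

def coeff_rhs(t, c):
    z_nodes = c @ Psi                               # z(t, p) at the nodes, from (5)
    g_nodes = g(t, p_nodes, z_nodes)
    return (Psi*W) @ g_nodes/norms                  # Galerkin projection (10)

c0 = np.zeros(m + 1); c0[0] = 1.0                   # deterministic z0 = 1, see (12)
sol = solve_ivp(coeff_rhs, (0.0, 2.0), c0, rtol=1e-10, atol=1e-10)
print(sol.y[:, -1])                                 # coefficients of z(t_f, p)
```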

Up to this point, PCEs are simply a way to represent the state of the system \(\textbf{z}\) with a polynomial expansion of the parameters \(\textbf{p}\) and propagate this expansion forward in time. Thus, regardless of whether \(\textbf{p}\) is an uncertain quantity with an associated probability distribution w or a simple parameter defined on a parameter space \(\Omega \), (10) provides a way to propagate the polynomial forward in time.

Furthermore, (12) can be applied at any time t to calculate a polynomial expansion of the state variables with respect to the uncertain variables. In this case, (12) reads:

$$\begin{aligned} \hat{\textbf{c}}_k(t) = \frac{\left\langle \textbf{z}(t,\textbf{p}), \Psi _k(\textbf{p}) \right\rangle }{\langle \Psi _k(\textbf{p}), \Psi _k(\textbf{p}) \rangle } \end{aligned}$$
(13)

In both (12) and (13), the integral at the denominator can be computed analytically, once, before the calculation of the coefficients. The integral at the numerator of (13) can be computed numerically as in (11):

$$\begin{aligned} \begin{array}{l} \left\langle \textbf{z}(t, \textbf{p}), \Psi _k(\textbf{p}) \right\rangle = \int _{\Omega } \textbf{z}(t, \textbf{p}) \Psi _k (\textbf{p}) w(\textbf{p}) \textrm{d} \textbf{p} \\ \quad \approx \sum \limits _{j_1=1}^N...\sum \limits _{j_i=1}^N...\sum \limits _{j_n=1}^N W_{j_1}...W_{j_i}...W_{j_n} \textbf{z}(t, \textbf{p}_{j_i}) \Psi _k (\textbf{p}_{j_i}) \end{array} \end{aligned}$$
(14)

The polynomial expansion computed with (13) is called non-intrusive because one needs only samples of the state vector \(\textbf{z}(t,\textbf{p})\) at time t for different realisations of \(\textbf{p}\). These samples can be obtained from the direct forward integration of the equations of motion.
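To make the non-intrusive route concrete, the sketch below recovers the coefficients (13) for the uncertain perturbed pendulum (51) used later in Sect. 5.1: the equations of motion are integrated at the Gauss nodes of the Chebyshev weight of the second kind, after the change of coordinates (15), and the projection (14) reduces to a weighted sum of the samples. The initial condition and the number of nodes are illustrative assumptions.

```python
import numpy as np
from scipy.special import eval_chebyu
from scipy.integrate import solve_ivp

# Uncertain perturbed pendulum (51) with p = a in [2.25, 2.75], as in Sect. 5.1
a_lo, a_hi = 2.25, 2.75
def pendulum(t, z, a):
    x, vx = z
    return [vx, (a*np.cos(5.0*t) - 1.0)*np.sin(x)]

m, N = 4, 9                                          # degree and quadrature points
j = np.arange(1, N + 1)
xi = np.cos(j*np.pi/(N + 1))                         # Gauss nodes for the U_k weight
W  = np.pi/(N + 1)*np.sin(j*np.pi/(N + 1))**2        # ...and weights
a_nodes = 0.5*(a_hi - a_lo)*xi + 0.5*(a_hi + a_lo)   # change of coordinates (15)

z0, tf = [0.9, -0.2], 10.0                           # illustrative initial condition
# Non-intrusive sampling: plain forward integrations at the quadrature nodes
Z = np.array([solve_ivp(pendulum, (0, tf), z0, args=(a,),
                        rtol=1e-9, atol=1e-10).y[:, -1] for a in a_nodes])

# Projection (13)-(14): c_k = <z, Psi_k> / <Psi_k, Psi_k>, with Psi_k = U_k(xi)
Psi = np.array([eval_chebyu(k, xi) for k in range(m + 1)])
norms = Psi**2 @ W
coeffs = (Psi*W) @ Z / norms[:, None]                # shape (m+1, n_states)
print(coeffs)
```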

The use of a non-intrusive computation of the coefficients of the polynomial expansion is advantageous when the dynamical model is not directly accessible, the state vector is available through observations or, as it will be explained in Sect. 5, if the integration of system (10) becomes problematic due to the presence of singularities or discontinuities in the uncertainty space. In this case, restart mechanisms like the ones proposed in Greco et al. (2020); Manzi and Vasile (2020) and Ozen and Bal (2016) can be effectively used to improve the propagation of the polynomial expansion. In this paper, however, we will not consider these restart mechanisms and we will show the use of (13) instead of (10) to compute two of the indicators.

Since the interest is to exploit the evolution of the coefficients of a polynomial expansion and not to exactly propagate a particular probability distribution, the weight w and basis functions \(\Psi \) can be arbitrarily chosen to make the numerical integration of (11) efficient. In the following, we will consider the components of \(\textbf{p}\) to be independent and \(\Omega \) to be an orthotope. Furthermore, integral (11) is performed after the change of coordinates:

$$\begin{aligned} p_i=\frac{(b_i-a_i)}{2}\xi _i+\frac{b_i+a_i}{2}\;\;\; i=1,...,n \end{aligned}$$
(15)

with \(p_i\in [a_i,b_i]\) and \(\varvec{\xi }\in [-1,1]^n\) so that:

$$\begin{aligned} \int _{\Omega } \textbf{g}(t, \textbf{p}, \textbf{z}(\textbf{p})) \Psi _k (\textbf{p}) w(\textbf{p}) \textrm{d}\textbf{p}= \frac{\prod _i^n(b_i-a_i)}{2^n}\int _{[-1,1]^n} \textbf{g}(t, \varvec{\xi }, \textbf{z}(\varvec{\xi })) \Psi _k (\varvec{\xi }) w(\varvec{\xi }) \textrm{d} \varvec{\xi } \end{aligned}$$
(16)

In this section, we derived the expansion, intrusive or non-intrusive, of \(\textbf{z}\) in orthogonal polynomials of \(\textbf{p}\). Other forms of uncertainty quantification in the literature, such as Taylor series expansions, do not use orthogonal polynomials. However, in the definition of the stochastic dynamical indicators we will exploit the orthogonality of the polynomials. Thus, while, in principle, any polynomial representation of \(\textbf{z}\) is applicable, before computing the stochastic indicators one would need to transform the polynomial expansion into an orthogonal basis, as suggested in Fodde et al. (2022).

Note also that the use of Taylor expansions to derive dynamical indicators was already proposed in Pérez-Palau et al. (2015). However, the approach introduced in this paper differs from the one in Pérez-Palau et al. (2015) in two important ways: (i) in this paper, we use the evolution of the coefficients of the polynomials to directly define the indicators and (ii) the indicators proposed in this paper quantify the effect of uncertainty in the parameters defining the dynamic model. This latter point is of particular importance because, as it will be explained in the remainder of the paper, the primary utility of the indicators proposed in this work is to study the effect of the uncertainty in the dynamic model.

3.2 Finite-Time Lyapunov Exponent

Following Milani and Gronchi (2009), Section 2.3, we now briefly recall the definition of finite-time Lyapunov exponents. We start from the definition of the variational equations in the deterministic setting:

$$\begin{aligned} \textrm{d} {\textbf {z}} (t, {\textbf {p}}) \approx \frac{\partial {\textbf {z}}(t, {\textbf {p}})}{\partial \mathbf {z_0}} \text {d}\mathbf {z_0} \end{aligned}$$
(17)

The FTLE emerges from the spectral analysis of the Cauchy–Green (CG) strain tensor:

$$\begin{aligned} \Delta = \Phi ^T \Phi \end{aligned}$$
(18)

where \(\Phi \) is the state transition matrix of the system. From it, the definition of finite-time Lyapunov exponent (Shadden et al. 2005) is given by:

$$\begin{aligned} \sigma (\textbf{z}(t_f,\textbf{p}))=\frac{1}{t_f-t_0} \log {\sqrt{\lambda _{\max }(\textbf{z}(t_f,\textbf{p}))}} \end{aligned}$$
(19)

where \(t_f\) is the final time of the propagation starting at \(t_0\), and \(\lambda _{\max }\) is the maximum eigenvalue of the Cauchy–Green strain tensor.
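A minimal sketch of the computation of (19), in which the state transition matrix is approximated by central differences of the flow map rather than by integrating the variational equations; the perturbed pendulum of Sect. 5.1, its nominal parameter \(a=2.5\) and the chosen initial condition are used purely as an illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(g, z0, t0, tf, p):
    # state at t_f for a given initial condition and parameter value
    return solve_ivp(g, (t0, tf), z0, args=(p,), rtol=1e-9, atol=1e-10).y[:, -1]

def ftle(g, z0, t0, tf, p, dz=1e-7):
    n = len(z0)
    Phi = np.zeros((n, n))
    for k in range(n):                               # central differences of the flow map
        e = np.zeros(n); e[k] = dz
        Phi[:, k] = (flow(g, z0 + e, t0, tf, p) - flow(g, z0 - e, t0, tf, p))/(2*dz)
    cg = Phi.T @ Phi                                 # Cauchy-Green strain tensor (18)
    lam_max = np.max(np.linalg.eigvalsh(cg))
    return np.log(np.sqrt(lam_max))/(tf - t0)        # finite-time Lyapunov exponent (19)

def pendulum(t, z, a):
    return [z[1], (a*np.cos(5.0*t) - 1.0)*np.sin(z[0])]

print(ftle(pendulum, np.array([0.9, -0.2]), 0.0, 10.0, 2.5))
```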

3.3 Random walks, mean square displacement and diffusion

A random walk is a stochastic process that defines a path made of random steps. Steps can have random direction, random length and be taken at random times. One of the best known random walks is Brownian motion. Brownian motion can be well described by a Wiener process \(W_t\) with independent increments, where \(W_t\sim \mathcal {N}(0,t)\) is normally distributed with zero mean and variance t:

$$\begin{aligned}{} & {} x(t)-x_0=\sqrt{2D}W_t \end{aligned}$$
(20)
$$\begin{aligned}{} & {} \langle (x(t)-x_0)^2\rangle =2D\langle W_t^2\rangle =2Dt \end{aligned}$$
(21)

where D is the diffusion coefficient. In normal diffusion, the exponent of the time t is one; however, some stochastic processes can diffuse faster or slower (e.g. fractional Brownian motion or Lévy processes) (Alves et al. 2016). Thus, in the general case, one can write:

$$\begin{aligned} \langle (x(t)-x_0)^2\rangle \approx Kt^{\alpha } \end{aligned}$$
(22)

where K is a constant and \(\alpha \) is the diffusion exponent. In the next section, we will make use of (22) to derive an indicator that relates the coefficients of the polynomial expansion to the diffusion exponent.
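As a simple numerical illustration of (21)–(22), the following sketch estimates the diffusion exponent of a simulated Brownian motion by fitting the growth of the mean square displacement over an ensemble of paths; the slope of the fit should be close to one for normal diffusion. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_paths = 0.5, 1e-3, 5_000, 500
# Independent Gaussian increments with variance 2*D*dt, see (20)
steps = rng.normal(0.0, np.sqrt(2*D*dt), size=(n_paths, n_steps))
x = np.cumsum(steps, axis=1)                        # x(t) - x0 for each path
t = dt*np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)                         # <(x(t) - x0)^2>, see (21)
# Fit log(msd) = log(K) + alpha*log(t): the slope estimates the diffusion exponent (22)
alpha, logK = np.polyfit(np.log(t[100:]), np.log(msd[100:]), 1)
print(alpha)                                         # close to 1 for normal diffusion
```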

4 Stochastic dynamical indicators

In this section, we introduce and define three different types of stochastic dynamical indicators, or SDIs. The first one is a simple quantification of the uncertainty in the FTLE induced by multiple realisations of the uncertain parameter vector \(\textbf{p}\). The second type of indicator is an extension of the idea of FTLE that measures the divergence of two polynomial expansions of neighbouring trajectories. The third type measures the degree of diffusion of an ensemble of trajectories induced by multiple realisations of the uncertain quantities.

4.1 Stochastic Finite-Time Lyapunov Exponents

In this section, we will develop two types of stochastic finite-time Lyapunov exponents. The first type replaces the FTLE with the statistical moments quantifying the uncertainty in the FTLE. If the dynamics depends on some uncertain quantities, the strain tensor in (18) is a random matrix whose entries are a function of the realisations of the uncertain quantities. Thus, one could study the ensemble of matrices and derive statistics of the eigenvalues over the realisations. An approach to derive the statistical moments of the FTLE can be found in Schomerus and Titov (2002). In Schomerus and Titov (2002), the authors considered the case of a one-dimensional dynamical system driven by a random potential and built the statistical moments of the FTLE by computing the moments of the components of the matrix \(\partial \textbf{z}(t_f)/\partial \textbf{z}_0\). In what follows, instead, we will use a polynomial chaos expansion of the FTLE with respect to the uncertain vector \(\textbf{p}\). By sampling the uncertain space \(\Omega \), one can directly construct the PCE of the FTLE \(\sigma \) defined in (19):

$$\begin{aligned} \sigma (\textbf{z}(t_f,\textbf{p})) \approx \sum _{k=0}^m \sigma _k(t_f) \Psi _k(\textbf{p}) \end{aligned}$$
(23)

where the coefficients \(\sigma _k(t_f)\) are computed by projection:

$$\begin{aligned} \sigma _k(t_f) = \frac{\langle \sigma (\textbf{z}(t_f,\textbf{p})), \Psi _k(\textbf{p})\rangle }{\langle \Psi _k(\textbf{p}), \Psi _k(\textbf{p}) \rangle } \end{aligned}$$
(24)

Definition 1

We call stochastic finite-time Lyapunov exponents type 1 the statistical moments of the FTLE derived from expansion (23):

$$\begin{aligned} \alpha _1^1= & {} \sigma _0 \end{aligned}$$
(25)
$$\begin{aligned} \alpha _1^2= & {} \sum _{k=1}^m\sigma _k^2\langle \Psi _k, \Psi _k \rangle \end{aligned}$$
(26)

For all higher moments, one can use the multinomial expansion and pre-calculate the integrals of the basis functions:

$$\begin{aligned} \alpha _1^m=\sum _{|\textbf{k}|=m}\left( {\begin{array}{c}m\\ k_1,k_2,...,k_q\end{array}}\right) \left\langle \prod _{j=1}^q\Psi _j^{k_j} \right\rangle \prod _{j=1}^q \sigma _j^{k_j} \end{aligned}$$
(27)

where \(\langle \prod _{j=1}^q\Psi _j^{k_j} \rangle \) can be pre-computed given a set of basis functions and the associated distribution function, and \(|\textbf{k}|=m\) means all the combinations of indices \(k_j\) such that their sum is equal to m.

Remark 1

From the definition of stochastic finite-time Lyapunov exponent type 1, it is clear that the same procedure described above can be applied to any other deterministic indicator to derive its statistical moments as a function of the distribution of \(\textbf{p}\).
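As a concrete illustration of Definition 1, the sketch below projects samples of the FTLE onto a Chebyshev basis of the second kind, as in (23)–(24), and evaluates the first two moments (25)–(26). The closed-form placeholder for \(\sigma \) as a function of the uncertain parameter is purely illustrative; in practice the samples would come from the FTLE computation of Sect. 3.2 evaluated at the quadrature nodes.

```python
import numpy as np
from scipy.special import eval_chebyu

# Placeholder for the FTLE at t_f as a function of the (scaled) uncertain parameter
sigma_of_p = lambda p: 0.3 + 0.05*p + 0.02*p**2      # illustrative only

m, N = 4, 9
j = np.arange(1, N + 1)
xi = np.cos(j*np.pi/(N + 1))                         # Gauss nodes, Chebyshev 2nd kind
W  = np.pi/(N + 1)*np.sin(j*np.pi/(N + 1))**2
W  = W/W.sum()                                       # normalise so that E[1] = 1
Psi = np.array([eval_chebyu(k, xi) for k in range(m + 1)])
norms = Psi**2 @ W                                   # <Psi_k, Psi_k>

sigma_nodes = sigma_of_p(xi)                         # FTLE samples at the nodes
sigma_k = (Psi*W) @ sigma_nodes / norms              # projection coefficients (24)

alpha_1_1 = sigma_k[0]                               # mean of the FTLE, eq. (25)
alpha_1_2 = np.sum(sigma_k[1:]**2 * norms[1:])       # variance of the FTLE, eq. (26)
print(alpha_1_1, alpha_1_2)
```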

For the second type, as in the deterministic setting, we start from the hypervolume \(\textrm{d}\textbf{z}^T\textrm{d}\textbf{z}\) and compute the time evolution of its expectation \(\mathbb {E}(\textrm{d}\textbf{z}^T\textrm{d}\textbf{z})\).

Proposition 1

Given two solutions of system (1) and assuming that each solution can be expanded in the same orthogonal basis functions \(\Psi (\textbf{p})\) of the uncertain parameter vector \(\textbf{p}\), and given the distribution function \(w(\textbf{p})\), the expected value of the square difference of the two solutions can be approximated with:

$$\begin{aligned} \mathbb {E}(\textrm{d}\textbf{z}^T\textrm{d}\textbf{z})\approx \sum _{i=0}^m \textrm{d}\textbf{z}_0^T\left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}}^T \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \right) \textrm{d}\textbf{z}_0 \langle \Psi _i, \Psi _i \rangle \end{aligned}$$
(28)

Proof

Given the two solutions \(\textbf{z}(\textbf{p},t:\textbf{z}_0)\) and \(\varvec{\hat{z}}(\textbf{p},t:\varvec{\hat{z}}_0)\), with initial conditions \(\textbf{z}_0\) and \(\varvec{\hat{z}}_0\), under the assumption that the solutions can be expanded in the same basis functions \(\Psi _i\), we can write:

$$\begin{aligned} \textrm{d} {\textbf {z}}= {\textbf {z}}(t,{\textbf {p}}:\textbf{z}_0)- \varvec{\hat{z}}(t,{\textbf {p}}:\varvec{\hat{z}}_0) \approx \sum _{i=0}^m \mathbf {c_i} \Psi _i - \sum _{i=0}^m \varvec{\hat{c}_i} \Psi _i \end{aligned}$$
(29)

and from (17) calling \(\textrm{d}\textbf{z}_0=\textbf{z}_0-\varvec{\hat{z}}_0\) we have:

$$\begin{aligned} \textrm{d} {\textbf {z}} \approx \sum _{i=0}^m \textrm{d} \mathbf {c_i} \Psi _i \approx \sum _{i=0}^m \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \text {d}\mathbf {z_0}\Psi _i \end{aligned}$$
(30)

from which, computing the expected value of the square of the final offset, we obtain:

$$\begin{aligned} \begin{aligned} \mathbb {E}(\textrm{d} {\textbf {z}}^T\textrm{d} {\textbf {z}})&\approx \int _{\Omega } \sum _{i=0}^m \sum _{j=0}^m \textrm{d} \mathbf {c_i}^T \textrm{d} \mathbf {c_j} \Psi _i \Psi _j w(\textbf{p}) \text {d}\textbf{p}\\&= \sum _{i=0}^m \textrm{d} \mathbf {c_i}^T \textrm{d}\mathbf {c_i} \langle \Psi _i, \Psi _i \rangle \\ {}&\approx \sum _{i=0}^m \textrm{d}\textbf{z}_0^T\left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}}^T \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \right) \textrm{d}\textbf{z}_0 \langle \Psi _i, \Psi _i \rangle \end{aligned} \end{aligned}$$
(31)

\(\square \)

We now derive an equivalent definition of variational equations (17) but in the coefficients of the PCE expansion of \(\textrm{d}\textbf{z}\).

Proposition 2

Given a dynamical system (1), the following set of equations describes a polynomial chaos expansion-based generalisation of the variational equations:

$$\begin{aligned} \frac{\partial }{\partial t} \frac{\partial \mathbf {c_k}}{\partial \mathbf {z_0}} =\frac{1}{\langle \Psi _k, \Psi _k \rangle } \left\langle \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} \sum _{i=0}^m \left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \Psi _i \right) , \Psi _k \right\rangle \end{aligned}$$
(32)

Proof

The following holds for “smooth” dynamics:

$$\begin{aligned} \frac{\partial }{\partial t} \left[ \frac{\partial {\textbf {z}}}{\partial \mathbf {z_0}} (t, {\textbf {p}}, \mathbf {z_0}) \right] = \frac{\partial }{\partial \mathbf {z_0}} \left[ \frac{\partial {\textbf {z}}}{\partial t} (t, {\textbf {p}}, \mathbf {z_0}) \right] \end{aligned}$$
(33)

where the term in brackets is explicitly given by:

$$\begin{aligned} \frac{\partial {\textbf {z}}}{\partial t} (t, {\textbf {p}}, \mathbf {z_0}) = {\textbf {g}}(t, {\textbf {p}}, \textbf{z}) = {\textbf {g}}({\textbf {z}}(\mathbf {z_0},t,{\textbf {p}}), {\textbf {p}}, t) \end{aligned}$$
(34)

Therefore, we can write:

$$\begin{aligned} \frac{\partial }{\partial t} \left[ \frac{\partial {\textbf {z}}}{\partial \mathbf {z_0}} (t, {\textbf {p}}, \mathbf {z_0}) \right] = \frac{\partial }{\partial \mathbf {z_0}} {\textbf {g}}({\textbf {z}}(\mathbf {z_0},t,{\textbf {p}}), {\textbf {p}}, t) \end{aligned}$$
(35)

By using the PCE decomposition, the second term in Eq. (35) leads to:

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial \mathbf {z_0}} {\textbf {g}}({\textbf {z}}(\mathbf {z_0},t,{\textbf {p}}), {\textbf {p}}, t)&\approx \frac{\partial }{\partial \mathbf {z_0}} {\textbf {g}}({\textbf {z}}(\mathbf {c_1}(t, \mathbf {z_0}), \dots , \mathbf {c_m}(t, \mathbf {z_0}), {\textbf {p}}, t), {\textbf {p}}, t) \\&= \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} \sum _{i=0}^m \left( \frac{\partial {\textbf {z}}}{\partial \mathbf {c_i}} \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \right) = \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} \sum _{i=0}^m \left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \Psi _i \right) \end{aligned} \end{aligned}$$
(36)

while the first term of Eq. (35) leads to:

$$\begin{aligned} \frac{\partial }{\partial t} \left[ \frac{\partial }{\partial \mathbf {z_0}} \sum _{i=0}^m \mathbf {c_i}(t, \mathbf {z_0}) \Psi _i({\textbf {p}}) \right] = \frac{\partial }{\partial t} \sum _{i=0}^m \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \Psi _i \end{aligned}$$
(37)

By putting Eqs. (36) and (37) back into Eq. (35), one gets:

$$\begin{aligned} \frac{\partial }{\partial t} \sum _{i=0}^m \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \Psi _i = \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} \sum _{i=0}^m \left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \Psi _i \right) \end{aligned}$$
(38)

and, by making use of the orthogonality condition, one arrives at the following result:

$$\begin{aligned} \frac{\partial }{\partial t} \frac{\partial \mathbf {c_k}}{\partial \mathbf {z_0}} =\frac{1}{\langle \Psi _k, \Psi _k \rangle } \left\langle \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} \sum _{i=0}^m \left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \Psi _i \right) , \Psi _k \right\rangle \end{aligned}$$
(39)

\(\square \)

As discussed for the deterministic formulation in Gawlik et al. (2009), in order to compute the variation of the coefficients \(\textbf{c}_i\), it is possible to propagate a regularly spaced grid of tracers with the same dimension as the phase space. In fact, the spectral harmonics of the generalised State Transition Matrix appearing in Eq. (17) consist of partial derivatives which can be computed via central differencing of neighbouring tracers, making use of the following second-order approximation:

$$\begin{aligned} \frac{\partial (c_{ki})_{t_0}^{t_f}(\textbf{z})}{\partial z_j} \approx \frac{(c_{ki})_{t_0}^{t_f}(\textbf{z} + \Delta \textbf{z}_j) - (c_{ki})_{t_0}^{t_f}(\textbf{z} - \Delta \textbf{z}_j)}{2 \Delta z_j} \end{aligned}$$
(40)

with \(\Delta \textbf{z}_j = [0, \dots , 0, \Delta z_j, 0, \dots , 0]\). This methodology greatly reduces the computational cost associated with the generalisation of the variational equations, as it does for the deterministic case. While the accuracy of the computation of the CG tensor degrades with this approach, the authors in Shadden et al. (2005) point out that: “finite differencing may unveil Lagrangian Coherent Structures more reliably than obtaining derivatives of the flow analytically”.

From (28), one can now introduce the Cauchy–Green Tensor \(\Delta ^c_{ii}\) of the coefficients \(\textbf{c}_i\):

$$\begin{aligned} \Delta ^c_{ii}:=\left( \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}}^T \frac{\partial \mathbf {c_i}}{\partial \mathbf {z_0}} \right) \end{aligned}$$
(41)

Definition 2

From the spectral decomposition of \(\Delta ^c_{ii}\), one can derive the maximum eigenvalue \(\lambda _{ii,\textrm{max}}\) and then compute the corresponding exponent:

$$\begin{aligned} \alpha _2^i :=\frac{1}{t_f-t_0} \ln \sqrt{\lambda _{ii,\textrm{max}}} \end{aligned}$$
(42)

We call stochastic finite-time Lyapunov exponents type 2 the quantity \(\alpha _2^i\) defined in (42). The quantity \(\alpha _2^i\) gives an indication of the deformation of the hypervolume \(\textrm{d}\textbf{c}_i^T\textrm{d}\textbf{c}_i\). We can understand this deformation as the difference in the way two polynomial expansions of \(\textbf{z}\) with respect to \(\textbf{p}\), for two infinitesimally close initial conditions, evolve in time.

Remark 2

Note that \(\textrm{d}\textbf{c}_i^T\textrm{d}\textbf{c}_i\) measures the hypervolume defined by each coefficient vector of the polynomial expansion. Thus, the definition of \(\alpha _2^i\) suggests the following:

  • if the polynomial expansion converges rapidly with m, high-order coefficients will be small, and so is the hypervolume \(\textrm{d}\textbf{c}_i^T\textrm{d}\textbf{c}_i\) expected to be

  • if two trajectories, starting from infinitesimally close initial conditions, evolve very differently in time, the polynomial expansions with respect to \(\textbf{p}\) are also expected to evolve very differently. This follows from the definition of the time derivative of the coefficients \(\textbf{c}_i\), which depends on \(\textbf{g}\), which in turn is a function of \(\textbf{z}\).

  • if multiple independent realisations of \(\textbf{p}\) induce trajectories that evolve very differently in time, a higher-order expansion will be needed to properly represent \(\textbf{z}\) at a given time t. Furthermore, if two trajectories starting from infinitesimally close initial conditions evolve very differently in time, one would expect a significant difference in the time evolution of the high-order coefficients \(\textbf{c}_i\).

We will expand further on these three points in the discussion section of the paper.

Indicators in Definition 1 will be called SFTLE1 in the remainder of this paper, while indicators in Definition 2 will be called SFTLE2. Indicators SFTLE1 give the probability distribution of the FTLE in (19) as a function of the distribution of the uncertain parameter vector \(\textbf{p}\). Indicators SFTLE2, instead, give a measure of the divergence of the coefficients of the polynomial model of the distribution of the solution \(\textbf{z}(t,\textbf{p})\) as a function of a variation of the initial condition \(\textbf{z}_0\). It should be noted that the eigenvectors associated with the parameter-dependent Cauchy–Green strain tensor are also characterised by a probability distribution. This implies that the direction of maximum strain is not deterministic, and there may be configurations in which there is an abrupt change of the maximum strain direction for different realisations of the uncertain parameter.
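The following sketch gathers the pieces above for the uncertain pendulum of Sect. 5.1: the coefficients at \(t_f\) are computed non-intrusively, their sensitivity to the initial condition is obtained with the central differences (40), and the exponents (42) follow from the Cauchy–Green tensors (41) of the coefficients. The initial condition, degree, quadrature settings and finite increment are the illustrative values of Sect. 5.1.

```python
import numpy as np
from scipy.special import eval_chebyu
from scipy.integrate import solve_ivp

a_lo, a_hi, m, N, tf, dz = 2.25, 2.75, 4, 9, 10.0, 1e-7
j = np.arange(1, N + 1)
xi = np.cos(j*np.pi/(N + 1)); W = np.pi/(N + 1)*np.sin(j*np.pi/(N + 1))**2
a_nodes = 0.5*(a_hi - a_lo)*xi + 0.5*(a_hi + a_lo)
Psi = np.array([eval_chebyu(k, xi) for k in range(m + 1)]); norms = Psi**2 @ W

def pendulum(t, z, a):
    return [z[1], (a*np.cos(5.0*t) - 1.0)*np.sin(z[0])]

def coeffs(z0):
    # non-intrusive projection (13)-(14) of z(t_f, p) for a given initial condition
    Z = np.array([solve_ivp(pendulum, (0, tf), z0, args=(a,),
                            rtol=1e-9, atol=1e-10).y[:, -1] for a in a_nodes])
    return (Psi*W) @ Z / norms[:, None]              # shape (m+1, n)

def sftle2(z0):
    n = len(z0)
    dcdz0 = np.zeros((m + 1, n, n))
    for k in range(n):                               # central differencing (40)
        e = np.zeros(n); e[k] = dz
        dcdz0[:, :, k] = (coeffs(z0 + e) - coeffs(z0 - e))/(2*dz)
    # Cauchy-Green tensor of each coefficient (41) and exponent (42)
    return [np.log(np.sqrt(np.max(np.linalg.eigvalsh(Jc.T @ Jc))))/tf
            for Jc in dcdz0]

print(sftle2(np.array([0.9, -0.2])))                 # alpha_2^i for i = 0, ..., m
```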

4.2 Pseudo-diffusion exponent

In order to derive the third indicator, we start from the idea, introduced in Sect. 3.3, that in a generic random walk process the expected value of the square of the displacement grows as \(Kt^{\alpha }\). In the univariate case, by using Eq. (26) and exploiting the orthogonality of the basis functions, one can write the expected value of the square displacement as:

$$\begin{aligned} \kappa _2=\langle z-z_0,z-z_0 \rangle =\left\langle \left( \sum _{i=0} c_i\psi _i-c_0\right) ^2 \right\rangle =\sum _{i=1} s_i c_i^2 \end{aligned}$$
(43)

with \(s_i= \langle \psi _i,\psi _i\rangle \). One can now equate \(\kappa _2\) to \(Kt^{\alpha }\) to obtain:

$$\begin{aligned} \sum _{i=1} s_i c_i^2(t)=Kt^{\alpha } \end{aligned}$$
(44)

The left-hand side is the variance of z at time t, which, for \(\alpha =1\), is consistent with the fact that for a one-dimensional Brownian motion the second statistical moment of the position is \(2Dt+z_0^2\), with \(2D=K\) the diffusion coefficient, and the Mean Square Displacement (MSD) is equal to the second cumulant of the Gaussian distribution characterising the Brownian motion. This suggests that, by looking at the variation of the coefficients of the polynomial, one can study the dynamical character of a system. Since the coefficients are subject to the same dynamic equations, see (10), they reflect the same evolution of the state. The evolution of the coefficients can be derived in other ways, for example via an algebra on the space of the polynomials (Greco et al. 2020; Pérez-Palau et al. 2015). As long as the state can be expressed as an expansion in orthogonal polynomials, one can derive Eq. (43).

Proposition 3

The coefficient \(\alpha \) in expression (44) can be approximated by:

$$\begin{aligned} \alpha \approx \tilde{\alpha }=\frac{\log {\left( \sum _{i=1}^m s_i c_i^2(t) +1\right) }}{\log t} \end{aligned}$$

Proof

Take the logarithm of both sides of expression (44) after adding 1 to both sides:

$$\begin{aligned} \log {\left( \sum _{i=1}^m s_i c_i^2(t) +1\right) }=b+\alpha \log t+\log \left( 1+\frac{1}{Kt^{\alpha }}\right) \end{aligned}$$
(45)

with \(b=\log K\), which can be rewritten as:

$$\begin{aligned} \frac{\log {\left( \sum _{i=1}^m s_ic_i^2(t) +1\right) }}{\log t}=\frac{b}{\log t}+\alpha +\frac{\log \left( 1+\frac{1}{Kt^{\alpha }}\right) }{\log t} \end{aligned}$$
(46)

and for large t can be approximated by:

$$\begin{aligned} \alpha \approx \tilde{\alpha }=\frac{\log {\left( \sum _{i=1}^m s_i c_i^2(t) +1\right) }}{\log t} \end{aligned}$$
(47)

\(\square \)

Definition 3

In the following, we call the quantity \(\tilde{\alpha }\) defined in Eq. (47) the pseudo-diffusion exponent. If \(\textbf{z}\) is a vector of dimension n, then one can write the covariance matrix:

$$\begin{aligned} \textbf{C}_v =\left[ \begin{array}{ccc} \sum _{i=1}^m s_i c_{1,i}^2(t)&{}...&{}\sum _{i=1}^m s_i c_{1,i}(t)c_{n,i}(t)\\ ...&{}...&{}...\\ ...&{}\sum _{i=1}^m s_i c_{j,i}^2(t)&{}...\\ ...&{}...&{}...\\ \sum _{i=1}^m s_i c_{n,i}(t)c_{1,i}(t)&{}...&{}\sum _{i=1}^m s_i c_{n,i}^2(t)\\ \end{array} \right] \end{aligned}$$
(48)

In this case, given that the covariance matrix is positive semi-definite, the pseudo-diffusion exponent can be computed as follows:

$$\begin{aligned} \tilde{\alpha }=\frac{\log {\left( \sum _{i=1} \lambda _i(\textbf{c}(t)) +1\right) }}{\log t} \end{aligned}$$
(49)

where \(\lambda _i\) is the \(i\text {th}\) eigenvalue of \(\textbf{C}_v\). If only one component along the diagonal of the matrix \(\textbf{C}_v\) is considered for the computation of \(\tilde{\alpha }\), we call the indicator \(\tilde{\alpha }_j\) with the subscript corresponding to the \(j\text {th}\) component. In this case, the indicator gives the rate of expansion of the projection of the polynomial along one axis only. In the remainder of the paper, we will use the following slightly different definition:

$$\begin{aligned} \tilde{\alpha }=\frac{\log {\left( \sqrt{\max _{i=1} \lambda _i(\textbf{c}(t)) }+1\right) }}{\log t} \end{aligned}$$
(50)

Note that both intrusive and non-intrusive propagation methods can be used to compute the coefficients of the polynomials at time t. However, in all the examples in this paper the pseudo-diffusion exponent will be computed with a non-intrusive computation of the coefficients.
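A minimal sketch of the non-intrusive computation of \(\tilde{\alpha }\) through (48)–(50) for the uncertain pendulum of Sect. 5.1; the two initial conditions are those of Fig. 4, while the degree and quadrature settings are illustrative.

```python
import numpy as np
from scipy.special import eval_chebyu
from scipy.integrate import solve_ivp

a_lo, a_hi, m, N, tf = 2.25, 2.75, 4, 9, 10.0
j = np.arange(1, N + 1)
xi = np.cos(j*np.pi/(N + 1)); W = np.pi/(N + 1)*np.sin(j*np.pi/(N + 1))**2
W = W/W.sum()                                        # normalise so that <1, 1> = 1
a_nodes = 0.5*(a_hi - a_lo)*xi + 0.5*(a_hi + a_lo)
Psi = np.array([eval_chebyu(k, xi) for k in range(m + 1)]); s = Psi**2 @ W

def pendulum(t, z, a):
    return [z[1], (a*np.cos(5.0*t) - 1.0)*np.sin(z[0])]

def pseudo_diffusion(z0):
    # sample trajectories at the quadrature nodes and project as in (13)-(14)
    Z = np.array([solve_ivp(pendulum, (0, tf), z0, args=(a,),
                            rtol=1e-9, atol=1e-10).y[:, -1] for a in a_nodes])
    c = (Psi*W) @ Z / s[:, None]                     # coefficients, shape (m+1, n)
    Cv = (c[1:].T * s[1:]) @ c[1:]                   # covariance matrix (48)
    lam_max = np.max(np.linalg.eigvalsh(Cv))
    return np.log(np.sqrt(lam_max) + 1.0)/np.log(tf) # pseudo-diffusion exponent (50)

print(pseudo_diffusion(np.array([0.889447, -0.19598])))   # low alpha-tilde, Fig. 4a
print(pseudo_diffusion(np.array([1.67337, 1.19095])))     # high alpha-tilde, Fig. 4b
```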

5 Numerical experiments

In this section, we test the applicability of all three types of indicators to the study of three well-known problems: the uncertain perturbed pendulum, the uncertain double gyre and the uncertain circular restricted three-body problem. For each of these problems, we will construct a cartography and, by inspection, analyse the characteristics of some notable trajectories. All simulations start at \(t_0=0\). The code for all the simulations and analyses in this section was written in MATLAB R2021b and was run on a laptop with an i7 CPU at 2.80 GHz, running Windows 10 Pro. In all the cases in this section, the expectation \(\mathbb {E}\) defined in (3) is computed by taking 100 uniformly distributed random samples of the uncertain vector \(\textbf{p}\) and computing the corresponding polynomial chaos model at time \(t_f\). Numerical quadrature formulae were computed with 9 abscissa points and associated weights. From the experiments on the problems in this section, a higher number of abscissa points did not bring any significant change in the indicators, and the number of abscissa points could be reduced to 6 without important degradation of the results.

5.1 The uncertain perturbed pendulum

The motion of a periodically perturbed pendulum can be written as in Pérez-Palau et al. (2015):

$$\begin{aligned} \ddot{x} = (a \cos 5 t -1 ) \sin x \end{aligned}$$
(51)

or as an equivalent system of first-order differential equations:

$$\begin{aligned} \dot{\textbf{z}} = \frac{\text {d}}{\text {d}t} \begin{array}{c} \begin{bmatrix} x \\ v_x \end{bmatrix} \end{array} = \begin{array}{c} \begin{bmatrix} v_x \\ (a \cos 5 t -1 ) \sin x \end{bmatrix} \end{array} = \textbf{g}(\textbf{z}, p, t) \end{aligned}$$
(52)

with \(p=a\) an uncertain parameter. One can then write the Jacobian of system (52):

$$\begin{aligned} \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} = \begin{array}{cc} \begin{bmatrix} 0, &{} 1 \\ \cos x (a \cos 5 t - 1), &{} 0 \end{bmatrix} \end{array} \end{aligned}$$
(53)

When the uncertain parameter a is defined over the interval \(a\in [2.5-0.25,\; 2.5+0.25]\), with known or unknown distribution, dynamics (51) becomes uncertain and its evolution depends on the realisations of a. Thus, we expanded the state variables in Chebyshev polynomials of parameter a, up to degree 4, and used the definition of the three indicators SFTLE1, SFTLE2 and \(\tilde{\alpha }\) to study the evolution of the system.

All differential equations were propagated forward in time up to \(t_f = 10\), with an explicit adaptive Runge–Kutta method of order 4/5 with absolute and relative tolerances of \(10^{-10}\) and \(10^{-9}\), respectively. The three indicators were computed over a uniform grid of \(200\times 200\) initial conditions over the domain \(x \in [-3, 3]\), \(v_x \in [-3, 3]\). The finite increment for the calculation of both the FTLE and SFTLE is \(\Delta z_j = 1 \cdot 10^{-7}\).

Figure 1a shows the deterministic FTLE for \(a=2.5\), while Fig. 1b shows \(\alpha _1^1\) for a uncertain. Although the magnitude of the two indicators is slightly different, they present the same structures, as is to be expected given that \(\alpha _1^1\) is an average value over the realisations of a. Figure 1c represents the variance of the FTLE due to the uncertainty in a, and Fig. 1d the skewness. Because \(\sin ()\) is an odd function, the mapping \((x, v_x) \mapsto (-x, -v_x)\) is a symmetry of (52) and, because of this, the results shown in Fig. 1 are characterised by a central symmetry with respect to the origin. Note, however, that Fig. 1d clearly shows that the realisations of the state vector at time \(t_f\) are positively or negatively skewed depending on the initial conditions. Thus, SFTLE1 provides different pieces of information on the distribution of the FTLE depending on the order of the indicator.

Fig. 1: SFTLE type 1 scalar fields of the perturbed pendulum for \(t_f= 10\)

Fig. 2: SFTLE type 2 scalar fields of the perturbed pendulum for \(t_f = 10\)

Figure 2 shows the SFTLE2 from order 1 to 3. In this case, all three indicators show the same structures but with very different ranges. Note that, as the order increases, the regions where the indicators are negative become more negative. This implies that, the higher the order of the coefficient \(\textbf{c}\), the more two expansions starting from neighbouring initial conditions tend to behave similarly.

Fig. 3: Pseudo-diffusion exponent field for the uncertain perturbed pendulum model

Figure 3 shows the pseudo-diffusion exponent field together with the probability that the trajectories in the ensemble remain within a distance \(\epsilon =0.1\) of the mean at time \(t_f\) (see Eq. (3)) and the skewness of the ensemble of trajectories induced by multiple realisations of the uncertain parameter a. The skewness is computed only for the state component x. For multivariate problems, one would need to compute the skewness vector (Kollo 2008) and then reduce it to a scalar indicator. This computation will be addressed in future work. Figure 3b shows the \(\log 10\) of Fig. 3a. This indicator also identifies the same structures as SFTLE1 and SFTLE2, and the associated skewness is consistent with Fig. 1d. Figure 3c provides some additional information. First, it is interesting to note that it is the negative image of Fig. 3a, which is consistent with the idea that \(\tilde{\alpha }\) provides a measure of the diffusion of the trajectories. Then, Fig. 3c highlights how some sets of initial conditions, the yellow regions, are only weakly sensitive to the uncertainty in a.

Fig. 4: Two examples of trajectory ensembles: a low \(\tilde{\alpha }\), b high \(\tilde{\alpha }\)

Finally, Fig. 4 shows two notable trajectory ensembles, one for initial conditions \(\textbf{z}_0=[0.889447,\,-0.19598]\) and the other for \(\textbf{z}_0=[1.67337,1.19095]\), which correspond, respectively, to low and high values of \(\tilde{\alpha }\). In each case, 10 trajectories were propagated for 10 random realisations of a.

5.2 The uncertain double gyre

The double-gyre model consists of a pair of counter-rotating gyres, with a time-periodic perturbation. The system is modelled as a first-order system of differential equations, given by:

$$\begin{aligned} \dot{\textbf{z}} = \frac{\text {d}}{\text {d}t} \begin{array}{c} \begin{bmatrix} x \\ y \end{bmatrix} \end{array} = \pi A \begin{bmatrix} - \sin (\pi f(x,t))\cos (\pi y) \\ \cos (\pi f(x, t)) \sin (\pi y) \frac{\partial f}{\partial x} \end{bmatrix} = \textbf{g}(\textbf{z}, \textbf{p}, t) \end{aligned}$$
(54)

The functions and the coefficients appearing in the dynamics are given by:

$$\begin{aligned} \begin{array}{l} f(x, t) = a(t) x^2 + b(t) x \\ a(t) = \eta \sin (\omega t) \\ b(t) = 1 - 2 \eta \sin (\omega t) \\ A = 0.1, \ \ \ \ \omega = 2 \pi / 10 \\ \end{array} \end{aligned}$$
(55)

The Jacobian of the velocity field is given by:

$$\begin{aligned} \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} = \pi A \begin{array}{cc} \begin{bmatrix} - \pi \cos (\pi y) \cos (\pi f) \frac{\partial f}{\partial x}, &{} \pi \sin (\pi f) \sin (\pi y) \\ \sin (\pi y)\left[ -\pi \sin (\pi f) \left( \frac{\partial f}{\partial x}\right) ^2 + 2 a(t) \cos (\pi f)\right] , &{} \pi \cos (\pi f) \frac{\partial f}{\partial x} \cos (\pi y) \end{bmatrix} \end{array} \end{aligned}$$
(56)

We generalise results from previous works (e.g. Farazmand and Haller 2012) by considering the parameter \(p=\eta \) to be uncertain and defined over the interval \(\eta \in [0.1-0.01,0.1+0.01]\).
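For reference, a short sketch of the velocity field (54)–(55) and of its Jacobian as functions of the state and of the uncertain amplitude \(\eta \); these can be fed to the same FTLE and non-intrusive projection routines sketched in the previous sections. The evaluation point at the end is an arbitrary illustration.

```python
import numpy as np

A, omega = 0.1, 2.0*np.pi/10.0                       # constants of the model (55)

def g(t, z, eta):
    x, y = z
    a = eta*np.sin(omega*t); b = 1.0 - 2.0*eta*np.sin(omega*t)
    f, dfdx = a*x**2 + b*x, 2.0*a*x + b              # f(x, t) and its x-derivative
    return np.pi*A*np.array([-np.sin(np.pi*f)*np.cos(np.pi*y),
                              np.cos(np.pi*f)*np.sin(np.pi*y)*dfdx])

def jacobian(t, z, eta):
    x, y = z
    a = eta*np.sin(omega*t); b = 1.0 - 2.0*eta*np.sin(omega*t)
    f, dfdx = a*x**2 + b*x, 2.0*a*x + b
    return np.pi*A*np.array([
        [-np.pi*np.cos(np.pi*y)*np.cos(np.pi*f)*dfdx,
          np.pi*np.sin(np.pi*f)*np.sin(np.pi*y)],
        [ np.sin(np.pi*y)*(-np.pi*np.sin(np.pi*f)*dfdx**2 + 2.0*a*np.cos(np.pi*f)),
          np.pi*np.cos(np.pi*f)*dfdx*np.cos(np.pi*y)]])

print(g(0.0, np.array([1.0, 0.5]), 0.1))
print(jacobian(0.0, np.array([1.0, 0.5]), 0.1))
```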

Fig. 5: SFTLE1 scalar fields of the double-gyre model. Integration time is \(t_f = 20\)

Fig. 6: SFTLE2 scalar fields of the double-gyre model. Integration time is \(t_f = 20\)

Fig. 7: Pseudo-diffusion exponent for the double-gyre model

Fig. 8: Examples of trajectory ensembles for high and low values of \(\tilde{\alpha }\). Double-gyre model

As in the previous example, we expand the state variables in Chebyshev polynomials of degree 4. The differential equations are propagated with the same adaptive Runge–Kutta integrator and the same absolute and relative tolerances. The propagation is performed for a fixed integration time \(t_f = 20\). The uniform grid of initial conditions has a size of \(200\times 200\), over the domains \(x \in [0, 2]\), \(y \in [0, 1]\). The finite increment for the calculation of both the FTLE and SFTLE is \(\Delta z_j = 1 \cdot 10^{-7}\).

Figure 5a, b compares the deterministic FTLE with SFTLE1. Also in this case, \(\alpha _1^1\) shows the same structures as the FTLE. It is interesting to note in Fig. 5c how the ridges of \(\alpha _1^2\) are located near the ridges of \(\alpha _1^1\). This implies that for chaotic initial conditions the set of trajectories behaves qualitatively differently with different realisations of the uncertain parameter. This emerges also from Fig. 5d, where the skewness of the FTLE is positive or negative depending on the initial conditions. Similar considerations can be derived from Fig. 6, where the SFTLE2 are represented, and from Fig. 7, where \(\tilde{\alpha }\) is represented together with the expectation for a threshold of \(\epsilon =0.25\) and the skewness of the x component of the ensemble of trajectories. Ten trajectories corresponding to ten realisations of \(\eta \) are represented in Fig. 8 for the two initial conditions \(x_0=1.37688\), \(y_0=0.73869\) and \(x_0=1.45729\), \(y_0=0.44221\), corresponding, respectively, to high and low values of \(\tilde{\alpha }\). Note in Fig. 8a the bifurcation of the ensemble into two different groups of trajectories.

5.3 The uncertain circular restricted three-body problem

The circular restricted three-body problem (CR3BP) is arguably one of the most studied problems in celestial mechanics. In this section, we will consider the planar case with an uncertain mass parameter. The planar circular restricted three-body problem (Szebehely 1967) is governed by:

$$\begin{aligned} \ddot{x} - 2 \dot{y} = \frac{\partial J}{\partial x} \end{aligned}$$
(57)
$$\begin{aligned} \ddot{y} + 2 \dot{x} = \frac{\partial J}{\partial y} \end{aligned}$$

where \(J(x,y)\) is given by:

$$\begin{aligned} J(x,y) = \frac{x^2+y^2}{2} + \frac{1 - \mu }{\sqrt{(x+ \mu )^2 + y^2 }} + \frac{\mu }{\sqrt{(x - 1 + \mu )^2 + y^2}} + \frac{1}{2} \mu (1 - \mu ) \end{aligned}$$
(58)

and \(\mu \), the mass parameter of the system, is a function of the masses of the primaries. With this formulation, the reference frame is uniformly rotating and the positions of the primaries, in such a frame, are constant in time. We can again rewrite the system as a first-order system of differential equations:

$$\begin{aligned} \dot{\textbf{z}} = \frac{\text {d}}{\text {d}t} \begin{array}{c} \begin{bmatrix} x \\ y \\ v_x \\ v_y \end{bmatrix} \end{array} = \begin{array}{c} \begin{bmatrix} v_x \\ v_y \\ 2 v_y + \frac{\partial J}{\partial x} \\ - 2 v_x + \frac{\partial J}{\partial y} \end{bmatrix} \end{array} = \textbf{g}(\textbf{z}, p) \end{aligned}$$
(59)

with \(v_x=\dot{x}\) and \(v_y=\dot{y}\) and uncertain parameter \(p=\mu \). The partial derivatives of J, appearing in (59), are given by:

$$\begin{aligned} \frac{\partial J}{\partial x} = x - \frac{(1 - \mu )(x+ \mu )}{((x+ \mu )^2 + y^2 )^{3/2}} - \frac{\mu (x - 1 + \mu )}{((x - 1 + \mu )^2 + y^2)^{3/2}} \end{aligned}$$
(60)
$$\begin{aligned} \frac{\partial J}{\partial y} = y - \frac{y(1 - \mu )}{((x+ \mu )^2 + y^2 )^{3/2}} - \frac{\mu y}{((x - 1 + \mu )^2 + y^2)^{3/2}} \end{aligned}$$

The Jacobian of the velocity field, associated with the first-order formulation of the dynamics, is:

$$\begin{aligned} \frac{\partial {\textbf {g}}}{\partial {\textbf {z}}} = \begin{array}{cccc} \begin{bmatrix} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ \frac{\partial ^2 J}{\partial x^2} &{} \frac{\partial ^2 J}{\partial y \partial x} &{} 0 &{} 2 \\ \frac{\partial ^2 J}{\partial x \partial y} &{} \frac{\partial ^2 J}{\partial y^2} &{} -2 &{} 0 \end{bmatrix} \end{array} \end{aligned}$$
(61)

in which the second-order derivatives of J are not expanded for brevity. The energy is then defined as,

$$\begin{aligned} E(x, y, v_x, v_y) = \frac{1}{2} (v_x^2 + v_y^2) - J(x, y) \end{aligned}$$
(62)

and is a constant of motion for the CR3BP. In order to reduce the dimensionality of the problem from four to two, the initial conditions are defined as:

$$\begin{aligned} {\textbf {z}}_i = [x_i, 0, v_{xi}, v_{y}(x_i, 0, v_{xi}, E_0)] \end{aligned}$$
(63)

where the value of \(v_y\) is computed, from a given condition \((x_i, v_{xi})\), making use of the conservation of energy:

$$\begin{aligned} v_y = - \sqrt{2(E_0 + J) - v_x^2} \end{aligned}$$
(64)

For the results in this paper, the energy level has been set to \(E_0= E(L_1) + 0.03715\), where \(E(L_1) = E(L_1^x, 0, 0, 0)\) is the potential energy at the first Lagrange point, with \(L_1^x\) given in Wakker (2015) by:

$$\begin{aligned} L_1^x = 1 - \mu - \gamma _1 \end{aligned}$$
(65)

with

$$\begin{aligned} \gamma _1 = b - \frac{1}{3} b^2- \frac{1}{9} b^3 - \frac{23}{81} b^4 \end{aligned}$$

and

$$\begin{aligned} b = \left( \frac{1}{3} a \right) ^{1/3};\;\; a = \frac{\mu }{1 - \mu } \end{aligned}$$

We consider two cases. In case 1, the integration is performed for two full revolutions of the primaries, or \(t_f = 2\), using an explicit adaptive Runge–Kutta 4/5 method with absolute tolerance \(10^{-10}\) and relative tolerance \(10^{-8}\). As in the previous two examples, we use Chebyshev orthogonal polynomials of the second kind; thus, the integration abscissas and weights are optimised for these polynomials. The initial conditions grid is \(200\times 200\), in the domains \(x \in [-0.85, -0.125]\), \(v_x \in [-2.0, 2.0]\). The value of the mass parameter is assumed to be uncertain and within the interval reported in Table 1, case 1. The finite increment for the calculation of both the FTLE and SFTLE is \(\Delta z_j = 1 \cdot 10^{-7}\). Figure 9 reports the FTLE and SFTLE1 for case 1. Polynomials are expanded to order 4 and the figure represents the first three SFTLE1.

Table 1: Summary of parameter settings for the two cases of the uncertain CR3BP

Fig. 9: FTLE and SFTLE1 scalar fields for the CR3BP model, case 1. Integration time is \(t_f = 2\)

Fig. 10: SFTLE2 scalar fields for the CR3BP model, case 1. Integration time is \(t_f = 2\)

In Fig. 9a–c, the intersection of the invariant manifold with the plane \(y=0\) is identified by the closed yellow ridge in the upper right part of the figures. As was also found in previous works, the presence of ridges in FTLE fields is only a necessary condition for the existence of Lagrangian coherent structures (and invariant manifolds in particular), but not a sufficient one. In fact, other ridges in the same figures are not associated with manifold crossings. Figure 10 shows the first three SFTLE2 for the same case. Also in this case, the ridges are consistent with the ones in the FTLE plot and the range of the indicator is progressively shifted towards negative values.

For case 2, we extended the integration time and also the range of the uncertain parameter (see case 2 in Table 1). The extension of the integration time allows one to observe more interesting behaviours. In particular, some trajectories start from the primary with coordinate \(x=1-\mu \) and flow to the primary with coordinate \(x=-\mu \). Figure 11 shows the FTLE field for case 2. For the second case, we build a cartography only with the pseudo-diffusion exponent \(\tilde{\alpha }\), because it is faster to compute than SFTLE1 and SFTLE2 and gave interesting results. Figure 12 shows the \(\tilde{\alpha }\) field for the CR3BP case 2 together with the expectation \(\mathbb {E}\) for a threshold \(\epsilon =0.1\).

Figure 13 shows the \(\tilde{\alpha }_x\) and \(\tilde{\alpha }_y\) fields, respectively. Figure 14 presents two trajectory ensembles for two extreme cases of very low and very high values of \(\tilde{\alpha }\), propagated for a time \(t_f=28\). In particular, Fig. 14a corresponds to initial conditions \(x_0=-0.157789, v_{x0}=1.63819\) and Fig. 14b corresponds to initial conditions \(x_0=-0.624121, v_{x0}=-0.271357\). The latter corresponds to a point in the blue ring in Fig. 13a, while the former corresponds to a point in the yellow region in Fig. 12. The ensemble of trajectories is superimposed on the level curves of J calculated with a fixed \(\mu =0.1\), and the x and y axes are limited to the interval \([-2,2]\).

It is remarkable that \(\tilde{\alpha }\) captures the diffusion of trajectories that eventually leave the system (see Fig. 13a) as well as trajectories that are quasi-periodic (see Fig. 13b). In the former case, a change in the mass parameter, for a fixed value of the initial conditions, causes the total mechanical energy to fluctuate from a value below the zero-velocity energy of the Lagrange equilibrium point 2 (L2) to one above it. Thus, for some realisations of \(\mu \) the zero-velocity curves open at L2 and allow some trajectories to exit. In the latter case, instead, all realisations remain confined and display very little sensitivity to the variation of \(\mu \).

Fig. 11 FTLE field of the CR3BP for an integration time \(t_f=2.8\)

Fig. 12 Pseudo-diffusion exponent for the CR3BP case 2

Fig. 13 Pseudo-diffusion exponent for the CR3BP case 2: individual components

Fig. 14 Example of ensemble of trajectories for: a highly diffusive case, b very low pseudo-diffusion exponent. Integration time \(t_f=28\)

6 Computational complexity

The computational cost of the SDIs is mainly dictated by the complexity of the calculation of the coefficients of the polynomial expansions. The computation of the pseudo-diffusion exponent with non-intrusive polynomials requires the integration of N sample trajectories, where N depends on the quadrature scheme. For a full tensor product with Gauss formulas, \(N=n_g^{n_p}\), with \(n_g\) the number of integration points per uncertain dimension and \(n_p\) the number of uncertain parameters. For a sparse grid, the number of sample trajectories grows as \(N=2^ll^{n_p-1}\), where l is the level of the sparse grid. Thus, in the examples presented above, the pseudo-diffusion exponent required the propagation of 9 trajectories per initial condition. The number of coefficients to be computed for a full polynomial expansion of degree m is \(M=\left( {\begin{array}{c}n_p+m\\ n_p\end{array}}\right) \), with a corresponding number of projection integrals. If an intrusive method is used instead, one needs to propagate M differential equations and, for each equation, compute a multidimensional integral.

The computation of the SFTLE1 requires N values of the FTLE. Since the computation of the FTLE requires propagating 2n tracers, the computation of SFTLE1 requires 2Nn trajectories. In the test cases in the previous section, 9 Gauss integration points were used; thus, the computation of SFTLE1 required 36 propagations of the dynamics per initial condition for the two-dimensional problems and 72 propagations for the CR3BP. The computation of the SFTLE2, instead, requires the propagation of 2Mn equations, and for each equation the dynamics is evaluated N times per integration step. Looking at the examples in the previous sections, for an expansion to degree 3 and one uncertain parameter, the number of equations to propagate for each initial condition is 12, for a two-dimensional problem, and 24, for a four-dimensional problem, and for each equation the dynamics is evaluated 9 times per integration step. Thus, in terms of number of propagations and computational cost, the pseudo-diffusion exponent computed with non-intrusive expansions and sparse grids provides the fastest approach. If polynomials are propagated with an algebra, the use of the SFTLE2 becomes an interesting option, along with the pseudo-diffusion exponent, as it incorporates part of the sensitivity to the initial conditions.
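For reference, the counting formulas above can be sketched as follows; the values of \(n_g\), \(n_p\), m and n are the ones used in the two-dimensional examples, and the sparse-grid count is included only for completeness.

```python
from math import comb

def n_samples_tensor(n_g, n_p):
    """Number of sample trajectories for a full tensor-product Gauss grid."""
    return n_g ** n_p

def n_samples_sparse(l, n_p):
    """Number of sample trajectories for a sparse grid of level l."""
    return 2 ** l * l ** (n_p - 1)

def n_coefficients(n_p, m):
    """Number of coefficients of a full polynomial expansion of degree m."""
    return comb(n_p + m, n_p)

n_g, n_p, m, n = 9, 1, 3, 2          # settings of the two-dimensional examples
N = n_samples_tensor(n_g, n_p)       # 9 trajectories per initial condition
M = n_coefficients(n_p, m)           # coefficients (and projection integrals)
print(N, M, 2 * N * n)               # 2*N*n = 36 tracer propagations for SFTLE1
```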

7 Discussion

The three indicators proposed in this paper were shown to capture similar structures when applied to a cartographic study of dynamical systems under uncertainty. However, they measure conceptually different properties. SFTLE1 measures the statistical moments of the uncertainty in the standard FTLE. The first moment was shown to provide the same qualitative information as the standard FTLE, while higher moments provide more interesting and unexpected information on the evolution of the dynamical system, in particular the strength of diffusive processes or asymmetries in the evolution of the system. SFTLE2 measures the divergence of neighbouring polynomial expansions. When this index is negative, two polynomial expansions are behaving similarly up to time \(t_f\). A value higher than zero means that the coefficients of the polynomials are diverging, which implies a divergent behaviour of the trajectories. Since the coefficients can be used to compute the statistical moments, divergent coefficients signify that the ensembles of trajectories induced by multiple realisations of the uncertain parameters are also diverging.

In this sense, analysing all the SFTLE2 with superscript up to m might not bring additional useful information, as the highest one is sufficient to understand the behaviour of the system. Thus, one can argue that the maximal index m for which SFTLE2 is positive can work as a measure of the degree of divergence. This aspect needs further investigation before coming to a conclusion and will be the subject of future work.

The pseudo-diffusion exponent directly measures the degree of diffusion of the ensemble of trajectories. This suggests that the pseudo-diffusion exponent of an infinitesimal uncertainty in the initial conditions would give the same qualitative information as the FTLE. This can be seen in Fig. 15, where the FTLE of the uncertain perturbed pendulum is compared to the \(\log _{10}\) of \(\tilde{\alpha }\). In this case, \(\tilde{\alpha }\) is computed with a simple first-order polynomial expansion and only 9 integration points. The initial conditions are assumed to belong to a square with edge \(10^{-5}\) centred on the nominal value of the initial conditions, while the model parameter a is deterministic and fixed at 2.5. Since the magnitude of the coefficients of the polynomial expansion depends on the magnitude of the uncertainty, an infinitesimal uncertainty leads to a small value of \(\tilde{\alpha }\). However, from Fig. 15 one can see a remarkable similarity between the FTLE and \(\tilde{\alpha }\), to the point that the latter appears simply to be a scaled version of the former.

Fig. 15 Uncertain pendulum. Comparison between the pseudo-diffusion exponent, in the case of deterministic parameter a and uncertain initial conditions, and the FTLE

This result can be understood if one considers the polynomial approximation of the propagated states. In fact, assume that one computes the FTLE from a linear approximation of \(\textbf{z}(t_f)\) with respect to the uncertain vector \(\textbf{p}=\textbf{z}_0\), so that \(\textbf{z}(t_f)\approx \sum _i^m \textbf{c}_{i}\Psi _i(\textbf{p})\) with \(m=1\); then we can demonstrate the following proposition.

Proposition 4

The eigenvalues \(\lambda _i^{C_v}\) of the covariance matrix \(\textbf{C}_v\) in (48) are proportional to the eigenvalues \(\lambda _i^c\) of the matrix:

$$\begin{aligned} \tilde{\varvec{\Delta }}= \left[ \frac{\textrm{d}\textbf{z}(t_f)}{\textrm{d}\textbf{z}_0}\right] ^T\left[ \frac{\textrm{d}\textbf{z}(t_f)}{\textrm{d}\textbf{z}_0}\right] \end{aligned}$$
(66)

with \(\textbf{z}(t_f)\approx \sum _i^m \textbf{c}_{i}\Psi _i(\textbf{p})\), \(\textbf{p}=\textbf{z}_0\), \(m=1\) and \(\Psi (\textbf{z}_0)\) the Chebyshev polynomials of type 2.

Proof

Consider a first-order expansion \(\textbf{z}(t_f)\approx \sum _i^1 \textbf{c}_{i}\Psi _i(\textbf{z}_0)\). One can derive the matrix \(\tilde{\varvec{\Delta }}\):

$$\begin{aligned} \varvec{\tilde{\Delta }} =\left[ \begin{array}{ccc} \sum _{i=1} c_{1,i}^2(t)&{}\cdots &{}\sum _{i=1} c_{1,i}(t)c_{n,i}(t)\\ \vdots &{}\ddots &{}\vdots \\ \vdots &{}\sum _{i=1} c_{j,i}^2(t)&{}\vdots \\ \vdots &{}\ddots &{}\vdots \\ \sum _{i=1} c_{n,i}(t)c_{1,i}(t)&{}\cdots &{}\sum _{i=1} c_{n,i}^2(t)\\ \end{array} \right] \end{aligned}$$
(67)

where the index j loops over the number of dimensions n. From the definition of the covariance in (48), the terms in the summation are multiplied by the factors \(s_i\), which derive from the integration over \(\Omega \) of the products of basis functions. Assuming that the uncertainties in the components of \(\textbf{z}_0\) are independent and uncorrelated, and that also in the covariance the polynomial expansion is truncated to first order, all terms \(s_i\) have the same value \(\tilde{s}\) and thus we can write:

$$\begin{aligned} \textbf{C}_v=\tilde{s}\tilde{\varvec{\Delta }} \end{aligned}$$
(68)

\(\square \)

Remark 3

From linear algebra, the Cauchy–Green deformation tensor \(\varvec{\Delta }= \left[ \frac{\textrm{d}\textbf{z}(t_f)}{\textrm{d}\textbf{z}_0}\right] ^T\left[ \frac{\textrm{d}\textbf{z}(t_f)}{\textrm{d}\textbf{z}_0}\right] \) has the same eigenvalues as the matrix \(\tilde{\varvec{\Delta }}\); thus, for a first-order expansion with respect to \(\textbf{p}=\textbf{z}_0\), it can be concluded that the eigenvalues used in the computation of the pseudo-diffusion exponent are proportional to the eigenvalues of the Cauchy–Green deformation tensor.
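As a quick numerical illustration of Proposition 4 and Remark 3, the following sketch assumes, for simplicity, a linear propagation map, a plain monomial linear basis \(\Psi _i(\textbf{z}_0)=z_{0,i}\) and a common scaling factor \(\tilde{s}\); it checks that the eigenvalues of the covariance of the propagated states and those of the Cauchy–Green tensor are proportional.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # state dimension
A = rng.standard_normal((n, n))         # d z(t_f) / d z_0 of a linear propagation
s_tilde = 0.3                           # common factor from the basis integrals over Omega

# Cauchy-Green deformation tensor and covariance of the linearly propagated states
cauchy_green = A.T @ A
C_v = s_tilde * (A @ A.T)               # coefficient products scaled by s_tilde, cf. (67)-(68)

lam_cg = np.sort(np.linalg.eigvalsh(cauchy_green))
lam_cv = np.sort(np.linalg.eigvalsh(C_v))

# A A^T and A^T A share the same spectrum, so the ratio is the constant s_tilde
print(lam_cv / lam_cg)                  # ~ [0.3, 0.3, 0.3, 0.3]
```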

Remark 4

For infinitesimally small uncertainty in the initial conditions, an expansion up to the first order is often a reasonable approximation and is consistent with a first-order Taylor expansion of \(\textbf{z}(t_f)\) with respect to \(\textbf{z}_0\). If a higher-order expansion is used instead, the matrix \(\tilde{\varvec{\Delta }}\) will contain products of higher-order coefficients, and the terms \(s_i\) in the covariance will correspond to higher-order polynomials. Thus, an extension of Proposition 4 is not straightforward; however, one can notice that if the expansion is convergent, the contribution of higher-order terms will be small and the linear approximation in Proposition 4 will capture the main contribution to the eigenvalues.

Note that although in this paper we limited our attention only to the case of parametric uncertainty, the same methodology can be applied to the study of dynamical systems driven by stochastic processes via the Karhunen–Loève expansion (Deheuvels 2006).

7.1 Relation to other indicators derived from polynomial expansions

In Pérez-Palau et al. (2015), two dynamical indicators were derived from Jet Transport. One indicator measured the rate of contraction or expansion of the region propagated with Jet Transport; the rate was calculated with respect to the size of the set of initial conditions that was propagated. In the definition of \(\tilde{\alpha }\), as demonstrated in Proposition 4, the set of initial conditions is \(\Omega \) and a measure of its size is accounted for in the integrals of the polynomial bases, see (16). The expansion or contraction is directly measured by the eigenvalues of the covariance matrix \(\textbf{C}_v\). In fact, given a covariance matrix, the ellipsoid enclosing a given percentile of the propagated states has the directions of its axes defined by the eigenvectors of \(\textbf{C}_v\) and their lengths by \(2c\sqrt{\lambda _i}\), where \(\lambda _i\) are the eigenvalues and c defines the percentile. In Proposition 4, we also demonstrated that the eigenvalues of \(\textbf{C}_v\) are scaled by the integral of the basis functions over \(\Omega \). Thus, it can be concluded that, if the pseudo-diffusion exponent is used to quantify the uncertainty in the propagated states from a set of uncertain initial conditions, it contains the same information on the expansion or contraction of the initial uncertainty set as the contraction/expansion indicator proposed in Pérez-Palau et al. (2015).
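The ellipsoid construction mentioned above can be sketched as follows; the covariance matrix and the value of c are placeholders chosen only for illustration.

```python
import numpy as np

def covariance_ellipsoid(C_v, c):
    """Axis directions and full axis lengths 2*c*sqrt(lambda_i) of the
    ellipsoid associated with the covariance matrix C_v."""
    lam, vec = np.linalg.eigh(C_v)      # eigenvalues (ascending) and eigenvectors
    return vec, 2.0 * c * np.sqrt(lam)

# illustrative 2x2 covariance of the propagated states
C_v = np.array([[2.0, 0.6],
                [0.6, 0.5]])
axes, lengths = covariance_ellipsoid(C_v, c=2.0)   # c chosen for a given percentile
print(lengths)
```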

In Fodde et al. (2022), an indicator was derived from the magnitude of the predicted and observed coefficients of a polynomial expansion of the propagated states. This indicator was called “n+1”. As argued above, SFTLE2, by its nature, captures the variation of the high-order coefficients of the polynomial expansion and is, therefore, related to the n+1 indicator. In fact, it was shown that irregular types of motion require higher-order expansions to achieve a good accuracy of the polynomial representation. At the same time, neighbouring initial conditions were shown to lead to different evolutions of the polynomial expansions when two trajectories tend to diverge. In this sense, the SFTLE2 is also connected to the indicator, proposed in Pérez-Palau et al. (2015), measuring the precision of the polynomial expansion of the propagated states. However, SFTLE2 presents two important differences: (i) SFTLE2 is not suitable to quantify the uncertainty in the initial conditions because the difference of the coefficients is computed with respect to an infinitesimal variation of the initial conditions; (ii) SFTLE2 encapsulates both a measure of the divergence of two neighbouring trajectories and a measure of the uncertainty in the propagated states induced by model uncertainty.

8 Practical utility of the indicators

In this section, we present two practical uses of the proposed indicators. The first is the identification of robust initial conditions in the elliptical restricted three-body problem: we give a definition of robust initial conditions and show how \(\tilde{\alpha }\) can be used to design trajectories that are weakly affected by the uncertainty in the dynamical model. The second is the identification of regions of practical stability in the CR3BP. For all calculations in this section, polynomials were expanded to order 3 and 9 abscissa points per dimension of the uncertain vector \(\textbf{p}\) were used. The expectation \(\mathbb {E}\) was computed by drawing 100 uniformly distributed samples from the space \(\Omega \) and evaluating the polynomial chaos expansion at \(t_f\).
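The last step can be sketched as follows. The surrogate, its coefficients and the quantity of interest are placeholders standing in for the expansions actually used in the paper (which employ Chebyshev polynomials of type 2; the first-kind Chebyshev basis of numpy is used here only for brevity).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)

# placeholder 1-D Chebyshev surrogate of one propagated state component at t_f,
# with coefficients up to order 3 standing in for those computed in the paper
coeffs = np.array([0.8, -0.3, 0.05, 0.01])

# 100 uniformly distributed samples of the (scaled) uncertain parameter on Omega = [-1, 1]
p_samples = rng.uniform(-1.0, 1.0, 100)
z_tf = C.chebval(p_samples, coeffs)       # evaluate the surrogate at the samples

# sample expectation of a generic quantity of interest g(z(t_f))
g = z_tf**2                               # placeholder quantity of interest
print(g.mean())
```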

8.1 Identification of robust initial conditions

As previously mentioned, the main utility of the indicators proposed in this paper is to study the effect of model uncertainty on the evolution of a trajectory starting from a given initial condition \(\textbf{z}_0\). For example, in Gawlik et al. (2009), the authors studied how Lagrangian coherent structures would change due to a variation of the eccentricity in the elliptical restricted three-body problem. We can understand this variability as an uncertainty in the existence and location of the LCS induced by an uncertainty in the eccentricity. The whole study in Gawlik et al. (2009) can be revisited by computing the SFTLE1, which would quantify the effect of the uncertainty in e on the FTLE. A low value of SFTLE1 would correspond to initial conditions that display a low sensitivity to a variation of the eccentricity. The same logic can be applied to the pseudo-diffusion exponent: for a given initial condition, \(\tilde{\alpha }\) would be small if the trajectories in the ensemble presented a small variance with respect to a variation of the eccentricity. Following this idea, we define the robustness of a given initial condition \(\textbf{z}_0\) as:

Definition 4

The initial condition \(\textbf{z}_0\) is robust, with respect to the uncertainty vector \(\textbf{p}\), with robustness index \(\rho _p\), if \(\bar{\alpha }<\rho _p\), where \(\bar{\alpha }=\tilde{\alpha }\) if the pseudo-diffusion exponent is used to study the dynamics, or \(\bar{\alpha }=\alpha _1^2\) if SFTLE1 is used instead. Therefore, initial conditions with minimum \(\rho _p\) maximise robustness with respect to the uncertainty in \(\textbf{p}\).
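In practice, once an \(\tilde{\alpha }\) (or SFTLE1) field has been computed on a grid of initial conditions, Definition 4 reduces to a thresholding operation; a minimal sketch, with a placeholder field, is:

```python
import numpy as np

def robust_initial_conditions(alpha_field, rho_p):
    """Boolean mask of the initial conditions that satisfy Definition 4."""
    return alpha_field < rho_p

# placeholder pseudo-diffusion field on a grid of initial conditions
alpha_field = np.random.default_rng(2).uniform(0.0, 1.0, (200, 200))
mask = robust_initial_conditions(alpha_field, rho_p=0.01)
print(mask.sum(), "robust initial conditions out of", mask.size)
```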

Consider now the case in which a mission analyst needs to identify minimum-control trajectories in a binary system with poorly known physical parameters. This is, for example, the case of missions to binary asteroids. Given the limited knowledge of the exact mass of the asteroids and the uncertainty in the orbital parameters of the secondary, there is an interest in finding initial conditions that are robust with respect to the uncertainty in the physical parameters of the binary system. Definition 4 can be applied straightaway to this case. As an illustrative example, consider the problem of finding robust initial conditions in the uncertain elliptical restricted three-body problem (ER3BP). Following Gawlik et al. (2009) and Pérez-Palau et al. (2015), the planar ER3BP can be modelled as follows:

$$\begin{aligned} \textbf{z}' = \frac{\textrm{d} \textbf{z}}{\textrm{d} \theta _s} = \begin{bmatrix} v_x \\ v_y \\ 2 v_y + \frac{\partial J}{\partial x}/(1+e\cos \theta _s) \\ - 2 v_x + \frac{\partial J}{\partial y}/(1+e\cos \theta _s) \end{bmatrix} \end{aligned}$$
(69)

where e is the eccentricity of the orbit of the secondary body, \(\theta _s\) its true anomaly and \(\textbf{z}=[x,y,v_x,v_y]^T\). As in Pérez-Palau et al. (2015), we use the pseudo-energy:

$$\begin{aligned} E(x, y, v_x, v_y) = \frac{1}{2} \big (v_x^2 + v_y^2\big ) - J(x, y)/(1+e\cos \theta _s) \end{aligned}$$
(70)

to reduce the number of free initial conditions and define the value of \(v_y\) as:

$$\begin{aligned} v_y = - \sqrt{2(E_0 + J/(1+e\cos \theta _{s0})) - v_x^2} \end{aligned}$$
(71)

with \(\theta _{s0}=0\). We then consider an uncertainty in both the eccentricity e and the mass parameter \(\mu \) (see Table 2) around the values used in the examples presented in Pérez-Palau et al. (2015). Figure 16 shows the pseudo-diffusion exponent and the expectation for \(\epsilon =0.1\). The \(\tilde{\alpha }\) field looks similar to that of the CR3BP; however, Fig. 16b displays an interesting area in the upper right corner that is less pronounced in the case of the CR3BP.
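For concreteness, a minimal sketch of Eqs. (69) and (71) follows. The effective potential J and the location of the primaries are assumed to follow a commonly used CR3BP convention (primaries at \((-\mu ,0)\) and \((1-\mu ,0)\)); this is an assumption for illustration only and may differ from the convention adopted in the paper.

```python
import numpy as np

def J(x, y, mu):
    # assumed CR3BP effective potential with primaries at (-mu, 0) and (1 - mu, 0)
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1.0 + mu, y)
    return 0.5 * (x**2 + y**2) + (1.0 - mu) / r1 + mu / r2

def er3bp_rhs(theta_s, z, mu, e, h=1e-7):
    """Right-hand side of Eq. (69); dJ/dx and dJ/dy via central differences."""
    x, y, vx, vy = z
    Jx = (J(x + h, y, mu) - J(x - h, y, mu)) / (2.0 * h)
    Jy = (J(x, y + h, mu) - J(x, y - h, mu)) / (2.0 * h)
    f = 1.0 + e * np.cos(theta_s)
    return np.array([vx, vy, 2.0 * vy + Jx / f, -2.0 * vx + Jy / f])

def vy_from_pseudo_energy(x, y, vx, E0, mu, e, theta_s0=0.0):
    """Eq. (71): recover v_y (negative branch) from the pseudo-energy E0."""
    return -np.sqrt(2.0 * (E0 + J(x, y, mu) / (1.0 + e * np.cos(theta_s0))) - vx**2)
```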

Table 2 Summary of parameter settings for the ER3BP
Fig. 16 Pseudo-diffusion exponent for the ER3BP

Figure 17 shows the initial conditions for which \(\tilde{\alpha }\) is, respectively, below 0.01 and within the interval [0.4, 0.6] for the ER3BP.

Fig. 17 Robustness regions for the ER3BP

Figures 18a and b show two ensembles of 81 trajectories, one starting from the initial condition \(\textbf{z}_0=[-0.416457, 1.51759]\), belonging to the region identified in Fig. 17a, and the other from the initial condition \(\textbf{z}_0=[-0.390955, 0.532663]\), belonging to the region identified in Fig. 17b. From Fig. 18, one can see how the ridges identified by the pseudo-diffusion exponent correspond to ensembles of trajectories that start from the same identical initial condition but, due to the effect of uncertainty, display very different behaviours and diverge quite quickly. On the contrary, regions of low \(\tilde{\alpha }\) correspond to ensembles where trajectories remain close to each other.

Fig. 18 Example of ensemble of trajectories for: a \(\tilde{\alpha }<0.01\) and b \(0.4<\tilde{\alpha }<0.6\). Integration time \(t_f=28\)

8.2 Identification of practical stability regions of the CR3BP

In this section, we show how the indicators proposed in this paper can be used to identify regions of practical stability in the CR3BP in the case in which the model of the dynamical system is uncertain. The analysis in this section extends the one in Pérez-Palau et al. (2015) in that the dynamics is considered uncertain, and thus it reflects more closely the situation in which a space mission to a new binary system is designed. The indicators are calculated for different values of the initial condition \(\textbf{z}_0=[x_0,y_0,0,0]^T\), assuming an uncertainty in the mass parameter. The expected value of the mass parameter is chosen to be \(\mu =0.039\), which is slightly above the limit of the linear stability condition for the triangular points. We then considered an uncertainty on the value of the mass parameter so that \(\mu \in [0.039-10^{-3},0.039+10^{-3}]\). Thus, for some realisations of \(\mu \) the triangular points are linearly stable and for others they are not. The question is whether there are regions around L4 and L5 that provide practical stability for all realisations of the uncertain parameter. Figure 19 shows the regions around L4 identified by the pseudo-diffusion exponent and the SFTLE2 of the first three coefficients of the polynomial expansion. In Fig. 19a, one can read the value of \(\tilde{\alpha }\) for an integration time \(t_f=20\): dark blue means low diffusion, while values equal to 1 (red regions) imply that at least one realisation has a collision with one of the two primary bodies. Figures 19b, c and d show, respectively, \(\sigma _2^1\), \(\sigma _2^2\) and \(\sigma _2^3\), while Fig. 19e shows the FTLE for the nominal value \(\mu =0.039\). Finally, Fig. 19f shows the expectation for \(\epsilon =0.1\); in this last case, yellow regions correspond to low diffusion and no collisions.

At this point, one might want to know whether the solutions that appear to be practically stable for \(t_f=20\) remain so for longer integration times. To this end, we analysed a smaller region around L4: we restricted the ranges of x and y to the intervals [0.3, 0.7] and [0.7, 1.0], extended the integration time to \(t_f=80\) and re-calculated \(\tilde{\alpha }\). The result can be seen in Fig. 20. Figure 20a shows the value of the pseudo-diffusion exponent, where values of 1 correspond to a collision of at least one trajectory in the ensemble. In Fig. 20b, we isolated only the regions for which \(\tilde{\alpha }<0.025\). We then took two random initial conditions, one from region A and one from region B in Fig. 20b, and integrated, from those initial conditions, an ensemble of trajectories for \(t_f=800\). In particular, the two samples have initial conditions \(x=0.446231\), \(y=0.874874\), in region A, and \(x=0.384848\), \(y=0.718182\), in region B. The individual components and the corresponding trajectory ensembles in configuration space are represented in Fig. 20c and e, and in Fig. 20d and f, respectively. Note that region B was identified also in Pérez-Palau et al. (2015), which also presents an example of trajectories similar to the ones in Fig. 20f.

Fig. 19 Stability regions around L4 in the CR3BP: a \(\tilde{\alpha }\), b \(\sigma _2^1\), c \(\sigma _2^2\), d \(\sigma _2^3\), e FTLE, f \(\mathbb {E}_{0.1}\). Integration time \(t_f=20\)

Fig. 20 Close-up of stability regions around L4 in the CR3BP for extended integration time \(t_f=80\): a \(\tilde{\alpha }\), b \(\tilde{\alpha }<0.025\), c components for the sample taken from region A for integration time \(t_f=800\), d components for the sample taken from region B for integration time \(t_f=800\), e trajectories of the sample taken from region A for integration time \(t_f=800\), f trajectories of the sample taken from region B for integration time \(t_f=800\)

9 Conclusions and future work

This paper introduced three indicators that quantify the effect of parametric uncertainty on the time evolution of nonlinear dynamical systems. Two are derived from the concept of finite-time Lyapunov exponents and one from the relationship between mean square displacement and time in the case of anomalous diffusion. It was shown how the three indicators provide consistent information on the dynamics when used to build a cartography of the phase space.

While SFTLE1 simply quantifies the statistical moments of the standard FTLE, the other two indicators were shown to relate the time evolution of the coefficients of polynomial expansions to the chaotic and diffusive nature of the motion. It was also experimentally and theoretically demonstrated that the quantification of the uncertainty in the initial conditions is equivalent to the computation of the FTLE when this uncertainty is infinitesimal.

The paper presented a measure of the probability associated with the diffusion of an ensemble of trajectories. At the same time, it was argued that the weight function does not need to be a probability distribution: any set of polynomials orthogonal with respect to any weight function can be used. More generally, any form of polynomial-based quantification of uncertainty, whether intrusive or non-intrusive, can be used provided that the polynomials can be orthogonalised.

The computational complexity of the calculation of these indicators is mainly related to the complexity of the propagation of uncertainty with polynomials. On the other hand, it was shown that the pseudo-diffusion exponent has a lower computational complexity for the same number of uncertain parameters because it does not require the propagation of the variational equations. Note that in this paper the indicators were computed for a particular final time \(t_f\), but they could equally be computed at multiple times t to study their evolution.

From a practical applicability standpoint, it was shown how the indicators can be used to find sets of robust initial conditions, and how the pseudo-diffusion indicator can identify regions of practically stable trajectories around L4 in the CR3BP, even when the uncertainty in the mass parameter implies that the triangular points can be linearly unstable.

Future work will further extend these indicators to account for stochastic processes driving the dynamical systems and imprecision in the distribution functions.