1 Introduction

The logistic equation introduced by May [27] models the behavior of a population that would grow exponentially if environmental constraints did not limit its growth. We can express it as

$$\begin{aligned} v(n+1)=\eta v(n)(1-v(n)),\quad \text { for }n\in \mathbb {N}_0, \end{aligned}$$
(1)

where \(v(0)\in [0,1]\) and \(\eta \in \mathbb {R}\). This equation provides the simplest example of a one-parameter nonlinear dynamical system with nontrivial dynamics. It is well known that if \(0\le \eta \le 4\), we have a well-defined dynamical system on [0, 1]. For \(\eta > 4\), we can still obtain a discrete dynamical system, but it is only defined on a particular Cantor subset of [0, 1]; see, for instance, [34].

In order to introduce a difference operator that we can later extend naturally to a discrete fractional derivative, we transform the logistic equation by applying the change of variable \(u(n)=\frac{\eta }{\eta -1}v(n)\). In this way, instead of a formula that computes the term \(u(n+1)\) by recurrence, we write the equation in terms of the forward Euler operator \(\Delta \), defined by

$$\begin{aligned} \Delta u(n):=u(n+1)-u(n). \end{aligned}$$
(2)

We thus obtain the logistic equation with parameter \(\mu :=\eta -1\) and the initial condition rescaled by the factor \(\frac{\mu +1}{\mu }\). More precisely, we have

$$\begin{aligned} \Delta u(n)=\mu u(n)(1-u(n)), \qquad u(0)=\frac{\mu +1}{\mu }v(0). \end{aligned}$$
(3)
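For the reader's convenience, substituting \(v(n)=\frac{\mu }{\mu +1}u(n)\) into (1) gives

$$\begin{aligned} \frac{\mu }{\mu +1}\,u(n+1)=(\mu +1)\,\frac{\mu }{\mu +1}\,u(n)\Bigl (1-\frac{\mu }{\mu +1}\,u(n)\Bigr ), \quad \text {that is,}\quad u(n+1)=(\mu +1)\,u(n)-\mu \,u(n)^2, \end{aligned}$$

and hence \(\Delta u(n)=u(n+1)-u(n)=\mu u(n)\bigl (1-u(n)\bigr )\), which is (3).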

Wu and Baleanu [36] considered a fractional version of this dynamical system, obtained by replacing the Euler operator with the left Caputo-like discrete difference operator \(\Delta ^\nu \) defined as follows:

$$\begin{aligned} \Delta ^{\nu }u(n):= \sum _{j=0}^n k^{-\nu }(n-j)u(j), \quad n\in \mathbb {N}_0, \end{aligned}$$
(4)

where \(k^{-\nu }(j)\) is defined by the generating series

$$\begin{aligned} \sum _{j=0}^{\infty }k^{-\nu }(j)z^j = (1-z)^{\nu }, \quad \nu \in \mathbb {R}. \end{aligned}$$
(5)

In recent years, this definition of the difference operator \(\Delta ^{\nu }\) has proven to be very useful due to its convolutional form and because it is equivalent, up to translation, to other operators more commonly used in the literature. See [11, 12, 22, 23] for an overview and properties. It is worth noting that for any \(\nu \in \mathbb {R}\) that is not a positive integer, we have the explicit formula [3]

$$\begin{aligned} k^{-\nu }(j)= \frac{\Gamma (j-\nu )}{\Gamma (-\nu ) j!}, \quad j\in \mathbb {N}_0. \end{aligned}$$
(6)
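As a quick illustration (a minimal sketch in our own code, not part of the original text), the coefficients in (6) can be cross-checked against the Taylor coefficients of the generating series (5), which satisfy the recurrence \(k^{-\nu }(j)=k^{-\nu }(j-1)\,\frac{j-1-\nu }{j}\) with \(k^{-\nu }(0)=1\):

```python
import numpy as np
from math import gamma

def kernel_coeff(nu, j):
    """k^{-nu}(j) = Gamma(j - nu) / (Gamma(-nu) * j!), Eq. (6)."""
    return gamma(j - nu) / (gamma(-nu) * gamma(j + 1))

def series_coeffs(nu, n_terms):
    """Taylor coefficients of (1 - z)^nu, Eq. (5), via the recurrence c_j = c_{j-1} * (j - 1 - nu) / j."""
    c = np.empty(n_terms)
    c[0] = 1.0
    for j in range(1, n_terms):
        c[j] = c[j - 1] * (j - 1 - nu) / j
    return c

nu = 0.6
print([kernel_coeff(nu, j) for j in range(6)])
print(series_coeffs(nu, 6))   # both lines should agree
```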

See also [35, 37] for recent works showing applications of discrete fractional calculus. It is interesting to observe that, with the given definition of \(\Delta ^\nu \), the fractional version of the logistic equation adopts a convolutional form and reads as follows:

$$\begin{aligned} u(n)=\mu \sum _{j=1}^n k^{\nu }(n-j)u(j-1)(1-u(j-1)). \end{aligned}$$
(7)

This representation interprets the fractional version of the logistic equation as one that incorporates a memory kernel indexed by a discrete parameter, and thus a different weighting of the trajectory's history. Given an arbitrary initial condition \(u(0)\in [0,1]\), the trajectory \(\{u(n)\}_{n=1}^N\) obtained with (7) will be called a Wu Baleanu trajectory. It is worth mentioning that for \(\nu =1\) and \(n=1\) we have \(\Delta u(0)=\mu u(0)(1-u(0))\), recovering (3) for \(n=0\). Fixing an initial condition and a fractional parameter \(\nu \), we can generate Feigenbaum diagrams to illustrate the dynamics of this system in terms of the parameter \(\mu \); see Figs. 1 and 2.
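The following minimal Python sketch (our own illustrative code) implements (7) literally; the kernel \(k^{\nu }\) is generated, according to (5), by \((1-z)^{-\nu }\), and its coefficients are computed by the corresponding recurrence. Trajectories of this kind, evaluated over a grid of values of \(\mu \), are what is shown in the Feigenbaum diagrams below.

```python
import numpy as np

def kernel_k(nu, n):
    """Coefficients k^{nu}(0..n) of (1 - z)^(-nu): k(0) = 1, k(j) = k(j-1) * (j - 1 + nu) / j."""
    k = np.empty(n + 1)
    k[0] = 1.0
    for j in range(1, n + 1):
        k[j] = k[j - 1] * (j - 1 + nu) / j
    return k

def wu_baleanu(mu, nu, u0, N):
    """Trajectory u(0), ..., u(N) following Eq. (7)."""
    k = kernel_k(nu, N)
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(1, N + 1):
        f = u[:n] * (1.0 - u[:n])            # u(j-1) * (1 - u(j-1)) for j = 1, ..., n
        u[n] = mu * np.dot(k[n - 1::-1], f)  # convolution with k^{nu}(n - j)
    return u

traj = wu_baleanu(mu=2.5, nu=0.6, u0=0.3, N=200)
print(traj[-100:])   # the last 100 of 200 values, as plotted in Figs. 1 and 2
```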

Fig. 1 Feigenbaum plots for the dynamical system given by (7) for \(u_0 = 0.3\) and \(\nu =0.01\) (top left), \(\nu =0.2\) (top right), \(\nu =0.6\) (bottom left), and \(\nu =1\) (bottom right). For each value of \(\mu \), we compute 200 terms of the sequence and plot the last 100 values

Fig. 2 Feigenbaum plots for the dynamical system given by (7) for \(u_0 = 0.8\) and \(\nu =0.01\) (top left), \(\nu =0.2\) (top right), \(\nu =0.6\) (bottom left), and \(\nu =1\) (bottom right). For each value of \(\mu \), we compute 200 terms of the sequence and plot the last 100 values

Anomalous diffusion trajectories \(\{u(n)\}_{n=1}^N\) are those whose average width or variance, computed as the mean square displacement (MSD) \(\langle (u(n)-u(0))^2\rangle \), does not grow linearly with respect to n, that is, \(\langle u^2\rangle \sim n^\alpha \) with \(\alpha \ne 1\). The exponent \(\alpha \) is known as the anomalous diffusion exponent. Examples of models generating anomalous diffusion trajectories are: Annealed Transient Time Motion (ATTM) [26], Continuous-Time Random Walk (CTRW) [33], Fractional Brownian Motion (FBM) [15, 25], Lévy Walks (LW) [16, 29], and Scaled Brownian Motion (SBM) [20].

Recently, within the framework of the AnDi Challenge [30], machine learning methods, alone or combined with statistical measures, have demonstrated their efficiency in (i) classifying noisy anomalous diffusion trajectories according to one of the previous five generative models and (ii) inferring the anomalous diffusion exponent. We refer the reader to [29] and to subsequent works in which some of these models were fully developed [1, 7, 9]; see also [6, 8, 28]. It is also worth mentioning the recent interest in incorporating machine learning methods and intelligent algorithms into the study of formal mathematical problems. Some examples of this approach apply such techniques to solving nonlinear models, for instance artificial neural networks, swarm optimization, and active-set algorithms [31], or neuro-swarming heuristics [32].

We therefore wonder whether this approach allows us to infer fractional-related characteristics of trajectories. In this work, we study whether we can infer the parameter \(\mu \) and the fractional derivative order \(\nu \) of Wu Baleanu trajectories. We have chosen an architecture based on recurrent neural networks (RNN), the same one that provided the best results in inferring the exponent \(\alpha \) of one-dimensional trajectories in the AnDi Challenge [7, 29], in order to infer these parameters and to measure to what extent there is a straightforward relation between any given trajectory of this type and the corresponding parameters \(\mu \) and \(\nu \) involved in generating it. To the best of our knowledge, this paper is the first to pursue this type of approach. In Sect. 2, we give some details of the model architecture, revisiting some basic fundamentals of our machine learning models. We describe the training, validation, and test data sets, as well as the results, in Sect. 3. Finally, we draw some conclusions in Sect. 4.

2 Architecture of the method

We propose the architecture shown in Fig. 3 to infer the parameters \(\mu \) and \(\nu \) of a given trajectory introduced as input. Such an architecture has been successfully applied to the analysis of trajectories [7] and time series [24]. It is a mixture of convolutional and recurrent neural networks and consists of three parts.

  1. First, we have two convolutional layers that extract spatial features from the trajectories. The first convolutional layer has 32 filters and a sliding window (kernel) of size 5, which slides along each trajectory. The second convolutional layer has 64 filters to extract higher-level features.

  2. Second, the output of the convolutional layers feeds three stacked bidirectional LSTM layers that learn the sequential information. After each of these layers, we include a dropout layer that drops \(10\%\) of the neurons to avoid over-fitting. We tested several dropout levels, from \(5\%\) to \(20\%\), with \(10\%\) giving the best performance.

  3. Finally, we use two fully connected dense layers: the first one with 20 units and the second one with 1 or 2 units, depending on whether we want to predict a single parameter or both of them at the same time. A minimal code sketch of this architecture is given after this list.
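To make the description concrete, the following Keras sketch (our own illustrative code, not the authors' released implementation) assembles the three parts just listed. The filter counts, the kernel size of the first convolutional layer, the dropout rate, and the dense head follow the text; the LSTM width (64 units), the kernel size of the second convolutional layer, the padding mode, the optimizer, and the loss are assumptions made for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_outputs=2, max_len=50):
    """CNN + bidirectional LSTM model sketched in Fig. 3 (hyperparameters partly assumed)."""
    model = models.Sequential([
        layers.Input(shape=(max_len, 1)),                  # zero-padded trajectories of length 50
        layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Dropout(0.1),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.Dropout(0.1),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.1),
        layers.Dense(20, activation="relu"),
        layers.Dense(n_outputs),                           # 1 unit for a single parameter, 2 for (mu, nu)
    ])
    model.compile(optimizer="adam", loss="mae", metrics=["mae"])
    return model

model = build_model(n_outputs=2)
model.summary()
```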

Fig. 3 Machine learning architecture used for inferring the generating \(\mu \) and \(\nu \) parameters of Wu Baleanu trajectories

Our model will be fed with trajectories of length between 10 and 50, which are the most frequent in experiments and the hardest to classify [29]. Let us briefly describe each part of the model:

2.1 Convolutional neural networks (CNN)

Convolutional neural networks preserve the spatial structure of the data. They do so by connecting a patch (or section) of the data to a single neuron, so that each neuron learns the properties of its patch, whose size is defined by the kernel size (5 in our model). This design exploits the fact that spatially close portions of the data are likely to be related and correlated, since only a small region of the input data influences the output of each neuron [4, 19]. The patch is slid across the input sequence, and each position of the patch produces a new output neuron in the following layer. This lets us take into account the spatial structure inherent to the input sequence [17, 18]. Through these layers, we learn trajectory features by weighting the connections between the patches and the neurons, so that each filter extracts a particular feature. By using multiple filters (32 and 64 in our case), the CNN layers extract multiple different features (linear and nonlinear) that feed our LSTM layers.

2.2 Recurrent neural networks (RNN)

Sequential information can be decomposed into single time steps, such as words or characters in language, notes in music, or codons in DNA sequences. When dealing with sequential data, it is very likely that the output at a later time step depends on the inputs at prior time steps. In practice, we need to relate the information of a particular time step to that of prior time steps and to pass this information on to future times.

Recurrent neural networks (RNN) address this problem by adding an internal memory or cell state, denoted by h, which is passed from time t to time \(t+1\), that is, from \(h_t\) to \(h_{t+1}\). This recurrence captures a notion of memory of what the sequence looks like. Therefore, the RNN output is not only a function of the input at a particular time step but also a function of the past memory stored in the cell state. In other words, the output \(\overline{y_t} = f(x_t, h_{t-1})\) depends on the current input \(x_t\) and on the state \(h_{t-1}\), which summarizes the previous inputs to the RNN, as can be seen in Fig. 4.

Fig. 4 Basic representation of a general RNN (left). An RNN with an inner \(\tanh \) activation function (center). A scheme of an LSTM layer (right)

An RNN updates the internal hidden state (or memory state) \(h_t\) by multiplying two weight matrices, \(W_{hh}\) and \(W_{xh}\), by the previous state \(h_{t-1}\) and the current input \(x_t\), respectively. The weight matrix \(W_{hh}\), shared across time steps, is adjusted during training so that the cell learns to fit the desired output, while \(W_{xh}\) modulates the contribution of the input at each time step. The result is passed through a \(\tanh \) activation function that updates the state at each time step, i.e., \(h_t = \tanh \left( W _{hh}^T h_{t-1} + W _{xh}^T x_t\right) \).
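As a toy illustration of this update rule (our own code, with arbitrary dimensions and random weights), the following sketch applies \(h_t = \tanh (W_{hh}^T h_{t-1} + W_{xh}^T x_t)\) along a short input sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 1, 8                              # input and hidden-state dimensions (arbitrary)
W_hh = rng.normal(size=(d_hidden, d_hidden))       # hidden-to-hidden weights, shared across time
W_xh = rng.normal(size=(d_in, d_hidden))           # input-to-hidden weights, shared across time

def rnn_step(h_prev, x_t):
    """One step of a vanilla RNN: h_t = tanh(W_hh^T h_{t-1} + W_xh^T x_t)."""
    return np.tanh(W_hh.T @ h_prev + W_xh.T @ x_t)

h = np.zeros(d_hidden)                             # initial hidden state
for x_t in np.array([[0.3], [0.5], [0.7]]):        # a short trajectory fragment
    h = rnn_step(h, x_t)
print(h)                                           # final hidden state summarizing the sequence
```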

The problem with RNNs arises when dealing with long sequences: composing multiple \(\tanh \) functions makes the gradients, and with them the influence of early inputs on the hidden state, shrink toward values very close or equal to zero. In practice, this means that only recent time steps effectively modify the current cell state or, in other words, that RNNs have short-term memory.

2.3 Long short-term memory (LSTM)

Long short-term memory (LSTM) networks [14, 21] amend the aforementioned short-term memory problem inherent to RNNs by including gated cells that allow them to maintain long-term dependencies in the data and to track information across many time steps, thus improving the modeling of sequential data. The LSTM structure is shown in Fig. 4, where \(\sigma \) and \(\tanh \) stand for the sigmoid and hyperbolic tangent activation functions, and the red circles represent matrix multiplications and additions. An LSTM incorporates a new cell state channel c, which can be seen as a conveyor belt on which information is selectively updated by the gates; it is independent of the previously defined hidden state h and, therefore, of what is output in the form of the hidden state at the current time step.

The composition of one LSTM cell can be seen in Fig. 4 (right); its gates control the flow of information as follows:

  • The first sigmoid gate decides what information is kept or discarded. Since the sigmoid output ranges from 0 to 1, it acts as a switch that modulates how much information from the previous state is kept.

  • The second gate, consisting of a sigmoid and a \(\tanh \) function, stores relevant information in the new cell state channel (c).

  • Then, the outputs of the two previous gates are used to update the cell state (c) selectively.

  • Finally, the last sigmoid and \(\tanh \) functions produce two different outputs: the updated cell state (c), which is forwarded to the next LSTM cell, and the current time step output, which is a filtered version of the cell state and becomes the new hidden state (h).

Further details about LSTM functioning and implementation can be found in [2, 13].
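In standard notation, the gate mechanism just described corresponds to the following single-cell update (a schematic sketch with our own randomly initialized weights, not an extract from any particular library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step with forget (f), input (i), candidate (g), and output (o) gates."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])   # first sigmoid gate: how much of c_prev to keep
    i = sigmoid(W["i"] @ z + b["i"])   # second gate, sigmoid part: how much new information to write
    g = np.tanh(W["g"] @ z + b["g"])   # second gate, tanh part: candidate values for the cell state
    c = f * c_prev + i * g             # selective update of the cell state channel c
    o = sigmoid(W["o"] @ z + b["o"])   # last sigmoid gate: how much of the cell state to expose
    h = o * np.tanh(c)                 # current output: a filtered version of the cell state
    return h, c

d_in, d_h = 1, 4
rng = np.random.default_rng(1)
W = {k: rng.normal(size=(d_h, d_h + d_in)) for k in "figo"}
b = {k: np.zeros(d_h) for k in "figo"}
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(np.array([0.3]), h, c, W, b)
print(h, c)
```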

3 A general model for inferring \(\mu \) and \(\nu \) parameters

We have built three independent data sets to infer both \(\mu \) and \(\nu \) simultaneously. The train, validation, and test data sets have been built with the following parameters:

  • \(\mu \in [2,3.2]\) with increments of 0.001.

  • \(\nu \in [0.01,1]\) with increments of 0.01.

  • trajectory length N, with \(N\in [10,50]\) randomly selected.

  • \(u_0 \in [0,1]\) randomly chosen with a resolution of \(10^{-2}\).

We sample the \(\mu \) range with a finer resolution in order to capture the chaotic dynamics that appear in some regions; see Figs. 1 and 2. We iterate over the values of \(\mu \) and \(\nu \) to build the aforementioned data sets. For each combination of \(\mu \) and \(\nu \) values, we randomly select 5 length values N and 5 different values of \(u_0\), one in each of the intervals [0.0, 0.2], (0.2, 0.4], (0.4, 0.6], (0.6, 0.8], and (0.8, 1.0], thus producing 5 trajectories of different lengths. When computing a trajectory, if we reach a value lower than \(-\)0.5 or greater than 2.0, we stop the generation and save the trajectory as it is, provided it has a length greater than 10. The whole pool of trajectories is split into training (65%), validation (15%), and test (20%) sets. As a result of this procedure, we get a training data set containing 618,199 trajectories, a validation data set of 142,822 trajectories, and a test data set of 190,334 trajectories. The data sets can be found in the supplementary material.
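The following sketch (our own illustrative code) outlines this generation loop. It reuses the hypothetical wu_baleanu helper from the sketch in Sect. 1; the random seed, the handling of the interval endpoints, and the bookkeeping are illustrative choices rather than the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)
u0_bins = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]
trajectories, labels = [], []

for mu in np.arange(2.0, 3.2 + 1e-9, 0.001):
    for nu in np.arange(0.01, 1.0 + 1e-9, 0.01):
        lengths = rng.integers(10, 51, size=5)          # 5 random lengths in [10, 50]
        for (lo, hi), N in zip(u0_bins, lengths):
            u0 = round(rng.uniform(lo, hi), 2)          # initial condition with resolution 10^-2
            traj = wu_baleanu(mu, nu, u0, N)
            out = np.where((traj < -0.5) | (traj > 2.0))[0]
            if out.size:                                # stop at the first value outside [-0.5, 2.0]
                traj = traj[:out[0]]
            if len(traj) > 10:                          # keep the trajectory only if long enough
                trajectories.append(traj)
                labels.append((mu, nu))

# The pool would then be split into training (65%), validation (15%), and test (20%) sets.
```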

In all data sets, we pad each trajectory with 0's at its beginning to make them of a fixed length equal to 50. This homogenizes the lengths and allows us to feed the first convolutional layer of our proposed architecture; see [10, Ch. 5 & Ch. 9]. We used an early stopping callback with a patience value of 20, which in practice means that the model stops training when the validation mean absolute error (MAE) does not improve after 20 consecutive epochs. We used a computer with 16 cores, 128 GB of RAM, and an Nvidia RTX 3090 GPU with 22 GB of RAM, running Ubuntu 20.10. The complete training process took less than 2 h, running up to 23 epochs.
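A sketch of the corresponding preprocessing and training call (again our own code; it reuses build_model from the sketch in Sect. 2, and the toy data, batch size, and epoch cap are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.callbacks import EarlyStopping

# toy stand-ins for the real pools of variable-length trajectories and their (mu, nu) labels
train_trajs = [np.random.rand(n) for n in np.random.randint(10, 51, size=256)]
train_labels = np.random.rand(256, 2)

# zero-pad at the beginning ("pre") up to the fixed length 50 and add a channel axis
x_train = pad_sequences(train_trajs, maxlen=50, dtype="float32", padding="pre")[..., None]

early_stop = EarlyStopping(monitor="val_mae", patience=20, restore_best_weights=True)
model = build_model(n_outputs=2)                  # from the sketch in Sect. 2
model.fit(x_train, train_labels, validation_split=0.2,
          epochs=200, batch_size=256, callbacks=[early_stop])
```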

First, Fig. 5 shows how the predicted values are distributed in terms of the true values. A point on the diagonal indicates that a trajectory generated with a given parameter value (x-coordinate) is assigned exactly that value by the model (y-coordinate). The color bar represents the density of points in the picture. Therefore, the accumulation of points along the diagonal shows that the model predicts the parameters \(\mu \) and \(\nu \) very well. On the one hand, when inferring the parameter \(\mu \), the darker region along the diagonal indicates that the best results are obtained for intermediate values of \(\mu \), between 2.24 and 2.96; on the other hand, the results behave more homogeneously when predicting \(\nu \).

Fig. 5 Truth versus predicted values of \(\mu \) and \(\nu \) in the validation data set

We point out that the results in Fig. 5 do not shed light on the importance of the trajectory length for the accuracy of the predictions. It is expected that the longer the trajectories are, the lower the MAE is. In order to check this, we compare in Fig. 6 the results for the shortest trajectories, with lengths between 10 and 19, and for the longest ones, with lengths between 40 and 50. We see that for long trajectories (lengths between 40 and 50), the points concentrate along the diagonal, which indicates that the model improves its accuracy when predicting values from long trajectories. We also observe that, in the \(\mu \) case, the model improves considerably when predicting values of \(\mu \) higher than 2.96. In contrast, in the \(\nu \) case, the long trajectories show a performance improvement for values lower than 0.5.

Fig. 6 Truth versus predicted values for \(\mu \) and \(\nu \) in the validation data sets for trajectory lengths [10–19] (left) and [40–50] (right)

This effect is not easy to appreciate at first sight in the figures; however, Table 1 confirms it. Except in the last case, as we increase the trajectory length, the results improve, since the parameters can be better identified. In all cases, the MAEs are of the order of \(10^{-2}\), which is the same order of magnitude as the discretization of the \(\nu \) parameter and one order of magnitude larger than that of \(\mu \). This justifies our choice of a finer discretization for \(\mu \) with respect to \(\nu \).

Table 1 \(\mu \) and \(\nu \) MAE in the test data set

Fig. 7 Quartiles Q1 (red), Q2 (blue), and Q3 (green) of the MAE distribution on the evaluation data for \(\mu \) (left) and \(\nu \) (right)

Fig. 8 Quartiles Q1 (red), Q2 (blue), and Q3 (green) of the MAE distribution on the evaluation data set associated with every single value of \(\mu \) and \(\nu \) for trajectory lengths [10–19] (left) and [40–50] (right)

Fig. 9 MAE of \(\mu \) (left) and \(\nu \) (right) on the evaluation data set as a function of \(u_0\). The darkest region coincides with the boxes, the medium gray stands for the whiskers, and the light gray for the outliers

Fig. 10 MAE of \(\mu \) (up) and \(\nu \) (down) on the evaluation data set as a function of \(u_0\) for trajectory lengths [10–19] (left) and [40–50] (right). The darkest region coincides with the boxes, the medium gray stands for the whiskers, and the light gray for the outliers

In order to obtain more insightful descriptions of the MAE, for each true value of \(\mu \) and \(\nu \) we represent the quartiles \(Q_1\), \(Q_2\), and \(Q_3\) of the MAE distribution in Fig. 7. Looking at the quartile values for each value of \(\mu \) and \(\nu \), and especially at Q3 (green), we can see that the predictions are less accurate at the extremes, for \(\mu \in [2.0,2.2]\cup [2.8,3.2]\) and \(\nu \in [0,0.2]\cup [0.8,1]\), than in the complement of these sets, as we already noticed in Fig. 5. However, we see here more clearly that the model is less accurate close to the extreme values \(\nu =0\) and \(\nu =1\), due to the strong connection between the dynamics associated with these two fractional derivative values. We provide a comparison of these MAE distributions for short and long trajectories in Fig. 8.

Due to the fractional nature of equation (7), the model has a memory component that is strongly dependent on the initial condition u(0). We show boxplots of the MAE distribution in terms of u(0) in Fig. 9. We can see that the initial condition is more influential for the \(\mu \) predictions, since the boxes and whiskers are higher than for \(\nu \). Despite this, in both cases the results are slightly worse for initial conditions closer to 1, due also to the nonlinear term of the logistic part of (7). The same conclusion can be drawn when evaluating trajectories by length, as shown in Fig. 10, where we can see that initial conditions close to 1 lead to much higher MAE values for short trajectories, both for the boxes and for the whiskers.

4 Conclusions

The success of machine learning and deep learning models has reached almost all scientific fields. The development of mathematical proofs and arguments seems to be one of the most difficult challenges for these methods. Nevertheless, some barriers have already fallen with the discovery of new multiplication algorithms [5].

Machine learning methods can also help us in modeling tasks and in the search for and fitting of parameters. Along this line, we have shown that these methods allow us to infer the fractional nature of a given trajectory. In particular, we have seen that such a model allows us to elucidate whether, given a set of trajectories, we can propose a fractional model based on the logistic equation that reliably represents the underlying process. Moreover, with a reasonable use of resources, we can tune the model to estimate the parameter \(\mu \) of the model and the order \(\nu \) of the fractional discretization with errors of a similar order of magnitude.

We expect that this study will foster the incorporation of machine learning tools into the study of dynamical systems and the modeling of real-life problems.