Interpreting Recurrent Neural Networks Behaviour via Excitable Network Attractors

Machine learning provides fundamental tools both for scientific research and for the development of technologies with significant impact on society. It provides methods that facilitate the discovery of regularities in data and that give predictions without explicit knowledge of the rules governing a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically black boxes where it is difficult to fully understand what the machine is doing or how it is operating. This poses constraints on the applicability and explainability of such methods. Our research aims to open the black box of recurrent neural networks, an important family of neural networks used for processing sequential data. We propose a novel methodology that provides a mechanistic interpretation of behaviour when solving a computational task. Our methodology uses mathematical constructs called excitable network attractors, which are invariant sets in phase space composed of stable attractors and excitable connections between them. As the behaviour of recurrent neural networks depends both on training and on inputs to the system, we introduce an algorithm to extract network attractors directly from the trajectory of a neural network while solving tasks. Simulations conducted on a controlled benchmark task confirm the relevance of these attractors for interpreting the behaviour of recurrent neural networks, at least for tasks that involve learning a finite number of stable states and transitions between them.


Introduction
Artificial recurrent neural networks (RNNs) are widely used to solve tasks involving temporal data, e.g., speech [17] and handwriting recognition [42], audio classification [24,48] or time series forecasting [8]. RNNs are characterised by the presence of feedback connections in a hidden layer, which allows generating a state-space representation that equips the network with short-term memory capability. RNNs are universal approximators of dynamical systems [14,18], meaning that, given enough neurons in the hidden layer, it is possible to fine-tune the weights to achieve any desired level of accuracy. Nevertheless, training via backpropagation through time is difficult due to the vanishing/exploding gradient problem [22,41]. This has led to the development of new and faster techniques for training RNNs, including a novel paradigm known as reservoir computing [31,32]. Echo state networks (ESNs) [20,29] constitute an important example of reservoir computing, where a recurrent layer (called a reservoir) is composed of a large number of neurons with randomly initialised connections that are not fine-tuned via gradient-based optimisation. The main idea behind ESNs is to exploit the rich dynamics generated by the reservoir through an output layer, the read-out, which is optimised to solve a specific task.

Problem statement and research hypothesis
The high-dimensional and non-linear nature of RNNs complicates the interpretability of their internal dynamics, which are characterised by complex, input-dependent spatio-temporal patterns of activity [43,51]. This poses constraints on understanding the behaviour of RNNs: they are usually viewed as black boxes from which it is hard to extract useful knowledge about their inner workings. As highlighted by recent research efforts [10,23,38], similar interpretability issues affect many other machine learning methods. Furthermore, an increasing societal need for accountability and explainability of decision making by AI [16] is driving the development of methodologies for explaining the behaviour of such methods.
Our aim here is to develop effective models that capture the essential dynamical behaviour of RNNs on computational tasks as input-driven responses of a dynamical system, while neglecting microscopic details of the RNN dynamics in phase space (i.e., the space of all possible neuron activations). To this end, we hypothesise that RNNs can undertake computations by exploiting (i) transient dynamical regimes and (ii) excitable connections to switch between different stable attractors, depending on input and current state. The RNN behaviour depends on the task at hand: for example, long transients are mostly exploited in time series forecasting problems, while switching between attractors is mostly exploited in classification problems or tasks that require learning a finite set of memory states. These mechanisms are not mutually exclusive and can be exploited synergistically to realise more complex computations.

Contribution and paper organisation
In order to test our hypothesis, we develop a theoretical framework based on network attractors of dynamical systems [5,56]. These works consider both heteroclinic network attractors, composed of heteroclinic connections between saddles, and excitable network attractors (ENAs), composed of excitable connections between stable attractors. Without loss of generality, we consider ENAs between stable fixed points [5], although ENAs can similarly be defined between stable limit cycles or strange attractors. More precisely, we show how to construct ENAs describing RNN behaviour for tasks that involve switching between a finite set of attractors. Network attractors can be represented as directed graphs in phase space, from which it is possible to reverse engineer [4] a set of ordinary differential equations ruling, in our case, the RNN behaviour. This, in turn, opens the way to a more analytic description of how RNNs solve computational tasks.
In this paper, we focus on the flip-flop task with two bits [51], since it provides a controlled workbench to test our hypothesis. The input to the discrete-time RNN is assumed to be a discrete temporal sequence with values from {+1, 0, −1}. In general, we consider a discrete-time varying input u[k] and output z[k] of a system related by

    z[k] = R(u[k], x[k]),    (1)

where R represents an RNN with evolving internal state x[k] and connection weights that are optimised during the training phase. We consider the response to inputs where, independently for each k, u[k] is (0, 0) with probability 1 − p or otherwise chosen to be one of the four inputs (±1, 0) or (0, ±1) with equal probability. This generates a sequence of inputs that remain at (0, 0) for an exponentially distributed period of time, but occasionally take one of the four values (0, ±1), (±1, 0). The target output to be learned by the RNN is a two-dimensional vector z[k] that can assume four possible configurations: (1, 1), (1, −1), (−1, 1) and (−1, −1). The system (1) is considered to successfully accomplish the task if the network reliably and accurately reproduces all 20 possible actions described in Table 1.
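As a concrete illustration, the following minimal sketch generates input and target sequences with the statistics described above; the function name and implementation details are our own and are not taken from the paper.

```python
import numpy as np

def generate_flipflop(T, p=0.1, seed=0):
    """Generate T steps of two-bit flip-flop inputs u[k] and targets z[k]."""
    rng = np.random.default_rng(seed)
    pulses = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    u = np.zeros((T, 2))
    z = np.zeros((T, 2))
    state = np.array([1.0, 1.0])      # initial memory state (one of four corners)
    for k in range(T):
        if rng.random() < p:          # with probability p, emit a pulse
            u[k] = pulses[rng.integers(4)]
            j = int(u[k, 1] != 0)     # which bit the pulse addresses
            state[j] = u[k, j]        # set that bit to the sign of the pulse
        z[k] = state                  # target: hold the current memory state
    return u, z

u, z = generate_flipflop(10000)
```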
Here we consider RNNs that are ESNs trained with a perturbation matrix obtained by feeding the output back into the ESN dynamics. This choice does not limit the validity of our hypothesis: we claim that our results are general and describe the behaviour of all discrete-time, input-driven RNNs.

Table 1: The two-bit flip-flop task. Depending on the output z[k − 1] and input u[k], we expect the output z[k] to be as given in the table, where N.C. indicates "no change".

Outputs\Inputs   (0, 0)   (1, 0)     (−1, 0)    (0, 1)     (0, −1)
(1, 1)           N.C.     N.C.       (−1, 1)    N.C.       (1, −1)
(1, −1)          N.C.     N.C.       (−1, −1)   (1, 1)     N.C.
(−1, 1)          N.C.     (1, 1)     N.C.       N.C.       (−1, −1)
(−1, −1)         N.C.     (1, −1)    N.C.       (−1, 1)    N.C.

The contributions of this paper can be summarised as follows:

• We provide a theoretical framework to describe the behaviour of RNNs by means of ENAs. The proposed framework is general and covers several types of computational tasks. However, here we present a specific instance of such a framework that is suitable for describing RNN behaviour on tasks that require learning a finite set of memory states;

• By means of bifurcation analysis, we show how to manually design (low-dimensional) ESNs that give rise to ENAs able to solve the flip-flop task with any number of bits. This allows us to justify our choice of modelling framework and its validity in the context of RNNs;

• As ESNs are driven by inputs and subject to training via perturbation matrices, bifurcation analysis alone is not sufficient to explain their behaviour. In fact, inputs and perturbation matrices can affect ESN behaviour in a non-trivial way. To this end, we introduce an algorithm to extract the ENA describing the ESN behaviour for the task at hand. The algorithm analyses an ESN trajectory and constructs a directed graph encoding the underlying ENA on which the dynamics takes place. The vertices (nodes) of such a graph are associated with the fixed points of the dynamics and the directed edges describe excitable connections between them. We apply this algorithm to an ESN trained to solve the flip-flop task and show that the resulting ENA is able to explain how ESNs perform computations in a detailed and mechanistic way;

• We define a suitable excitability threshold for this high-dimensional, non-linear dynamical system driven by inputs. We propose a method for computing this excitability threshold that accounts for inputs and can be applied directly to trajectories generated by a neural network while solving a task;

• Our simulations suggest three important findings. First, as already noted by recent research [1,51], the dynamics of high-dimensional RNNs takes place in a much lower-dimensional region of phase space that is determined by the structure introduced by training and inputs. Here, we observe that the dynamics is indeed low-dimensional, but highlight the fact that additional dimensions are occasionally used to switch between stable states according to control inputs. This suggests that, for example, a simplistic use of methods based on explained variance to reduce dimensions needs to be avoided. Second, we show how ENA models describing RNN behaviour can be exploited to provide a mechanistic interpretation of errors occurring during the computation undertaken by RNNs. Finally, we show how excitability thresholds of the extracted ENAs allow us to assess the robustness of RNNs to noise-induced perturbations.
The remainder of this paper is organised as follows. Section 2 introduces the essential background material needed in this paper. Section 3 shows how to design low-dimensional ESN models that give rise to ENAs able to solve the flip-flop task. In Section 4, we propose an algorithm for automatically extracting ENAs directly from an ESN trajectory generated while solving a task. In Section 5, we present and discuss the results of the simulations. Finally, Section 6 draws conclusions and points to future research directions. We include three appendices. The first one, Appendix A, introduces notions of linear stability that will be useful throughout the paper. Appendix B discusses bifurcations of fixed points in low-dimensional ESN maps. Appendix C provides details about the procedure used to determine fixed points.

Echo State Networks
We consider a specific system of the form (1), corresponding to a discrete-time ESN state-update and related output:

    x[k] = tanh(W_r x[k−1] + W_in u[k]),    (2)

    z[k] = W_o x[k].    (3)

It is worth mentioning [41] that (2) is often written with the nonlinearity applied to the previous state, i.e., x[k] = W_r tanh(x[k−1]) + W_in u[k]. However, Miller and Fumarola [36] proved that the two formulations are equivalent up to a change of coordinates; they produce the same discrete-time dynamics. In this paper, we build on (2) and consider a network of discrete-time leaky-integrator neurons [21] of the form:

    x[k] = (1 − α) x[k−1] + α tanh(W_r x[k−1] + W_in u[k] + ε[k]).    (4)

Here, α ∈ (0, 1] is called the leak rate and explicitly sets the time-scale of the ESN [52]. The term ε[k] represents additive white Gaussian noise with spherical covariance matrix and unit standard deviation. The reservoir matrix W_r ∈ R^(N_r×N_r) and input matrix W_in ∈ R^(N_r×N_i) are usually random, with i.i.d. entries drawn from uniform or Gaussian distributions [30,31]. However, in the literature it is possible to find reservoirs with different connection patterns, including deterministic topologies [46] and those exploiting the norm-preserving property of orthogonal matrices [35]. Relevant hyperparameters directly affecting ESN performance include the number of neurons and the sparseness of their connections, the spectral radius of the reservoir matrix, and the leak rate α [9,28]. The so-called echo-state property (ESP) [15,33,57] guarantees the existence and uniqueness of a globally attracting trajectory for any input sequence in a compact set. The ESP, although useful in some tasks such as forecasting, is in practice difficult to verify and is formulated only for ESNs of the form (2). Therefore, it is not suitable for the ESN models and tasks we discuss here; see the following subsection.
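The update equations (2)-(4) translate directly into code. The sketch below is our own and borrows the initialisation choices reported later in Sec. 5 (uniform weights, 95% sparsity, spectral radius 0.9); it shows a single step of the leaky-integrator dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Ni, No = 500, 2, 2
W_r = rng.uniform(-1, 1, (Nr, Nr))
W_r[rng.random((Nr, Nr)) < 0.95] = 0.0                 # 95% sparse reservoir
W_r *= 0.9 / np.abs(np.linalg.eigvals(W_r)).max()      # spectral radius 0.9
W_in = rng.uniform(-1, 1, (Nr, Ni))
W_o = rng.uniform(-1, 1, (No, Nr))                     # read-out, to be trained
alpha, noise_std = 1.0, 1e-4

def esn_step(x, u):
    """One step of Eq. (4): leaky integration of the tanh pre-activation."""
    eps = noise_std * rng.standard_normal(Nr)
    return (1 - alpha) * x + alpha * np.tanh(W_r @ x + W_in @ u + eps)

x = esn_step(np.zeros(Nr), np.array([1.0, 0.0]))
z = W_o @ x                                            # read-out, Eq. (3)
```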

Training ESNs with low-rank perturbation matrices
Training of RNNs is typically implemented by means of stochastic gradient descent or variations thereof [47]. Learning long-term dependencies in RNNs with gradient descent is known to be problematic, as a consequence of the so-called vanishing/exploding gradient problem [41]. To address this, different approaches have been proposed, which can be summarised in two categories: (i) methods using gating mechanisms (such as Long Short-Term Memory [19] and Gated Recurrent Unit [12] networks) and (ii) those based on unitary matrices and constant-slope activations [3]. On the other hand, training of the recurrent layer in ESNs is typically realised by perturbing a randomly-initialised reservoir with a low-rank, deterministic matrix. This is conventionally accomplished by feeding back the ESN output to the recurrent layer [25,27,44,45,55] or, as Mastrogiuseppe and Ostojic [34] recently proposed, by designing the reservoir directly as W_r = X + D, where X is a random matrix and D is a deterministic, low-rank matrix encoding the task of interest.
In this paper, we use the feedback of the ESN output as a mechanism for training the recurrent layer; see Fig. 1. The state-update equation (4) then takes the following form:

    x[k] = (1 − α) x[k−1] + α tanh(W_r x[k−1] + W_fb z[k−1] + W_in u[k] + ε[k]),    (5)

where W_fb ∈ R^(N_r×N_o) is a matrix connecting the output with the reservoir (output feedback). By expanding z[k−1] in Eq. 5 with (3), we obtain:

    x[k] = (1 − α) x[k−1] + α tanh((W_r + W_fb W_o) x[k−1] + W_in u[k] + ε[k]),    (6)
where we denoted with

    M := W_r + W_fb W_o    (7)

the "trained reservoir" obtained by means of the low-rank perturbation matrix W_fb W_o. Inputs u[k] in Eq. 6 play the role of control inputs and are typically constant or impulsive signals. The ESN read-out matrix W_o is conventionally determined by solving a regularised least-squares problem. However, we note that online training schemes have also been developed, e.g., the FORCE learning algorithm originally introduced in [50] and further extended by De Pasquale et al. [13]. Depending on whether training of W_o is performed in batch or online mode, z[k−1] in (5) takes the form of either the target signal or the output produced by the ESN (3), respectively. In the first case, the resulting system is called an "open-loop system", with a state-update different from (6); the latter is called a "closed-loop system" [45]. During the test phase, regardless of the adopted training mechanism, the state-update of ESNs is described by Eq. 6.

Figure 1: Illustration of an ESN (6), where its output is fed back into the reservoir (closed-loop system).
In this paper, we consider batch training via regularised least-squares (ridge regression) and analyse the trajectory generated by (6) during the test phase.
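A minimal sketch of this training scheme follows: states are collected in open loop by feeding the target back through W_fb (teacher forcing), the read-out is obtained by ridge regression, and the trained reservoir M of Eq. (7) then closes the loop. The data here are stand-ins, the state noise is omitted for brevity, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Ni, No, T = 100, 2, 2, 2000
W_r = 0.1 * rng.uniform(-1, 1, (Nr, Nr))
W_in = rng.uniform(-1, 1, (Nr, Ni))
W_fb = rng.uniform(-1, 1, (Nr, No))
u = np.zeros((T, Ni))                        # stand-in input sequence
z = np.sign(rng.standard_normal((T, No)))    # stand-in target sequence

X = np.zeros((T, Nr))
x = np.zeros(Nr)
for k in range(1, T):
    # Eq. (5) with alpha = 1 and the *target* fed back (open loop):
    x = np.tanh(W_r @ x + W_fb @ z[k - 1] + W_in @ u[k])
    X[k] = x

reg = 1e-6                                   # ridge regularisation parameter
W_o = np.linalg.solve(X.T @ X + reg * np.eye(Nr), X.T @ z).T
M = W_r + W_fb @ W_o                         # trained reservoir, Eq. (7)
```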

Network attractors
Many common dynamical systems encountered in Nature are dissipative [11,49]. In such systems, the absence of any conservation law brings the system to an attracting subset of dimension strictly less than the original phase space dimension. Intuitively, an attractor A is a subset of phase space to which the system asymptotically converges. We define the basin of attraction of A to be the set of all initial conditions for which the system evolves toward A. Stable fixed points are the simplest possible attractors: points in phase space that represent steady-state solutions where all variables of the dynamical system assume constant values. However, there exist fixed points that are only partially stable (or totally unstable), which are called saddle points (repellers).
Here, we consider the following noise-free discrete-time dynamical system with inputs:

    x[k] = G(x[k−1], u[k]),    (8)

where G : R^(N_r) × R^(N_i) → R^(N_r) is related to Eq. 6 as follows:

    G_(i)(x, u) = (1 − α) x_i + α tanh(M_(i) · x + (W_in)_(i) · u),    (9)

where M is the trained reservoir matrix (7), the subscript (i) denotes the ith row of a matrix and · the usual dot product. As mentioned before, the input signal for the flip-flop task is null most of the time. Hence, it is worth studying the autonomous dynamics

    x[k] = F(x[k−1]) := G(x[k−1], 0).    (10)

Fixed points p ∈ R^(N_r) are solutions of F(p) = p. Related to the notion of attractor is the notion of limit set [11]. The ω-limit set of a point x_0 under the iterated map (10) is defined by

    ω_F(x_0) := ⋂_{n∈N} cl{F^h(x_0) : h > n},    (11)

which is the set of cluster points of the forward trajectory {F^h(x_0)}_{h∈N}. For a given fixed point p, its basin of attraction is formed by all points x ∈ R^(N_r) such that ω_F(x) = {p}. If such a fixed point is stable, then there exists a neighbourhood of p whose ω-limit set corresponds to this fixed point. More complex attractors consisting of networks of invariant sets in phase space have been proposed in the literature [39,56]. Such models have found renewed interest in neuroscience [37,54] and other fields of research, as they provide a fundamental tool to describe dynamic processes occurring on transients that explore excitable connections. Most relevant to our paper, we focus on networks of stable fixed points joined by excitable connections. Following [5,6], we say that there exists an excitable connection for amplitude δ > 0 from stable fixed point p_i to p_j whenever

    B_δ(p_i) ∩ W^s(p_j) ≠ ∅,    (12)

where B_δ(p_i) stands for the closed ball centred on p_i with radius δ > 0 and W^s(p_j) = {x ∈ R^(N_r) | ω_F(x) = {p_j}} denotes the basin of attraction of the fixed point p_j. We say an excitable connection from p_i to p_j has threshold δ_th(p_i, p_j), where

    δ_th(p_i, p_j) := inf{δ > 0 : B_δ(p_i) ∩ W^s(p_j) ≠ ∅}.    (13)

The relation (13) can be informally interpreted as saying that the fixed point p_i is δ_th(p_i, p_j) away from the basin of p_j; see Fig. 2 for a visual explanation. A set X_exc ⊂ R^(N_r) is called an ENA for amplitude δ > 0 if there exists a collection of fixed points {p_i}_{i=1}^M such that

    X_exc({p_i}_{i=1}^M, δ) := ⋃_{i=1}^M ⋃_{k≥0} F^k(B_δ(p_i)),    (14)

where F(B_δ(p_i)) := ⋃_{x∈B_δ(p_i)} F(x) is based on Eq. 10 (see [5,6]). The autonomous dynamics on such a set converges to one of the stable fixed points; hence, external inputs are necessary to obtain interesting dynamics.
If the external input is large enough, then the state will escape from the current basin of attraction and switch to a different one, until another sufficiently large input leads to another change of basin. The excitability threshold (13) is defined as the (Euclidean) distance between a given stable fixed point, p_i, and the basin of attraction of another fixed point, say p_j. Such a quantity measures the minimum distance in phase space necessary to escape from the basin of p_i and converge towards p_j.

Figure 2: Depiction of an excitable connection from p_i to p_j. The red area is a closed ball centred on p_i of radius δ. The blue area represents the stable manifold of p_j, i.e., its basin of attraction. The open circle represents a saddle whose stable manifold (blue curves) denotes the boundary of the basin of attraction of p_j. Some points of B_δ(p_i) converge to p_i itself, while those points beyond the basin boundary converge towards p_j, as suggested by the black arrows.

Nevertheless, if the dynamical system is high-dimensional, then there will be a large number of possible escape directions that could be exploited by inputs. Therefore, excitability thresholds (13) alone may not be representative of nonautonomous systems driven by inputs. For this reason, in order to take into account the action of inputs on the dynamics, we introduce the notion of input-driven excitability threshold of an excitable connection. Given an input signal u[k], k ≥ 0, and a point x_0 ∈ R^(N_r) in phase space, we define the set:

    G(x_0; u[k]) := {G(x_0, u[k]) : k ≥ 0}.    (15)

The set G(x_0; u[k]) contains all states reachable from x_0 under the action of the input signal u[k]. This definition depends only on x_0 and the subset of R^(N_i) formed by the values assumed by the input signal u[k], for k ≥ 0. Considering a compact subset K of R^(N_i), we define

    G(x_0; K) := {G(x_0, u) : u ∈ K}.    (16)

We note that G(x_0; K), as a subspace of R^(N_r), is compact if K ⊂ R^(N_i) is compact. If K represents all possible inputs of a particular task, then we can drop it and denote (16) simply as G(x_0). Therefore, the set G(x_0) denotes all states reachable from x_0 under the action of all inputs allowed by the task under consideration.
We say an excitable connection from p_i to p_j has input-driven excitability threshold δ_inp(p_i, p_j), where

    δ_inp(p_i, p_j) := inf{δ > 0 : B_δ(p_i) ∩ W^s(p_j) ∩ G(p_i) ≠ ∅}.    (17)

The excitability threshold in Eq. 17 has a similar meaning to the one defined in (13), although it considers only the subspace explored by the action of inputs near p_i, where the excitable connection starts. Finally, from the fact that W^s(p_j) ∩ G(p_i) ⊆ W^s(p_j), it follows that

    δ_inp(p_i, p_j) ≥ δ_th(p_i, p_j)    (18)

holds for all fixed points.
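As a toy illustration of definition (13), consider the one-dimensional map F(x) = tanh(3x), which has two stable fixed points p± ≈ ±0.995 separated by the repelling origin; the sketch below (our construction, not from the paper) estimates δ_th by scanning ball radii and checking where the ω-limit changes.

```python
import numpy as np

F = lambda x: np.tanh(3 * x)

def omega(x, n=200):
    """Approximate the omega-limit of x under the iterated map F."""
    for _ in range(n):
        x = F(x)
    return x

p_plus = omega(1.0)        # stable fixed point ~ +0.995
p_minus = -p_plus

def delta_th(p_from, p_to, radii=np.linspace(0, 2.5, 2501)):
    """Smallest delta such that B_delta(p_from) meets the basin of p_to."""
    for d in radii:
        for x0 in (p_from - d, p_from + d):   # boundary of the 1-D ball
            if abs(omega(x0) - p_to) < 1e-6:
                return d
    return np.inf

print(delta_th(p_minus, p_plus))   # ~ 0.995, the distance from p- to the origin
```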

Designing low-dimensional ESNs to solve flip-flop tasks
In this section, starting from ESN models, we show how to manually design ENAs that realise the computations needed for the flip-flop task. This step allows us to justify the choice of the modelling framework adopted here and its validity in the context of RNNs. For this purpose, we rely on bifurcation theory; see Appendix B for technical notions. Bifurcations describe relevant changes in phase space topology that lead to qualitative changes in the system behaviour [26]. Training the recurrent layer of RNNs corresponds to shaping their phase space topology, possibly inducing bifurcations that lead to a qualitative change in behaviour. In particular, we argue that adding a low-rank perturbation matrix to the reservoir (7) induces bifurcations, leading to the creation of attracting regions in ESN phase space useful for solving the task at hand. Let us assume that the origin is the only global attractor for the autonomous system x[k] = (1 − α) x[k−1] + α tanh(W_r x[k−1]) associated with (5). Then, the bifurcation induced by (7) leads to a transition from such trivial dynamics to one where the origin becomes unstable and repels trajectories towards other attracting regions in phase space, where the nonautonomous dynamics actually takes place. In Sec. 3.1, we provide a low-dimensional example of an ESN that is able to solve the flip-flop task; the reservoir of such an ESN is formed by two neurons. In Sec. 3.2, instead, we design an ESN model with a reservoir formed by 2k neurons, which is able to solve the flip-flop task with k bits. For both examples we show the underlying ENA ruling their dynamics.

A minimal-dimension example to solve the two-bit flip-flop task
In order to solve the k-bit flip-flop task, one needs to learn 2^k memory states (stable fixed points) and the related switching patterns dictated by control inputs. Here, we show how to design an ESN with two neurons in the recurrent layer, giving rise to an ENA able to solve the flip-flop task with k = 2 bits. According to the analysis of Appendix B, we know that it is possible to obtain, with k neurons, up to 3^k fixed points, 2^k of which are stable, 3^k − 2^k − 1 are saddles, and 1 (the origin) is a repeller. Therefore, we set the trained reservoir weights (7) according to condition (43), with ω_r = 3 scaling the reservoir weights and b acting as a tuning parameter. As shown in Fig. 3, for every 0 ≤ b < 0.47, the autonomous system x[k] = tanh(M x[k−1]) possesses nine fixed points: four stable points, four saddles and the repelling origin. The input is injected into the ESN via the weight matrix W_in = ω_in I_2, i.e., a scaled 2 × 2 identity matrix, where the scaling factor (called a hyperparameter in machine learning) is set to ω_in = 6. Let us set the remaining parameters in Eq. 6 to ε = 0 and α = 1, and let the ESN output (3) be the identity mapping. Let us assume that, at time k, the system is in a state p = (p_1, p_2) close to one of the four attractors, i.e., p_1 ≈ (1, 1), p_2 ≈ (−1, 1), p_3 ≈ (−1, −1), p_4 ≈ (1, −1). The possible non-null inputs at time-step k are u[k] ∈ {(1, 0), (0, 1), (−1, 0), (0, −1)}. The action of an input pulse is encoded in a vector ∆ ∈ [−1, 1]^2, representing (half) the difference between the state before and after the occurrence of such a pulse, whose components are

    ∆_j = ½ [tanh(ω_r p_j + ω_in u_j) − p_j],  j = 1, 2,    (19)

considering ω_r = 3, ω_in = 6 and, for the sake of simplicity, b = 0, which corresponds to the case of two independent neurons. Hence, the switching mechanism between attractors is ruled by the signs of both p_j and u_j: the former encodes the position and the latter the direction in which to move. Therefore, the state changes if and only if p_j and u_j assume different signs for some j ∈ {1, 2}. In fact, if they have the same sign, then (19) is null because tanh(sgn(p_j)[3 + 6]) ≈ tanh(sgn(p_j) 3), due to the saturating activation function. If instead they have different signs, then (19) becomes close to −1 or 1, according to sgn(p_j), because tanh(sgn(p_j)[3 − 6]) = −tanh(sgn(p_j) 3). Clearly, it is not necessary that |ω_r − ω_in| = ω_r holds, as in our set-up. Although we assumed b = 0 for clarity of explanation, the conclusion is the same for each b ∈ [0, 0.47). However, when the parameter approaches the bifurcation value b ≈ 0.47, the escaping dynamics from certain fixed points becomes significantly slower. Given a set of stable fixed points, a δ > 0 gives rise to a specific ENA; see the definition of an ENA in (14). For instance, a large δ will potentially activate all excitable connections between the attractors. However, in order to properly solve the flip-flop task, some of the connections must not be active, namely the diagonal connections between p_1 and p_3, and between p_2 and p_4.

Figure 3: The whole phase space is divided into four basins of attraction associated with the four stable points. Boundaries between these basins coincide with the four heteroclinic trajectories connecting the origin with the saddles. The legend shows the output values produced at every attractor, which, in this specific example, coincide with the internal state, i.e., the phase-space coordinates of the attractors. Points on the plane have been coloured according to the attractor to which they converged after 100 steps.
Due to symmetry, there are only three different excitability thresholds that an excitable connection can have, namely δ_th(p_2, p_1), δ_th(p_1, p_2) and δ_th(p_1, p_3). In the particular configuration shown in Fig. 3, it holds that δ_th(p_2, p_1) < δ_th(p_1, p_2) < δ_th(p_1, p_3). Therefore, the underlying ENA supporting the nonautonomous ESN dynamics is defined as X_exc({p_i}_{i=1}^4, δ), for every δ_th(p_1, p_2) < δ < δ_th(p_1, p_3). We note that δ_th(p_1, p_3) ≈ √2, regardless of b ∈ [0, 0.47). We can reduce the size of the basins of attraction of p_2 and p_4 by increasing the value of b, which in turn reduces the excitability thresholds of the corresponding connections, until at b ≈ 0.47 a bifurcation occurs making them disappear. However, the basins of attraction of the other two stable points, p_1 and p_3, become bigger (δ_th(p_1, p_2) ≈ 2 − δ_th(p_2, p_1)), thus increasing their excitability thresholds. Therefore, when the ESN state is close to p_1 (or p_3), the input needs to be very precise to throw the state back into the narrow basins of p_2 and p_4. Moreover, it must have a large amplitude compared to what is needed to escape from such narrow basins. Nevertheless, setting ω_in large enough depending on b (up to a value of ω_in = 6, which is enough for every 0 < b < 0.47), the system is able to reproduce the flip-flop dynamics without errors. Unfortunately, it is not possible to design a two-dimensional ESN for the two-bit flip-flop where the excitability of each attractor can be tuned by changing the location of the nearby saddles. Nevertheless, as we will show in the next section, this becomes possible by including two additional dimensions in the model.
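The design above can be verified in a few lines. The following sketch (our own) instantiates the b = 0 case with ω_r = 3 and ω_in = 6 and replays a short pulse sequence; the signs of the two state components track the stored bits.

```python
import numpy as np

omega_r, omega_in = 3.0, 6.0
W_r = omega_r * np.eye(2)        # b = 0: two independent neurons
W_in = omega_in * np.eye(2)

x = np.array([0.9, 0.9])         # start near attractor p1 ~ (1, 1)
for u in [(0, 0), (-1, 0), (0, 0), (0, -1), (1, 0)]:
    x = np.tanh(W_r @ x + W_in @ np.asarray(u, dtype=float))
    print(u, np.round(x, 2))
# printed signs: (1,1), (-1,1), (-1,1), (-1,-1), (1,-1)
```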

A 2k-dimensional model for k-bit flip-flop tasks
In this section, we propose a 2k-dimensional model able to solve k-bit flip-flop tasks. For the sake of clarity, but without loss of generality, we show the k = 2 bit case. The key insight is to model the switching dynamics between two fixed points in a two-dimensional space and then build a model formed by two independent systems with two-dimensional switching dynamics.
Unlike the previous example, here we want to tune the excitability of all connections. This can be obtained by imposing the condition in (42) and tuning the reservoir weights to change the position of the saddles. Therefore, we define a two-dimensional reservoir matrix W(s), where s is a real parameter controlling the position of the saddles (the explicit entries follow from condition (42)). Hence, for 0 ≤ s < 2.15, the autonomous dynamical system defined by x[k] = tanh(W(s) x[k−1]) possesses two stable fixed points, p_1 and p_2. Let us define the input matrix W_in, whose only non-zero entries are given by a positive real parameter ω_in. For instance, setting s = 2 and ω_in = 1, the two-dimensional dynamical system defined by x[k] = tanh(W(2) x[k−1] + W_in u[k]) is able to accomplish the switching mechanism between p_1 and p_2 according to the inputs (1, 0), (−1, 0), and to ignore the other inputs, i.e., (0, 1), (0, −1). Of course, replacing the zeros in W_in with relatively small values (compared to ω_in) does not change the results.
The complete system able to solve the two-bit flip-flop dynamics can be obtained by defining the reservoir as the following four-dimensional block-diagonal matrix,

    M := [ W(s)   0
           0      W(s) ],    (21)

where 0 denotes a matrix of all zeros. Starting from a given initial condition, for every 0 ≤ s < 2.15, the state of the four-dimensional autonomous dynamical system x[k] = tanh(M x[k−1]) converges to one of the four attractors (p_1, p_1), (p_1, p_2), (p_2, p_1), (p_2, p_2). In order to produce a two-dimensional output (3) suitable for the task at hand, we define

    W_o := [ 1 0 0 0
             0 0 1 0 ],

which corresponds to a projection onto the first and third components, i.e., (x_1, x_2, x_3, x_4) → (x_1, x_3). Finally, we define the input matrix by stacking two copies of the two-dimensional input matrix, one per diagonal block, so that the first input component drives the (x_1, x_2) subsystem and the second drives the (x_3, x_4) subsystem. The dynamics remains qualitatively the same even if the coupling between the two two-dimensional systems is weak, i.e., if the zero matrices in (21) are replaced by matrices with relatively small (absolute) values. With these definitions in mind and, as before, ε = 0 and α = 1, the ESN ruled by (6) and (3) correctly implements the flip-flop task with two bits.
As the dynamics of the subsystems (x_1, x_2) and (x_3, x_4) are independent of each other, the set of fixed points of the overall dynamics is the Cartesian product of the sets of fixed points of the (x_1, x_2) and (x_3, x_4) systems. This gives rise to a large number of fixed points: there are 4 stable points, 8 saddles with 1 unstable direction, 8 saddles with 2 unstable directions, 4 saddles with 3 unstable directions and 1 repeller. However, the set of fixed points where the nonautonomous flip-flop dynamics takes place is formed by the 4 stable fixed points and the 8 saddles with only one unstable direction; see Sec. 5.1 for a detailed example. Every stable fixed point is close to a pair of saddles and, due to symmetry, they are all at the same distance δ_th(p_1, p_2). This quantity defines the excitability thresholds of the connections needed in the flip-flop task, and therefore implicitly defines the ENA ruling the behaviour of the four-dimensional ESN.

Extracting ENAs from the ESN trajectory
In this section, we describe the proposed algorithm to extract an ENA from an ESN trajectory. The algorithm, schematically illustrated in Fig. 5, takes a trajectory of the nonautonomous ESN (6) as input and produces as output a weighted directed graph representing the ENA that describes the ESN behaviour. The algorithm is composed of two main steps. In Sec. 4.1, we describe the procedure to find fixed points of the ESN dynamics (corresponding to the vertices of the graph). Successively, in Sec. 4.2, we show how to determine the excitable connections and related thresholds between stable fixed points (corresponding to the weighted directed edges).

Finding fixed points of the dynamics
The optimisation algorithm to find fixed points is based on Sussillo and Barak [51]. The key idea is to define a scalar function whose minima correspond to fixed points of the ESN dynamics. We call velocity field the vector field Q : R^(N_r) → R^(N_r) defined as

    Q(x) := F(x) − x,    (23)

with F being the map in Eq. 10. Therefore, the velocity field takes the following form:

    Q(x) = α [tanh(M x) − x],    (24)

which is the vector that needs to be added to the current state x[k] to obtain the next one. In fact, x[k+1] = x[k] + Q(x[k]). We define a "kinetic energy" of the autonomous system (10) as the scalar function

    q(x) := (1/2) ||Q(x)||^2.    (25)

By this definition, fixed points x* ∈ R^(N_r) satisfy Q(x*) = 0 if and only if q(x*) = 0. Fixed points are hence identified as the global minima of (25). We used the well-known quasi-Newton algorithm BFGS [40] to minimise (25). In order to speed up the optimisation by several orders of magnitude, we explicitly provided the gradient of (25) to BFGS, which reads:

    ∇q(x_0) = J_Q(x_0)^T Q(x_0),  with  J_Q(x_0) = α [D(x_0) M − I_(N_r)],    (26)

where I_(N_r) is the N_r × N_r identity matrix, J_Q(x_0) denotes the Jacobian matrix of the velocity field (23) and D(x_0) is the diagonal matrix defined in (31) of Appendix A, both evaluated at x_0. The initial conditions for minimising (25) are determined by randomly sampling states from a trajectory of the nonautonomous ESN (6). The convergence of the BFGS algorithm depends on a tolerance. In fact, the algorithm might converge to similar solutions that, depending on the chosen tolerance, are numerically different. As these solutions represent fixed points of the dynamics, we aggregate them in a meaningful way and return a non-redundant set of fixed points. For this purpose, as a post-processing step, we run a clustering algorithm on the set of all solutions and return only cluster representatives; details are provided in Appendix C.
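A sketch of the fixed-point search follows: the kinetic energy (25) and its gradient (26) are handed to SciPy's BFGS implementation. The reservoir M here is a stand-in; in the paper, M is the trained reservoir and the initial conditions are sampled from an ESN trajectory.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
Nr = 50
M = rng.uniform(-1, 1, (Nr, Nr)) * (2.0 / Nr**0.5)   # stand-in trained reservoir
alpha = 1.0

def q_and_grad(x):
    """Kinetic energy (25) and its gradient (26) for the map (10)."""
    Qx = alpha * (np.tanh(M @ x) - x)                # velocity field (23)-(24)
    D = np.diag(1.0 - np.tanh(M @ x) ** 2)           # D(x), Eq. (31)
    J_Q = alpha * (D @ M - np.eye(Nr))               # Jacobian of Q
    return 0.5 * Qx @ Qx, J_Q.T @ Qx

x0 = rng.standard_normal(Nr)                         # would be a sampled state
res = minimize(q_and_grad, x0, jac=True, method="BFGS", options={"gtol": 1e-12})
if res.fun < 1e-6:                                   # tolerance on (25), cf. Sec. 5
    print("fixed point found, q =", res.fun)
```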

Determining excitable connections between attractors
Once the stable fixed points have been identified, we determine the excitable connections between them. As discussed before, the ESN training (7) induces bifurcations that generate new fixed points, and possibly also other attracting regions in phase space that, however, are not explicitly modelled in this work. The ESN is driven by control inputs that allow it to correctly perform switches between stable states. As a consequence, we first need to understand how such inputs affect the dynamics from a geometric point of view. This is done in Sec. 4.2.1 by introducing the notion of local switching subspace (LSS). Then, in Sec. 4.2.2, we describe how we determine all excitable connections that are relevant for the task under consideration and compute their excitability thresholds. The method we propose is based on a grid of points lying in the LSS, accounting for the input action on the dynamics. By simulating the autonomous dynamics with initial conditions taken from such a grid, we are able to approximate the input-driven excitability thresholds (17) and also to quantify how likely it is that the RNN uses such connections while solving the task.

Local switching subspaces
Let us consider an ESN trajectory (6), x[0], x[1], x[2], ..., x[k], ..., obtained with inputs u[1], u[2], ..., u[k+1], .... Moreover, let us denote with K := {k_i}_{i∈N} the set of time indices for which u[k_i] ≠ 0. We define the pulse difference vector (PDV) as the vector containing the difference between the pre- and post-input states, namely

    d_i := x[k_i] − x[k_i − 1].

The PDVs convey relevant information about the action of inputs on the ESN state, and we will therefore exploit such information to determine the excitable connections and related thresholds. We remind the reader that the number of non-zero inputs is controlled by the parameter p of the input distribution (see Eq. 1 and the related discussion in the text), which in turn determines the (average) number of PDVs available for the following analysis.
In order to compute only those connections that are actually used by the ESN while solving the task, we need to focus on the action of inputs in the neighbourhood of each attractor. To this end, we consider a Euclidean ball B_r(p) of radius r ≥ 0 centred on an attractor p, and call local PDVs of p the finite set of PDVs that originate inside B_r(p). Referring to definition (16), the local PDVs of p can be thought of as a discrete sampling of the manifold G(B_r(p)) = ⋃_{x∈B_r(p)} G(x). We note that, as r → 0+, the set G(B_r(p)) approaches G(p). For each attractor p, we perform a principal component analysis of the related local PDVs and retain only l ≪ N_r principal components. The projection onto such l basis vectors defines a vector space that we denote as L(p). We define the LSS of p as the affine space composed by attaching the vector space L(p) to p, and we denote it as p + L(p); see Fig. 6, top panel.
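The sketch below (our own) assembles the local PDVs of an attractor p and extracts the l leading principal directions via an SVD; whether the PDVs should be mean-centred before the PCA is a detail not specified in the text, so centring here is an assumption.

```python
import numpy as np

def local_switching_subspace(states, inputs, p, r=0.5, l=3):
    """Return an (l x Nr) orthonormal basis of L(p) from the local PDVs of p."""
    pulse_idx = np.flatnonzero(np.any(inputs != 0, axis=1))
    pdvs = [states[k] - states[k - 1]            # d_i = x[k_i] - x[k_i - 1]
            for k in pulse_idx
            if k >= 1 and np.linalg.norm(states[k - 1] - p) < r]
    pdvs = np.asarray(pdvs)                      # assumes enough pulses near p
    _, _, Vt = np.linalg.svd(pdvs - pdvs.mean(axis=0), full_matrices=False)
    return Vt[:l]    # rows span L(p); the LSS is the affine space p + L(p)
```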

Estimation of excitability thresholds
The idea is to simplify the nonlinear manifold G(B_r(p_i)) and describe it as a grid of points G(p_i). Then, we use the points of the grid as initial conditions for the autonomous system (10), which is iterated for a sufficiently large number of steps to ensure convergence near the attractors. Finally, tracing the evolution of these initial conditions allows us to estimate excitability thresholds and other relevant quantities.
The proposed algorithm for finding excitability thresholds is graphically illustrated in Fig. 6. The algorithm is based on a hypercube H(p_i), centred on attractor p_i, that is contained within the LSS p_i + L(p_i). The side length of the hypercube is such that H(p_i) contains the projection of G(B_r(p_i)) onto p_i + L(p_i). In order to estimate excitability thresholds, we consider a mesh with a pre-defined density of points on the hypercube H(p_i), thus obtaining a grid of points G(p_i) = {g_j^i}. To simplify the notation, we use a single index j to enumerate the points of the grid. Through the ω-limit set (11) of the grid G(p_i), that is,

    ω_F(G(p_i)) := ⋃_j ω_F(g_j^i),

we can define the positive integer c(p_i) := |ω_F(G(p_i))| − 1, which counts the number of excitable connections originating from p_i. Indeed, ω_F(G(p_i)) is composed of a set of stable fixed points, {p_{i_0}, p_{i_1}, p_{i_2}, ..., p_{i_c(p_i)}}, determining the endpoints of the c(p_i) different excitable connections. Note that, if the grid is sufficiently dense, then the attractor itself is always included in such a set of fixed points. For that reason, it must not be counted as an excitable connection. In what follows, we assume that the attractor is the one indexed by i_0, i.e., p_{i_0} = p_i.

Figure 6 (bottom right): illustration of the partition of a two-dimensional grid G(p_i). Every colour represents a subset (27). Dashed black lines track the boundaries of the basins corresponding to those attractors reachable from p_i through excitable connections, which can be enabled by inputs allowing the exploration of the hypercube centred on p_i.
With these definitions in mind, we are ready to compute the thresholds of excitable connections. To this end, let us denote with σ_i(·) an indexing function such that p_{σ_i(j)} = ω_F(g_j^i). Then, through the preimage σ_i^{-1}(·), we obtain a partition of the points of the grid into the subsets

    σ_i^{-1}(i_t) = { j : ω_F(g_j^i) = {p_{i_t}} },  t = 0, 1, ..., c(p_i).    (27)

The points of the grid belonging to the subset defined by (27) are all destined to converge to p_{i_t}. Therefore, we estimate the excitability threshold (17) of the connection from p_i to p_{i_t} as follows:

    δ_inp(p_i, p_{i_t}) ≈ min_{j ∈ σ_i^{-1}(i_t)} ||g_j^i − p_i||.    (28)

The excitability thresholds (28) represent geometric properties of the attractors and related basins in phase space learned through training. In order to determine the effective excitability (accounting for inputs) of each connection outgoing from p_i, we exploit the local topology of the LSS of p_i by means of the grid partition (27) induced by σ_i(·). To this end, we define the ratio of initial conditions taken from the grid that converged to attractor p_{i_t} as follows:

    ν_{i,i_t} := |σ_i^{-1}(i_t)| / ( |G(p_i)| − |σ_i^{-1}(i_0)| ).    (29)

In the limit of an infinite number of points in the grid, the quantity (29) converges to the ratio of volumes between the portion of the hypercube belonging to the basin of p_{i_t} and the portion of the hypercube that does not belong to the basin of p_i. Therefore, dense grids give ratios (29) providing accurate information about how the LSS is distributed between the basins of attraction of stable fixed points. Finally, merging both (28) and (29) into a single expression, we define the effective excitability of the connection from p_i to p_{i_t} as follows:

    β_{i,i_t} := ν_{i,i_t} / δ_inp(p_i, p_{i_t}).    (30)

Note that this quantity takes into account both the distance between the attractor p_i and the basin's boundary, and the volume of the basin itself. A low value of β_{i,i_t} indicates that it is difficult to activate the connection from p_i to p_{i_t} during the task. This may be due (i) to the small volume occupied by the basin of the attractor p_{i_t} in the LSS of p_i or (ii) to a very high excitability threshold associated with such a connection. On the other hand, β_{i,i_t} ≫ 1 necessarily implies that such a connection has a low excitability threshold, since ν_{i,i_t} ∈ [0, 1]. As a consequence, the distance between the basin of attraction of p_{i_t} and p_i is small, and thus the connection can be easily activated during the task.
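A compact sketch of the whole estimation loop is given below, with the effective excitability composed as β = ν/δ, following the reconstruction of (30) above. The map F, the attractor matrix P (one attractor per row), the source index i and the grid G (rows lying in H(p_i)) are assumed given.

```python
import numpy as np

def effective_excitability(F, P, i, G, n_iter=500):
    """Estimate (delta, nu, beta) of Eqs. (28)-(30) for each connection from p_i."""
    ends = []
    for g in G:
        x = g.copy()
        for _ in range(n_iter):              # converge near some attractor
            x = F(x)
        ends.append(int(np.argmin(np.linalg.norm(P - x, axis=1))))
    ends = np.asarray(ends)
    escaped = np.count_nonzero(ends != i)    # grid points leaving the basin of p_i
    out = {}
    for t in set(ends) - {i}:
        mask = ends == t
        delta = np.min(np.linalg.norm(G[mask] - P[i], axis=1))   # Eq. (28)
        nu = np.count_nonzero(mask) / escaped                    # Eq. (29)
        out[t] = (delta, nu, nu / delta)                         # Eq. (30)
    return out
```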
Remarks on computational complexity There are three parameters controlling the complexity and, accordingly, the accuracy of the grid search: the dimension ζ_1 of the hypercube, the length ζ_2 of the hypercube's edge, and the number of points ζ_3 on each edge, which determines the density of the grid. ζ_3 and ζ_2 have a linear and a polynomial impact on the computational complexity of the algorithm, respectively. However, ζ_1 is more critical, as it exponentially increases the number of points in the grid and, accordingly, the overall complexity. In the simulations, we always set ζ_1 = l, that is, the dimension of the hypercube equals the dimension of the LSS, which in our case is always a low-dimensional subspace. More generally, the dimension of LSSs depends on the complexity of the inputs and their impact on the dynamics, which needs to be assessed on a case-by-case basis.

Simulations
In this section, we discuss the results of the simulations, highlighting different aspects of our theoretical developments. In Sec. 5.1, in order to evaluate the correctness of the algorithm proposed in Sec. 4, we apply it to the manually designed, low-dimensional ESN maps discussed in Sec. 3.1 and Sec. 3.2. Successively, in Sec. 5.2, we apply our method to high-dimensional trained ESNs. First, we show that, even though the ESN is high-dimensional, the dynamics is always low-dimensional due to the low-rank perturbation introduced during training. Second, we show the usefulness of ENAs for giving a mechanistic interpretation of prediction errors occurring during the task execution. Finally, in Sec. 5.3, we show how ENA models can be used to assess the robustness to noise of trained ESNs.
In all simulations, we control inputs with p = 0.1 and set the tolerance on the kinetic energy (25) for detecting minima to 10^−6.

Minimal-dimension model
The LSS, determined as described in Sec. 4.2.1, corresponds to the whole 2D phase space. As a consequence, excitability thresholds (13) match the corresponding input-driven counterparts (17). For each attractor, the Euclidean distances from the two closest saddles are consistent with the estimates of the excitability thresholds provided by our method. As discussed in Sec. 3.1 and graphically represented in Fig. 7, bottom-centre panel, we find three excitability thresholds in the computed ENA: 0.49, 1.39 and 1.42. In Fig. 7, bottom-right panel, we show that undesired connections (i.e., connections corresponding to state transitions that are not defined in the flip-flop task) between the fixed points (0.97, −0.97) and (−0.97, 0.97) have very low effective excitability values (30), indicating that it is unlikely that such connections are used during the task execution. The low effective excitability is due to the small volume ratio (29) of the basins associated with the two attractors. On the other hand, larger thresholds characterise the undesired connections from (−1, −1) to (1, 1), reflecting the disproportion between the basin volumes of (1, 1), (−1, −1) and (0.97, −0.97), (−0.97, 0.97).

Four-dimensional model The first two principal components are related to all identified fixed points and produce a cumulative variance ratio larger than 0.96; Fig. 8 shows a trajectory of the map described in Sec. 3.2 and the related fixed points projected onto the plane spanned by the two principal components. For every attractor, the computed LSS is a two-dimensional plane; this is expected, since it is actually the plane depicted in Fig. 4. Furthermore, we note that none of these planes is aligned with the one where the attractors lie, stressing the importance of defining reference frames local to each attractor that take the action of inputs into account. Fig. 8, middle-right panel, shows how the input moves the states out of the plane, using additional dimensions for the switches. The computed ENA, weighted with excitability thresholds (28) and effective excitability values (30), is shown in Fig. 8, bottom-left and bottom-right panels, respectively. Symmetries of the dynamical system are clearly present in the resulting directed graphs. All desired connections are characterised by an excitability threshold of 0.83, while undesired ones have higher thresholds equal to 1.19. We note that the presence of undesired connections does not imply that the ESN actually used such connections during the task execution. To this end, the effective excitability values (30), shown in the bottom-right panel of Fig. 8, provide us with a more realistic picture of the behaviour under the action of inputs. In fact, effective excitability values are very low for undesired connections, implying that the LSS used by inputs is mostly occupied by basins corresponding to attractors adjacent to the end-points of desired connections.

Application of the proposed method to high-dimensional trained ESNs
From now on, all results are obtained using ESNs (6) with a reservoir composed of 500 neurons. Our theoretical framework does not depend on the number of neurons; this choice is mostly dictated by hardware and computing-time considerations. For the grid-search algorithm, we used ζ_1 = 3, ζ_2 = 18 and ζ_3 = 12. The state-update (6) is configured without leakage, α = 1; the standard deviation of the noise is set to 10^−4 during training. Entries of the matrices W_in, W_fb and W_r are i.i.d. drawn from a uniform distribution in [−1, 1]; the sparseness of W_r is 95%. Moreover, the reservoir matrix was rescaled to obtain a spectral radius equal to 0.9. Finally, the training set length is always 50000 time-steps.
Low-dimensional dynamics Fig. 9, top panel, shows the output produced by an ESN achieving high prediction performance. The extracted ENA, shown in the bottom-right panel, reveals that undesired connections have excitability thresholds (17) significantly higher than those of desired connections. This means that, in the LSS of every stable point, basins corresponding to attractors adjacent to the end-points of undesired connections stand relatively far away from the stable point compared to basins of attractors adjacent to the end-points of desired connections. The fixed points of the dynamics lie in a two-dimensional subspace of the 500-dimensional phase space (the cumulative variance ratio is close to 1), highlighting that the dynamics of ESNs is low-dimensional after training. Nevertheless, during the task execution, the trajectory is occasionally driven away from such a 2D plane by the inputs. The cumulative variance ratio for the identification of the LSS of every attractor, computed as described in Sec. 4.2.1, revealed that the switching between stable fixed points takes place in a three-dimensional subspace of the phase space. It is worth stressing that such LSSs are usually not aligned with the standard coordinate system of the original phase space or with the subspaces where the attractors lie, suggesting that inputs operate in phase-space regions that are disjoint from the low-dimensional linear subspace of the attractors; this is consistent with what is reported in [51].
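The dimensionality statements above can be checked with a standard PCA of the visited states; the following helper (ours) returns the cumulative variance ratio of the leading components.

```python
import numpy as np

def cumulative_variance(states, n=5):
    """Cumulative explained-variance ratio of the first n principal components."""
    X = states - states.mean(axis=0)       # (T x Nr) matrix of visited states
    s = np.linalg.svd(X, compute_uv=False)
    var = s ** 2 / np.sum(s ** 2)
    return np.cumsum(var)[:n]
```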
Computation accuracy and spurious attractors Various measures of accuracy exist for quantifying performance on prediction tasks. For instance, the mean squared error (MSE) is typically adopted in tasks involving continuous targets. The MSE is a real-valued scalar that informs us about how close the computed output is to the target. However, it is not possible to infer additional insights by looking only at the MSE; for instance, it is not possible to provide a mechanistic description of errors in the computation, i.e., why and how they occur. Here, we show how the effective ENA extracted from the ESN trajectory can be used to describe how the computation takes place in phase space and, in particular, how to diagnose the nature of prediction errors. The top panel in Fig. 10 shows the output produced by an ESN achieving a low MSE of the order of 10^−3. The small errors observable around time-step 13200 can be explained by looking at the ENA model depicted in the bottom panels of Fig. 10, which is represented with excitability thresholds (28) in the left panel and with effective excitability values (30) in the right panel. The directed graphs (whose topology is clearly identical) reveal the presence of two extra stable fixed points in the ESN phase space, which are generated during training. Nodes of these graphs are coloured according to the output values produced by the ESN. The activation of the related excitable connections brings the ESN to operate in the proximity of such superfluous states, producing inaccurate output values and hence explaining the origin of such errors. Notably, the directed edge (in the graph on the right-hand side) connecting the cyan with the yellow node has a relatively high value of effective excitability. This means that, in the LSS of the related attractor (cyan), whose output is (−1.03, −1.07), it is relatively easy, starting near the attractor, to transition inside the basin of the other attractor (yellow). This, in turn, produces an output value equal to (−0.84, −0.40), which is significantly different from the target output, i.e., (−1, −1). The prediction error, visible to the naked eye in the top panel around time-step 13200, is indeed due to the activation of that excitable connection.
It is worth emphasising that both spurious attractors act as surrogates for the correct ones (i.e., those producing lower prediction errors). In fact, once the internal state switches to a spurious attractor, the ESN still behaves consistently. More precisely, spurious attractors are associated with higher effective excitability values on connections ending in the correct attractors; see the bottom-right panel of Fig. 10. The discovery of such extra attractors and the unravelling of their role in the ESN computation could not be inferred by looking only at the MSE or by plotting the outputs produced by the ESN, hence stressing the importance of ENA models for describing ESN behaviour.

Noise tolerance and effective excitability of ENAs
Here, we aim to provide a further example of the importance of ENAs for characterising the computation of ESNs. In particular, we ask the following question: given a set of ESNs trained on the same task and achieving the same performance during training, which one will be more robust to noise during the test phase? Typical performance measures, such as the MSE, cannot be used to answer such a question or provide useful insights. We show that an ENA model of the computation, weighted with effective excitability values (30), allows us to assess the robustness of ESNs to noise.
As a perturbation, we consider the usual white Gaussian noise term in the state-update (6), corresponding to perturbations applied directly to all neuron pre-activation values. We stress that such perturbations are applied only during the test phase; training is implemented as described in the previous section. By increasing the noise standard deviation, the ESN trajectory becomes increasingly perturbed, preventing it from reaching the proximity of the attractors and hence affecting the accuracy of the resulting output values.
We note that: (i) ESNs yielding ENAs with higher effective excitability values are less robust to noise perturbations than ESNs giving rise to ENAs with low effective excitability values (see [6]), and (ii) ESNs producing ENAs with balanced edge weights on desired outgoing connections are more robust to noise perturbations than those producing ENAs with unbalanced outgoing weights. Regarding (i), high effective excitability values imply the existence of connections with low excitability thresholds. That is, the basin of the attractor corresponding to the end-point of such a connection is very close to the attractor at the origin of the connection, resulting in unnecessary switches that induce errors. Concerning (ii), for a given attractor, unbalanced outgoing connection weights could be a symptom of an unbalanced distribution of space between basins of attraction in the LSS, or of the existence of some basins significantly closer to the attractor than others. In the latter case, a reasoning similar to (i) applies. On the other hand, if there is indeed a disproportionate distribution of volumes between basins, then the basins of attractors corresponding to connections possessing large volume ratios will fill most of the LSS, leaving little space for the basins of those attractors related to connections with small volume ratios. Therefore, if the ESN state is close to an attractor with unbalanced outgoing connection weights but similar excitability thresholds, then under noise perturbations it is more likely to switch towards an attractor reachable through a connection with high effective excitability, even when such a connection should not be activated. For these reasons, an ENA where each node is characterised by balanced weights on outgoing connections and very low effective excitability values on undesired connections provides a prototypical example of an ESN that is robust to noise.
To quantify what has been discussed so far, we selected two ESNs that solve the two-bit flip-flop task with very high and comparable accuracy; the MSE during training is 10^−4 for both. Successively, we tested them on the same test series composed of 10^5 time-steps and recorded their MSEs under different noise instances with increasing standard deviation. Directed graphs representing the extracted ENAs are shown in Fig. 11. Results for increasing noise standard deviation are divided into two columns: in the left column, we report the results obtained by the ESN that is least tolerant to noise. The directed graph on the left-hand side of Fig. 11 shows the presence of two undesired connections. Furthermore, the edge weights of desired connections are not balanced, especially those outgoing from the attractors with output values (1, 1) and (−1, −1). Conversely, the directed graph on the right-hand side does not contain undesired connections and possesses balanced outgoing weights. The absence of undesired connections indicates that, in the LSS of every attractor, the basins of the attractors corresponding to undesired connections either do not exist or are very small and hence not detectable with the grid density used in our simulations; therefore, they are not relevant for describing the ESN behaviour at the numerical precision we considered. Finally, we highlight that a performance breakdown for the ESN in the left-hand panel is observed starting from a noise standard deviation of 8 × 10^−2, whereas the ESN in the right-hand panel is significantly more robust, with a performance breakdown at a noise standard deviation of 1.4 × 10^−1.
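The robustness test itself is a straightforward sweep; a sketch under our own naming conventions, with the trained matrices and test data assumed given, is shown below.

```python
import numpy as np

def test_mse(M, W_in, W_o, u, z_target, noise_std, seed=0):
    """MSE of the closed-loop ESN (6), alpha = 1, under state noise of given std."""
    rng = np.random.default_rng(seed)
    x = np.zeros(M.shape[0])
    err = 0.0
    for k in range(len(u)):
        eps = noise_std * rng.standard_normal(len(x))
        x = np.tanh(M @ x + W_in @ u[k] + eps)
        err += np.sum((W_o @ x - z_target[k]) ** 2)
    return err / z_target.size

# for s in (1e-3, 1e-2, 8e-2, 1.4e-1):
#     print(s, test_mse(M, W_in, W_o, u_test, z_test, s))
```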

Conclusions
In this paper, we presented a novel methodology for modelling and interpreting the behaviour of RNNs driven by inputs. In order to obtain a mechanistic model describing how RNNs solve tasks, we exploited the theoretical framework offered by excitable network attractors, which are defined as networks of stable fixed points connected by excitable connections. We introduced a procedure to extract excitable network attractors directly from a trajectory generated by a trained RNN. Such a procedure is composed of two main steps: first, fixed points are computed by solving a non-linear optimisation problem; successively, excitable connections, with related thresholds, are determined by simulating the dynamics of the autonomous system.
We validated our theoretical developments by considering echo state networks trained on the flip-flop task, a simple yet relevant benchmark that consists of learning a prescribed number of stable states and related switching patterns guided by control inputs. Simulation results provided several interesting insights into how RNNs solve tasks and highlighted the usefulness of excitable network attractors in describing how RNNs undertake computations. Here, we trained echo state networks by means of ridge regression. Our results (not shown here) indicate that the regularisation parameter has a direct impact on the number of attracting regions in phase space generated through training: using values that are too low produces under-regularised models with a large number of attractors. An interesting future perspective consists of studying the impact of different training mechanisms (e.g., via FORCE learning or similar online approaches) on the resulting network attractor.
We believe that the proposed modelling framework based on excitable network attractors is suitable for describing RNN behaviour on all tasks requiring the learning of a number of attractors (which need not be stable fixed points) and related switching patterns. To this end, in the future we will also examine classification tasks. A related next step is handling inputs that are not instantaneous pulses, opening the way to case studies of greater practical relevance. As mentioned in the introductory sections, network attractors can in principle be constructed between any type of invariant sets, including limit cycles and strange attractors. Our focus on fixed points was mainly dictated by the fact that computing fixed points from a trajectory is significantly easier than computing, for instance, limit cycles. Clearly, fixed points are not powerful enough to accurately model all possible behaviours in phase space. Therefore, an interesting future perspective consists in extending our modelling framework to handle networks composed of heterogeneous attractors, such as a network in phase space connecting fixed points and limit cycles; in turn, this will allow modelling more complex RNN behaviour. Finally, future directions include embedding the directed graph representing the excitable network attractor extracted from the trajectory in a new phase space, thus producing a set of ordinary differential equations [4] describing the RNN behaviour on the task under consideration.

Appendix A Linear stability analysis
By considering the case where the neural network is autonomous, we follow Eq. 10 and write the related map as $x[k+1] = F(x[k]) = (1-\alpha)x[k] + \alpha\tanh(M x[k])$, where $M$ denotes the reservoir matrix. Defining $[D(x_0)]_{ij} = \delta_{ij}\big(1 - \tanh^2((M x_0)_i)\big)$, where $\delta_{ij}$ is the Kronecker delta, it is possible to write the Jacobian matrix evaluated at $x_0$ as $J_F(x_0) = (1-\alpha)I_{N_r} + \alpha D(x_0)M$, where $D(x_0)$ is an $N_r \times N_r$ diagonal matrix representing the squashing action of $\tanh(\cdot)$ along the saturating components of $x_0$. By linearising the network state-update (6) around a given fixed point $x^*$, we obtain the linear system $\delta x[k+1] = J_F(x^*)\,\delta x[k]$. It is known [49] that if a fixed point $x^*$ is hyperbolic (i.e., $J_F(x^*)$ has no eigenvalues on the unit circle in the complex plane), then the linear approximation provides a bona fide characterisation of the non-linear behaviour around that fixed point. Therefore, the (linear) stability of $x^*$ is completely determined by the spectral radius of $J_F(x^*)$. If all eigenvalues of $J_F(x^*)$ are inside the unit circle, then $x^*$ is a stable fixed point; on the other hand, if even just one eigenvalue has norm larger than one, then the linearised map is expanding along the corresponding eigenvectors and $x^*$ is called a saddle. We conclude by observing that $F(x^*) = x^*$ holds if and only if $x^* = \tanh(M x^*)$, regardless of $\alpha$; hence, the number of fixed points and their positions in phase space do not change by varying $\alpha \in (0, 1]$. However, their linear stability properties are directly affected by $\alpha$.
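The stability test described above is straightforward to implement; the following sketch assumes the reservoir matrix `M` and leak rate `alpha` of the autonomous map, and classifies a candidate fixed point by the spectral radius of its Jacobian.

```python
# Build J_F(x*) = (1 - alpha) I + alpha D(x*) M and classify x* by the
# spectral radius; D(x*) is diagonal with entries 1 - tanh^2((M x*)_i).
import numpy as np

def classify_fixed_point(M, alpha, x_star):
    d = 1.0 - np.tanh(M @ x_star) ** 2           # diagonal of D(x*)
    J = (1 - alpha) * np.eye(len(x_star)) + alpha * (d[:, None] * M)
    rho = np.max(np.abs(np.linalg.eigvals(J)))   # spectral radius
    return "stable" if rho < 1 else "saddle/repeller"
```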

Appendix B Bifurcation of fixed points in ESNs
In what follows, we perform a bifurcation analysis of the one-dimensional ESN map and provide some sufficient conditions to design two-dimensional ESNs with a desired number of fixed points. In particular, we focus on fold bifurcations of low-dimensional ESN maps, which constitute the only codimension-1 bifurcation that generates new fixed points; as discussed by Tiňo et al. [53], this is the usual mechanism for creating new attracting fixed points in RNNs. Moreover, as shown by Beer [7], some fold bifurcations are responsible for reducing the dimensions where the actual dynamics takes place, dividing the RNN parameter space into different regions of effective dimensionality.

B.1 ESN maps in one dimension
Here, we consider the following one-dimensional map, $x[k+1] = F(x[k]) = \tanh(m\,x[k] + w)$ (32), where $(m, w) \in \mathbb{R}^2$ are the bifurcation parameters; it is obtained by simplifying the state-update in Eq. 6, setting the leak rate $\alpha = 1$, removing the noise term, and dropping explicit reference to inputs. Nevertheless, studying the map (32) is still useful for obtaining insights into high-dimensional, input-driven ESN dynamics. In fact, in the high-dimensional case (6), the activation of the $j$th neuron evolves as $x_j[k+1] = \tanh(m\,x_j[k] + w_j[k])$; hence, the parameter $w$ in (32) can be interpreted as the weighted sum of all the other incoming neurons, plus input and noise terms. Fixed points of the map $F(x) = \tanh(mx + w)$ are the solutions $x^*$ of the equation $Q(x) := \tanh(mx + w) - x = 0$. Unfortunately, it is not possible to find a closed-form expression for the fixed points. However, it is possible to state that: (i) for every $(m, w) \in \mathbb{R}^2$, there exists at least one fixed point; (ii) there can be one, two, or three fixed points. The proof of these statements follows straightforwardly from the fact that $\lim_{x\to\pm\infty} Q(x) = \mp\infty$ and because, if $m < 1$, then $Q$ is monotonic. Otherwise, there exist two critical points, $x_l < x_r$, namely $x_{l,r} = \frac{1}{m}\big({\mp}\operatorname{artanh}\sqrt{1 - 1/m} - w\big)$ (35), such that the function $Q(x)$ folds. Moreover, since $Q(x^*) = 0 \iff x^* = \tanh(mx^* + w) \in (-1, 1)$ for every $(m, w) \in \mathbb{R}^2$, all fixed points lie in the open interval $(-1, 1)$. Critical points in (35) are the solutions to $Q'(x) = 0$, or equivalently to $F'(x) = 1$.
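These statements are easy to verify numerically; the sketch below counts the roots of $Q$ on $(-1, 1)$ by sign changes on a fine grid (an illustrative check, not a rigorous count near tangencies).

```python
# Count fixed points of x -> tanh(m x + w) by locating sign changes of
# Q(x) = tanh(m x + w) - x on a fine grid over (-1, 1).
import numpy as np

def count_fixed_points_1d(m, w, n=100001):
    x = np.linspace(-1 + 1e-9, 1 - 1e-9, n)
    q = np.tanh(m * x + w) - x
    return int(np.sum(np.sign(q[:-1]) != np.sign(q[1:])))

# count_fixed_points_1d(0.5, 0.0) -> 1 ; count_fixed_points_1d(3.0, 0.0) -> 3
```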
Let us assume $m > 0$. We observe a fold bifurcation whenever a critical point assumes the zero value. The condition $Q(x_{l,r}) = 0$ gives rise to the following parametrisation of the fold bifurcation curve, $w_\pm(m) = \pm\big(\operatorname{artanh}\sqrt{1-1/m} - m\sqrt{1-1/m}\big)$ with $m \geq 1$, which possesses two symmetric branches ending in a cusp; see Fig. 12 for an illustration. Crossing that curve in parameter space towards the region containing the semi-axis $m > 1$, a neutral fixed point is formed ($x_l$ or $x_r$, depending on the branch), which splits into a pair of fixed points, one stable and one unstable (the particular case of crossing the curve through the cusp gives rise to a pitchfork bifurcation of the origin). This curve delimits the boundary between a dynamical regime where two stable fixed points coexist with an unstable one, and a regime where only one fixed point exists. By symmetry, in the $m < 0$ case there are two bifurcation branches identifying a flip bifurcation, analytically described by the curve $w_\pm(-m)$ with $m \leq -1$. Crossing that curve towards the region containing the semi-axis $m < -1$, a stable fixed point loses stability and gives rise to a 2-periodic attracting trajectory surrounding it. We note that the flip bifurcation is detrimental for the flip-flop task considered here, as it implies autonomous jumps corresponding to undesired RNN behaviours.
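The fold curve can be evaluated numerically as follows; the sketch assumes the expression for $w_\pm(m)$ derived above from $Q(x_{l,r}) = 0$.

```python
# Two symmetric branches of the fold bifurcation curve, meeting at the
# cusp (m, w) = (1, 0); defined for m >= 1.
import numpy as np

def fold_curve(m):
    s = np.sqrt(1.0 - 1.0 / m)
    branch = np.arctanh(s) - m * s
    return branch, -branch

# ms = np.linspace(1.0, 4.0, 200); branches = [fold_curve(m) for m in ms]
```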

B.2 ESN maps in two dimensions
Here, we consider a two-neuron reservoir. We denote the activations of these neurons as $x$ and $y$, so that the ESN state evolution defines a trajectory $(x[k], y[k]) \in [-1, 1]^2$, $k = 1, 2, \ldots$, ruled by $x[k+1] = \tanh(a\,x[k] + b\,y[k])$, $y[k+1] = \tanh(c\,x[k] + d\,y[k])$ (37). It is known that a fixed point can undergo only fold or flip bifurcations in one-dimensional, discrete-time dynamical systems [26]. Nevertheless, in two or more dimensions, Neimark-Sacker bifurcations can also occur, where a stable fixed point loses stability and an invariant curve surrounding it is created. These are the only possible codimension-1 bifurcations of a fixed point of a discrete-time dynamical system; among them, only the fold bifurcation can generate new fixed points. The origin is always a fixed point of (10). Considering Eq. 37, the Jacobian matrix evaluated at the origin reads $J_F(0,0) = W_r = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
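As a quick sanity check, one can verify numerically that the Jacobian of the map (37) at the origin coincides with $W_r$, since $\tanh'(0) = 1$; the finite-difference sketch below is purely illustrative.

```python
# Central finite differences of the two-neuron map at the origin
# recover W_r = [[a, b], [c, d]].
import numpy as np

def map2d(v, a, b, c, d):
    x, y = v
    return np.array([np.tanh(a * x + b * y), np.tanh(c * x + d * y)])

def jacobian_at_origin(a, b, c, d, eps=1e-7):
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = eps
        J[:, j] = (map2d(e, a, b, c, d) - map2d(-e, a, b, c, d)) / (2 * eps)
    return J  # approximately [[a, b], [c, d]]
```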
Therefore, training ESNs via (7) implies a qualitative change of the dynamics around the origin if some eigenvalue crosses the unit circle, i.e., the fixed point $(0, 0)$ bifurcates. However, if adding a low-rank matrix to $W_r$ moves the parameters across the fold bifurcation curve, then the dynamics of the $x$ variable become trivial. As a consequence, the upper triangular matrix induces dynamics with just two attractors (plus two saddles, the origin being a repeller), while the diagonal matrix induces four attractors (plus four saddles and the repeller).
In order to count the number of fixed points of the map (37), we can draw its nullclines. Defining $N_{a,b}(x) := \big(\operatorname{artanh}(x) - a\,x\big)/b$, the nullclines are given by $y = N_{a,b}(x)$ and $x = N_{d,c}(y)$, and they represent the loci of points where the $x$-dynamics and the $y$-dynamics, respectively, are stationary. As a consequence, the solutions of the algebraic nonlinear system (38) formed by the two nullcline equations coincide with the set of fixed points. In the last part of this subsection, we provide some sufficient conditions to control the number of fixed points assuming $a, d > 1$, i.e., when both nullclines fold. We refer to Fig. 13 for a graphical illustration.
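Nullcline intersections can also be counted numerically; the sketch below composes the two nullclines just defined and looks for sign changes of the mismatch $g(x) = N_{d,c}(N_{a,b}(x)) - x$, assuming $b, c \neq 0$ and skipping points where the intermediate value leaves $(-1, 1)$ (an approximate, illustrative count).

```python
# Approximate count of fixed points of the two-neuron map via the
# nullclines y = N_{a,b}(x) and x = N_{d,c}(y).
import numpy as np

def count_fixed_points_2d(a, b, c, d, n=200001):
    x = np.linspace(-1 + 1e-9, 1 - 1e-9, n)
    y = (np.arctanh(x) - a * x) / b                 # x-nullcline: y = N_{a,b}(x)
    ok = np.abs(y) < 1                              # artanh(y) must exist
    g = (np.arctanh(y[ok]) - d * y[ok]) / c - x[ok]  # mismatch with y-nullcline
    s = np.sign(g)
    return int(np.sum(s[:-1] != s[1:]))             # sign changes ~ intersections
```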
• Case $bc \geq 0$. The first is a condition on the nullclines intersecting at the origin; see (40). Fig. 13 depicts a nullcline configuration with 5 intersections. From that configuration, we can make the humps of $y = N_{a,b}(x)$ more pronounced by increasing the ratio $a/b$, until a fold bifurcation occurs, giving rise, by symmetry, to two new pairs of fixed points. The case of 7 fixed points is obtained exactly at the fold bifurcation, where the two neutral fixed points have not yet split.
In both cases, a simple sufficient condition, given in (43), ensures the existence of 9 fixed points; see Fig. 13.
The maximum number of nullcline intersections is 9, setting the maximum number of fixed points that can be generated in a two-dimensional ESN map.

Figure 3: Basins of attraction, nullclines, fixed points and corresponding eigenvectors of the two-dimensional system defined by (18), with b = 0.2. Black curves represent the nullclines, whose reciprocal intersections determine the fixed points (black squares). Red segments indicate eigenvectors associated with real eigenvalues larger than one. Blue segments represent eigenvectors of real eigenvalues in (0, 1). Particularly important are the blue segments of the saddles, which provide local linear approximations of the boundaries of the basins of attraction. The whole phase space is divided into four basins of attraction associated with the four stable fixed points. Boundaries between these basins coincide with the four heteroclinic trajectories connecting the origin with the saddles. The legend shows the output values produced at every attractor, which, in this specific example, coincide with the internal state, i.e., the phase-space coordinates of the attractors. Points on the plane are coloured according to the attractor to which they converged after 100 steps.

Figure 4: Basins of attraction, nullclines, fixed points and eigenvectors of the two-dimensional autonomous system having (20) as reservoir matrix, with s = 2. Red segments indicate eigenvectors associated with real eigenvalues larger than one. Blue segments represent eigenvectors of real eigenvalues in (0, 1). The origin has complex conjugate eigenvalues lying outside the unit circle and a pair of eigenvectors; in the figure, only the real-valued one is depicted. Points on the plane are coloured according to the attractor to which they converged after 100 steps.

Figure 5: Illustration of the proposed method to extract ENAs from trajectories.

Figure 6: Top: in blue, the low-dimensional space containing the attractors; in red, a sketch of the LSS of attractor $p_i$. Bottom left: representation of $G(B_r(p_i))$, $B_r(p_i)$ and the grid $G(p_i)$ in the LSS of $p_i$. Bottom right: illustration of the partition of a two-dimensional grid $G(p_i)$. Every colour represents a subset (27). Dashed black lines track the boundaries of the basins corresponding to those attractors reachable from $p_i$ through excitable connections, which can be enabled by inputs that allow exploring the hypercube centred on $p_i$.

Figure 7: Top: output produced by the two-dimensional ESN defined in Sec. 3.1. Bottom left: fixed points found by the optimisation algorithm with 100 initial conditions. The length of the test trajectory was 1000 steps. Bottom centre: extracted ENA with edges weighted by excitability thresholds (28). Bottom right: extracted ENA with edges weighted according to (30). Node labels represent the output values (3) produced on the attractors.

Figure 8: Top: output of the four-dimensional ESN defined in Sec. 3.1. Centre left: fixed points found by the optimisation algorithm with 200 initial conditions, depicted in the space spanned by the first two principal components. A trajectory starting on the attractor marked by a black triangle is shown in black; to improve readability, the trajectory is limited to two switches only. Centre right: the fixed points and trajectory depicted in the centre-left panel, embedded in the space spanned by the first three principal components. Bottom left: ENA with edge weights representing excitability thresholds (28). Bottom right: ENA with edge weights representing the effective excitability of connections (30). Node labels represent output values produced by the ESN on the attractors.

Figure 9: Top: output produced by a trained ESN and target output. Bottom left: fixed points found by means of the optimisation algorithm with 500 initial conditions. The plane in the picture represents the space spanned by the first two principal components determined over all 500 solutions returned by the optimisation algorithm. Bottom right: extracted ENA with edge weights representing excitability thresholds (28). Node labels represent the output values (3) produced by the ESN on the attractors.

Figure 13: In all panels, filled points denote stable attractors, while circles denote saddles or repellers. Top left: geometric representation of the sufficient condition (39); nullcline slopes at the origin are highlighted with different colours. Top right: illustration of the condition (40); the black dashed line acts as an upper bound ensuring that the peak of the x-nullcline (represented by the green dashed line) does not cross the y-nullcline again. Centre left: the special case of 7 fixed points; crosses indicate neutral fixed points where fold bifurcations take place. Centre right: an example of a nullcline configuration satisfying condition (43) in the case bc > 0; both curves exit the invariant square $[-1, 1]^2$ with their humps, which guarantees 9 intersections, i.e., 9 fixed points. Bottom left: depiction of the sufficient condition (41); the origin is the only fixed point and it is unstable. Bottom right: illustration of the sufficient condition (42).