Background

Neural field models, such as [1, 2], are continuum limits of large-scale neural networks. Typically, their dynamic variables describe either the mean voltage [2] or the mean firing rate [1, 3] of a population element of neural tissue (see [4, 5] for recent reviews).

The present article considers the paradigmatic Amari equation [2] describing the spatiotemporal dynamics of the mean potential $V(x,t)$ over a cortical $d$-dimensional manifold $\Omega$:

\[
\frac{\partial V(x,t)}{\partial t} = -V(x,t) + \int_{\Omega} K(x,y)\, S[V(y,t)]\, dy,
\]
(1)

where $K(x,y)$ is the spatial synaptic connectivity between site $y \in \Omega$ and site $x \in \Omega$, and $S$ is a nonlinear, typically sigmoidal, transfer function. The model neglects external inputs for simplicity, but without constraining the generality of the subsequent results. Possible synaptic time scales are assumed to be included in the kernel function $K$ and can be introduced by a simple rescaling of time.

In general, the connectivity kernel $K(x,y)$ depends fully on both sites $x$ and $y$; this case is referred to as spatial heterogeneity. If the connectivity depends only on the difference between $x$ and $y$, i.e. $K(x,y) = K(x-y)$, the kernel is called spatially homogeneous [2]. Furthermore, if the connectivity depends only on the distance between $x$ and $y$, i.e. $K(x,y) = K(\|x-y\|)$ with $\|\cdot\|$ some norm on $\Omega$, the kernel is spatially homogeneous and isotropic [6].
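The three kernel classes can be illustrated on a discretized one-dimensional domain. The following Python sketch uses illustrative profile functions (not taken from the article); on a regular grid, homogeneity shows up as Toeplitz structure (constant diagonals):

```python
import numpy as np

# Discretize Omega = [0, 1] into n grid points (illustrative profiles below).
n = 100
x = np.linspace(0.0, 1.0, n)

# Heterogeneous kernel: depends on both sites x and y independently.
K_het = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))

# Homogeneous kernel: depends only on the difference x - y.
g = lambda d: np.exp(-d ** 2 / 0.02)
K_hom = g(x[:, None] - x[None, :])

# Homogeneous and isotropic kernel: depends only on the distance |x - y|.
K_iso = g(np.abs(x[:, None] - x[None, :]))

# In 1D with an even profile g, homogeneity and isotropy coincide, while the
# heterogeneous kernel above is not even symmetric.
print(np.allclose(K_hom, K_iso))
```

Note that the dyadic product kernels constructed later in this article are of the heterogeneous (but symmetric) type.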

Spatially homogeneous (respectively isotropic) kernels have been studied intensively in the literature due to their convenient analytical properties. In this case, the evolution equations admit exact solutions such as bumps [2, 7], breathers [8-10] or traveling waves [7, 11]. Moreover, such kernels allow the application of the technique of Green's functions for deriving partial differential equations for neural waves [7, 12, 13].

The present work focuses on spatially heterogeneous neural fields, which have received considerably less attention in previous studies than homogeneous neural fields [14-22]. This study takes up these attempts by investigating stationary states of the Amari equation (1) with heterogeneous kernels and their stability. Such a theory is a prerequisite for modeling transient neurodynamics as is characteristic, e.g., of human cognitive phenomena [23], human early evoked potentials [24] or, among many other phenomena, of bird songs [25].

The article is structured in the following way. In the "Results" section we present new analytical results on stationary solutions of the Amari equation (1) and their stability in the presence of heterogeneous connectivity kernels. Moreover, we present numerical simulation results for the kernel construction and its stability analysis. The "Methods" section is devoted to the construction of such kernels through dyadic products of desired stationary states (cf. the previous work of Veltz and Faugeras [26]). A subsequent linear stability analysis reveals that these stationary solutions can be either attractors or saddles, depending on the chosen parametrization. Finally, we present a way to connect such saddle states via heteroclinic sequences [27, 28] in order to construct transient processes.

Results

In this section we present the main results of our study on heterogeneous neural fields.

Stationary states and their stability

Analytical study

The Amari equation (1) has a trivial solution V0(x) = 0 and non-trivial solutions V0(x) ≠ 0 that obey the Hammerstein integral equation [29]

\[
V_0(x) = \int_{\Omega} K(x,y)\, S[V_0(y)]\, dy.
\]
(2)

Inspired by Hebbian learning rules for the synaptic connectivity kernel $K(x,y)$, which have found successful applications, e.g., in bi-directional associative memory [30], we consider symmetric spatially heterogeneous kernels $K(x,y) = K(y,x)$ that can be constructed from dyadic products of the system's non-trivial stationary states

\[
K(x,y) = (V_0 \otimes V_0)(x,y) = V_0(x)\, V_0(y).
\]
(3)

Together with Eq. (2), this choice yields the additional condition for non-trivial stationary states

\[
1 = \int_{\Omega} V_0(y)\, S[V_0(y)]\, dy,
\]
(4)

which is a nonlinear integral equation of Fredholm type. Since 0 < S(x) < 1 for a logistic transfer function, a necessary condition for non-trivial stationary states is

\[
1 < \int_{\Omega} V_0(y)\, dy,
\]
(5)

which immediately suggests a method for finding a non-trivial solution numerically, as shown below.

Small deviations u(x, t) = V(x, t) - V0(x) from a non-trivial stationary state V0(x) obey the linear integro-differential equation

\[
\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} L(x,y)\, u(y,t)\, dy,
\]
(6)

where $L(x,y) = K(x,y)\, S'(y)$ and $S'(y) = \mathrm{d}S[V_0(y)]/\mathrm{d}V_0(y)$. A linear stability analysis of (6), carried out in the "Methods" section, shows that a non-trivial stationary state $V_0(x)$ is either a fixed point attractor, or (neglecting a singular case) a saddle with a one-dimensional unstable manifold. Such saddles can be connected to form stable heteroclinic sequences.

Numerical study

To gain deeper insight into possible stationary solutions of the Amari equation (1) and their stability, the subsequent section presents the numerical integration of equation (1) in one spatial dimension for a specific spatial synaptic connectivity kernel.

Since previous experimental studies [31] have revealed Gaussian distributed probability densities of neuron interactions in the visual cortex of rats, it is reasonable to look for spatially discretized stationary states in the family of Gaussian functions

\[
V_{0,i} = \frac{W_0}{\sqrt{2\pi}\,\sigma}\, \exp\!\left( -\frac{(i - i_0)^2\, \Delta x^2}{2\sigma^2} \right) + \kappa\, \eta_i
\]
(7)

parameterized by the amplitude W0, the variance σ2, the noise level κ and the spatial discretization interval Δ x.

With this parametrization of the discrete Hammerstein equation, it suffices to fit the model parameters such that the Hammerstein equation holds. Figure 1(a) illustrates a noisy kernel K and (b) shows the corresponding stationary state V0(x) for certain parameters.
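As a sketch of this fitting step (in Python, with the Figure 1 parameters but κ = 0; the bisection in the amplitude W0 and the quadrature convention are assumptions, since the article does not specify the fitting algorithm), one can tune W0 so that the discretized condition (4) holds, making the Gaussian profile an exact solution of the discrete Hammerstein equation (19):

```python
import numpy as np

# Fit the amplitude W0 of the Gaussian ansatz (7) (kappa = 0) such that the
# discretized Fredholm condition (4) holds.
n, L_dom = 200, 1.0
alpha, theta, sigma = 0.86, 3.0, 0.15    # parameters as in Figure 1
dx = L_dom / n
y = np.arange(n) * dx

S = lambda V: 1.0 / (1.0 + np.exp(-alpha * (V - theta)))

def V0_of(W0):
    """Gaussian candidate stationary state, centered in the domain."""
    return W0 * np.exp(-(y - 0.5) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def condition(W0):
    """Residual of condition (4): dx * sum_j V0_j S[V0_j] - 1."""
    V0 = V0_of(W0)
    return dx * np.sum(V0 * S(V0)) - 1.0

lo, hi = 1.0, 3.0                        # bracket with condition(lo) < 0 < condition(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if condition(mid) > 0 else (mid, hi)
W0_star = 0.5 * (lo + hi)

V0 = V0_of(W0_star)
K = np.outer(V0, V0) * dx                  # dyadic kernel (3)/(20), quadrature absorbed
residual = np.max(np.abs(V0 - K @ S(V0)))  # discrete Hammerstein residual (19)
print(W0_star, residual)
```

The fitted amplitude comes out close to the value W0 = 1.76 reported in Figure 1, and the Hammerstein residual is at machine precision.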

Figure 1

Illustration of (a) a noisy kernel K(x, y) and (b) the corresponding solution of the Hammerstein equation (solid line), together with the first eigenmode $e_1(x)$ corresponding to the eigenvalue with the maximum real part, $\varepsilon_1$. Parameters are α = 0.86, W0 = 1.76, σ = 0.15, θ = 3.0, κ = 0.2 and n = 200, L = 1.

For each stationary state, one obtains a kernel L(x, y) of the linear stability analysis whose spectrum characterizes the stability of the system in the vicinity of that state. If the eigenvalue with the maximum real part satisfies $\varepsilon_1 > 1$, the stationary state V0(x) is exponentially unstable, whereas $\varepsilon_1 < 1$ guarantees exponential stability. Figure 1(b) shows the eigenmode $e_1(x)$ corresponding to the eigenvalue with maximum real part, which has a shape similar to that of the stationary state.

Moreover, Figure 2 presents parameters for which V0(x) fulfills the Hammerstein equation (2), i.e. for which V0(x) is a stationary solution. We observe that some parameter sets exhibit a change of stability, i.e. the eigenvalue with maximum real part may satisfy $\varepsilon_1 > 1$ or $\varepsilon_1 < 1$ on certain parameter subsets.

Figure 2

Parameters which yield solutions of the Hammerstein equation (2), i.e. stationary solutions of Eq. (1); the line styles encode their exponential stability (stable: solid line, unstable: dashed line). Further parameters are κ = 0.0, θ = 3.0.

Taking a closer look at the stability of V0(x), the computation of the eigenvalues $\varepsilon_k$, k = 1, …, n reveals a dramatic gap in the spectrum: the eigenvalue with maximum real part, $\varepsilon_1$, is well isolated from the rest of the spectrum, for which $|\varepsilon_{k>1}| < 10^{-14}$. This is in accordance with the discussion of Eq. (17) on the linear spectrum.
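This spectral gap is easy to reproduce numerically: for a dyadic kernel the matrix $L_{ij} = K_{ij} S'(V_{0,j})$ has rank one, so a single eigenvalue is isolated while the remaining ones vanish at machine precision. A minimal sketch (parameters as in Figure 1, noise-free; the quadrature convention is an assumption):

```python
import numpy as np

# Gaussian stationary state and dyadic kernel (noise-free Figure 1 parameters).
n, L_dom = 200, 1.0
alpha, theta, sigma, W0 = 0.86, 3.0, 0.15, 1.76
dx = L_dom / n
y = np.arange(n) * dx
V0 = W0 * np.exp(-(y - 0.5) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

S  = lambda V: 1.0 / (1.0 + np.exp(-alpha * (V - theta)))
dS = lambda V: alpha * S(V) * (1.0 - S(V))    # S'(V) for the logistic function

K = np.outer(V0, V0) * dx             # dyadic kernel (3)
Lmat = K * dS(V0)[None, :]            # stability kernel L(x,y) = K(x,y) S'(y)

eps = np.linalg.eigvals(Lmat)
eps = eps[np.argsort(-eps.real)]      # sort by real part, largest first

# One isolated eigenvalue; the rest collapses to zero at machine precision.
print(eps[0].real, np.max(np.abs(eps[1:])))
```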

Figure 3 presents the spatiotemporal evolution of the heterogeneous neural field starting close to the stable stationary state V0(x), see point (1) in Figure 2. As expected, the field activity remains in the vicinity of the stable state.

Figure 3

Numerically simulated spatio-temporal dynamics of the heterogeneous neural field (left) and stationary states (right). Here the stationary state V0(x) is exponentially stable with the parameters of point (1) in Figure 2. The simulation starts close to the stationary state V(x,0) = V0(x) + ξ(x) with the random numbers ξ(x) taken from a zero-mean Gaussian distribution with variance 0.3. The plot on the left hand side shows the deviation from the stationary state V0(x). The plot on the right hand side shows the stationary states V0(x) (solid line) and the final field activity at large times V(x,t = 125) (dashed line) which is almost identical to V0(x). The spatial domain has length L = 1 with n = 300.

In contrast, for the system starting close to an unstable stationary state, cf. point (2) in Figure 2, the field activity moves away from V0(x) and approaches a new stationary state close to but different from V0(x), cf. Figure 4. This new stationary state obeys the Hammerstein equation (2).

Figure 4

Numerically simulated spatio-temporal dynamics of the heterogeneous neural field (left) and stationary states (right). Here the stationary state V0(x) is exponentially unstable with the parameters of point (2) in Figure 2. The plot on the left hand side shows the deviation from the stationary state V0(x), the right hand side panel presents the stationary states V0(x) (solid line) and the final field activity at large times V(x,t = 125) (dashed line) which is close to V0(x). Other parameters are taken from Figure 3.

Recalling the presence of the trivial stable solution V = 0, the activity shown in Figure 4 indicates the bistability of the system for the given parameter set.

Figure 5 supports this bistability for the same parameter set but different initial conditions; it presents the jump from the unstable stationary state V0(x) to the trivial stable stationary state V = 0. Whether the system approaches the upper or the lower stable stationary state depends on the initial condition of the simulation and is random for random initial conditions, as implemented in Figures 3, 4 and 5. Hence, this example reveals the existence of a saddle-node bifurcation in heterogeneous neural fields.

Figure 5

Numerically simulated spatio-temporal dynamics of the heterogeneous neural field (left) and stationary states (right). Here the stationary state V0(x) is exponentially unstable with the parameters of point (2) in Figure 2. All parameters are identical to Figure 3, however the final field activity at large times V(x, t = 125) (dashed line) is close to the trivial solution V = 0. This jump of activity recalls the presence of the trivial stationary solution V = 0 and hence reflects the multi-stability of the heterogeneous neural field.

Finally, we would like to stress that the analysis presented above does not depend on the smoothness of the kernel and stationary state. For a strong noise level in the synaptic interaction kernel K, the analytical discussion above still describes the stationary state and the linear stability quite well as shown in Figure 6 for a stable stationary state V0(x) close to the stability threshold.

Figure 6

Strongly random connectivity kernels yielding noisy exponentially stable stationary states. Panel (a) shows K(x, y), (b) $L(x,y) = K(x,y)\, S'(y)$, (c) the spatio-temporal simulation starting close to the stationary state V0, (d) the stationary states V0(x) (red) and V(x, t = 25) (black). Parameters are κ = 0.5, α = 0.88, W0 = 1.75, σ = 0.15, θ = 3.0, yielding a maximum eigenvalue ε1 = 0.95, i.e. close to the stability threshold.

Heteroclinic orbits

The previous section has shown that heterogeneous neural fields may exhibit various stationary states with different stability properties. In particular, we found that stationary states can be saddles with one-dimensional unstable manifolds that can be connected to stable heteroclinic sequences (SHS; [27, 28]), a concept supported by experimental evidence [24, 32, 33]. In the following paragraphs we present our main findings on heterogeneous neural fields exhibiting heteroclinic sequences and also hierarchies of such sequences.

One level heteroclinic sequence

It is possible to expand the integral in the Amari equation (1) into a power series yielding

\[
\frac{\partial V(x,t)}{\partial t} + V(x,t) = K_0 + \int_{\Omega} K_1(x,y)\, V(y,t)\, dy + \frac{1}{2} \int_{\Omega} \int_{\Omega} K_2(x,y,z)\, V(y,t)\, V(z,t)\, dy\, dz
\]
(8)

with kernels

\[
\begin{aligned}
K_0 &= 0, \\
K_1(x,y) &= \sum_{k} (\sigma_k + 1)\, V_k^+(y)\, V_k(x), \\
K_2(x,y,z) &= -2 \sum_{k,j} \rho_{kj}\, \sigma_j\, V_k^+(y)\, V_j^+(z)\, V_k(x).
\end{aligned}
\]
(9)

The solution of Eq. (8) represents a heteroclinic sequence that connects the saddle points $\{V_k(x)\}$ along their respective stable and unstable manifolds. Its transient evolution is described as winnerless competition in a Lotka-Volterra population dynamics governed by interaction weights $\rho_{ik}$ between neural populations $k$ and $i$ and their respective growth rates $\sigma_i$.

In Eq. (9) the $\{V_k^+(x)\}$ comprise a bi-orthogonal system of the saddles $\{V_k(x)\}$. Therefore, the kernel $K_1(x,y)$ describes a Hebbian synapse between sites $y$ and $x$ that has been trained with the pattern sequence $V_k$. This finding confirms the previous result of [18]. Moreover, the three-point kernel $K_2(x,y,z)$ further generalizes Hebbian learning to interactions between three sites $x, y, z \in \Omega$. Note that the kernels $K_i$ are linear combinations of dyadic product kernels similar to those introduced in (3). Thus, our construction of heteroclinic orbits straightforwardly results in the Pincherle-Goursat kernels used in [26].
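A small numerical sketch may clarify this construction. Assuming three hypothetical Gaussian saddle states with illustrative growth rates and interaction weights (none of these numbers are taken from the article), the bi-orthogonal system follows from the Gram matrix, and the kernels of Eq. (9) act on the stored patterns as Hebbian projection operators (the sign convention for $K_2$ follows the derivation leading to Eq. (28)):

```python
import numpy as np

dx = 1.0 / 128
x = np.arange(128) * dx

# Hypothetical saddle states: three Gaussian bumps (illustrative choice).
V = np.stack([np.exp(-(x - c) ** 2 / (2 * 0.06 ** 2)) for c in (0.25, 0.5, 0.75)])

# Bi-orthogonal system V_k^+ with dx * <V_j^+, V_k> = delta_jk via the Gram matrix.
G = dx * V @ V.T
Vplus = np.linalg.solve(G, V)

sigma = np.array([1.0, 1.5, 2.0])      # assumed growth rates sigma_k
rho = np.array([[1.0, 2.0, 0.5],       # assumed interaction weights rho_kj
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

# Two-point kernel K1(x, y) = sum_k (sigma_k + 1) V_k^+(y) V_k(x):
K1 = (V.T * (sigma + 1.0)) @ Vplus

# Three-point kernel K2(x, y, z) = -2 sum_kj rho_kj sigma_j V_k^+(y) V_j^+(z) V_k(x):
K2 = -2.0 * np.einsum('kj,j,ki,km,jl->iml', rho, sigma, V, Vplus, Vplus, optimize=True)

# Hebbian action: K1 maps the stored pattern V_k onto (sigma_k + 1) V_k.
print(np.allclose(dx * K1 @ V[1], (sigma[1] + 1.0) * V[1]))
```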

Multi-level hierarchy of heteroclinic sequences

Now we assume that the neural field supports a hierarchy of stable heteroclinic sequences in the sense of [25, 34]. For the general case one has to construct integral kernels for a much wider class of neural field equations which can be written as

\[
V(x,t) = \int_{-\infty}^{t} d\tau\, G(t,\tau) \int_{\Omega} dy\, K(x,y)\, S[V(y,\tau)],
\]
(10)

where the new temporal kernel $G$, describing synaptic-dendritic filtering, is usually the Green's function of a linear differential operator $Q$, such that $G(t,\tau) = G(t-\tau)$ and the temporal integration in Eq. (10) is a convolution. Equation (10) can be simplified by condensing space $x$ and time $t$ into a spacetime coordinate $s = (x,t)$. Then, (10) becomes

\[
V(s) = \int_{M} H(s,s')\, S[V(s')]\, ds'
\]
(11)

with the tensor product kernel $H(s,s') = (K \otimes G)(s,s')$ and integration domain $M = \Omega \times (-\infty, t]$.

For only two levels in such a hierarchy, we obtain

\[
\begin{aligned}
V(x,t) = {} & \int_{\Omega} dx_1 \int_{-\infty}^{t} dt_1\, K_1(x,x_1)\, V(x_1,t_1) \\
& - \int_{\Omega} dx_1 \int_{\Omega} dx_2 \int_{-\infty}^{t} dt_1\, K_2(x,x_1,x_2)\, V(x_1,t_1)\, V(x_2,t_1) \\
& - \int_{\Omega} dx_1 \int_{\Omega} dx_2 \int_{\Omega} dx_3 \int_{-\infty}^{t} dt_1 \int_{-\infty}^{t} dt_2\, K_3(x,x_1,x_2,x_3)\, V(x_1,t_1)\, V(x_2,t_1)\, V(x_3,t_2) \\
& + \int_{\Omega} dx_1 \int_{\Omega} dx_2 \int_{\Omega} dx_3 \int_{\Omega} dx_4 \int_{-\infty}^{t} dt_1 \int_{-\infty}^{t} dt_2\, K_4(x,x_1,x_2,x_3,x_4)\, V(x_1,t_1)\, V(x_2,t_1)\, V(x_3,t_2)\, V(x_4,t_2)
\end{aligned}
\]
(12)

with kernels

\[
\begin{aligned}
K_1(x,x_1) = {} & \sum_{k_1=1}^{n_1} \frac{\sigma^{(1)}_{k_1}}{\tau^{(1)}}\, V^{(1)+}_{k_1}(x_1)\, V^{(1)}_{k_1}(x) + \sum_{k_2=1}^{n_2} \frac{\sigma^{(2)}_{k_2}}{\tau^{(2)}}\, V^{(2)+}_{k_2}(x_1)\, V^{(2)}_{k_2}(x) \\
K_2(x,x_1,x_2) = {} & \sum_{k_2=1}^{n_2} \sum_{j_2=1}^{n_2} \frac{\rho^{(2)}_{k_2 j_2}}{\tau^{(2)}}\, V^{(2)+}_{k_2}(x_1)\, V^{(2)+}_{j_2}(x_2)\, V^{(2)}_{k_2}(x) \\
K_3(x,x_1,x_2,x_3) = {} & \sum_{k_1=1}^{n_1} \sum_{j_1=1}^{n_1} \sum_{l_2=1}^{m_2} \frac{\sigma^{(2)}_{l_2}\, r^{(1)}_{k_1 j_1 l_2}}{\tau^{(1)}\tau^{(2)}}\, V^{(1)+}_{k_1}(x_1)\, V^{(1)+}_{j_1}(x_2)\, V^{(2)+}_{l_2}(x_3)\, V^{(1)}_{k_1}(x) \\
K_4(x,x_1,x_2,x_3,x_4) = {} & \sum_{k_1=1}^{n_1} \sum_{j_1=1}^{n_1} \sum_{l_2=1}^{m_2} \sum_{j_2=1}^{n_2} \frac{r^{(1)}_{k_1 j_1 l_2}\, \rho^{(2)}_{l_2 j_2}}{\tau^{(1)}\tau^{(2)}}\, V^{(1)+}_{k_1}(x_1)\, V^{(1)+}_{j_1}(x_2)\, V^{(2)+}_{l_2}(x_3)\, V^{(2)+}_{j_2}(x_4)\, V^{(1)}_{k_1}(x).
\end{aligned}
\]

Here, $V^{(\nu)}_{k_\nu}(x)$ denotes the $k_\nu$-th saddle in the $\nu$-th level of the hierarchy (containing $n_\nu$ stationary states). The saddles are again chosen such that they form a system of bi-orthogonal modes $V^{(\nu)+}_{k_\nu}(x)$, whose behavior is determined by Lotka-Volterra dynamics with growth rates $\sigma^{(\nu)}_{k_\nu} > 0$ and time-dependent interaction weights $\rho^{(\nu)}_{k_\nu j_\nu}(t) > 0$, $\rho^{(\nu)}_{k_\nu k_\nu}(t) = 1$, that are given by linear superpositions of templates $r^{(\nu)}_{k_\nu j_\nu l_{\nu+1}}$. Additionally, $\tau^{(\nu)}$ is the characteristic time scale of level $\nu$. Levels are temporally well separated, $\tau^{(\nu)} \ll \tau^{(\nu+1)}$ [35].

Interestingly these kernels are time-independent. Since neural field equations can be written in the same form as Eq. (12), this result shows that hierarchies of Lotka-Volterra systems are included in the neural field description. Again we point out that the resulting kernels are linear combinations of dyadic products as introduced in Eq. (3), see also the work of Veltz and Faugeras [26].

Discussion

This study considers spatially heterogeneous neural fields describing the mean potential of neural populations according to the Amari equation [2]. To the best of our knowledge, this work is among the first to derive the implicit conditions for stationary solutions, identified as solutions of a Hammerstein integral equation, and to derive an analytical expression for their linear stability subject to the properties of the heterogeneous synaptic interaction. The analytical results are complemented by numerical simulations illustrating the stability and instability of heterogeneous stationary states. We point out that the results obtained extend previous studies on both homogeneous and heterogeneous neural fields [21].

By virtue of the heterogeneity of the model, it is possible to consider spatiotemporal dynamics that are more complex than the dynamics close to a single stationary state. We show how to construct hierarchical heteroclinic sequences built of multiple stationary states, each exhibiting saddle dynamics, and demonstrate in detail how to construct such sequences given the stationary states involved. Motivated by a previous study on hierarchical heteroclinic sequences [25, 34], we constructed such sequences in heterogeneous neural fields of the Amari type. Our results indicate that such a hierarchy may be present in a single heterogeneous neural field, whereas previous studies [25, 34] required several neural populations to describe heteroclinic sequences. The kernels obtained from the heteroclinic saddle dynamics are linear superpositions of tensor product kernels, known in the literature as Pincherle-Goursat kernels [26].

Conclusion

The present work is closely related to the literature hypothesizing the presence of chaotic itinerant neural activity, cf. previous work by [36, 37]. The concept of chaotic itinerancy is attractive but still lacks a realistic neural model. We admit that the present work represents just a starting point for further model analysis of sequential neural activity in heterogeneous neural systems. It opens up an avenue of future research and promises to close the gap between the rather abstract concept of sequential, i.e. temporally transient, neural activity and corresponding mathematical neural models.

Methods

Stationary states and their stability

In order to learn more about the spatiotemporal dynamics of a neural field, in general it is reasonable to determine stationary states and to study their linear stability. This already gives some insight into the nonlinear dynamics of the system. Since the dynamics depends strongly on the interaction kernel and the corresponding stationary states, it is necessary to work out conditions for the existence and uniqueness of stationary states and their stability.

Analytical study

For stationary solutions, the left hand side of Eq. (1) vanishes and we obtain the Hammerstein equation (2) [29]. It has a trivial solution $V_0(x) = 0$ and, under certain conditions, further non-trivial solutions $V_0(x) \ne 0$. The existence and number of solutions of Eq. (2) depend mainly on the operator $A = I + KF$ [38] and its monotonicity [39], with the identity operator $I$, the linear integral operator

\[
Ku = \int_{\Omega} K(x,y)\, u(y)\, dy
\]

and the Nemytskij operator $Fu(x) = S(u(x))$. For instance, a simple criterion for the existence of at least one non-trivial solution is symmetry and positive definiteness of the kernel $K(x,y)$ together with the condition $S(u) \le C_1 u + C_2$ [29]. Moreover, previous studies have proposed analytical [40] and numerical [41] methods for finding solutions of the Hammerstein equation (2).

To illustrate the non-uniqueness of solutions of the Hammerstein equation, let us expand the stationary solution into a set of bi-orthogonal spatial modes $\{\phi_n(x)\}$, $\{\psi_n(x)\}$,

\[
V_0(x) = \sum_{n=0}^{\infty} V_n\, \phi_n(x) = \sum_{n=0}^{\infty} U_n\, \psi_n(x)
\]

with constant coefficients $V_n$, $U_n$ and

\[
\langle \psi_m, \phi_n \rangle = \int_{\Omega} \psi_m(x)\, \phi_n(x)\, dx = \delta_{n,m},
\]

with the Kronecker symbol $\delta_{n,m}$. Then the Hammerstein equation becomes

\[
V_m = S_0\, \kappa_m + S' \sum_{n=0}^{\infty} K_{m,n}\, V_n + f_m\{V_n\}
\]
(13)

with

\[
\kappa_m = \langle \psi_m, K1 \rangle, \qquad K_{m,n} = \langle \psi_m, K\phi_n \rangle, \qquad f_m = \langle \psi_m, Kf \rangle, \qquad S_0 = S(0), \qquad S' = \left.\frac{\mathrm{d}S(z)}{\mathrm{d}z}\right|_{z=0},
\]

and

\[
f(x) = \sum_{p=2}^{\infty} \frac{1}{p!} \left. \frac{\partial^p S(z)}{\partial z^p} \right|_{z=0} \left( \sum_{n=0}^{\infty} V_n\, \phi_n(x) \right)^{\!p}.
\]

Since $f_m$ is a nonlinear function of the $V_n$, Eq. (13) has multiple solutions $\{V_n\}$ for a given bi-orthogonal basis. Hence, $V_0(x)$ may not be unique.

Considering small deviations $u(x,t) = V(x,t) - V_0(x)$ from a non-trivial stationary state $V_0(x)$, these deviations obey the linear equation (6) with $L(x,y) = K(x,y)\, S'(y)$ and $S'(y) = \mathrm{d}S[V_0(y)]/\mathrm{d}V_0(y)$.

A solution ansatz for (6) is $u(x,t) = e^{\lambda t}\, e(x)$, $\lambda \in \mathbb{C}$, with mode $e(x)$. Inserting this ansatz into Eq. (6) yields the spectrum of the corresponding linear operator $L(x,y)$, determined implicitly by the eigenvalue equation

\[
(\lambda + 1)\, e(x) = \int_{\Omega} L(x,y)\, e(y)\, dy.
\]
(14)

Under the above assumption of a dyadic product kernel (3), the eigenfunctions $\{e_k(x)\}$ of the kernel $L(x,y)$, defined through

\[
\varepsilon_k\, e_k(x) = \int_{\Omega} L(x,y)\, e_k(y)\, dy,
\]
(15)

with eigenvalues $\varepsilon_k = \lambda_k + 1$ form an orthonormal system with respect to the scalar product

\[
\langle u, v \rangle_{S'} = \int_{\Omega} u(y)\, v(y)\, S'(y)\, dy
\]
(16)

with weight $S'(y)$. This follows from the dyadic product kernel (3) through

\[
\begin{aligned}
\int_{\Omega} L(x,y)\, e_k(y)\, dy &= \int_{\Omega} K(x,y)\, S'(y)\, e_k(y)\, dy = \int_{\Omega} V_0(x)\, V_0(y)\, S'(y)\, e_k(y)\, dy \\
&= V_0(x) \int_{\Omega} V_0(y)\, e_k(y)\, S'(y)\, dy, \qquad \text{hence} \qquad \varepsilon_k\, e_k(x) = V_0(x)\, \langle V_0, e_k \rangle_{S'}.
\end{aligned}
\]
(17)

From (17) we deduce two important results:

  • a certain eigenmode $e_{k_0}(x)$ is proportional to the stationary state $V_0(x)$ with scaling factor $\langle V_0, e_{k_0} \rangle_{S'} / \varepsilon_{k_0}$, and

  • all other eigenmodes $e_{k \ne k_0}(x)$ are orthogonal to $V_0(x)$, yielding $\langle V_0, e_k \rangle_{S'} = \varepsilon_k = 0$.

Hence, for dyadic product kernels (3) the spectrum of $L$ contains one eigenmode with eigenvalue $\varepsilon_1 \ne 0$, i.e. $\lambda_1 \ne -1$, while all other eigenmodes are stable with $\varepsilon_{k \ne 1} = 0$ and thus $\lambda_{k \ne 1} = -1$. Therefore, a non-trivial stationary state is either an asymptotically stable fixed point, i.e. an attractor, if $\varepsilon_n < 1$ for all $n$, or a saddle with a one-dimensional unstable manifold if $\varepsilon_1 > 1$ and $\varepsilon_{n \ne 1} < 1$ (neglecting the singular case $\varepsilon_1 = 1$).

Finally, we justify the necessary condition (5), which follows from $0 < S(V) < 1$ and

\[
1 = \int_{\Omega} V_0(y)\, S[V_0(y)]\, dy < \int_{\Omega} V_0(y)\, dy,
\]

inserted into the Hammerstein equation (2) for a dyadic product kernel (3).

Numerical study

In order to investigate stationary states of the Amari equation numerically, we choose a finite one-dimensional spatial domain of length $L$ and discretize it into a regular grid of $n$ intervals with grid spacing $\Delta x = L/n$. The kernel function is then $K(x = n\Delta x,\, y = m\Delta x) = K_{nm}/\Delta x$, and Eq. (1) reads

\[
\frac{\partial V_i(t)}{\partial t} = -V_i(t) + \sum_{j=1}^{n} K_{ij}\, S[V_j(t)]
\]
(18)

where $V_i(t) = V(i\,\Delta x, t)$. The corresponding Hammerstein equation (2) is given by

\[
V_{0,i} = \sum_{j=1}^{n} K_{ij}\, S[V_{0,j}].
\]
(19)

Taking into account the insight from the discussion of Eq. (3) and its consequences for the eigenmodes, we choose the spatial kernel as

\[
K_{ij} = V_{0,i}\, V_{0,j}.
\]
(20)

We employ an Euler-forward integration scheme for the temporal evolution with discrete time step Δ t = 0.05.
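The numerical procedure described above can be sketched end-to-end in Python. The parameter set below is an assumption: a smaller slope α = 0.5 is chosen so that the fitted state is comfortably stable ($\varepsilon_1$ well below 1), and a bisection in W0 stands in for the unspecified fitting step:

```python
import numpy as np

# Fit W0, build the dyadic kernel (20), check stability, and integrate
# Eq. (18) with the Euler-forward scheme.
n, L_dom = 200, 1.0
alpha, theta, sigma = 0.5, 3.0, 0.15   # assumed parameters (clearly stable state)
dx = L_dom / n
x = np.arange(n) * dx

S  = lambda V: 1.0 / (1.0 + np.exp(-alpha * (V - theta)))
dS = lambda V: alpha * S(V) * (1.0 - S(V))

def V0_of(W0):  # Gaussian profile, Eq. (7) with kappa = 0
    return W0 * np.exp(-(x - 0.5) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def condition(W0):  # residual of the discretized condition (4)
    V0 = V0_of(W0)
    return dx * np.sum(V0 * S(V0)) - 1.0

lo, hi = 1.0, 3.0                      # bracket with condition(lo) < 0 < condition(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if condition(mid) > 0 else (mid, hi)
W0_star = 0.5 * (lo + hi)

V0 = V0_of(W0_star)
K = np.outer(V0, V0) * dx              # dyadic kernel with quadrature weight
eps1 = np.linalg.eigvals(K * dS(V0)[None, :]).real.max()  # stability: eps1 < 1

rng = np.random.default_rng(1)
V = V0 + 0.1 * rng.standard_normal(n)  # start close to the stationary state
dt = 0.05
for _ in range(int(80.0 / dt)):        # Euler-forward steps
    V = V + dt * (-V + K @ S(V))

deviation = np.max(np.abs(V - V0))
print(eps1, deviation)                 # stable state: the field relaxes back to V0
```

Since $\varepsilon_1 < 1$ for this parameter choice, the perturbed field relaxes back to the fitted stationary state, mirroring the behavior of Figure 3.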

We render the stationary state noisy by adding the noise term $\eta_i$, whose entries are random numbers drawn from a normal distribution with zero mean and unit variance. We point out that we choose $\eta_i$ such that $\left| \sum_{i=1}^{n} \eta_i \right| < 0.05$ in order to permit only a modulation, not an amplitude increase, of the dynamics by the noise. The sigmoid function is chosen as $S(V) = 1/(1 + \exp(-\alpha(V - \theta)))$, parameterized by the slope parameter $\alpha$ and the mean threshold $\theta$.

Heteroclinic orbits

The general neural field equation (10) contains the Amari equation (1) as the special case $G(t) = e^{-t}\,\Theta(t)$, with $\Theta(t)$ Heaviside's step function. Then $Q = \partial_t + 1$ is the Amari operator belonging to the Green's function $G(t)$. For second-order synaptic dynamics [42] and for the filter properties of complex dendritic trees [43], more complicated kernels or differential operators, respectively, have to be taken into account.

As a first step for constructing heteroclinic sequences for (1) or (10), we expand the nonlinear transfer function $S$ in Eq. (11) into a power series about a certain state $\bar V(s)$ [15],

\[
\begin{aligned}
S[V(s)] &= \sum_{n=0}^{\infty} \frac{S^{(n)}(s)}{n!}\, \bigl(V(s) - \bar V(s)\bigr)^n = \sum_{n=0}^{\infty} \frac{S^{(n)}(s)}{n!}\, (-1)^n \bigl(\bar V(s) - V(s)\bigr)^n \\
&= \sum_{n=0}^{\infty} \frac{S^{(n)}(s)}{n!}\, (-1)^n \sum_{k=0}^{n} (-1)^k \binom{n}{k}\, \bar V(s)^{n-k}\, V(s)^k \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{n} (-1)^{n+k}\, \frac{S^{(n)}(s)}{n!} \binom{n}{k}\, \bar V(s)^{n-k}\, V(s)^k.
\end{aligned}
\]

Inserting V(s) from Eq. (11) yields

\[
\begin{aligned}
S[V(s)] &= \sum_{n=0}^{\infty} \sum_{k=0}^{n} (-1)^{n+k}\, \frac{S^{(n)}(s)}{n!} \binom{n}{k}\, \bar V(s)^{n-k} \left( \int_M H(s,s')\, S[V(s')]\, ds' \right)^{\!k} \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{n} (-1)^{n+k}\, \frac{S^{(n)}(s)}{n!} \binom{n}{k}\, \bar V(s)^{n-k} \int_M \!\cdots\! \int_M H(s,s_1)\, H(s,s_2) \cdots H(s,s_k)\; S[V(s_1)]\, S[V(s_2)] \cdots S[V(s_k)]\; ds_1 \cdots ds_k \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{n} (-1)^{n+k}\, \frac{S^{(n)}(s)}{n!} \binom{n}{k}\, \bar V(s)^{n-k} \int_M \!\cdots\! \int_M \prod_{j=1}^{k} H(s,s_j)\, S[V(s_j)]\, ds_j.
\end{aligned}
\]

By eliminating the nonlinearities S[ V(s j )] using the power series above again, we get

\[
S[V(s)] = \sum_{n=0}^{\infty} \sum_{k=0}^{n} (-1)^{n+k}\, \frac{S^{(n)}(s)}{n!} \binom{n}{k}\, \bar V(s)^{n-k} \int_M \!\cdots\! \int_M \prod_{j=1}^{k} H(s,s_j) \sum_{n_j=0}^{\infty} \frac{S^{(n_j)}(s_j)}{n_j!}\, \bigl(V(s_j) - \bar V(s_j)\bigr)^{n_j}\, ds_j.
\]

Inserting this expression into Eq. (11) leads to a generalized Volterra series

\[
V(s) = H_0(s) + \sum_{m=1}^{\infty} \frac{1}{m!} \int_M \!\cdots\! \int_M H_m(s, s_1, \ldots, s_m)\, V(s_1) \cdots V(s_m)\, ds_1 \cdots ds_m
\]
(21)

with a sequence of integral kernels $H_m$ that can be read off after some further tedious rearrangements [44].

One-level hierarchy

In a previous work, we have derived a one-level hierarchy of stable heteroclinic sequences [44], which we briefly recapitulate here. We assume a family of $n$ stationary states $V_k(x)$, $1 \le k \le n$, that are to be connected along a heteroclinic sequence. Each state is assumed to be realized by a population in the neural field governed by (10), characterized by a population activity $\alpha_k(t) \in [0, 1]$. Then the overall field quantity is obtained through an order parameter expansion [15, 45]

\[
V(x,t) = \sum_{k=1}^{n} \alpha_k(t)\, V_k(x).
\]
(22)

These population amplitudes result from a winnerless competition in a generalized Lotka-Volterra system approximating a Wilson-Cowan model [46, 47]

\[
\frac{d\xi_k(t)}{dt} = \xi_k(t) \left( \sigma_k - \sum_{j=1}^{n} \rho_{kj}\, \xi_j(t) \right), \qquad \alpha_k(t) = \frac{\xi_k(t)}{\sigma_k}
\]
(23)

with growth rates $\sigma_k > 0$ and interaction weights $\rho_{kj} > 0$, $\rho_{kk} = 1$, that are trained by the algorithm of [27] and [28] for the desired sequence of transitions.
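For illustration, a minimal Euler simulation of Eq. (23) with an assumed cyclic (May-Leonard type) weight matrix, not the trained weights of [27, 28], already produces winnerless competition: each population transiently dominates before handing activity on to the next saddle:

```python
import numpy as np

# Winnerless competition, Eq. (23), with an illustrative cyclic weight matrix:
# rho_kk = 1; each population weakly excites its successor (0.5) and strongly
# inhibits its predecessor (2.0). These numbers are assumptions.
sigma = np.array([1.0, 1.0, 1.0])
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

dt, T = 0.01, 100.0
xi = np.array([1.0, 0.05, 0.01])             # start near the first saddle
traj = np.empty((int(T / dt), 3))
for i in range(traj.shape[0]):
    traj[i] = xi
    xi = xi + dt * xi * (sigma - rho @ xi)   # Euler step of Eq. (23)

alpha = traj / sigma     # population amplitudes alpha_k = xi_k / sigma_k
# Time indices at which each population peaks reveal the sequential order:
print(np.argmax(traj, axis=0))
```

The trajectory dwells ever longer near each saddle before switching, the hallmark of a stable heteroclinic sequence.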

For the following construction, we also assume that the system of stationary states $\{V_k(x)\}$ is linearly independent, such that there is a bi-orthogonal system $\{V_k^+(x)\}$ obeying

\[
\int_{\Omega} V_j^+(x)\, V_k(x)\, dx = \delta_{jk}.
\]
(24)
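The role of the bi-orthogonal system is readily checked numerically: constructing $V_k^+$ from the Gram matrix of assumed (Gaussian) patterns, the projection of Eq. (25) recovers the amplitudes of any superposition (22) exactly:

```python
import numpy as np

dx = 1.0 / 128
x = np.arange(128) * dx

# Assumed, linearly independent Gaussian states V_k (illustrative choice).
V = np.stack([np.exp(-(x - c) ** 2 / (2 * 0.06 ** 2)) for c in (0.25, 0.5, 0.75)])

# Bi-orthogonal system from the Gram matrix: dx * <V_j^+, V_k> = delta_jk.
G = dx * V @ V.T
Vplus = np.linalg.solve(G, V)

# A field superposed as in Eq. (22) ...
alpha_true = np.array([0.2, 0.7, 0.4])
field = alpha_true @ V

# ... is decomposed again by the projection of Eq. (25).
alpha_rec = dx * Vplus @ field
print(alpha_rec)
```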

Then, we obtain from (22)

\[
\int_{\Omega} V_j^+(x)\, V(x,t)\, dx = \sum_{k=1}^{n} \alpha_k(t) \int_{\Omega} V_j^+(x)\, V_k(x)\, dx = \sum_{k=1}^{n} \alpha_k(t)\, \delta_{jk}
\]

and hence

\[
\alpha_j(t) = \int_{\Omega} V_j^+(x)\, V(x,t)\, dx, \qquad \xi_j(t) = \sigma_j \int_{\Omega} V_j^+(x)\, V(x,t)\, dx.
\]
(25)

Next, we take the derivative of (22) with respect to time $t$, exploiting (23):

\[
\frac{\partial V(x,t)}{\partial t} = \sum_k \frac{1}{\sigma_k} \frac{d\xi_k}{dt}\, V_k(x) = \sum_k \frac{1}{\sigma_k}\, \xi_k \left( \sigma_k - \sum_j \rho_{kj}\, \xi_j \right) V_k(x) = \sum_k \xi_k\, V_k(x) - \sum_{k,j} \frac{\rho_{kj}}{\sigma_k}\, \xi_k\, \xi_j\, V_k(x).
\]
(26)

Adding (22), we obtain

\[
\frac{\partial V(x,t)}{\partial t} + V(x,t) = \sum_k \xi_k\, V_k(x) - \sum_{k,j} \frac{\rho_{kj}}{\sigma_k}\, \xi_k\, \xi_j\, V_k(x) + \sum_k \frac{1}{\sigma_k}\, \xi_k\, V_k(x) = \sum_k \left( 1 + \frac{1}{\sigma_k} \right) \xi_k\, V_k(x) - \sum_{k,j} \frac{\rho_{kj}}{\sigma_k}\, \xi_k\, \xi_j\, V_k(x)
\]
(27)

from which we eliminate all occurrences of ξ by means of (25). This yields

\[
\begin{aligned}
\frac{\partial V(x,t)}{\partial t} + V(x,t) &= \sum_k \left( 1 + \frac{1}{\sigma_k} \right) \sigma_k \int_{\Omega} V_k^+(y)\, V(y,t)\, V_k(x)\, dy - \sum_{k,j} \frac{\rho_{kj}}{\sigma_k}\, \sigma_k\, \sigma_j \int_{\Omega} \int_{\Omega} V_k^+(y)\, V(y,t)\, V_j^+(z)\, V(z,t)\, V_k(x)\, dy\, dz \\
&= \int_{\Omega} \sum_k (\sigma_k + 1)\, V_k^+(y)\, V_k(x)\, V(y,t)\, dy - \int_{\Omega} \int_{\Omega} \sum_{k,j} \rho_{kj}\, \sigma_j\, V_k^+(y)\, V_j^+(z)\, V_k(x)\, V(y,t)\, V(z,t)\, dy\, dz.
\end{aligned}
\]
(28)

Moreover, the nonlinear spatial integral transformation in the Amari equation (1) may be written as a generalized Volterra series (21)

\[
J[V(\cdot,t)](x) = K_0(x) + \sum_{m=1}^{\infty} \frac{1}{m!} \int_{\Omega} \!\cdots\! \int_{\Omega} K_m(x, y_1, y_2, \ldots, y_m)\, V(y_1,t)\, V(y_2,t) \cdots V(y_m,t)\, dy_1\, dy_2 \cdots dy_m.
\]
(29)

Comparison of the first three terms with (28) yields the result (9).

Multi-level hierarchy of heteroclinic sequences

Here we assume that the SHS are given by (possibly infinitely many) generalized Lotka-Volterra systems

\[
\tau^{(\nu)}\, \frac{d\alpha^{(\nu)}_{k_\nu}(t)}{dt} = \alpha^{(\nu)}_{k_\nu}(t) \left( \sigma^{(\nu)}_{k_\nu} - \sum_{j_\nu=1}^{n_\nu} \rho^{(\nu)}_{k_\nu j_\nu}(t)\, \alpha^{(\nu)}_{j_\nu}(t) \right)
\]
(30)

with growth rates $\sigma^{(\nu)}_{k_\nu} > 0$ and time-dependent interaction weights $\rho^{(\nu)}_{k_\nu j_\nu}(t) > 0$, $\rho^{(\nu)}_{k_\nu k_\nu}(t) = 1$. Again, $\nu$ indicates the level within the hierarchy, $n_\nu$ is the number of stationary states and $\tau^{(\nu)}$ represents the characteristic time scale of that level. Levels are temporally well separated, $\tau^{(\nu)} \ll \tau^{(\nu+1)}$ [35].

Following [34], the population amplitudes $\alpha^{(\nu)}_{k_\nu}(t)$ of level $\nu$ prescribe the control parameters of the next higher level $\nu - 1$ by virtue of

\[
\rho^{(\nu)}_{k_\nu j_\nu}(t) = \sum_{l_{\nu+1}=1}^{m_{\nu+1}} \alpha^{(\nu+1)}_{l_{\nu+1}}(t)\; r^{(\nu)}_{k_\nu j_\nu l_{\nu+1}}
\]
(31)

where the constants $r^{(\nu)}_{k_\nu j_\nu l_{\nu+1}}$ serve as parameter templates.

Finally, the amplitudes $\alpha^{(\nu)}_{k_\nu}(t)$ recruit a hierarchy of modes $V^{(\nu)}_{k_\nu}(x)$ in the neural field such that

\[
V(s) = V(x,t) = \sum_{\nu=1}^{\infty} \sum_{k_\nu=1}^{n_\nu} \alpha^{(\nu)}_{k_\nu}(t)\, V^{(\nu)}_{k_\nu}(x).
\]
(32)

Assuming a system of bi-orthogonal modes $V^{(\nu)+}_{k_\nu}(x)$ with

\[
\int_{\Omega} V^{(\nu)+}_{k_\nu}(x)\, V^{(\nu)}_{j_\nu}(x)\, dx = \delta_{k_\nu j_\nu}
\]
(33)

we obtain from (32)

\[
\alpha^{(\nu)}_{k_\nu}(t) = \int_{\Omega} V^{(\nu)+}_{k_\nu}(x)\, V(x,t)\, dx.
\]
(34)

Next we convert the Lotka-Volterra differential equations (30) into their corresponding integral equations by formally integrating

\[
\alpha^{(\nu)}_{k_\nu}(t) = \frac{1}{\tau^{(\nu)}} \int_{-\infty}^{t} \alpha^{(\nu)}_{k_\nu}(t_\nu) \left( \sigma^{(\nu)}_{k_\nu} - \sum_{j_\nu=1}^{n_\nu} \rho^{(\nu)}_{k_\nu j_\nu}(t_\nu)\, \alpha^{(\nu)}_{j_\nu}(t_\nu) \right) dt_\nu
\]
(35)

and obtain a recursion equation after inserting (31)

\[
\alpha^{(\nu)}_{k_\nu}(t) = \frac{1}{\tau^{(\nu)}} \int_{-\infty}^{t} \alpha^{(\nu)}_{k_\nu}(t_\nu) \left( \sigma^{(\nu)}_{k_\nu} - \sum_{j_\nu=1}^{n_\nu} \sum_{l_{\nu+1}=1}^{m_{\nu+1}} r^{(\nu)}_{k_\nu j_\nu l_{\nu+1}}\, \alpha^{(\nu+1)}_{l_{\nu+1}}(t_\nu)\, \alpha^{(\nu)}_{j_\nu}(t_\nu) \right) dt_\nu.
\]
(36)

Eventually, we insert all results of the recursion of (36) into the field equation (32) and eliminate the amplitudes $\alpha^{(\nu)}_{k_\nu}(t)$ by means of (34) in order to obtain a series expansion of $V(s)$ in terms of spatiotemporal integrals. This series must equal the generalized Volterra series (21), such that the kernel $H(s,s')$ can be reconstructed to solve the neural field inverse problem [44]. For more mathematical details on the derivation of the two-level hierarchy, see Additional file 1.
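A two-level toy version of the hierarchy of Eqs. (30) and (31) can be sketched directly: a slow level gates the interaction weights of a fast level through parameter templates. All numerical values below are illustrative assumptions, not values from the article:

```python
import numpy as np

# Two-level hierarchy, Eqs. (30)-(31): the slow level (tau2) gates the
# interaction weights of the fast level (tau1) through templates r.
tau1, tau2 = 1.0, 20.0
sigma1 = np.ones(3)
sigma2 = np.ones(3)

# Cyclic winnerless-competition matrix (assumed), rho_kk = 1.
cycle = np.array([[1.0, 2.0, 0.5],
                  [0.5, 1.0, 2.0],
                  [2.0, 0.5, 1.0]])

# Templates r[l]: slow population 0 imposes the forward cycle on the fast
# level, population 1 the reversed cycle, population 2 the forward one again.
r = np.stack([cycle, cycle.T, cycle])

rho2 = cycle                        # slow level: winnerless competition itself

dt, steps = 0.01, 20000
xi1 = np.array([1.0, 0.05, 0.01])   # fast level near its first saddle
xi2 = np.array([1.0, 0.05, 0.01])   # slow level near its first saddle
for _ in range(steps):
    alpha2 = xi2 / sigma2                    # slow amplitudes alpha = xi / sigma
    rho1 = np.tensordot(alpha2, r, axes=1)   # Eq. (31): templates weighted by alpha2
    xi1 = xi1 + (dt / tau1) * xi1 * (sigma1 - rho1 @ xi1)
    xi2 = xi2 + (dt / tau2) * xi2 * (sigma2 - rho2 @ xi2)

print(xi1, xi2)   # both levels remain positive and bounded
```

While the slow level dwells near one of its saddles, the fast level runs the heteroclinic cycle prescribed by the corresponding template; a slow switch re-trains the fast cycle on the fly.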