Chaotic Behavior in Diffusively Coupled Systems

We study emergent oscillatory behavior in networks of diffusively coupled nonlinear ordinary differential equations. Starting from a situation where each isolated node possesses a globally attracting equilibrium point, we give, for an arbitrary network configuration, general conditions for the existence of the diffusive coupling of a homogeneous strength which makes the network dynamics chaotic. The method is based on the theory of local bifurcations we develop for diffusively coupled networks. We, in particular, introduce the class of the so-called versatile network configurations and prove that the Taylor coefficients of the reduction to the center manifold for any versatile network can take any given value.


Introduction
Coupled dynamical systems play a prominent role in biology [8], chemistry [11], physics and other fields of science [21]. Understanding the emergent dynamics of such systems is a challenging problem that depends strongly on the underlying interaction structure [14,16,12,19,9].
In the early fifties, Turing proposed emergent oscillatory behavior due to diffusive interaction as a model for morphogenesis [23]. We note that weak coupling of globally stable individual systems cannot alter the stability of the homogeneous regime, which is the globally attracting state. At the same time, no matter what the individual dynamics are, strong diffusive coupling by itself stabilizes the homogeneous regime. Therefore the idea that diffusive coupling of intermediate strength can create non-trivial collective behavior is quite paradoxical. However, in the mid-seventies, Smale [20] proposed an example of diffusion-driven oscillations. He considered two diffusively coupled fourth-order differential equations which by themselves have globally asymptotically stable equilibrium points. Once the diffusive interaction is strong enough, the coupled system exhibits oscillatory behavior. Smale posed the problem of finding conditions under which diffusively coupled systems oscillate.
Tomberg and Yakubovich [22] proposed a solution to this problem for the diffusive interaction of two systems with scalar nonlinearity. For networks, Pogromsky, Glad and Nijmeijer [18] showed that diffusion-driven oscillations can result from an Andronov-Hopf bifurcation. Moreover, they presented conditions ensuring the emergence of oscillations for general graphs. While this provides a good picture of the instability leading to periodic oscillations, there is evidence that diffusive coupling may also lead to chaotic oscillations. Indeed, Kocarev and Janic [10] provided numerical evidence that two isolated Chua circuits having globally stable fixed points may exhibit chaotic behavior when diffusively coupled. Along the same lines, Perlikowski and co-authors [17] investigated numerically the dynamics of rings of unidirectionally coupled Duffing oscillators. Starting from the situation where each oscillator has an exponentially stable equilibrium point, once the oscillators are coupled akin to diffusion the authors found a great variety of phenomena, such as rotating waves, the birth of periodic dynamics, as well as chaotic dynamics.
Drubi, Ibanez and Rodriguez [4] studied two diffusively coupled Brusselators. Starting from a situation where the isolated systems have a globally stable fixed point, they proved that the unfolding of the diffusively coupled system can display a homoclinic loop with an invariant set of positive entropy.
In this paper, we provide general conditions for diffusively coupled identical systems to exhibit chaotic oscillations. We describe necessary and sufficient conditions (the so-called skewness condition) on the linearization matrix at an exponentially stable equilibrium point of the isolated system such that for any network of such systems there exists a diffusive coupling matrix for which the network has a nilpotent singularity and thus a nontrivial center manifold. When the network structure satisfies an extra condition, which we call versatility, we show that the Taylor coefficients of the vector field on the center manifold are in general position. This allows us to employ the theory of bifurcations of nilpotent singularities due to Arneodo, Coullet, Spiegel and Tresser [2] and Ibanez and Rodríguez [7], and to show that when the isolated system is at least four-dimensional, invariant sets of positive entropy (i.e., chaos) emerge in such networks.
The paper is organized as follows. In Section 2, we formulate the main theorem, introduce basic concepts from graph theory, and introduce the notion of versatility. In Section 3, we present examples and constructions of ρ-versatile graphs and illustrate some of the other concepts appearing in the main theorem. In Section 4, we show the existence of a positive-definite coupling matrix D that yields a nilpotent singularity in the network system. In Section 5, we discuss the stability of the center manifold. Finally, in Section 6, we prove that chaotic behavior emerges in the coupled system by investigating the dynamics on the center manifold.

The model
We consider ordinary differential equations ẋ = f(x) with f ∈ C^∞(U, R^n), n ∈ N, for some open set U ⊆ R^n. We assume that f has an exponentially stable fixed point in U; with no loss of generality we place the origin of coordinates at this point. We study a network of such systems coupled together according to a given graph structure by means of a diffusive interaction. Namely, we consider the following equation:

ẋ_i = f(x_i) + α Σ_{j=1}^N w_ij D (x_j − x_i), i = 1, …, N, (1)

where α > 0 is the coupling strength and W = (w_ij) is the adjacency matrix of the graph; thus w_ij = 1 if nodes i and j are connected and zero otherwise. Moreover, D is a positive-definite matrix (that is, x^T D x > 0 for all non-zero vectors x).
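For concreteness, the right-hand side of Equation (1) can be sketched numerically as follows; the function `network_rhs`, the choice of f, and all matrices below are illustrative stand-ins rather than part of the model.

```python
import numpy as np

def network_rhs(X, f, W, D, alpha):
    """Right-hand side of Equation (1):
    dx_i/dt = f(x_i) + alpha * D @ sum_j w_ij (x_j - x_i),
    where row i of X holds the state x_i of node i."""
    N, _ = X.shape
    dX = np.empty_like(X)
    for i in range(N):
        coupling = sum(W[i, j] * (X[j] - X[i]) for j in range(N))
        dX[i] = f(X[i]) + alpha * D @ coupling
    return dX
```

At any homogeneous state X with all rows equal to a fixed point of f, the coupling term vanishes and the right-hand side is zero, illustrating the persistence of the homogeneous regime discussed below.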
The homogeneous regime x = 0 persists for every value of the coupling strength α. It keeps its stability for small α and is, typically, stable at sufficiently large α. However, at intermediate values of the coupling strength, the stability of the homogeneous regime can be lost. Our goal is to investigate the accompanying bifurcations. The difficulty is that the structure of system (1) is quite rigid: all network nodes are the same (they are described by the same function f) and the diffusive coupling αD is the same for any pair of nodes. Therefore, the genericity arguments standard in bifurcation theory cannot be readily applied and must be re-examined.

Informal statement of main results
Our main goal is to give conditions for the emergence of non-periodic dynamics in system (1). Denote by A = Df(0) the n × n linearization matrix of the individual uncoupled system at zero. Recall that a matrix A is Hurwitz when all its eigenvalues have strictly negative real parts. Our main result can be stated as follows.
Suppose that in some orthogonal basis the Hurwitz matrix A has m positive entries on the diagonal. Then, there exists a positive-definite matrix D such that the linearization of system (1) at the homogeneous equilibrium at zero has a zero eigenvalue of multiplicity at least m for a certain value of the coupling parameter α > 0. If the network satisfies a condition we call versatility, then for an appropriate choice of the nonlinearity of f the corresponding center manifold has dimension precisely m, and the Taylor coefficients of the restriction of the system to the center manifold can take on any prescribed values.
The last statement means that the bifurcations of the homogeneous state of a versatile network follow the same scenarios as in general dynamical systems. Applying the results on triple instability [4,7], we obtain the following result.
For n ⩾ 4, for any generic 2-parameter family of nonlinearities f and any versatile network graph, one can find a positive-definite matrix D such that the homogeneous state of the coupled system (1) has a triple instability at a certain value of the coupling strength α, leading to chaotic dynamics in a certain region of parameter values.
The condition on the Jacobian of the isolated dynamics can be understood geometrically as follows. We write ẋ = f(x) = Ax + O(|x|²). We claim that if a nonzero vector x_0 ∈ R^n exists for which ⟨x_0, Ax_0⟩ > 0, then there are points arbitrarily close to the origin whose forward orbits have their Euclidean distance to the origin increasing for some time, before coming closer to the (stable) origin again. To see why, consider ∥x(t)∥² = ⟨x(t), x(t)⟩; it follows that (d/dt)∥x∥² = 2⟨Ax, x⟩ + O(|x|³), so ⟨Ax, x⟩ > 0 implies that ∥x∥² initially grows.
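This transient-growth mechanism is easy to observe numerically. The following sketch uses a hypothetical non-normal Hurwitz matrix A (both eigenvalues equal to −1) and a vector x_0 with ⟨x_0, Ax_0⟩ > 0, and checks that ∥x∥ initially increases along the linear flow:

```python
import numpy as np

# A Hurwitz but non-normal matrix: both eigenvalues are -1, yet
# <x0, A x0> > 0 for x0 = (1, 1).
A = np.array([[-1.0, 10.0],
              [ 0.0, -1.0]])
x0 = np.array([1.0, 1.0])

# d/dt ||x||^2 at t = 0 equals 2 <A x0, x0> for the linear flow x' = A x.
growth_rate = 2 * x0 @ A @ x0

# One small explicit Euler step: the norm initially increases.
x1 = x0 + 1e-3 * (A @ x0)
```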
The property of versatility holds for graphs with heterogeneous degrees; the simplest example is a star network. In a sense, versatility means that the network is not very symmetric. Given a graph, one verifies whether the versatility property holds by evaluating the eigenvectors of the graph's Laplacian matrix, so it is an effectively verifiable property.
It is possible that a similar theory can be developed for the Andronov-Hopf bifurcation in diffusively coupled networks (an analysis of diffusion-driven Andronov-Hopf bifurcations was undertaken in [18], but the question of genericity of the restriction of the network system to the center manifold was not addressed there).
We also point out that symmetry is often instrumental in explaining and predicting anomalous behavior in network dynamical systems [1,14,15].
For a network of just two symmetrically coupled systems, the corresponding graph Laplacian is not versatile; yet the emergence of chaos via the triple instability has been established in [4] for a system of two diffusively coupled Brusselators. In general, we do not know when genericity of the Taylor coefficients of the center-manifold reduced vector field holds if the graph is not versatile, or when graph symmetries impose conditions on the dynamics that forbid the existence of limiting sets of positive entropy.

Main results
We start by introducing the basic concepts involved in the set-up of the problem.

Graphs
A graph G is an ordered pair (V, E), where V is a non-empty set of vertices and E is a set of edges connecting the vertices. We assume both to be finite and the graph to be undirected. The order of the graph G is |V| = N, its number of vertices, and its size is |E|, its number of edges. We will not consider graphs with self-loops. The degree of a vertex is the number of edges connected to it.
In other words, the degree of vertex i is

k_i = Σ_{j=1}^N w_ij, for i = 1, …, N.

We also define K = diag{k_1, …, k_N} to be the diagonal matrix of vertex degrees. We only consider undirected graphs G, meaning that a vertex i is connected to a vertex j if and only if j is connected to i. Thus, the adjacency matrix W is symmetric. In this context there is another important matrix related to the graph G, the well-known discrete Laplacian matrix L_G. It is defined by

L_G = K − W,

so that each entry l_ij of L_G can be written as

l_ij = k_i δ_ij − w_ij,

where δ_ij is Kronecker's delta. The matrix L_G provides important information about connectivity and synchronization of the network. It follows from the Gershgorin disk theorem [5] that L_G is positive semi-definite, and thus its eigenvalues can be ordered as

0 = λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_N,

and we let {v_1, …, v_N} be the corresponding eigenvectors. We assume the network is connected, which implies that the eigenvalue λ_1 = 0 is simple. We are interested in a well-behaved class of graphs G whose structure induces a special property of the associated Laplacian matrix: the existence of an eigenvector, corresponding to a simple eigenvalue of L_G, for which the sum of certain powers of its coordinates is non-vanishing. To this end, we define:

Definition 1 (ρ-versatile graphs) Let G = (V, E) be a graph and ρ ∈ N a positive integer. We say that G is ρ-versatile for the eigenvalue-eigenvector pair (λ, v) with λ > 0 if the Laplacian matrix L_G has a simple eigenvalue λ with corresponding eigenvector v = (ν_1, …, ν_N) satisfying

Σ_{i=1}^N ν_i^{ρ+1} ≠ 0.

Note that any eigenvector v = (ν_1, …, ν_N) for a non-zero eigenvalue necessarily satisfies Σ_{i=1}^N ν_i = 0, because v is orthogonal to the eigenvector (1, …, 1) for the eigenvalue 0.
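As a sketch of how ρ-versatility can be checked in practice (taking the non-vanishing power sum of Definition 1 to be Σ_i ν_i^{ρ+1} ≠ 0, which is our reading of the condition), consider the star graph, whose largest Laplacian eigenvalue N is simple:

```python
import numpy as np

def star_laplacian(N):
    """Laplacian of the star graph: node 0 is the hub, nodes 1..N-1 are leaves."""
    W = np.zeros((N, N))
    W[0, 1:] = W[1:, 0] = 1.0
    K = np.diag(W.sum(axis=1))
    return K - W

N, rho = 5, 2
L = star_laplacian(N)
lam, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
v = vecs[:, -1]                      # eigenvector for the largest eigenvalue

# lambda_max = N is simple for the star, and the power sum is non-zero,
# since v is proportional to (N-1, -1, ..., -1).
is_simple = np.sum(np.isclose(lam, lam[-1])) == 1
versatile_sum = np.sum(v ** (rho + 1))
```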

Parametrizations
We show that a network of diffusively coupled stable systems can display a wide variety of dynamical behavior, including the onset of chaos. As the coupling strength α increases, a non-trivial center manifold can emerge, with no general restrictions on the Taylor coefficients of the reduced dynamics.
Note that we may alternatively write Equation (1) in terms of the Laplacian:

ẋ_i = f(x_i) − α Σ_{j=1}^N l_ij D x_j, i = 1, …, N. (5)

Let X := col(x_1, …, x_N) denote the vector formed by stacking the x_i in a single column vector. In the same way we define F(X) := col(f(x_1), …, f(x_N)). We obtain the compact form of equations (1) and (5) given by

Ẋ = F(X) − α (L_G ⊗ D) X,

where ⊗ stands for the Kronecker product. In order to analyze systems of the form (1), we allow f to depend on a parameter ε taking values in some open neighborhood of the origin Ω ⊆ R^d. For simplicity, we assume the fixed point at the origin persists:

f(0; ε) = 0 for all ε ∈ Ω.

We assume the origin to be exponentially stable for ε = 0, from which stability follows for sufficiently small ε as well. Note that the non-linear diagonal map F now depends on the parameter ε as well.
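The equivalence between the node-wise form (1) and the compact Kronecker form can be verified on a small hypothetical example; the graph, D, α and f below are arbitrary illustrative choices:

```python
import numpy as np

# A path graph on 3 nodes with 2-dimensional node dynamics.
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W
D = np.array([[2., 0.], [0., 1.]])
alpha = 0.7
X = np.array([[1., 2.], [3., -1.], [0., 0.5]])   # row i = x_i

f = lambda x: -x + x ** 3                        # any smooth f

# Node-wise form of Equation (1).
nodewise = np.array([
    f(X[i]) + alpha * D @ sum(W[i, j] * (X[j] - X[i]) for j in range(3))
    for i in range(3)
])

# Compact Kronecker form: X' = F(X) - alpha (L ⊗ D) X.
stacked = np.concatenate([f(x) for x in X]) - alpha * np.kron(L, D) @ X.ravel()
```

The two evaluations agree because Σ_j w_ij (x_j − x_i) = −Σ_j l_ij x_j for each node i.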
We start with our working definition of center manifold reduction.

Definition 2 Let
be a family of vector fields on R^n, parameterized by a variable ε in an open neighborhood of the origin Ω ⊆ R^d. Assume that H(0; ε) = 0 for all ε ∈ Ω, and denote by E^c ⊆ R^n the center subspace of the Jacobian D_x H(0; 0) in the R^n-direction. A (local) parameterized center manifold of the system (8) is a (local) center manifold of the unparameterized suspended system on R^n × Ω, given by

ẋ = H(x; ε), ε̇ = 0, (9)

for x ∈ R^n and ε ∈ Ω. We say that the parameterized center manifold is of dimension dim(E^c), and is parameterized by d variables. Under the assumptions on H, the center subspace of H at the origin is equal to E^c × R^d. We can show that the dynamics on the center manifold of Equation (9) is conjugate to that of a locally defined system on E^c × Ω, where the conjugation respects the constant-ε fibers. The map R satisfies R(0; ε) = 0 for all ε for which this local expression is defined, and the linearization of R at the origin agrees with the restriction of D_x H(0; 0) to E^c. We will refer to R : E^c × Ω → E^c as a parameterized reduced vector field of H.
In the definition above, the constant and linear terms of the parameterized reduced vector field R are given. Motivated by this, we will write H^{[2,ρ]} for any map H to denote the non-constant, non-linear terms in the Taylor expansion of H around the origin, up to terms of order ρ. Given vector spaces W and W′, we will use P^l_2(W; W′) to denote the linear space of polynomial maps from W to W′ with terms of degree 2 through l. It follows that H^{[2,l]} ∈ P^l_2(W; W′) for H : W → W′. We are interested in the situation where the domain of H involves some parameter space Ω, in which case H^{[2,ρ]} involves all non-constant, non-linear terms up to order ρ in both types of variables (parameter and phase space). For instance, if H is a map from R × Ω to R with Ω ⊆ R, then H^{[2,3]}(x; ε) involves the terms a_1 x², a_2 xε, a_3 ε², a_4 x³, a_5 x²ε, a_6 xε² and a_7 ε³, with some constants a_i. Note that a condition on H might put restraints on H^{[2,ρ]} as well. For instance, if H(0; ε) = 0 for all ε ∈ Ω, then H^{[2,3]}(x; ε) does not involve the terms ε² and ε³.
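The truncation H^{[2,ρ]} can be illustrated symbolically; the helper `taylor_block` below is a hypothetical utility for polynomial maps, not notation from the paper:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

def taylor_block(H, variables, low=2, high=3):
    """Return H^{[low, high]}: the monomials of the polynomial H whose
    total degree in `variables` lies between low and high."""
    poly = sp.Poly(sp.expand(H), *variables)
    kept = sp.Integer(0)
    for degs, coeff in poly.terms():
        if low <= sum(degs) <= high:
            kept += coeff * sp.Mul(*[v ** d for v, d in zip(variables, degs)])
    return sp.expand(kept)

# A map with constant, linear and higher-order terms in (x, epsilon):
H = 1 + x + eps + 2*x**2 + 3*x*eps + x**3 - eps**2 + 5*x**4
```

Applied to H above, the truncation keeps 2x² + 3xε + x³ − ε², discarding the constant, the linear terms, and the order-4 term.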

Main theorems
We now formulate the main theorem, along with an important corollary.
Theorem 3 (Main Theorem) For any α ≥ 0, consider the ε-family of network dynamical systems given by

ẋ_i = f(x_i; ε) + α Σ_{j=1}^N w_ij D (x_j − x_i), i = 1, …, N. (11)

Denote by A = D_x f(0; 0) the Jacobian of the isolated dynamics. If there exist m mutually orthogonal vectors x_1, …, x_m such that ⟨x_i, A x_i⟩ > 0, then there exists a positive-definite matrix D together with a number α* > 0 such that the system of Equation (11) has a local parameterized center manifold of dimension at least m for α = α*. Suppose, moreover, that the graph G is ρ-versatile for the pair (λ, v). After an arbitrarily small perturbation to A if needed, there exists a positive-definite matrix D and a number α* > 0 such that the following holds:
1. The system of Equation (11) has a local parameterized center manifold of dimension exactly m for α = α*.
2. The Taylor coefficients of order 2 through ρ of the corresponding parameterized reduced vector field can take any prescribed values for an appropriate choice of the nonlinear part of f.
The above result guarantees the existence of the center manifold and the reduced vector field.When the dimension of the isolated dynamics is at least 4, the reduced vector field can exhibit invariant sets of positive entropy as the following result shows.

Corollary 4 (Chaos)
Assume the conditions of Theorem 3 hold for m = 3 and ρ = 2. Then, in a generic 3-parameter system we have the emergence of chaos through the formation of a Shilnikov loop on the center manifold. In particular, chaos can form this way in a network of diffusively coupled 4-dimensional nodes.
In the Appendix, we show that the conditions of Theorem 3 are natural, by constructing multiple classes of networks that are ρ-versatile for any ρ ∈ N, as well as by giving examples of matrices A that satisfy the conditions of the theorem.In Subsection 6.1, we present a geometric way of constructing ρ-versatile graphs, by means of the so-called complement graph.In Subsection 6.2, we then show using direct estimates that star graphs satisfy ρ-versatility.Finally, in Subsection 6.3, we present examples of matrices that satisfy the conditions of Theorem 3. In particular, we will see that being Hurwitz is no obstruction.

Proof of main theorem
In this section we present the proof of Theorem 3. We start by analyzing the linearized system in Subsection 3.1, after which we perform center manifold reduction and have a detailed look at the reduced vector field in Subsection 3.2.

Linearization
In this subsection we investigate the linear part of the system from Theorem 3. Writing A ∈ R^{n×n} for the Jacobian of f at the origin, we see that the linearization of Equation (12) at the origin is given by

Id_N ⊗ A − α L_G ⊗ D. (13)

An important observation is the following: if v is an eigenvector of L_G with eigenvalue λ ∈ R, then the linearization above sends a vector v ⊗ x with x ∈ R^n to

v ⊗ (A − αλD)x. (14)

In other words, the linear subspace v ⊗ R^n is kept invariant by the linear map of Equation (13), and Equation (14) tells us that the restriction of the linearization (13) to v ⊗ R^n is conjugate to A − αλD. Recall that eigenvectors v_p and v_q of L_G corresponding to distinct eigenvalues can be chosen to be orthonormal. Finally, we write

V_p := v_p ⊗ R^n, p = 1, …, N,

for the corresponding linear subspaces of R^N ⊗ R^n. We thus have a direct sum decomposition

R^N ⊗ R^n = V_1 ⊕ ⋯ ⊕ V_N,

where each component is respected by the linearization (13), with the restriction to V_p conjugate to A − αλ_p D : R^n → R^n. We see that the spectrum of the linearization (13) is given by the union of the spectra of the maps A − αλ_p D, with a straightforward relation between the respective algebraic and geometric multiplicities. This observation motivates the main result of this subsection, Proposition 6 below.
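This description of the spectrum is easy to check numerically: the eigenvalues of Id_N ⊗ A − αL_G ⊗ D coincide with the union of the eigenvalues of the blocks A − αλ_p D. The matrices below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical small instance: 2-dim nodes on a path graph of 3 nodes.
A = np.array([[0., 1.], [-2., -3.]])
D = np.array([[1., 0.5], [0., 2.]])
alpha = 0.4
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W

# Full linearization (13): I_N ⊗ A - alpha * L ⊗ D.
J = np.kron(np.eye(3), A) - alpha * np.kron(L, D)

lam = np.linalg.eigvals(L)           # Laplacian eigenvalues lambda_p
block_spec = np.concatenate([np.linalg.eigvals(A - alpha * lp * D) for lp in lam])
full_spec = np.linalg.eigvals(J)
```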
In what follows, we denote by M n (R) the space of n by n matrices over the field R. We write ⟨x, y⟩ := x T y for the Euclidean inner product between vectors x, y ∈ R n .
We state a technical lemma. Suppose we are given a block matrix

M = [[M_11, M_12], [M_21, M_22]]

with blocks M_11, …, M_22. If M_22 is invertible then we may form the Schur complement of M, given by

M/M_22 := M_11 − M_12 M_22^{-1} M_21.

This expression has various useful properties. We are interested in the situation where M is symmetric, in which case we have:

Lemma 5 A symmetric block matrix M with M_22 invertible is positive-definite if and only if both M_22 and the Schur complement M/M_22 are positive-definite.

We can now state the main result of this subsection.

Proposition 6 Let A ∈ M_n(R). There exist m mutually orthogonal vectors x_1, …, x_m satisfying

⟨x_i, A x_i⟩ > 0 for i = 1, …, m, (23)

if and only if there exists a positive-definite matrix D such that A − D has at least m zero eigenvalues, counted with algebraic multiplicity.
Figure 1 shows an illustration of Proposition 6.

Remark 7
Note that any Hurwitz matrix A has a negative trace, as this number equals the sum of its eigenvalues.It follows that Equation (23) can then only hold when m < n, where n is the size of A. In this case, if our goal is to find a 3-dimensional center manifold for the network, we need 3 zero eigenvalues and so we must have at least n = 4.

Remark 8
The number of zero eigenvalues for A − D is directly connected with the number of mutually orthogonal vectors for Equation (23).Moreover, the positive-definite matrix D that we construct is in general not unique.Hence, if we have m such orthogonal vectors, then for each k ≤ m we might construct a different matrix D such that A − D has k zero eigenvalues.

Fig. 1. Spec(A − D): the spectrum of A − D, with a triple zero eigenvalue.

Proof of Proposition 6:
Suppose first that we have m mutually orthogonal vectors x_i such that for each i = 1, …, m we have

⟨x_i, A x_i⟩ > 0.

Note that we then also have ⟨A x_i, x_i⟩ > 0 for all i. We may re-scale the x_i by any non-zero factor, so we assume without loss of generality that ∥x_i∥ = 1 for all i. We start by constructing an auxiliary strictly upper-triangular (m × m)-matrix P = (p_{i,j}), where each entry p_{i,j} is defined by the rule:

p_{i,j} = ⟨x_i, A x_j⟩ + ⟨x_j, A x_i⟩, for all i < j; p_{i,j} = 0, for all i ≥ j.
We construct D by first defining it on the mutually orthogonal vectors x_1, …, x_m as

D x_1 = A x_1, D x_i = A x_i − Σ_{j<i} p_{j,i} x_j for i = 2, …, m, (26)

so that (A − D)x_1 = 0, (A − D)x_2 = p_{1,2} x_1, and so forth. This shows that the restriction of A − D to span(x_1, …, x_m) is nilpotent. Equation (26) can be rewritten as

D X = A X − X P, (27)

with X the (n × m)-matrix with columns given by the vectors x_1, …, x_m.
To complete our construction of D, we let y m+1 , . . ., y n ∈ R n be mutually orthogonal vectors of norm 1 such that y k ⊥ x i for all i = 1, . . ., m and k = m + 1, . . ., n.We define D on span(y m+1 , . . ., y n ) by simply setting Dy k = cy k for all k and some constant c > 0 that will be determined later.
To show that c can be chosen such that D is positive-definite, let z ∈ R^n be any non-zero vector and write

z = X a + Y b,

with Y the (n × (n − m))-matrix with columns the vectors y_k, and where a ∈ R^m, b ∈ R^{n−m} express the components of z with respect to the basis {x_1, …, x_m, y_{m+1}, …, y_n}. Note that we have

X^T X = Id_m, Y^T Y = Id_{n−m}, X^T Y = 0 (28)

by construction. We calculate

z^T D z = a^T (X^T A X − P) a + b^T Y^T A X a + c ∥b∥²,

where in the third step we have used Equation (27), and where we make use of the identities in (28). We see that D is positive-definite if the same holds for the block matrix

[[X^T A X − P, 0], [Y^T A X, c Id_{n−m}]].

Next, we claim that the (m × m)-matrix X^T A X − P is positive-definite. Indeed, by definition of P, the symmetric part of X^T A X − P equals diag(⟨x_1, A x_1⟩, …, ⟨x_m, A x_m⟩), which is a diagonal matrix and positive-definite by the hypothesis (23). We may thus apply Lemma 5 to the symmetric part of the block matrix above, so that for c > 0 sufficiently large this matrix, and hence D, is indeed positive-definite.
Conversely, suppose there exists a positive-definite matrix D such that A − D has m zero eigenvalues.
We will prove that m mutually orthogonal vectors x_1, …, x_m exist satisfying

⟨x_i, A x_i⟩ > 0 for i = 1, …, m.

By assumption, we may choose m linearly independent vectors y_1, …, y_m such that

(A − D) y_1 = 0, (A − D) y_i = ι_i y_{i−1} for i = 2, …, m, (32)

where ι_i ∈ {0, 1} for all i > 1. Next, we apply the Gram-Schmidt orthogonalization process to the vectors y_i. That is, we set

x_1 = y_1, x_i = y_i − Σ_{j<i} α_{i,j} x_j,

where each coefficient is given by α_{i,j} = ⟨y_i, x_j⟩ / ⟨x_j, x_j⟩ for j < i.
It follows that ⟨x_i, x_j⟩ = 0 whenever i ≠ j. Moreover, we see from Equation (32) that we may write

x_i = y_i + Σ_{j<i} β_{i,j} y_j and y_i = x_i + Σ_{j<i} β′_{i,j} x_j

for some coefficients β_{i,j}, β′_{i,j} ∈ R. We therefore have (A − D)x_1 = 0, and for 2 ≤ i ≤ m we find

(A − D) x_i = Σ_{j<i} γ_{i,j} x_j

for certain γ_{i,j} ∈ R. By orthogonality of the x_i we get

⟨x_i, (A − D) x_i⟩ = 0

for all i = 2, …, m (and likewise for i = 1). Finally, since D is positive-definite, it follows that

⟨x_i, A x_i⟩ = ⟨x_i, D x_i⟩ > 0,

which completes the proof. ■

Corollary 9
The proof of Proposition 6 tells us that the remaining eigenvalues of A − D may be assumed to have (large) negative real parts.
Proof: If m mutually orthogonal vectors x_1, …, x_m exist such that

⟨x_i, A x_i⟩ > 0 for all i,

then a positive-definite matrix D is constructed such that the restriction of A − D to span(x_1, …, x_m) is nilpotent. In particular, A − D maps the space span(x_1, …, x_m) into itself. It follows that the remaining eigenvalues of A − D are given by those of the 'other' diagonal block P_U (A − D)|_U : U → U, where U is some complement to span(x_1, …, x_m) and P_U is the projection onto U along span(x_1, …, x_m). If we choose U = span(y_{m+1}, …, y_n) as in the proof of Proposition 6, then we see that

P_U (A − D)|_U = P_U A|_U − c Id_U.

Choosing c > 0 large enough then ensures that the remaining eigenvalues of A − D are stable. ■

Corollary 10 It follows from the proof of Proposition 6 that A − D can generically be assumed to have a one-dimensional kernel. In other words, whereas the algebraic multiplicity of the eigenvalue 0 is m, its geometric multiplicity is generically equal to 1.
Proof: To see why, assume c > 0 is large enough so that A − D has a generalized kernel of dimension precisely m, see Corollary 9. From the proof of Proposition 6 we see that the restriction of A − D to its generalized kernel is conjugate to P. It follows that the dimension of the kernel of A − D is equal to 1 if

p_{1,2}, p_{2,3}, …, p_{m−1,m} ≠ 0.

This may be assumed to hold after a perturbation of the form

A ↦ A + Σ_{i=1}^{m−1} ε_i x_i x_{i+1}^T

for some arbitrarily small ε_1, …, ε_{m−1} > 0, if necessary. As a result, the matrix A − D has a single Jordan block of size m for the eigenvalue 0. ■

Example 1 below shows that the condition (23) imposed on A does not exclude Hurwitz matrices. This might seem surprising, as for any eigenvector x corresponding to a real eigenvalue λ < 0 we have x^T A x = λ∥x∥² < 0. Moreover, any positive-definite matrix D has only eigenvalues with positive real part, see Lemma 11 below. This result is well-known, but is included here for completeness.

Lemma 11
Let D ∈ M n (R) be a real positive-definite matrix (though not necessarily symmetric).That is, assume we have x T Dx > 0 for all non-zero x ∈ R n .Then, any eigenvalue of D has positive real part.
Proof: Let λ be an eigenvalue of D with corresponding eigenvector x. We write λ = ξ + iζ and x = u + iv for their decompositions into real and imaginary parts. On the one hand, we find

x̄^T D x = u^T D u + v^T D v + i (u^T D v − v^T D u). (38)

On the other, we have

x̄^T D x = λ x̄^T x = (ξ + iζ)(∥u∥² + ∥v∥²). (39)

Comparing the real parts of equations (38) and (39), we conclude that

ξ (∥u∥² + ∥v∥²) = u^T D u + v^T D v > 0,

where we use that u and v cannot both be zero. Hence, we see that indeed ξ > 0. ■

Example 1 Consider the (4 × 4) matrix: The matrix A is Hurwitz and Equation (23) holds for m = 3 with e_1 = (1, 0, 0, 0)^T, e_2 = (0, 1, 0, 0)^T and e_3 = (0, 0, 1, 0)^T. We may determine the strictly upper-triangular (3 × 3)-matrix P from the proof of Proposition 6 by calculating its entries for i < j. It follows that p_{1,2} = p_{1,3} = p_{2,3} = 0, which implies P = 0. As in the proof of Proposition 6, we first define D ∈ M_4(R) on span(e_1, e_2, e_3) = {x ∈ R^4 | x_4 = 0} by setting D e_i = A e_i for i = 1, 2, 3. Hence, D agrees with A in the first three columns. To complete our construction of D, we have to choose a non-zero vector u such that u ⊥ e_i for i = 1, 2, 3, and set Du = cu for some c > 0. To this end, we set u = e_4, so that the fourth column of D equals c e_4.

It follows that A − D has a zero eigenvalue with geometric multiplicity 3 and a negative eigenvalue (−17.94 − c) equal to its trace. Moreover, by Lemma 5, D is positive-definite for large enough c > 0. Indeed, in this case we numerically found that any c ≥ 9.24 suffices.
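The mechanics of this construction can be sketched on an analogous matrix; since the entries of Example 1 are not reproduced here, the matrix A below is our own hypothetical stand-in with ⟨e_i, Ae_i⟩ > 0 for i = 1, 2, 3 and P = 0, and we do not claim it is Hurwitz:

```python
import numpy as np

# Hypothetical 4x4 matrix: positive diagonal entries in the top-left
# 3x3 block, whose off-diagonal part is skew-symmetric, so that P = 0
# as in Example 1 of the text.
A = np.array([[ 1.,  2., -1., -5.],
              [-2.,  1.,  3.,  0.],
              [ 1., -3.,  1.,  0.],
              [ 4.,  0.,  0., -3.]])
c = 10.0

# D agrees with A in the first three columns; D e_4 = c e_4.
D = A.copy()
D[:, 3] = c * np.eye(4)[:, 3]

N = A - D                     # the first three columns vanish
sym = (D + D.T) / 2           # D is positive-definite iff sym is
```

Since the first three columns of A − D are zero, the eigenvalue 0 has multiplicity 3, and for this choice of c the symmetric part of D is positive-definite.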

Example 2 The matrix
is Hurwitz, but symmetric. A symmetric Hurwitz matrix is negative-definite, and thus there are no vectors x ∈ R^4 such that ⟨x, Ax⟩ > 0.
To control a bifurcation in the system (12), we need to rule out additional eigenvalues lying on the imaginary axis. Recall that the eigenvalues of the linearization (13) are given by those of A − αλ_p D with λ_p ≥ 0 an eigenvalue of L_G. Lemma 12 below shows that generically only one of the matrices A − αλ_p D is nonhyperbolic. In what follows we denote by ∥·∥ the operator norm induced by the Euclidean norm on R^n.

Lemma 12 Let A, D ∈ M_n(R) be two given matrices with D positive-definite, and let α* ∈ R be a positive scalar. We furthermore assume {λ_1, …, λ_K} is a set of real numbers and consider the matrices A − α* λ_i D for i ∈ {1, …, K}. Given any ε > 0, there exist a matrix Ã and a positive-definite matrix D̃ such that ∥A − Ã∥, ∥D − D̃∥ < ε, such that Ã − α* λ_K D̃ = A − α* λ_K D, and such that each matrix Ã − α* λ_i D̃ with i ∈ {1, …, K − 1} has a purely hyperbolic spectrum (i.e. no eigenvalues on the imaginary axis).

Remark 13
From ∥A − Ã∥, ∥D − D̃∥ < ε we get

∥(Ã − α* λ_i D̃) − (A − α* λ_i D)∥ ≤ ∥Ã − A∥ + α* |λ_i| ∥D̃ − D∥ < (1 + α* |λ_i|) ε,

so that we may arrange for Ã − α* λ_i D̃ to be arbitrarily close to the original A − α* λ_i D for all i. Moreover, if A is hyperbolic then for ε small enough so is Ã, with the same number of stable and unstable eigenvalues. In particular, Ã may be assumed Hurwitz if A is.

Proof of Lemma 12:
Let δ ≠ 0 be given and set

Ã := A + δ Id, D̃_δ := D + (δ / (α* λ_K)) Id.

Note that the symmetric parts of D̃_δ and D differ by (δ / (α* λ_K)) Id as well, so that D̃_δ remains positive-definite for |δ| small enough. It is also clear that

Ã − α* λ_i D̃_δ = (A − α* λ_i D) + δ (1 − λ_i / λ_K) Id,

so the spectrum of the ith matrix is shifted by the non-zero real amount δ(1 − λ_i/λ_K) whenever λ_i ≠ λ_K, while the matrix for i = K is unchanged. Since only finitely many values of δ can place an eigenvalue of one of the matrices on the imaginary axis, an arbitrarily small generic choice of δ makes all matrices with i ∈ {1, …, K − 1} hyperbolic. ■
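Assuming the perturbation takes the shift form Ã = A + δ Id, D̃ = D + (δ/(α*λ_K)) Id (our reading of the proof), its effect on each matrix can be checked numerically: the matrix for i = K is untouched, while the others are shifted by δ(1 − λ_i/λ_K) Id. All data below are hypothetical:

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])
D = np.diag([1., 2.])
alpha, lams = 1.0, np.array([0.5, 1.0, 2.0])    # lambda_K = 2
delta = 0.1

A_t = A + delta * np.eye(2)
D_t = D + delta / (alpha * lams[-1]) * np.eye(2)
```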

Center manifold reduction
Let us now assume A, D and α* are given such that for a particular eigenvalue λ > 0 of L_G the matrix A − α*λD has an m-dimensional center subspace. We moreover assume λ is simple and, motivated by Lemma 12, that the matrices A − α*κD are hyperbolic for any other eigenvalue κ of L_G. It follows that the linearization (13) has an m-dimensional center subspace as well.
In what follows, we write Î_s for the set of indices of all eigenvalues of L_G except the index s. In other words, writing 0 = λ_1 < λ_2 ≤ ⋯ ≤ λ_N for the eigenvalues of L_G, we have λ = λ_s for some s ∈ {2, …, N} and we set Î_s = {1, …, N} \ {s}. We will likewise fix an orthonormal set of eigenvectors v_1, …, v_N for L_G and simply write v = v_s for the eigenvector corresponding to our fixed eigenvalue λ = λ_s. Arguably the most natural situation is given by s = N, corresponding to the situation where α is increased until the eigenvalues of A − αλ_N D first hit the imaginary axis for α = α*. However, we will not need this assumption here.
Next, given a vector u ∈ R^N we denote by ϕ_u : R^N → R^N the linear map defined by

ϕ_u(w) := ⟨u, w⟩ u.

Note that ϕ_u is a projection if ∥u∥ = 1. Finally, we write E^c, E^h ⊆ R^n for the center and hyperbolic subspaces of A − α*λD, respectively. It follows that

R^n = E^c ⊕ E^h,

and we denote the projections onto the first and second component by π^c and π^h = Id_n − π^c, respectively. Likewise, we denote the center and hyperbolic subspaces of the map (45) by 𝓔^c, 𝓔^h ⊆ R^N ⊗ R^n. Their projections are denoted by Π^c and Π^h. The following lemma establishes some important relations between the aforementioned maps and spaces.

Lemma 14
The spaces 𝓔^c and E^c are related by

𝓔^c = v ⊗ E^c, (48)

and we have

𝓔^h = (v ⊗ E^h) ⊕ ⊕_{i ∈ Î_s} (v_i ⊗ R^n). (49)

Moreover, it holds that

Π^c = ϕ_v ⊗ π^c.

Proof: The identities (48) and (49) follow directly from the fact that the linear map (45) sends a vector v_i ⊗ x to v_i ⊗ (A − α*λ_i D)x for all i ∈ {1, …, N} and x ∈ R^n. To show that Π^c is indeed given by ϕ_v ⊗ π^c, we have to show that the latter vanishes on 𝓔^h and restricts to the identity on 𝓔^c. To this end, note that for all i ∈ Î_s and x ∈ R^n we have

(ϕ_v ⊗ π^c)(v_i ⊗ x) = ⟨v, v_i⟩ v ⊗ π^c x = 0,

while (ϕ_v ⊗ π^c)(v ⊗ x) = v ⊗ π^c x, which vanishes for x ∈ E^h and equals v ⊗ x for x ∈ E^c. ■

Next, we investigate the dynamics on a center manifold of the system (53), which will lead to a proof of Theorem 3.
In light of Lemma 14, we may write

Ψ(v ⊗ x_c; ε) = v ⊗ ψ(x_c; ε) + Σ_{i ∈ Î_s} v_i ⊗ ψ_i(x_c; ε)

for certain maps ψ : E^c × Ω → E^h and ψ_i : E^c × Ω → R^n. We then likewise have ψ(0; ε) = 0, Dψ(0; 0) = 0 and ψ_i(0; ε) = 0, Dψ_i(0; 0) = 0 for all ε ∈ Ω and i ∈ Î_s. The dynamics on the center manifold M^c is conjugate to that of a vector field on 𝓔^c × Ω given by

Ẋ_c = Π^c S(X_c + Ψ(X_c; ε); ε),

where we write S for the vector field on the right-hand side of (53), with X_c ∈ 𝓔^c, X_h ∈ 𝓔^h and ε ∈ Ω. We further conjugate to a vector field R on E^c × Ω by setting

v ⊗ R(x_c; ε) := Π^c S(v ⊗ x_c + Ψ(v ⊗ x_c; ε); ε). (57)

In order to describe R, we first introduce some useful notation. Given X ∈ R^N ⊗ R^n, we may write

X = Σ_{p=1}^N e_p ⊗ x_p, (58)

with e_1, …, e_N the canonical basis of R^N and for some unique vectors x_p ∈ R^n. In general, given p ∈ {1, …, N} we will denote by x_p ∈ R^n the pth component of X as in the decomposition (58). Recall that v = (ν_1, …, ν_N) is the eigenvector associated with λ. Using this notation, we have the following result.

Proposition 15
Denote by h : R^n × Ω → R^n the non-linear part of f. That is, we have f(x; ε) = Ax + h(x; ε). The reduced vector field R is given explicitly by

R(x_c; ε) = (A − α* λ D) x_c + π^c Σ_{p=1}^N ν_p h(ν_p x_c + Ψ_p(x_c; ε); ε) (59)

for x_c ∈ E^c and ε ∈ Ω, where Ψ_p(x_c; ε) ∈ R^n denotes the pth component of Ψ(v ⊗ x_c; ε) in the sense of the decomposition (58).
Proof: We write S(X; ε) = S(X_c, X_h; ε) = JX + H(X; ε) with J = D_X S(0; 0) and where H denotes the higher order terms. It follows that

Π^c S(X_c + Ψ(X_c; ε); ε) = Π^c J (X_c + Ψ(X_c; ε)) + Π^c H(X_c + Ψ(X_c; ε); ε). (60)

We start by focusing on the first term. As J sends 𝓔^c to 𝓔^c and 𝓔^h to 𝓔^h, we conclude that Π^c J = J Π^c. We therefore find

Π^c J (X_c + Ψ(X_c; ε)) = J X_c.

Writing X_c = v ⊗ x_c and using Expression (13) for J, we conclude that the linear part of R is given by

(A − α* λ D) x_c. (62)

We next focus on the second term in Equation (60). Note that we have

H(X; ε) = col(h(x_1; ε), …, h(x_N; ε)).

Now, by Lemma 14 it follows that we may write

Π^c H(X; ε) = (ϕ_v ⊗ π^c) H(X; ε) = v ⊗ π^c Σ_{p=1}^N ν_p h(x_p; ε).

Next, we have (X_c)_p = (v ⊗ x_c)_p = ν_p x_c, so that we find

(X_c + Ψ(X_c; ε))_p = ν_p x_c + Ψ_p(x_c; ε). (66)

Combining equations (62) and (66), we arrive at

Π^c S(X_c + Ψ(X_c; ε); ε) = v ⊗ [(A − α* λ D) x_c + π^c Σ_{p=1}^N ν_p h(ν_p x_c + Ψ_p(x_c; ε); ε)].

Finally, from Equation (57) we obtain the claimed expression for R, which completes the proof. ■

To further investigate the Taylor expansion of R(x_c; ε), we need to know more about how the coefficients of Ψ : 𝓔^c × Ω → 𝓔^h depend on those of F.
To this end, let us consider for a moment the general situation where S is some vector field on R k (in our case k = nN ) satisfying S(X) = JX + H(X) for some H : R k → R k satisfying H(0) = 0, DH(0) = 0. We furthermore let Êc and Êh denote the center and hyperbolic subspaces of J, respectively, and write Π c , Π h for the corresponding projections. Suppose Ψ : Êc → Êh is a locally defined map whose graph is a center manifold M c for the system Ẋ = S(X). Recall that we have Ψ(0) = 0 and DΨ(0) = 0. Moreover, as M c is a flow-invariant manifold, we see that S| M c takes values in the tangent bundle of M c . This can be used to iteratively solve for the higher order coefficients of an expansion of Ψ around 0.
More precisely, the tangent space at X c + Ψ(X c ) ∈ M c is given by all vectors of the form (V c , DΨ(X c )V c ) ∈ Êc ⊕ Êh , with V c ∈ Êc . Invariance under the flow of S then translates to the identity (69) for X c in some open neighborhood of the origin in Êc . Equation (69) can be used to show that DΨ(0) = 0. Equation (69) can also be rearranged to (70). As Ψ only has terms of degree 2 and higher, the same holds for both sides of Equation (70), which depend on Ψ and H. More generally, using Ψ ρ to denote the terms of order ρ ≥ 2 in the Taylor expansion of Ψ around the origin, Equation (70) is readily seen to imply, for each ρ, Equation (71) for some homogeneous polynomial P ρ of order ρ. Moreover, P ρ depends only on Ψ 2 (X c ), . . ., Ψ ρ−1 (X c ) and on the Taylor expansion of H up to order ρ. It can be shown that for fixed J and P ρ , Equation (71) has a unique solution Ψ ρ in the form of a homogeneous polynomial of order ρ, see [24]. As a result, we get the following important observation:

Lemma 16
We may iteratively solve for the terms Ψ ρ using Expression (71). Moreover, for fixed linearity J, the terms of order ρ and less of Ψ are fully determined by the terms of order ρ and less of H.
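The order-by-order procedure behind Lemma 16 can be made concrete on a toy example (not from the paper; the system ẋ = x², ẏ = −y + x² and all names below are illustrative assumptions). Here the center direction is 1-dimensional, the hyperbolic direction is 1-dimensional, and the flow-invariance identity is solved for the coefficients of Ψ one degree at a time:

```python
import sympy as sp

# Toy system with a 1-D center direction and a 1-D stable direction:
#   x' = x**2          (center variable; J restricted to the center space is 0)
#   y' = -y + x**2     (hyperbolic variable; J restricted to it is -1)
# The center manifold is the graph of Psi(x) = c2*x**2 + c3*x**3 + ...
x = sp.symbols('x')
order = 4
coeffs = sp.symbols(f'c2:{order + 1}')                 # c2, c3, c4
Psi = sum(c * x**k for c, k in zip(coeffs, range(2, order + 1)))

# Flow invariance (the analogue of Equation (69)):
#   Psi'(x) * x'  -  y' restricted to y = Psi(x)  =  0
residual = sp.expand(sp.diff(Psi, x) * x**2 - (-Psi + x**2))

# Solve the homological equation order by order: the degree-k equation
# involves only coefficients of degree strictly less than k.
sol = {}
for k in range(2, order + 1):
    eq = residual.coeff(x, k).subs(sol)
    sol[coeffs[k - 2]] = sp.solve(eq, coeffs[k - 2])[0]

print(sol)  # {c2: 1, c3: -2, c4: 6}
```

Each coefficient is determined by strictly lower-order data, mirroring the statement of Lemma 16 that the terms of Ψ up to order ρ are fixed by the terms of H up to order ρ.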
We return to our main setting where R(x c ; ε) is the reduced vector field of the system (53) as described in Proposition 15. Note that the presence of a parameter ε means that the center subspace Êc in the observations for general vector fields above is now given by E c × Ω.

Lemma 17
Let ρ > 1 be given, and suppose the vector v = (ν 1 , . . ., ν N ) satisfies ∑ N p=1 ν j p ̸ = 0 for all j ∈ {2, . . ., ρ + 1}. Then the reduced vector field R(x c ; ε) as described in Proposition 15 can have any Taylor expansion around 0 of order 2 to ρ, subject to R(0; ε) = 0, if no conditions are put on the nonlinear part of f other than f (0; ε) = 0 and sufficient smoothness.
Proof: From Proposition 15 we know that (73) holds. As we have Ψ(0; ε) = 0 and h(0; ε) = 0, we conclude that likewise R(0; ε) = 0 for all ε ∈ Ω. In particular, we see that D ε R(0; 0) = 0, whereas Equation (73) tells us that D x R(0; 0) = (A − α * λD)| E c . As a warm-up, we start by investigating the second order terms of R. To this end, we write the second order part of h in terms of Q 2,0 and Q 1,1 , where Q 1,1 is linear in both components and Q 2,0 is a quadratic form. It follows that the second order part of R takes the corresponding form, where we use that Ψ(v ⊗ x c ; ε) has no constant or linear terms in (x c ; ε). From Equation (74) we obtain the second order terms of R explicitly. As we assume ∑ N p=1 ν 3 p ̸ = 0, we see that the second order Taylor coefficients of R(x c ; ε) can be chosen freely (except for the O(|ε| 2 ) term). Now suppose we are given a polynomial map P : E c × Ω → E c of degree ρ satisfying DP (0) = ((A − α * λD)| E c ; 0) and P (0; ε) = 0 for all ε. We will prove by induction that we may choose the terms in the Taylor expansion of h up to order ρ in the variables x c and ε such that the Taylor expansion up to order ρ of R agrees with P . To this end, suppose some choice of h gives agreement between P and the Taylor expansion of R up to order 2 ≤ k < ρ. By the foregoing, this can be arranged for k = 2.
We start by remarking that a change to h that does not influence its Taylor expansion up to order k does not change the Taylor expansion of Ψ up to order k. This follows directly from Lemma 16. As a result, such a change does not influence the Taylor expansion of R up to order k either. We write an order k + 1 change to h as a sum of terms Q i,k+1−i , where each component of Q i,j : R n × Ω → R n is a homogeneous polynomial of degree i in x and degree j in ε. The (k + 1)-order terms of R in (x c ; ε) are given by the (k + 1)-order terms of the resulting expression. As both h and Ψ have no constant and linear terms, we see that the (k + 1)-order terms of R are also given by those of Expression (77), in which Ψ k denotes the terms of Ψ up to order k. As we have previously argued, Ψ k is independent of the additional terms Q i,k+1−i . Hence, we may write the order k + 1 terms in Expression (77) as (78), where W (x c ; ε) denotes the order k + 1 terms of (79). As ∑ N p=1 ν j p ̸ = 0 for all j ∈ {2, . . ., ρ + 1}, we see that the order k + 1 terms of R may be freely chosen. In other words, we may arrange for the Taylor expansion up to order k + 1 of R to agree with that of P up to order k + 1. This completes the proof by induction. ■

Proof of Theorem 3:
If there exist m mutually orthogonal vectors x 1 , . . ., x m such that ⟨x i , Ax i ⟩ > 0, then Proposition 6 guarantees the existence of a positive-definite matrix D such that A − D has a center subspace of dimension m or higher. Given any non-zero eigenvalue λ of L G , we may set α * = 1/λ and conclude that A − α * λD has a center subspace of dimension at least m. As the eigenvalues of the linearization of (11) around the origin are given by those of the maps A − αλD for λ an eigenvalue of L G , we see that the system (11) has a local parameterized center manifold of dimension at least m for some choices of D and α = α * . If the graph G is ρ-versatile for the pair (λ, v), then a choice of D as above together with α * = 1/λ guarantees that A − D has a center subspace of dimension at least m. By Corollary 9 we may assume this center subspace to be of dimension precisely m. Moreover, by Lemma 12 we may assume A − α * λ i D to have a hyperbolic spectrum for all other eigenvalues λ i ̸ = λ of L G , after an arbitrarily small perturbation to A and D if necessary. It follows that the system (11) has a local parameterized center manifold of dimension exactly m. We argue in the proof of Lemma 17 that R(0; ε) = 0 for all ε, and that DR(0; 0) = ((A − α * λD)| E c ; 0). This latter map is nilpotent by the statement of Proposition 6. Finally, Lemma 17 shows that any Taylor expansion can be realized for R up to order ρ, subject to the aforementioned restrictions. ■

Stability of the center manifold
In this section we investigate the stability of the center manifold of the full network system. We know that the spectrum of the linearization of this system is fully understood if we know the spectrum of the matrices A − α * λ i D for λ i an eigenvalue of L G . Proposition 6 gives conditions on A that guarantee the existence of a positive-definite matrix D such that A − α * λD has an m-dimensional generalized kernel for some fixed eigenvalue λ > 0 of L G . Moreover, by Corollary 9 we may assume that the non-zero eigenvalues of A − α * λD have negative real parts. Lemma 12 in turn shows that, after a small perturbation of A and D if necessary, we may assume A − α * λ i D to have a hyperbolic spectrum for all remaining eigenvalues λ i ̸ = λ of L G . Thus, if the matrices A − α * λ i D for these remaining eigenvalues are all Hurwitz, then the m-dimensional center manifold of Theorem 3 may be assumed stable. This seems most reasonable to expect when λ is the (simple) largest eigenvalue of L G , as the matrices A − α * λ i D for the remaining eigenvalues of L G then "lie between" the Hurwitz matrix A and the non-invertible matrix A − α * λD. More precisely, suppose D is scaled such that A − D = A − α * λD. If we let α vary from 0 to α * = 1/λ, then for each eigenvalue λ i of L G , the matrix A − αλ i D is of the form A − βD for some β in [0, 1]. Let us therefore denote by β → γ i (β) for i ∈ {1, . . ., n} a number of curves through the complex plane capturing the eigenvalues of A − βD. As α varies from 0 to α * = 1/λ, the eigenvalues of A − αλ i D traverse the curves γ i , with the "front runners" given by those of A − αλD. In contrast, for λ 1 = 0 the eigenvalues of A − αλ 1 D of course remain at γ i (0). When α = α * is reached, the eigenvalues of A − α * λ i D end up in different places on the curves γ i . Hence, if the situation is as in Figure 2, where each γ i hits the imaginary axis only for β = 1, or not at all, then we are guaranteed that each of the matrices A − α * λ i D is Hurwitz for λ i ̸ = λ. Hence, the center manifold is then stable.
Of course β = 1 may not be the first value for which a curve γ i hits the imaginary axis, see Figure 3. Note that, if the matrix A − βD indeed has a non-trivial center subspace for some value β ∈ (0, 1), then a bifurcation is expected to occur as α is increased, before it reaches α * .

The next example shows that some of the eigenvalues of A − βD might cross the imaginary axis before a high-dimensional kernel emerges at β = 1, see Figure 3.

Example 3
We consider the matrices A and D constructed in Example 1. Here we choose c = 21 in order to guarantee that D is a positive-definite matrix. We therefore have a family of matrices A − βD parameterized by the real number β ∈ [0, 1]. We are interested in the eigenvalue behavior as β is varied. We know that for β = 0 we have the Hurwitz matrix A, so that all eigenvalues have negative real parts. We would like to know if the family A − βD has all eigenvalues with a negative real part for all β ∈ (0, 1). However, for β = 1/2 we have 3 eigenvalues with positive real part. By continuity of the eigenvalues, this means that each of these crossed the imaginary axis for some β < 1/2. Only after this, for β = 1, do we have the bifurcation studied in the previous section, due to the appearance of a triple zero eigenvalue. Figure 3 shows the numerically computed behavior of these three eigenvalue-branches.
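Since the matrices of Example 1 are not reproduced here, the following sketch tracks the eigenvalue curves γ i (β) of A − βD for illustrative stand-in matrices A (Hurwitz) and D (positive-definite); for this particular pair no curve crosses the imaginary axis before β = 1, i.e. the Figure 2 scenario rather than that of Example 3:

```python
import numpy as np

# Illustrative stand-ins (NOT the pair from Example 1): A is Hurwitz,
# D is positive-definite, and we follow the eigenvalues of A - beta*D.
A = np.array([[-1.0,  3.0,  0.0],
              [-3.0, -1.0,  0.0],
              [ 0.0,  0.0, -2.0]])
D = np.diag([0.5, 0.5, 3.0])

for beta in np.linspace(0.0, 1.0, 11):
    ev = np.linalg.eigvals(A - beta * D)
    # The curve gamma_i(beta) stays in the open left half-plane iff
    # the largest real part remains negative.
    print(f"beta = {beta:.1f}   max Re(eig) = {max(ev.real):+.3f}")
```

For this pair the spectrum is {−1 − β/2 ± 3i, −2 − 3β}, so every γ i stays strictly to the left of the imaginary axis on all of [0, 1]; Example 3 shows that other choices behave differently.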
If we are in the situation of Figure 3, then the center manifold can still be stable. This occurs when the largest eigenvalue µ of L G is significantly larger than all other eigenvalues. In that case, we have λ/µ ≪ 1 for all eigenvalues λ of L G unequal to µ. For small enough values of λ/µ, the matrix A − (λ/µ)D is therefore still Hurwitz. As it turns out, this can be achieved in the situation explored in Subsection 6.2. More precisely, we have the following result.
Proposition 18
Let r < C be positive integers and suppose G is a connected graph with at least two nodes, consisting of one node of degree C and with all other nodes of degree at most r. Let µ and κ denote the largest and second-largest eigenvalue of the Laplacian L G , respectively. Then the value κ/µ goes to zero as C/r goes to infinity, uniformly in all graphs G satisfying the above conditions.
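Proposition 18 can be probed numerically in its simplest instance (a sketch of ours, not from the paper): for the star graph on N + 1 nodes we have C = N and r = 1, the Laplacian spectrum is {0, 1, . . ., 1, N + 1}, and hence κ/µ = 1/(N + 1) goes to zero as C/r = N grows.

```python
import numpy as np

def laplacian_star(N):
    """Laplacian of the star on N + 1 nodes: hub 0 joined to N leaves."""
    L = np.zeros((N + 1, N + 1))
    L[0, 0] = N
    for i in range(1, N + 1):
        L[0, i] = L[i, 0] = -1.0
        L[i, i] = 1.0
    return L

for N in (4, 16, 64):
    ev = np.sort(np.linalg.eigvalsh(laplacian_star(N)))
    mu, kappa = ev[-1], ev[-2]   # largest and second-largest eigenvalue
    print(f"N = {N:3d}   mu = {mu:5.1f}   kappa = {kappa:.1f}   kappa/mu = {kappa / mu:.4f}")
```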

Bifurcations in diffusively coupled stable systems
Using our results so far, we show what bifurcations to expect in diffusively coupled stable systems with 1, 2 or 3 bifurcation parameters. Note that Theorem 3 tells us that the dynamics on the center manifold is conjugate to that of a reduced vector field R : R m × Ω → R m , satisfying R(0; ε) = 0 for all ε ∈ Ω. By Corollary 10 we may furthermore assume the linearization D x R(0; 0) to be nilpotent with a one-dimensional kernel. Other than that, no restrictions apply to the Taylor expansion of R.
Assuming that an m-parameter bifurcation can generically generate an m-dimensional generalized kernel, we each time consider m-parameter bifurcations for a system on R m . In Subsection 5.1 we briefly investigate the cases m = 1 and m = 2. Our main result is presented in Subsection 5.2, where we show the emergence of chaos for m = 3. In most cases, the main difficulty lies in adapting existing results on generic unfoldings to the setting where R(0; ε) = 0 for all ε ∈ Ω.

One and two parameters
Motivated by our results so far, we describe the generic m-parameter bifurcations for systems R on R m , where m = 1, 2, subject to the condition R(0; ε) = 0 for all ε ∈ Ω. We each time assume a nilpotent Jacobian with a one-dimensional kernel. We start with the case m = 1.

Remark 19 (The case m = 1)
Under the generic assumption that a, b ̸ = 0, we find a transcritical bifurcation. Returning to the setting of our network system, this corresponds to a loss of stability of the fully synchronous solution.

Remark 20 (The case m = 2)
Consider first a two-parameter vector field R : R 2 × R 2 → R 2 satisfying R(0; 0) = 0 and with non-zero nilpotent Jacobian D x R(0; 0). Such a system generically displays a Bogdanov-Takens bifurcation. However, in this bifurcation scenario there are parameter values for which there is no fixed point. Hence, if we impose the additional condition R(0; ε) = 0 for all ε ∈ R 2 , then another (generic) bifurcation scenario has to occur. This latter situation is worked out in [6]. The corresponding generic bifurcation involves multiple fixed points, heteroclinic as well as homoclinic connections, and periodic orbits. One striking feature is the presence of a homoclinic orbit from the origin, which is approached as a limit of stable periodic solutions. In our network setting, such a periodic solution means a cyclic time-evolution of the system from full synchrony to less synchrony and back. The time during which the system is indistinguishably close to full synchrony can moreover be made arbitrarily long.
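The m = 1 scenario of Remark 19 can be sketched numerically. Here ẋ = aεx + bx² is assumed as the transcritical normal form (the coefficients a, b are those of the remark; the explicit form is our assumption, chosen so that x = 0 is a steady state for every ε, matching the constraint R(0; ε) = 0):

```python
import numpy as np

# Assumed transcritical normal form for m = 1:
#   x' = a*eps*x + b*x**2,  so x = 0 is a branch of steady states for all eps.
a, b = 1.0, -1.0                 # generic coefficients a, b != 0

def f(x, eps):
    return a * eps * x + b * x**2

def df(x, eps):                  # x-derivative, deciding stability of a branch
    return a * eps + 2.0 * b * x

for eps in (-0.5, 0.5):
    for x_star in (0.0, -a * eps / b):         # the two equilibrium branches
        assert abs(f(x_star, eps)) < 1e-12     # both really are steady states
        stab = "stable" if df(x_star, eps) < 0 else "unstable"
        print(f"eps = {eps:+.1f}   x* = {x_star:+.2f}   {stab}")
```

The two branches cross at ε = 0 and exchange stability, which is exactly the loss of stability of the fully synchronous solution described above.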

Three parameters: chaotic behavior
In this section we will prove Corollary 4, which allows us to conclude that chaotic behavior occurs in diffusively coupled stable systems. To this end, we will apply the theory developed so far. Before this, we present a detailed background on how we will achieve chaos. We expect to find chaos in the network through the existence of a Shilnikov homoclinic orbit on a three-dimensional center manifold.
The Shilnikov configuration can be seen as a combination of linear and nonlinear behavior involving a saddle fixed point. A two-dimensional stable manifold attracts trajectories exponentially fast to the fixed point, where the eigenvalues of the linearization are λ 1,2 = −α ± iβ with α > 0 and β ̸ = 0. Transversal to this there is a one-dimensional unstable manifold repelling trajectories, with real eigenvalue γ > 0. The Shilnikov homoclinic orbit emerges from the re-injection of the one-dimensional unstable manifold into the two-dimensional stable manifold, see Figure 4. Of course this re-injection is a consequence of nonlinear terms. L. P. Shilnikov proved that if γ > α, there are countably many saddle periodic orbits in a neighborhood of the homoclinic orbit. The proof consists of showing topological equivalence between a Poincaré map and the shift map on two symbols. The existence of chaotic behavior is in the sense that Robert L. Devaney defined for deterministic systems, with strongly sensitive dependence on initial conditions, topological transitivity and dense periodic points. We next give a brief summary of results contained in the paper [7]. The authors studied the three-parameter unfolding of nonlinear vector fields on R 3 with linear part conjugate to a nilpotent singularity of codimension three. After making several coordinate changes, the normal form (80) is presented, where the parameters are given by τ = (λ, ν, κ). The parameter κ is introduced by means of a blow-up technique, and the remaining term denotes the nilpotent singularity of codimension three on R 3 . Equation (80) has two hyperbolic fixed points for λ > 0 and ν = 0, namely p 1 = (− √ 2λ, 0, 0), with local behavior given by a two-dimensional stable and a one-dimensional unstable manifold, and p 2 = (+ √ 2λ, 0, 0), with local behavior given by a two-dimensional unstable and a one-dimensional stable manifold. Knowing there is a solution x(t) for a specific positive parameter λ = λ * such that x(t) → p 1 as t → −∞ and x(t) → p 2 as t → +∞, the authors proved analytically the existence of another solution, also for the parameter λ * , connecting both two-dimensional stable and unstable manifolds and thus forming another heteroclinic orbit. Theorem 4.1 of [7] states that in any neighborhood of the parameter τ 0 = (λ 0 , ν 0 , κ 0 ) = (λ * , 0, 0), where the heteroclinic orbits exist, there are parameters τ = (λ, ν, κ) such that the heteroclinic orbit breaks and a Shilnikov homoclinic orbit appears.
For completeness, we state Theorem 4.1 below in a slightly altered form.
As was the case for m = 2, we cannot immediately use this existing result, as the parameter dependent systems on the center manifold of our network ODE satisfy R(0; ε) = 0 for all ε ∈ R 3 . It remains to show that under this additional restriction, we may still reduce our system to the family given by Equation (80). This then proves Corollary 4 as a consequence of Theorem 3.
To bring our system into the form (80), we will proceed as in the paper [7]. The first step is to get rid of the nonlinear terms h 1 and h 2 by means of a coordinate transformation. To eliminate h 1 we consider a change of coordinates which, applied to (81), yields the new system

ẏ 1 = y 2
ẏ 2 = y 3 + h̃ 2 (y; ε)   (83)
ẏ 3 = h̃ 3 (y; ε),

with y = (y 1 , y 2 , y 3 ), and where h̃ 2 and h̃ 3 are uniquely defined by the stated relations. Note that h̃ 2 and h̃ 3 again have vanishing linear terms, and moreover satisfy h̃ 2 (0; ε) = h̃ 3 (0; ε) = 0 for all ε.
To eliminate h̃ 2 we consider a further change of coordinates. Note that ĥ 3 again has no linear terms and satisfies ĥ 3 (0; ε) = 0 for all ε. Moreover, in case of h 1 = h 2 = 0 we would find x = y = z and h 3 = ĥ 3 , which shows that no other restrictions apply to ĥ 3 . Writing the ε-dependence in terms of some locally defined E i : R 3 → R, we see that generically we may redefine ε = (ε 1 , ε 2 , ε 3 ) accordingly. It follows that we may write (91) for some coefficients a 1 , . . ., a 6 ∈ R. We will assume that a 1 ̸ = 0, so that the system (91) locally has two branches of steady states: z(ε) = (0, 0, 0) and z(ε) = (z 1 (ε), 0, 0). A straightforward calculation gives the expansion of z 1 (ε). Motivated by this, we perform a change of coordinates and apply it to (91). We have left the remainder term O(|z| 3 + |ε||z| 2 ) as is, which will benefit us later. Rearranging terms, we obtain the new system (95). Similar to the paper [7], we now introduce a blow-up parameter κ ∈ R and rescale, so that we may write ∥z∥ = O(κ 3 ). Applying this rescaling to Equation (95), we arrive at the system (98). We next focus on the parameter γ 2 . We assume from here on that γ 2 < 0 and perform another change of coordinates; applying it to (98) gives a new system. Finally, we make one last change of coordinates. Setting λ := γ 2 1 r 6 /2 and ν := γ 3 r, we arrive at the vector field from Theorem 21. Note that λ = γ 2 1 r 6 /2 is necessarily non-negative. However, this may be assumed in the setting of Theorem 21, as λ * > 0. This theorem thus predicts chaos in the setting of our coupled cell system, provided m = 3 and the network in question is at least 2-versatile.

Evaluating at the eigenvectors v i for i = 2, . . ., N then shows that, for each i = 2, . . ., N , the value (N − λ i ) is an eigenvalue of L G • , with a corresponding eigenvector given by v i . As we also have L G • v 1 = 0, we see that the spectrum of L G • is given by {0} ∪ {N − λ i : i = 2, . . ., N }. The largest eigenvalue of L G • is therefore equal to N − 0 = N = s + t, with an eigenvector given by v = v 2 = (s, s, . . ., s, −t, −t, . . ., −t).
Since λ 3 > 0, the eigenvalue N is simple. Using that st ̸ = 0 and s ̸ = t, we find ∑ N i=1 ν ℓ i ̸ = 0 for all ℓ > 1. Finally, we argue that G • is a connected graph. Indeed, if x, y ∈ G are in different connected components, then they share an edge in G • by definition of this latter graph. If on the other hand x and y are in the same component of G, then in G • they both share an edge with some node z from the other component of G. This completes the proof. ■

Example 4
Let G = (V, E) be the undirected graph with V = {1, 2, 3} and two disconnected components of different orders s = 1 and t = 2, shown in Figure 5. Then G • is a connected, non-regular graph. We have Spec(L G • ) = {3, 1, 0} with simple and largest eigenvalue λ = s + t = 3, whose corresponding eigenvector v = (1, 1, −2) satisfies ∑ 3 i=1 ν ℓ i ̸ = 0 for all ℓ > 1.
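Example 4 is small enough to verify directly (a sketch; the node ordering is assumed so that G's single edge is {1, 2} and node 3 is isolated, making the complement G • the path with edges {1, 3} and {2, 3}):

```python
import numpy as np

# Laplacian of the complement graph G° from Example 4: edges {1,3} and {2,3}.
L_comp = np.array([[ 1.0,  0.0, -1.0],
                   [ 0.0,  1.0, -1.0],
                   [-1.0, -1.0,  2.0]])

ev = np.sort(np.linalg.eigvalsh(L_comp))
print(ev)                                 # spectrum {0, 1, 3}, largest eigenvalue simple

v = np.array([1.0, 1.0, -2.0])            # eigenvector for lambda = s + t = 3
assert np.allclose(L_comp @ v, 3.0 * v)

# Versatility condition: the power sums of the entries of v do not vanish.
for l in range(2, 6):
    assert np.sum(v**l) != 0
```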
Example 5
Let G = (V, E) be the undirected graph with V = {1, 2, 3, 4, 5} and two disconnected components of different orders s = 2 and t = 3, shown in Figure 6. Then G • is a connected graph with Spec(L G • ) = {5, 4, 3, 2, 0} and simple, largest eigenvalue λ = s + t = 5, with corresponding eigenvector v = (2, 2, 2, −3, −3) as in Theorem 23.

Example 6 below shows that the standard star graphs are ρ-versatile for any ρ > 0. These graphs consist of a single hub-node connected to all other nodes, which in turn have degree 1, as shown in Figure 7. We will explore the ρ-versatility of more general star graphs in Subsection 6.2.

Example 6 (Star graphs)
Let G = (V, E) be the undirected graph with V = {1, . . ., N + 1} and two disconnected components of order s = 1 and t = N , shown in Figure 7. If the largest component of G is complete, then G • is a connected graph whose Laplacian matrix is that of the star graph.

In what follows we turn to negative examples. The first of them shows the importance of starting with connected components of different orders, whereas the second shows what goes wrong if we start with more than 2 components.

Versatile graphs by means of the degree distribution
We next investigate another pathway to ρ-versatility, namely by looking at the degree distribution of the nodes in the network. To this end, we will prove:

Proposition 24
Let r < C be two positive integers and suppose G = (V, E) is a graph consisting of one node of degree C and N nodes of degree at most r, where N ≥ 1.
We first construct a graph G ′ consisting of N nodes, all of which have degree at most r − 1. The graph G is then obtained from G ′ by adding a node n 0 , together with C ≥ ( 3 √ N + 1)r edges between n 0 and different nodes of G ′ . Note that condition (121) guarantees that ( 3 √ N + 1)r ≤ N , so that we are not demanding that n 0 be connected to more nodes than G ′ contains. It follows that all nodes in G apart from n 0 have degree at most r. Finally, the degree C of n 0 satisfies the required bounds. The graph G is connected if an edge was added from n 0 to at least one node of every connected component of G ′ .
The proof of Proposition 18 uses a result about the effect of adding an edge to a graph on the spectrum of its Laplacian. Given a graph G, we denote by G + e the graph obtained from G by adding some edge e that was not there before. If G and G + e have M nodes, then we denote their respective Laplacian eigenvalues by 0 = λ G 1 ≤ . . . ≤ λ G M and 0 = λ G+e 1 ≤ . . . ≤ λ G+e M , and each λ G i is bounded by λ G+e i . This result is sometimes referred to as an interlacing theorem for graphs, see [13]. In the proof of Proposition 18, we are interested only in the inequality λ G M −1 ≤ λ G+e M −1 corresponding to the second-largest eigenvalues. Repeated use of this latter result gives λ G M −1 ≤ λ G ′ M −1 , where G ′ is obtained from G by adding any number of edges.
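The monotonicity of the Laplacian eigenvalues under edge addition can be checked on a small example (a sketch of ours, not from the paper): adding the edge {0, 3} to a path on 4 nodes turns it into a 4-cycle, and each ordered eigenvalue can only grow.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

path = [(0, 1), (1, 2), (2, 3)]                                     # a path on 4 nodes
before = np.sort(np.linalg.eigvalsh(laplacian(4, path)))
after = np.sort(np.linalg.eigvalsh(laplacian(4, path + [(0, 3)])))  # now a 4-cycle

# Every ordered eigenvalue weakly increases; in particular the
# second-largest one, which is the inequality used in the proof.
assert np.all(before <= after + 1e-12)
print(before[-2], "<=", after[-2])
```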

Proof of Proposition 18:
Let us say that G has N + 1 nodes, and write n 0 for the unique node of degree C. We denote the eigenvalues of L G by 0 = λ G 1 ≤ . . . ≤ λ G N +1 . If we have N = 1 then κ = 0, so that there is nothing left to prove. Hence, we assume from here on that N > 1. Just as in the proof of Proposition 24, we have (125). Next, let G ′ denote the graph obtained from G by adding edges between n 0 and other nodes until n 0 is connected to every other node. By the observation above we have κ = λ G N ≤ λ G ′ N , where the eigenvalues of L G ′ are given by 0 = λ G ′ 1 ≤ . . . ≤ λ G ′ N +1 . Let us consider the graph G ′ instead. Because n 0 is connected to every other node, we see that the complement graph G ′ • consists of two components: {n 0 } and the remaining part H, where we do not claim H itself is connected. We denote by 0 = λ H 2 ≤ . . . ≤ λ H N +1 the eigenvalues of L H , so that those of L G ′ • are given by 0 = λ G ′ • 1 = λ H 2 ≤ . . . ≤ λ H N +1 . By the techniques used in the proof of Theorem 23, we conclude that λ H 3 = N + 1 − λ G ′ N . Next, consider the complement H • of H. Again by the techniques used in the proof of Theorem 23, we see that λ G ′ N − 1 is an eigenvalue of L H • . Moreover, by construction of H and H • , we see that this latter graph can be obtained from G ′ (or from G) by deleting n 0 and every edge connected to this node. In particular, we conclude that every node in H • has degree at most r. A straightforward application of the Gershgorin disk theorem [5] now tells us that all eigenvalues of L H • are bounded from above by 2r. In particular, we find λ G ′ N − 1 ≤ 2r and so κ ≤ λ G ′ N ≤ 2r + 1 .
Combining Equations (125) and (126), we obtain the desired bound on κ/µ, from which the result follows.
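The Gershgorin step of the proof is easy to verify numerically (a sketch; the small graph below is an illustrative assumption): every eigenvalue of a graph Laplacian lies in a disk centred at a diagonal entry deg(i) with radius deg(i), hence is at most twice the maximum degree.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]     # triangle plus a pendant; max degree 3
L = laplacian(4, edges)
max_deg = int(L.diagonal().max())
lam_max = float(np.linalg.eigvalsh(L).max())
print(f"lambda_max = {lam_max:.3f} <= 2 * max degree = {2 * max_deg}")
assert lam_max <= 2 * max_deg + 1e-12        # the Gershgorin bound used above
```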

■
To conclude our discussion of ρ-versatile graphs, we fix values n, ρ ∈ N with n > 2 and define S ρ n as the set of all symmetric (n × n) matrices with a simple largest eigenvalue whose corresponding eigenvector v = (ν 1 , . . ., ν n ) satisfies ∑ n i=1 ν ℓ i ̸ = 0 for all ℓ ∈ {2, . . ., ρ + 1} (128).
It follows that S ρ n is an open subset of the space of symmetric matrices. Heuristically speaking, if G is a graph such that L G ∈ S ρ n , then we therefore expect L G ′ ∈ S ρ n for any graph G ′ obtained from G by a small perturbation.
In fact, as matrices generically have simple eigenvalues, and as Equation (128) is likewise valid for generic (eigen)vectors, we expect L G ∈ S ρ n for "most" graphs G. Of course these statements would have to be made precise, which we do not attempt here.
One common obstruction to L G ∈ S ρ n seems to be symmetry in the graph G. An explanation for this is that symmetry often forces eigenvalues with high multiplicity. Moreover, if an eigenvalue λ of L G is simple, then the span of a corresponding eigenvector v forms a 1-dimensional representation of the symmetry in question. For any finite group symmetry, a one-dimensional real representation is either trivial or generated by v → −v. In the latter case, the graph symmetry contains a transformation α such that for any node n of G, the corresponding coefficients ν n and ν α(n) of v are related by ν n = −ν α(n) . This means that for any value c ∈ R there are as many nodes n with ν n = c as there are nodes m with ν m = −c. As a consequence, we then necessarily have ∑ n i=1 ν ℓ i = 0 for all odd ℓ > 0.
This is a common observation; imposing additional structure on a graph (such as for instance symmetry) induces high-dimensional center subspaces and restrictions on the Taylor coefficients of reduced vector fields in the associated dynamical systems. This generally leads to more elaborate bifurcation scenarios.

Hurwitzness
In this subsection we give examples of Hurwitz matrices A such that there exist m > 0 mutually orthogonal vectors x 1 , . . ., x m satisfying ⟨x i , Ax i ⟩ > 0 for all i = 1, . . ., m. (130)
Note that any Hurwitz matrix A has a negative trace, as this number equals the sum of its eigenvalues. It follows that Equation (130) can then only hold when m < n, where n is the size of A. We start by looking at the case n = 2.

To show that H n X is non-empty for general X, we pick X = {x 1 , . . ., x n−1 } and extend it to an orthogonal basis {x 1 , . . ., x n−1 , x n }. Let U be the matrix such that U e i = x i for all i ∈ {1, . . ., n}. It follows that U T U equals a diagonal matrix D with positive diagonal entries given by ⟨x i , x i ⟩ = ∥x i ∥ 2 . In particular, we have U T = DU −1 . Now suppose we pick an element A ∈ H n E . It follows that U AU −1 is Hurwitz as well. Moreover, for all i ∈ {1, . . ., n − 1} we find

⟨x i , U AU −1 x i ⟩ = ⟨U T x i , AU −1 x i ⟩ = ⟨DU −1 x i , AU −1 x i ⟩ = ⟨De i , Ae i ⟩ = ∥x i ∥ 2 ⟨e i , Ae i ⟩ > 0 . (136)

This shows that H n X is likewise non-empty, which concludes the proof. ■

Note that we have A ∈ H n X =⇒ cA ∈ H n X for all c ∈ R >0 . Taking the union over all the sets H n X , we arrive at:

Corollary 26
Given n > 1, the set of all (n × n) Hurwitz matrices A for which some orthogonal vectors x 1 , . . ., x n−1 exist satisfying ⟨x i , Ax i ⟩ > 0 for all i = 1, . . ., n − 1, is open and non-empty.
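The conjugation identity (136) can be tested numerically (a sketch; the orthogonal set and the matrix A below are randomly generated assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthogonal set {x_i}: columns of U = Q @ diag(...) with Q orthogonal,
# so that U e_i = x_i and U^T U is a positive diagonal matrix, as in the proof.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
U = Q @ np.diag([1.0, 2.0, 0.5, 3.0])
A = rng.standard_normal((4, 4))            # any matrix A
B = U @ A @ np.linalg.inv(U)               # the conjugated matrix U A U^{-1}

for i in range(3):                         # i = 1, ..., n - 1
    xi = U[:, i]
    lhs = xi @ (B @ xi)                    # <x_i, U A U^{-1} x_i>
    rhs = (xi @ xi) * A[i, i]              # ||x_i||^2 <e_i, A e_i>
    assert np.isclose(lhs, rhs)
print("identity (136) holds for all i")
```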

Figure 1 :
Figure 1: An illustration of Proposition 6. Figure a) shows the eigenvalues of a particular matrix A, which might be the linear part of the isolated dynamics f at the origin. The matrix A has 6 eigenvalues, all strictly in the left half of the complex plane. The existence of m = 3 mutually orthogonal vectors satisfying (23) ensures that a positive-definite matrix D exists such that A − D has a 3-dimensional generalized kernel. In other words, the subtraction of D has moved 3 eigenvalues to the origin, see Figure b).

Figure 2 :
Figure 2: Sketch of a situation where the m-dimensional center manifold of Theorem 3 may be assumed stable. Depicted are the eigenvalues of A − βD as β is varied. Colored dots denote starting points where β = 0; the blue and red dashed paths form a conjugate pair of complex eigenvalues, and the green and orange dashed paths are real eigenvalues. The arrows indicate how the eigenvalues evolve as β increases to 1. Three of them go to the origin, whereas one moves away from it. None of them touches the imaginary axis before β = 1.

Figure 3 :
Figure 3: Numerically computed behavior of three of the four eigenvalues of the family of matrices from Example 3. As β increases from 0 to 1, three eigenvalues move to the origin, whereas a fourth stays to the left of the imaginary axis. For some value of β ∈ (0, 1), two complex conjugate eigenvalues already cross the imaginary axis away from the origin. Likewise, a real eigenvalue crosses the origin for some β ∈ (0, 1). Data was simulated using Octave.

Figure 5 :
Figure 5: G consists of precisely two disconnected components of order 1 and 2. The complement G • is a connected and non-regular graph.

Figure 8 :
Figure 8: G consists of two disconnected components, both of the same order 2, and the complement G • is connected. The Laplacian matrix L G • has a simple and largest eigenvalue. However, there are no eigenvectors giving the ρ-versatility condition for ρ > 1, except for multiples of 1.

Example 10
A general 2 by 2 matrix A has rows (a, b) and (c, d) with a, b, c, d ∈ R. A is Hurwitz if and only if a + d < 0 and ad − bc > 0. This can easily be arranged if in addition a > 0: first choose d < 0 such that a + d < 0, and then choose b and c such that ad − bc > 0. In this way we can construct examples of Hurwitz matrices A such that Equation (130) holds with m = 1 and x 1 = (1, 0) T . The set of all such matrices forms a non-empty open subset of the space of all 2 by 2 matrices. A similar observation of course holds when d > 0.

Example 11
Consider the 3 by 3 matrix

A = ( a + b + c   e   d ; c   a + e   b + d ; c   b + e   a + d )   (132)

for a, b, c, d, e ∈ R. Using the theory of network multipliers, it is shown in [3] that the eigenvalues of A are given by a + b + c + d + e, a − b and a + b. It is therefore clear that for certain choices of a through e we can arrange for A to be Hurwitz. Moreover, these eigenvalues do not change if we apply the transformation

c → c − 2δ, d → d + δ, e → e + δ   (133)

for any δ ∈ R, while keeping a and b the same. Hence, if A as given by Equation (132) is Hurwitz, then so is the matrix

A δ = ( a + b + c − 2δ   e + δ   d + δ ; c − 2δ   a + e + δ   b + d + δ ; c − 2δ   b + e + δ   a + d + δ )   (134)

for any δ ∈ R. Choosing δ large enough, we see that Equation (130) holds for m = 2 and x 1 = (0, 1, 0) T , x 2 = (0, 0, 1) T . Finally, we show:

Proposition 25
Let X = {x 1 , . . ., x n−1 } be a set of n − 1 mutually orthogonal vectors in R n , where n > 1. Denote by H n X the set of all (n × n) Hurwitz matrices A such that

⟨x i , Ax i ⟩ > 0 for all i = 1, . . ., n − 1. (135)

Then H n X forms a non-empty open subset of the space of all (n × n) matrices.

Proof: As the set of all Hurwitz matrices is open, and because the same holds for the set of all matrices A for which Equation (135) holds, we see that H n X is likewise open. It remains to show that H n X is non-empty. We will first show this when X consists of the first n − 1 standard basis vectors e 1 = (1, 0, . . ., 0) T , e 2 = (0, 1, . . ., 0) T and so forth, up to e n−1 = (0, . . ., 1, 0) T . Given numbers b 1 , . . ., b n ∈ R, we define A b1,...,bn as the (n × n) matrix each of whose rows equals (b 1 , b 2 , . . ., b n ). As A b1,...,bn has rank 1, we see that it has an (n − 1)-dimensional kernel. The remaining eigenvalue is given by b 1 + • • • + b n with eigenvector (1, . . ., 1) T . Let us choose b 1 , . . ., b n−1 > 0 and b n < −(b 1 + • • • + b n−1 ). We also choose a ∈ R such that 0 < a < min(b 1 , . . ., b n−1 ). As a result, we see that the matrix A b1,...,bn − a Id n is Hurwitz, since its eigenvalues are b 1 + • • • + b n − a < 0 and −a < 0. Moreover, because b i − a > 0 for all i ∈ {1, . . ., n − 1}, we conclude that A b1,...,bn − a Id n ∈ H n E where E = {e 1 , . . ., e n−1 }.
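The rank-one construction in the proof of Proposition 25 can be checked numerically (a sketch; the concrete values of b i and a below are chosen by us so as to satisfy the stated inequalities):

```python
import numpy as np

n = 4
b = np.array([1.0, 2.0, 3.0, -10.0])    # b_1,...,b_{n-1} > 0 and b_n < -(b_1 + ... + b_{n-1})
a = 0.5                                 # 0 < a < min(b_1, ..., b_{n-1})

# A_{b_1,...,b_n}: the rank-one matrix each of whose rows is (b_1, ..., b_n).
M = np.tile(b, (n, 1)) - a * np.eye(n)  # the matrix A_{b_1,...,b_n} - a*Id_n

ev = np.linalg.eigvals(M)
print(np.sort(ev.real))                 # eigenvalues sum(b) - a (once) and -a (n - 1 times)
assert max(ev.real) < 0                              # Hurwitz
assert all(M[i, i] > 0 for i in range(n - 1))        # <e_i, M e_i> = b_i - a > 0 for i < n
```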
Consider a block matrix M with blocks M 11 , M 12 , M 21 and M 22 satisfying M T 11 = M 11 and M T 22 = M 22 . In that case the matrix M is positive-definite if and only if both M 22 and the Schur complement M/M 22 are positive-definite. Using this result, we may prove the following. Let D be the block matrix with blocks A 11 , A 12 , A 21 and c Id for some scalar c ∈ R >0 . Suppose furthermore that A 11 is positive-definite. Then, for c sufficiently large the matrix D is positive-definite as well. Writing H := D + D T , we set H 22 := 2c Id, H 12 := A 12 + A T 21 and H 11 := A 11 + A T 11 , the third of which is positive-definite as A 11 is. As H 22 = 2c Id is invertible with inverse 1/(2c) Id, we may form the Schur complement H/H 22 = H 11 − H 12 H −1 22 H T 12 . As H 22 is positive-definite, it follows from the above discussion that H is positive-definite if and only if H/H 22 is. However, as c → ∞ we have H/H 22 → H 11 , so H/H 22 behaves like a small perturbation of H 11 and is therefore positive-definite for c > 0 large enough. This shows that D is likewise positive-definite for large enough c. ■

Proposition 6
Let A ∈ M n (R) be a matrix. There exist m mutually orthogonal vectors x 1 , . . ., x m such that
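The Schur-complement argument can be probed numerically (a sketch; the blocks below are random stand-ins, and positive-definiteness of the non-symmetric D is tested via its symmetric part H = D + D T , as in the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A11 = np.array([[2.0, 0.3],
                [0.1, 1.5]])             # its symmetric part is positive-definite
A12 = rng.standard_normal((2, 3))
A21 = rng.standard_normal((3, 2))

def sym_part_min_eig(c):
    """Smallest eigenvalue of H = D + D^T for D = [[A11, A12], [A21, c*Id]]."""
    D = np.block([[A11, A12], [A21, c * np.eye(3)]])
    return float(np.linalg.eigvalsh(D + D.T).min())

for c in (0.1, 1.0, 100.0):
    print(f"c = {c:6.1f}   min eig of D + D^T = {sym_part_min_eig(c):+.3f}")
assert sym_part_min_eig(100.0) > 0       # D is positive-definite once c is large
```

As c grows, the Schur complement H/H 22 approaches H 11 = A 11 + A T 11 , which is exactly why a large lower-right block forces positive-definiteness.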