1 Introduction

Synchronization is an important phenomenon in real-world networks. For instance, in power grids, power stations must work in 50 Hz synchrony in order to avoid blackouts (Dörfler and Bullo 2012; Motter et al. 2013). In sensor networks, synchronization among the sensors is vital for the transmission of information (Papadopoulos et al. 2005; Yadav et al. 2017). On the other hand, synchronization of subcortical brain areas such as the thalamus is strongly believed to be the origin of motor diseases such as dystonia and Parkinson's disease (Hammond et al. 2007; Milton and Jung 2003; Starr et al. 2005). In all of the mentioned examples, the stability of synchronous states is crucial for the network’s function or dysfunction, respectively. Motivated by these observations, stability properties of synchronous states in systems of coupled elements have been investigated intensively (Barahona and Pecora 2002; Pecora and Carroll 1998; Pikovsky et al. 2001; Field 2017; Li et al. 2007).

An important class, mimicking the above examples, is given by networks of identical elements which are coupled in a diffusive manner. That is, networks for which the dynamics of a node depend on the difference between its own state and its input. A special focus has been on unravelling the connection between such a network’s coupling topology and its overall dynamics (Pereira et al. 2017; Jalili 2013; Nishikawa et al. 2003, 2017; Agarwal and Field 2010; Bick and Field 2017).

While certain correlations have been observed, there are few rigorous results determining the relation between a network’s structure and its dynamical properties (Wu and Chua 1996; Pogromsky and Nijmeijer 2001; Wang and Chen 2002; Ujjwal et al. 2016). Even less is known about the impact of structural perturbations of a network on its dynamical properties such as the stability of synchrony (see for instance Milanese et al. 2010). A particularly interesting and important question in this category is the following: assume that a link’s weight in a network is perturbed or a new link with a small weight is added to the network. What is the impact on the dynamics? For instance, in interaction graphs of gene networks, it has been shown that adding links between two stable systems can lead to dynamics with positive topological entropy (Poignard 2013). In diffusive systems such as laser networks, it was shown that the addition of a link can lead to synchronization loss (Pade and Pereira 2015; Hart et al. 2015). In this article, we focus on the question of whether such structural perturbations lead to higher or lower synchronizability. In the main body, we give rigorous answers to this question for directed networks. In undirected networks, under our assumptions, undirected perturbations never decrease the synchronizability. Here, the only non-trivial situation appears when introducing directed perturbations to undirected networks. We deal with this case in the “Appendix”. Let us first introduce the model and motivate the main questions with some examples.

1.1 Model and Examples

We call a network a triplet \((\mathcal {G}, \varvec{f}, \varvec{H})\), where \(\mathcal {G}\) is a graph, possibly weighted and directed, \(\varvec{f}:\mathbb {R}^{\ell }\rightarrow \mathbb {R}^{\ell }\) is a function representing the local dynamics of each node and \(\varvec{H}\) is a coupling function between the nodes of the network. If no confusion can arise, we sometimes identify a network and its underlying graph \(\mathcal {G}\). To this triplet, we associate the following coupled equations.

$$\begin{aligned} \dot{\varvec{x}}_{i}=\varvec{f}(\varvec{x}_{i})+\alpha \sum _{j=1}^{N}W_{ij}\varvec{H}(\varvec{x}_{j}-\varvec{x}_{i})\qquad i=1,2,\ldots ,N. \end{aligned}$$
(1)

Here, \(\alpha \ge 0\) is the overall coupling strength and \(\varvec{W}=[W_{ij}]_{1\le i,j\le N}\in \mathbb {R}^{N\times N}\) is the adjacency matrix associated with the graph \(\mathcal {G}\). In other words, \(W_{ij}\ge 0\) measures the strength of interaction from node j to node i. The network \((\mathcal {G}, \varvec{f}, \varvec{H})\) is undirected if \(\varvec{W}\) is symmetric, otherwise it is directed. The theory we develop here can include networks of non-identical elements with minor modifications (Pereira et al. 2014).
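As an aside, Eq. (1) is straightforward to integrate numerically. The following sketch is purely illustrative: the Rössler vector field, the identity coupling \(\varvec{H}(\varvec{x})=\varvec{x}\), the example graph and all parameter values are assumptions made here for demonstration and are not part of the general setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative assumptions: Roessler local dynamics with standard chaotic
# parameters and identity coupling H(x) = x (the simulations in Fig. 1 use a
# different Roessler parameterization, cf. Barrio et al. 2011).
a, b, c = 0.2, 0.2, 5.7

def f(x):
    """Isolated node dynamics f: R^3 -> R^3 (Roessler oscillator)."""
    return np.array([-x[1] - x[2], x[0] + a * x[1], b + x[2] * (x[0] - c)])

def network_rhs(t, X, W, alpha):
    """Right-hand side of Eq. (1): N diffusively coupled identical nodes."""
    N = W.shape[0]
    x = X.reshape(N, 3)
    dx = np.array([f(xi) for xi in x])
    for i in range(N):
        # Diffusive coupling: the contributions W_ij * (x_j - x_i) vanish on
        # the synchronization manifold x_1 = ... = x_N.
        dx[i] += alpha * sum(W[i, j] * (x[j] - x[i]) for j in range(N))
    return dx.ravel()

# Hypothetical example: directed ring of three nodes with unit weights.
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
rng = np.random.default_rng(0)
x0 = np.tile([1.0, 1.0, 0.0], 3) + 0.1 * rng.normal(size=9)
sol = solve_ivp(network_rhs, (0.0, 200.0), x0, args=(W, 0.5), rtol=1e-8)
```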

Note that due to the diffusive nature of the coupling, if all oscillators start with the same initial condition, then the coupling term vanishes identically. This ensures that the globally synchronized state is an invariant state for all coupling strengths \(\alpha \), and we call the set

$$\begin{aligned} M := \big \{ x_i \in U \subset \mathbb {R}^{\ell } \text{ for } i \in \{ 1, \ldots , N \} : x_1=\cdots = x_N \big \} \end{aligned}$$
(2)

the synchronization manifold. The transverse stability of M depends on the structure of the graph \(\mathcal {G}\). Indeed, structural changes in \(\mathcal {G}\) can have a drastic influence on the stability of M as can be seen in the next example which serves as motivating example for the subsequent analysis.

1.2 Structural Perturbations in Directed Networks—About Masters and Slaves

Directed networks always consist of one or several strongly connected subnetworks in which every node is reachable from any other node through a directed path. If there is more than one strongly connected subnetwork, two such subnetworks can be connected through unidirectional links pointing from one subnetwork to the other. In the top right of Fig. 1, we show a network composed of two strongly connected subnetworks (without the red link), which is weakly connected; starting from the smaller strongly connected subnetwork, it is not possible to reach the larger one through a directed path. In the physics literature, this configuration is called master–slave coupling, as the subnetwork consisting of nodes 1, 2 and 3 drives the subnetwork consisting of nodes 4 and 5.

The master–slave configuration is believed to be optimal in many respects, for instance regarding synchronization. For instance, feedforward networks can synchronize for a wide range of coupling strengths while having only a few links (Nishikawa and Motter 2006). The network presented in Fig. 1 also supports stable synchronized dynamics. An important question concerns the network dynamics once we make qualitative structural changes. For instance, what happens if we add a link breaking the master–slave configuration?

Fig. 1

Impact of making the network strongly connected. Simulations of the networks shown on the right, depicting the total synchronization error \(\sum _{i{\ne }j}\Vert x_j(t)-x_i(t)\Vert /N(N-1)\) versus time, where N is the number of nodes. In the top plot, the red link is added after time \(t=4000\) and destroys the master–slave configuration by making the network strongly connected. As a consequence, the previously stable chaotic synchronization is no longer supported. In the bottom row, parameters are adjusted such that synchronization is unstable for the original network. The addition of the blue link at time \(t=4000\) again makes the network strongly connected. However, in this case it leads to a stabilization of the synchronous state. Each link in the original network has unit weight. The new links introduced have weight 0.25, and the isolated dynamics f is described by chaotic Rössler oscillators (Poignard et al. 2018). The coupling function is the identity operator and \(\alpha \) is near the critical coupling \(\alpha _c\) (\(\alpha =0.085\) in the upper plot and \(\alpha =0.08\) in the lower plot). The parameters a, b, c as in Eq. (1) in Barrio et al. (2011) are set to \(a=0.2, b=0.2, c=0.9\). The main plots show the total synchronization error and the insets show the differences \(x_1-x_5\) of the first components of nodes 1 and 5. The initial conditions are chosen as small random deviations from \(x_i=4.7973, y_i=-8.4776\) and \(z_i=0.0361\) for \(i=1,\ldots ,5\), where \((x_i,y_i,z_i)^T\) is the state variable of the i-th node (Color figure online)

An example of this is shown in Fig. 1a. Introducing the new link (in red) makes the whole network strongly connected: there is a directed path connecting any two vertices in the network. Therefore, the addition of the link significantly improves the connectivity properties of the network. However, this structural improvement has a surprising consequence for the dynamics: the network synchronization is lost, as can be seen in the simulation in Fig. 1a.

Hindrance of synchronization is not about breaking a master–slave configuration. One may think that this synchronization loss appears because we are breaking the master–slave configuration. This rationale seems justified, as master–slave configurations are known to synchronize well (Nishikawa and Motter 2006). However, the synchronization loss is not related to the breaking of the master–slave configuration. Indeed, adding a different connection which also makes the network strongly connected stabilizes the synchronous state (see Fig. 1b).

Hindrance of synchronization is not about reinforcing the hub. Synchronization loss in the example of Fig. 1a occurred when an additional link was added to the hub of the largest subnetwork (the most connected node in the network). However, running experiments on random graphs with hubs, we found several counter-examples in which linking to the hub improves synchronization.

To sum up, while in some settings master–slave configurations and the presence of hubs play an important role for the behaviour of a network under structural perturbations (Pereira et al. 2017; Pereira 2010; Belykh et al. 2005), for networks with diffusive dynamics near synchronization, adding extra links generates nonlinear effects which can either enhance or hinder synchronization. Our main result (Theorem 1) gives an almost complete explanation of the complex behaviour of such weakly connected directed networks when a master–slave configuration is reinforced or destroyed, respectively.

1.3 Informal Statement of the Main Result

Using the master stability approach to tackle the transverse stability of the synchronization manifold M (Pereira et al. 2014), we can in fact reduce the stability problem to the spectral analysis of graph Laplacians. The rather mild assumptions needed for this approach are specified in Sect. 2.2. We emphasize that under these assumptions, the stability region determined by the master stability function is unbounded. Furthermore, its left boundary depends exclusively on the spectral gap \(\lambda _2\), i.e. the other Laplacian eigenvalues do not play a role for linear stability considerations. Let us now give an informal statement of our main result.

In Theorem 1, we consider networks consisting of two strongly connected components. The general case of a higher number of strongly connected components is a straightforward generalization.

Informal Statement of Theorem 1 (Breaking Master–Slave Configurations) Consider a directed network connected in a master–slave configuration as in Fig. 1. First, consider the situation where the master network is poorly connected. Then, strengthening the cutset is immaterial for synchronization: it will neither facilitate nor hinder it. Second, consider the case where the master network is highly connected. In this case, we have:

  • Strengthening the driving facilitates synchronization, leading to shorter transients towards synchronization and enlarging the basin of attraction.

  • Master–slave configurations are non-optimal. It is always possible to break the master–slave configuration in a way that favours synchronization (e.g. Fig. 1). Provided the overall connectivity of the network is poor, it is even possible to find one or several nodes in the master component such that the addition of an arbitrary single link ending at such a node and breaking the master–slave configuration increases the synchronizability. In fact, if, additionally, the Laplacian of the master component has zero column sums, then any perturbation in opposite direction of the cutset enhances the synchronizability.

  • Breaking master–slave configurations can hinder synchronization. If the connectivity of the master component is not much stronger than the connectivity of the slave component (a precise condition will be given in Theorem 1), we can always find a cutset such that there is a perturbation in opposite direction of this cutset for which synchronization is hindered. Our result reveals the role the eigenvectors of the master network play in the destabilization of the synchronous motion. For example, if \(\alpha _k\) is an eigenvalue of the Laplacian of the master component which is close to the spectral gap \(\lambda _2\) of the slave component (this is the case in our illustration), then the eigenvector \(\varvec{X}_k\) associated with \(\alpha _k\) encodes the important information about the possible destabilization. For instance, assume that the ith entry of \(\varvec{X}_k\) is the maximal (or minimal) one. If the slave network is driven by a link coming from the ith node, then it is possible to destabilize the synchronization.

The remainder of the article is organized as follows. In Sect. 2, we introduce basic notions concerning the stability of synchronization in networks and present the notion of synchronizability of networks. In Sect. 3, we present the main result of our paper, followed by Sect. 4 which is devoted to the proof of Theorem 1. The article concludes with a discussion in Sect. 5. The “Appendix” is dedicated to the study of directed perturbations in undirected networks. We state and prove the main result, Theorem 2, which establishes a classification of directed perturbations according to their impact on synchronizability.

2 Notations and Definitions

2.1 Weighted Graphs and Laplacian Matrices

We consider networks of identical elements with diffusive interaction. It will be useful to interpret the coupling structure of the network as a graph. We recall some basic facts on graph theory.

Definition 1

(Weighted graphs) A weighted graph \(\mathcal {G}\) is a set of nodes \(\mathcal {N}\) together with a set of edges \(\mathcal {E}\subset \mathcal {N}\times \mathcal {N}\) and a weight function \(w:\mathcal {E}\rightarrow \mathbb {R}_+\). We say that the graph is unweighted when we have \(w(i,j)=1\) for all \((i,j)\) in \(\mathcal {E}\). Moreover,

  (i) We say that the graph is undirected if \((i,j)\in \mathcal {E}\iff (j,i)\in \mathcal {E}\) and \(w(i,j)=w(j,i)\) for all \((i,j)\in \mathcal {E}\). Otherwise, the graph is directed and edges are assigned orientations. A directed graph is also called a digraph.

  (ii) \(\mathcal {G}=(\mathcal {N},\mathcal {E},w)\) is a subgraph of \(\mathcal {G}^{\prime }=(\mathcal {N}^{\prime },\mathcal {E}^{\prime },w^{\prime })\) if \(\mathcal {N}\subseteq \mathcal {N}^{\prime }\) and \(\mathcal {E}\subseteq \mathcal {E}^{\prime }\). In this case, we write \(\mathcal {G}\subseteq \mathcal {G}^{\prime }\).

  (iii) The adjacency matrix \(\varvec{W}\in \mathbb {R}^{N\times N}\) of the graph \(\mathcal {G}\) is defined through

    $$\begin{aligned} W_{ij}=\left\{ \begin{array}{ll} w(i,j) &{} \text {if } (i,j)\in \mathcal {E}\\ 0 &{} \text {otherwise} \end{array}\right. \end{aligned}$$

To deal with synchronization of networks, we will focus on graphs exhibiting some sort of connectedness.

Definition 2

(Connectedness of graphs) An undirected graph \(\mathcal {G}\) is connected if for any two nodes i and j, there exists a path \(\{i=i_1,\ldots ,i_p=j\}\) of nodes (successively connected by edges of \(\mathcal {G}\)) between node i and node j. For directed graphs, we have two notions of connectedness

  (i) A digraph \(\mathcal {G}\) is strongly connected if every node is reachable from every other node through a directed path.

  (ii) The digraph is weakly connected if it is not strongly connected and the underlying graph which is obtained by ignoring the links’ directions is connected. A maximal strongly connected subgraph of a weakly connected digraph is called a strongly connected component, or strong component. The maximal set of links connecting one strong component to another is called a cutset.

  (iii) A spanning diverging tree of a digraph is a weakly connected subgraph containing all nodes of the digraph, such that one node, the root node, has no incoming edges and every other node has exactly one incoming edge.

Let a weighted digraph be given by its adjacency matrix \(\varvec{W}\), and let \(\varvec{D_W}\) be the diagonal matrix whose i-th entry is given by the degree \(d_i=\sum _{j=1}^NW_{ij}\) of node i. The Laplacian of \(\varvec{W}\) is then defined as

$$\begin{aligned} \varvec{L_W} = \varvec{D_W}-\varvec{W} \end{aligned}$$
(3)

As the Laplacian has zero row sums, the vector \(\mathbf {1}\) (all entries equal to 1) is an eigenvector associated with the eigenvalue 0. In virtue of the Gershgorin theorem (Horn and Johnson 1985), the remaining eigenvalues \(\lambda _{i}\) have nonnegative real parts. In what follows we will always assume that the eigenvalues are ordered in the following way

$$\begin{aligned} 0=\lambda _{1}\le \mathfrak {R}\left( \lambda _{2}\right) \le \cdots \le \mathfrak {R}\left( \lambda _{N}\right) . \end{aligned}$$
(4)

This allows us to introduce a standard notation from algebraic graph theory. We call the second eigenvalue \(\lambda _2=\lambda _2(\varvec{L_W})\) of \(\varvec{L_W}\) the spectral gap. If the graph is undirected, we call the corresponding normalized eigenvector the Fiedler vector (Chung 1997).
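As a small illustration of these definitions, the sketch below builds the Laplacian (3) from an adjacency matrix and reads off the spectral gap according to the ordering (4); the example graph is an arbitrary choice.

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian L_W = D_W - W, where D_W holds the degrees d_i = sum_j W_ij."""
    return np.diag(W.sum(axis=1)) - W

def spectral_gap(W):
    """Second Laplacian eigenvalue when ordered by increasing real part, cf. Eq. (4)."""
    eigvals = np.linalg.eigvals(laplacian(W))
    return eigvals[np.argsort(eigvals.real)][1]

# Arbitrary weighted digraph on four nodes (W_ij = weight of the link j -> i).
W = np.array([[0.0, 1.0, 0.0, 0.5],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
print(spectral_gap(W))  # has positive real part whenever A1 (spanning diverging tree) holds
```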

2.2 Synchronizability of Networks: Assumptions

Although equations of the form (1) are heavily used in the context of network synchronization, it was only very recently that a stability result was established for the general case of time-dependent solutions (Pereira et al. 2014). In order to guarantee the stability of synchronous motion, we make the following assumptions:

A1 (Structural assumption) \(\mathcal {G}\) has a spanning diverging tree.

B1 (Absorbing Set) The vector field \(\varvec{f}:\mathbb {R}^{\ell }\rightarrow \mathbb {R}^{\ell }\) is continuous and there exists a bounded, positively invariant open set \(U\subset \mathbb {R}^{\ell }\) such that \(\varvec{f}\) is continuously differentiable in U and there exists a \(\varrho >0\) such that

$$\begin{aligned} \left\| d\varvec{f}\left( \varvec{x}\right) \right\| \le \varrho \qquad \forall \varvec{x} \in U. \end{aligned}$$
(5)

B2 (Smooth Coupling) The local coupling function \(\varvec{H}\) is smooth satisfying \(\varvec{H}(\varvec{0})=\varvec{0}\), and the eigenvalues \(\beta _{j}\) of \(d\varvec{H}\left( \varvec{0}\right) \) are real.

B3 (Spectral Interplay) The eigenvalues \(\beta _{j}\) of \(d\varvec{H}\left( \varvec{0}\right) \) and \(\lambda _{i}\) of \(\varvec{L}\) fulfil

$$\begin{aligned} \gamma :=\mathfrak {R}(\lambda _2) \min _{1\le j\le \ell }\beta _j>0. \end{aligned}$$
(6)

Let us briefly discuss these assumptions to see that they are quite natural. A1 concerns the coupling topology of the underlying (directed) graph. For undirected networks, it simply amounts to assuming that the underlying coupling graph is connected. In the case of a weakly connected digraph consisting of several strong components, it is equivalent to the fact that there is exactly one root component: a strong component which does not have any incoming cutsets. Algebraically, a consequence of this assumption for both undirected and directed graphs is that the zero eigenvalue of the graph Laplacian is simple (Agaev and Chebotarev 2000).

Assumption B1 guarantees that the nodes’ dynamics admit an invariant compact set, for instance an equilibrium, a periodic orbit or a chaotic orbit (as is the case for the motivating example from Fig. 1).

The second dynamical condition B2 guarantees that the synchronous state \({x}_{1}\left( t\right) ={x}_{2}\left( t\right) =\cdots ={x}_{N}\left( t\right) \) is a solution of the coupled equations for all values of the overall coupling strength \(\alpha \): when starting with identical initial conditions, the coupling term vanishes and all the nodes behave in the same manner.

Concerning the last condition B3, recall that for undirected graphs the zero eigenvalue of the graph Laplacian is non-simple if and only if the underlying graph is disconnected (Brouwer and Haemers 2011). In this case, the stability condition would be violated. Indeed, in order to observe synchronization, it is clear that one should consider networks which are connected in some sense. We remark that the assumption that the \(\beta _j\) are real holds in many applications. The general case of complex eigenvalues \(\beta _j\) can be tackled in a similar way, but the analysis becomes more technical without providing new insight into the phenomena (Pereira et al. 2014).

2.3 Critical Threshold for Synchronization

Under the previous assumptions, it was shown in Pereira et al. (2014) that for Eq. (1), there exists an \(\alpha _c = \alpha _c(\mathcal {G},\varvec{f},\varvec{H})\) such that if the global coupling strength fulfils \(\alpha > \alpha _c\), the network is locally uniformly synchronized: the synchronization manifold attracts uniformly in an open neighbourhood. More precisely, there exists a \(C=C\left( \varvec{L},d\varvec{H}\left( \varvec{0}\right) \right) >0\) such that if the initial condition \(\varvec{x}_{i}\left( t_{0}\right) \) is in a neighbourhood of the synchronization manifold, then the solution \(\varvec{x}(t)\) of Eq. (1) fulfils

$$\begin{aligned} \left\| {x}_{i}\left( t\right) -{x}_{j}\left( t\right) \right\| \le Ce^{-\left( \alpha \gamma -\rho \right) \left( t-t_{0}\right) }\left\| {x}_{i}\left( t_{0}\right) -{x}_{j}\left( t_{0}\right) \right\| \qquad \forall t\ge t_{0}. \end{aligned}$$

Now, the key connection to the graph Laplacian is that the critical coupling \(\alpha _c\) can be factored as

$$\begin{aligned} \alpha _c=\frac{\rho }{\gamma } \end{aligned}$$
(7)

where \(\rho =\rho (\varvec{f},d\varvec{H}(\varvec{0}))\) is a constant depending only on \(\varvec{f}\) and \(d\varvec{H}(\varvec{0})\). So, the constant \(\gamma \) which represents the coupling structure [see Eq. (6)] is directly related to the contraction rate towards the synchronous manifold. In fact, the condition \(\alpha >\alpha _c\) for stable synchronous motion now writes as

$$\begin{aligned} \alpha \mathfrak {R}\left( \lambda _{2}\right) \min _{1\le j\le \ell }\beta _{j}>\rho . \end{aligned}$$
(8)

Condition (8) shows that the spectral gap \(\lambda _{2}\) plays a central role for synchronization properties of the network.

2.3.1 Measures of Synchronization

We can use the critical coupling \(\alpha _c\) in order to define a measure of synchronizability.

Definition 3

We say that the network \((\mathcal {G}_1, \varvec{f}_1, \varvec{H}_1)\) is more synchronizable than \((\mathcal {G}_2, \varvec{f}_2, \varvec{H}_2)\) if their critical couplings satisfy

$$\begin{aligned} \alpha _c( \mathcal {G}_1, \varvec{f}_1, \varvec{H}_1) < \alpha _c( \mathcal {G}_2, \varvec{f}_2, \varvec{H}_2). \end{aligned}$$
(9)

Indeed, the range of coupling strengths which yield stable synchronization is larger for \(( \mathcal {G}_1, \varvec{f}_1, \varvec{H}_1)\). Fixing the dynamics \(\varvec{f}\) and the coupling function \(\varvec{H}\), we can now measure whether structural changes in the graph will favour or hinder synchronization. Assume we have a network \((\mathcal {G}, \varvec{f}, \varvec{H})\) and a perturbed network \((\tilde{\mathcal {G}}, \varvec{f}, \varvec{H})\) with corresponding spectral gaps \(\lambda _2\) and \(\tilde{\lambda }_2\).

A direct consequence of the definition of synchronizability is that if \(\mathfrak {R}(\lambda _2)<\mathfrak {R}(\tilde{\lambda }_2)\), the perturbed network \((\tilde{\mathcal {G}}, \varvec{f}, \varvec{H})\) is more synchronizable than \((\mathcal {G}, \varvec{f}, \varvec{H})\).

We also say that the modification favours synchronization. Otherwise, if \(\mathfrak {R}(\lambda _2)>\mathfrak {R}(\tilde{\lambda }_2)\), we say the structural perturbation hinders synchronization. This enables us to reduce the stability problem to an algebraic problem, i.e. the behaviour of the spectral gap under structural perturbations. We will use this approach throughout the whole article.
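Computationally, this reduction means that classifying a structural perturbation only requires comparing the real parts of the two spectral gaps. A minimal sketch, reusing the hypothetical `spectral_gap` helper from Sect. 2.1:

```python
def perturbation_effect(W, W_perturbed):
    """Classify a structural perturbation in the sense of Definition 3,
    assuming the local dynamics f and the coupling H are kept fixed."""
    gap = spectral_gap(W).real
    gap_perturbed = spectral_gap(W_perturbed).real
    if gap_perturbed > gap:
        return "favours synchronization"
    if gap_perturbed < gap:
        return "hinders synchronization"
    return "synchronizability unchanged"
```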

3 Main Result

In this section, we state our main result, Theorem 1, on perturbations of directed networks. We emphasize that, given assumption A1, the result is structurally generic, a term which we introduced in an earlier paper (Poignard et al. 2018). In order to explain the notion of structural genericity, consider the set of Laplacians corresponding to networks with identical coupling topologies but potentially different weights. In this set, the subset of Laplacians for which our results are valid is dense, and its complement has zero Lebesgue measure. In other words, given any network topology satisfying A1, our result is valid up to a small perturbation of the weights of the existing links of this network. This notion of structural genericity is stronger than the classical one, for which it is usually necessary to drastically perturb the structure of the original network itself. For more details on structural genericity, see Theorem 6.6 for the directed case and Theorem 3.1 for the undirected case in Poignard et al. (2018).

3.1 Structural Perturbations in Directed Networks

In this section, we investigate the class of directed networks satisfying assumption A1 and consisting of at least two strong components. Due to A1, these networks have exactly one root component. Furthermore, we restrict ourselves to the study of the dynamical role of links between strong components, i.e. of cutsets. Here, perturbations can point either in direction of a cutset or in opposite direction of a cutset. For simplicity of presentation, we assume that there are only two strong components. So, the corresponding Laplacian is of the form

$$\begin{aligned} \varvec{L_W}=\left( \begin{array}{cc} \varvec{L}_{1} &{} \varvec{0}\\ -\varvec{C}&{} \varvec{L}_{2}+\varvec{D}_{\varvec{C}} \end{array}\right) , \end{aligned}$$
(10)

where \(\varvec{L}_{1}\in \mathbb {R}^{n\times n}\) and \(\varvec{L}_{2}\in \mathbb {R}^{m\times m}\) are the respective Laplacians of the strong components, \(\varvec{C}\in \mathbb {R}^{m\times n}\) is the adjacency matrix of the cutset pointing from one strong component to the other and \(\varvec{D_{C}}\) is a diagonal matrix with the row sums of \(\varvec{C}\) on its diagonal. The results presented in Theorem 1 below can be generalized in a straightforward way to networks with more than two strong components. For instance, our results are still valid for graph Laplacians of the form

$$\begin{aligned} \varvec{L_W}=\begin{pmatrix} \varvec{L}_{1} &{} \varvec{0}&{} \varvec{0} &{} \cdots &{} \varvec{0}\\ -\varvec{C_{21}} &{} \varvec{L}_{2}+\varvec{D}_{\varvec{C_{21}}}&{} \varvec{0} &{} \cdots &{}\varvec{0}\\ -\varvec{C_{31}} &{} -\varvec{C_{32}} &{} \varvec{L}_{3}+\varvec{D}_{\varvec{C_{31}}}+\varvec{D}_{\varvec{C_{32}}}&{} \cdots &{}\varvec{0}\\ \vdots &{} &{} \ddots &{}\ddots &{} \vdots \\ -\varvec{C_{(p-1)1}}&{}\ldots &{}-\varvec{C_{(p-1)(p-2)}}&{} \varvec{L}_{p-1}+\sum _{i=1}^{p-2}\varvec{D}_{\varvec{C_{(p-1)i}}}&{} \varvec{0}\\ -\varvec{C_{p1}} &{} \ldots &{}&{} -\varvec{C_{p(p-1)}}&{} \varvec{L}_{p}+\sum _{i=1}^{p-1}\varvec{D}_{\varvec{C_{pi}}}\\ \end{pmatrix}, \end{aligned}$$

representing a graph with p strong components connected by cutsets \(\varvec{C_{ij}}\).

We first remark that as a consequence of the block structure of \(\varvec{L_W}\) given by Equation (10), its eigenvalues are either eigenvalues of \(\varvec{L}_{1}\) or eigenvalues of \(\varvec{L}_{2}+\varvec{D}_{c}\). A structural perturbation in direction of the cutset induced by a nonnegative matrix \(\varvec{\Delta }\in \mathbb {R}^{m\times n}\) corresponds to the following modified Laplacian matrix

$$\begin{aligned} \varvec{L}_p\left( \varvec{\Delta }\right) =\left( \begin{array}{cc} \varvec{L}_{1} &{} \varvec{0}\\ -\varvec{C}-\varvec{\Delta } &{} \varvec{L}_{2}+\varvec{D}_{\varvec{C}+\varvec{\Delta }} \end{array}\right) . \end{aligned}$$

A structural perturbation in opposite direction of the cutset induced by a nonnegative matrix \(\varvec{\Delta }\in \mathbb {R}^{n\times m}\) corresponds to the following modified Laplacian matrix

$$\begin{aligned} \varvec{L}_{p}\left( \varvec{\Delta }\right) =\left( \begin{array}{cc} \varvec{L}_{1}+\varvec{D}_{\Delta } &{} -\varvec{\Delta }\\ -\varvec{C} &{} \varvec{L}_{2}+\varvec{D_{C}} \end{array}\right) . \end{aligned}$$
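For concreteness, both perturbed Laplacians can be assembled directly from the blocks \(\varvec{L}_1\), \(\varvec{L}_2\), \(\varvec{C}\) and \(\varvec{\Delta }\). The helper below is a sketch; the only assumptions are the block dimensions stated above.

```python
import numpy as np

def perturbed_laplacian(L1, L2, C, Delta, opposite=False):
    """Assemble L_p(Delta) for a structural perturbation in direction of the
    cutset (opposite=False, Delta is m x n) or in opposite direction of the
    cutset (opposite=True, Delta is n x m)."""
    n, m = L1.shape[0], L2.shape[0]
    D_C = np.diag(C.sum(axis=1))
    if not opposite:
        D_Delta = np.diag(Delta.sum(axis=1))           # m x m diagonal of row sums
        top = np.hstack([L1, np.zeros((n, m))])
        bottom = np.hstack([-(C + Delta), L2 + D_C + D_Delta])
    else:
        D_Delta = np.diag(Delta.sum(axis=1))           # n x n diagonal of row sums
        top = np.hstack([L1 + D_Delta, -Delta])
        bottom = np.hstack([-C, L2 + D_C])
    return np.vstack([top, bottom])
```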

Remark 1

In the rest of the paper, to avoid cumbersome formulations, we will employ the formulation “a structural perturbation \(\varvec{\Delta }\) in direction of the cutset” to refer to a structural perturbation in direction of the cutset induced by a nonnegative matrix \(\varvec{\Delta } \in \mathbb {R}^{m\times n}\), and similarly for structural perturbations in opposite direction of the cutset.

Notation

Given a Laplacian matrix \(\varvec{L_W}\) of a directed graph with simple spectral gap \(\lambda _2\left( \varvec{L_W}\right) \) and a nonnegative matrix \(\varvec{\Delta }\), we denote, similarly to the above notations, by

$$\begin{aligned} s\left( \varvec{\Delta }\right) := \lim _{\varepsilon \rightarrow 0 } \frac{1}{\varepsilon } \left( \lambda _2\left( \varvec{L}_p(\varepsilon \varvec{\Delta } ) \right) - \lambda _2\left( \varvec{L_W} \right) \right) \end{aligned}$$

the change rate of the spectral gap map under the small structural perturbations \(\varepsilon \varvec{\Delta }\) in direction or in opposite direction of the cutset.
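In practice, \(s\left( \varvec{\Delta }\right) \) can be approximated by a one-sided finite difference of the spectral gap map; the sketch below reuses the hypothetical `perturbed_laplacian` helper from above and assumes nothing beyond the definitions already given.

```python
import numpy as np

def change_rate(L1, L2, C, Delta, opposite=False, eps=1e-6):
    """Finite-difference approximation of s(Delta), i.e. the derivative at 0 of
    the spectral gap map eps -> lambda_2(L_p(eps * Delta))."""
    def gap(L):
        ev = np.linalg.eigvals(L)
        return ev[np.argsort(ev.real)][1]
    L_unperturbed = perturbed_laplacian(L1, L2, C, np.zeros_like(Delta), opposite)
    L_eps = perturbed_laplacian(L1, L2, C, eps * Delta, opposite)
    return (gap(L_eps) - gap(L_unperturbed)) / eps
```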

Observe, as in Definition 4, that the spectral gap map \(\varepsilon \mapsto \lambda _2\left( \varvec{L}_p(\varepsilon \varvec{\Delta })\right) \) is regular because of the simplicity of \(\lambda _2\left( \varvec{L_W}\right) \) (Horn and Johnson 1985). In Poignard et al. (2018), we proved that having simple eigenvalues is a structurally generic property for graph Laplacians of weakly connected digraphs that satisfy Assumption A1.

Notice that we can possibly have \(s\left( \varvec{\Delta }\right) \in \mathbb {C}\), since the matrices involved in this notation are no longer symmetric. However, we will prove in Sect. 4 (Lemma 2) that in the case where \(\lambda _2\left( \varvec{L_W}\right) \) is an eigenvalue of \(\varvec{L_{2}}+\varvec{D_{C}}\), then \(\lambda _2\left( \varvec{L_W}\right) \) is real and positive and therefore \(s\left( \varvec{\Delta }\right) \in \mathbb {R}\). We can now state our main result in the directed case:

Theorem 1

Let a directed graph \(\mathcal {G}\) consist of two strong components connected by a cutset with adjacency matrix \(\varvec{C}\), and write the associated Laplacian as

$$\begin{aligned} \varvec{L_W}=\left( \begin{array}{cc} \varvec{L}_{1} &{} \varvec{0}\\ -\varvec{C} &{} \varvec{L}_{2}+\varvec{D_{C}} \end{array}\right) . \end{aligned}$$

Assume A1 is satisfied. Then, for a generic choice of the nonzero weights of \(\varvec{L_W}\), we have the following assertions:

  (i) Invariance of synchronizability If the spectral gap \(\lambda _2\) of \(\varvec{L_W}\) is an eigenvalue of \(\varvec{L_1}\), then the network’s synchronizability is invariant under arbitrary structural perturbations \(\varvec{\Delta }\) in direction of the cutset.

  (ii) Improving synchronizability by reinforcing the cutset If \(\lambda _{2}\) is an eigenvalue of \(\varvec{L_{2}}+\varvec{D_{C}}\), then the network’s synchronizability increases for arbitrary structural perturbations \(\varvec{\Delta }\) in direction of the cutset.

  (iii) Non-optimality of master–slave configurations Assume \(\lambda _{2}\) is an eigenvalue of \(\varvec{L_{2}}+\varvec{D_{C}}\). Then, we have the following statements:

    (a) There exists a structural perturbation \(\varvec{\Delta }\) in opposite direction of the cutset such that \(s(\varvec{\Delta })> 0\).

    (b) There exists a constant \(\delta (\varvec{L}_1)>0\) and at least one node \(1\le k_0\le n\) (in the driving component) such that if we have \(0<\lambda _2<\delta (\varvec{L}_1)\), then \(s(\varvec{\Delta })> 0\) for any structural perturbation \(\varvec{\Delta }\) consisting of only one link in opposite direction of the cutset and ending at node \(k_0\).

    (c) If, moreover, \(\varvec{L}_1\) has zero column sums, then there exists a constant \(\delta (\varvec{L}_1)>0\) such that if \(0<\lambda _2<\delta (\varvec{L}_1)\), we have \(s(\varvec{\Delta })> 0\) for any structural perturbation \(\varvec{\Delta }\) in opposite direction of the cutset.

  (iv) Hindering synchronizability by breaking the master–slave configuration There exists a cutset \(\varvec{C}\) for which \(\lambda _{2}\) is an eigenvalue of \(\varvec{L_{2}}+\varvec{D_{C}}\) and a perturbation \(\varvec{\Delta }\) in opposite direction of \(\varvec{C}\) such that: if \(\varvec{L}_1\) admits a sufficiently small positive eigenvalue, then we have \(s(\varvec{\Delta })\le 0\).

Let us make a few remarks. In the proof of items (i) and (ii), we repeatedly apply a perturbation result in order to handle non-small perturbations. This is not possible in items (iii)(a)–(c) and item (iv) because every perturbation in opposite direction of the cutset makes the graph strongly connected and thus qualitatively changes the network’s structure. In item (iii)(a), the perturbation can be realised by turning an arbitrary node in the slave component into a hub having directed connections to all the nodes in the master component.

As we have shown numerically in the example in Fig. 1 and also in Pade and Pereira (2015), not all perturbations in opposite direction of the cutset increase the synchronizability when \(\varvec{L}_1\) does not have zero column sums. This is stated in item (iv): an example where the situation described in this item occurs is when the master component is an undirected subnetwork, i.e. when \(\varvec{L}_1\) is symmetric, in which case all its eigenvalues are real and nonnegative.

In items (ii)–(iv), we assume that the spectral gap is an eigenvalue of \(\varvec{L_{2}}+\varvec{D_{C}}\). This happens for instance when the entries in the cutset \(\varvec{C}\) are very small (see Lemma 2). Topologically, this means that the master component is very well connected in comparison with the intensity and/or density of the driving force. It is worth remarking that the connection density of the second component does not play a role in this scenario. The rest of the paper is devoted to the proofs of our two main results and of other results completing our study.

4 Proof of the Main Result

The following standard result from matrix theory is the technical starting point for the rest of this article (Horn and Johnson 1985). It allows us to determine the dynamical effect of structural perturbations up to first order in the strength of the perturbation.

Lemma 1

(Spectral Perturbation Horn and Johnson 1985) Let \(\lambda \) be a simple eigenvalue of \(\varvec{L}\in \mathbb {R}^{N\times N}\) with corresponding left and right eigenvectors \(\varvec{u},\varvec{v}\), and let \(\tilde{\varvec{L}}\in \mathbb {R}^{N\times N}\). Then, for \(\varepsilon \) small enough there exists a smooth family \(\lambda \left( \varepsilon \right) \) of simple eigenvalues of \(\varvec{L}+\varepsilon \tilde{\varvec{L}}\) with \(\lambda \left( 0\right) =\lambda \) and

$$\begin{aligned} \lambda ^{\prime }\left( 0\right) =\frac{\varvec{u}^T\tilde{\varvec{L}}\varvec{v}}{\varvec{u}^T\varvec{v}}. \end{aligned}$$
(11)
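Formula (11) is easy to check numerically; the following sketch uses an arbitrary random matrix and perturbation direction (both assumptions made purely for illustration) and tracks the eigenvalue branch closest to the unperturbed eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(5, 5))         # arbitrary matrix, generically with simple eigenvalues
L_tilde = rng.normal(size=(5, 5))   # arbitrary perturbation direction
eps = 1e-7

evals, V = np.linalg.eig(L)
evals_T, U = np.linalg.eig(L.T)     # columns of U are left eigenvectors of L
lam = evals[0]
v = V[:, 0]
u = U[:, np.argmin(np.abs(evals_T - lam))]   # left eigenvector belonging to lam

predicted = (u @ L_tilde @ v) / (u @ v)      # right-hand side of Eq. (11)
shifts = np.linalg.eigvals(L + eps * L_tilde) - lam
measured = shifts[np.argmin(np.abs(shifts))] / eps
print(predicted, measured)                   # agree up to O(eps)
```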

In order to track the motion of the spectral gap through this representation, we first investigate the structure of the eigenvectors of \(\varvec{L_W}\) in the following two auxiliary lemmata. First, observe that the matrix \(\varvec{L}_{2}+\varvec{D_{C}}\) is diagonally dominant with nonnegative diagonal and nonpositive off-diagonal entries (Berman and Plemmons 1994; Horn and Johnson 1985). This property enables us to find a Perron–Frobenius-like result.

Lemma 2

Let \(\varvec{L_W}\) be as in Theorem 1. Then, \(\varvec{L}_{2}+\varvec{D_{C}}\) has a minimal simple, real and positive eigenvalue with corresponding positive left and right eigenvectors.

Proof

Let \(s:=\max _{i}\left\{ {\varvec{D_{C}}} _{(i)}+\sum _{j\ne i}W_{ij}\right\} >0\), then \(\varvec{N}=s\varvec{I}-\left( \varvec{L}_{2}+\varvec{D_{C}}\right) \) is a nonnegative matrix by definition of s. Furthermore, it is irreducible as we assumed that the component associated with \(\varvec{L}_{2}\) is strongly connected. Then, by the Perron–Frobenius theorem (Berman and Plemmons 1994), \(\varvec{N}\) has a maximal, simple and real eigenvalue \(\Lambda \) with corresponding positive left and right eigenvectors \(\varvec{\omega }\) and \(\varvec{\eta }\). That is

$$\begin{aligned} \varvec{N\eta }= & {} \Lambda \varvec{\eta } \end{aligned}$$

yielding

$$\begin{aligned} \left( \varvec{L}_{2}+\varvec{D_{C}}\right) \varvec{\eta }= & {} \left( s-\Lambda \right) \varvec{\eta }. \end{aligned}$$

As \(\Lambda \) is the maximal eigenvalue and all the eigenvalues of \(\varvec{L}_{2}+\varvec{D_{C}}\) are obtained by eigenvalues \(\mu \) of \(\varvec{N}\) through \(s-\mu \), we must have that \(s-\Lambda \) is the minimal real eigenvalue of \(\varvec{L}_{2}+\varvec{D_{C}}\). Furthermore, the eigenvectors are the same, so the left and right eigenvectors of \(\varvec{L}_{2}+\varvec{D_{C}}\) corresponding to \(s-\Lambda \) are positive. As a consequence of the Gershgorin Theorem together with the strong connectivity of the second component, we have that \(\varvec{L}_{2}+\varvec{D_{C}}\) is invertible (Corollary 6.2.9 in Horn and Johnson 1985). Hence, \(s-\Lambda \ne 0\). Furthermore, by the Gershgorin Theorem again, we have \(s-\Lambda \ge 0\) and hence \(s-\Lambda >0\). \(\square \)
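The statement of Lemma 2 can be verified directly on any concrete pair \((\varvec{L}_2,\varvec{C})\); the sketch below uses an arbitrary two-node slave component and a single-link cutset as a hypothetical example.

```python
import numpy as np

# Hypothetical example: slave component = undirected link of unit weight between
# two nodes, cutset C = one link of weight 0.3 from master node 1 to slave node 1.
W2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
L2 = np.diag(W2.sum(axis=1)) - W2
C = np.array([[0.3, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
A = L2 + np.diag(C.sum(axis=1))               # L_2 + D_C

evals, V = np.linalg.eig(A)
k = np.argmin(evals.real)
evals_T, U = np.linalg.eig(A.T)
w = U[:, np.argmin(np.abs(evals_T - evals[k]))]

print(evals[k])                 # minimal eigenvalue: simple, real and positive
print(V[:, k], w)               # right/left eigenvectors: positive up to an overall sign
```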

This lemma shows that the spectral gap and the corresponding eigenvectors are real in this case. So, when changing the coupling structure, the motion of \(\lambda _{2}\) will be along the real axis by Lemma 1. Next, we investigate the structure of the eigenvectors of \(\varvec{L_W}\).

Lemma 3

Let \(\varvec{L_W}\) be as in Theorem 1 and let the spectral gap \(\lambda _{2}\) of \(\varvec{L_W}\) be an eigenvalue of \(\varvec{L_{2}}+\varvec{D_{C}}\). Then, the eigenvalue is simple and the corresponding left and right eigenvectors of \(\varvec{L_W}\) have the form

$$\begin{aligned} \left( \varvec{w}\varvec{C}\left( \varvec{L}_{1}-\lambda _{2}\varvec{I}\right) ^{-1},\varvec{w}\right) \quad ,\quad \left( \varvec{0},\varvec{y}\right) \end{aligned}$$
(12)

where \(\varvec{w}\) and \(\varvec{y}\) are left and right eigenvectors of \(\varvec{L}_{2}+\varvec{D_{C}}\).

Proof

Let \(\left( \varvec{v},\varvec{w}\right) \) and \(\left( \varvec{x},\varvec{y}\right) \) be left and right eigenvectors of \(\varvec{L_W}\) corresponding to \(\lambda _{2}\). For the left eigenvector, we have

$$\begin{aligned} \varvec{0}= & {} \left( \varvec{v},\varvec{w}\right) \varvec{L_W}-\lambda _2\left( \varvec{v},\varvec{w}\right) \\= & {} \left( \varvec{v}\left( \varvec{L}_{1}-\lambda _{2}\varvec{I}\right) -\varvec{wC},\varvec{w}\left( \varvec{L}_{2}+\varvec{D_{C}}\right) -\lambda _2\varvec{w}\right) . \end{aligned}$$

The second component of this equation yields that \(\varvec{w}\) is a left eigenvector of \(\left( \varvec{L}_{2}+\varvec{D_{C}}\right) \). As \(\lambda _{2}\) is simple by Lemma 2, it is not an eigenvalue of \(\varvec{L}_{1}\), so the first component yields

$$\begin{aligned} \varvec{v}=\varvec{wC}\left( \varvec{L}_{1}-\lambda _{2}\varvec{I}\right) ^{-1}. \end{aligned}$$
(13)

The equation for the right eigenvector is

$$\begin{aligned} \varvec{0}= & {} \varvec{L_W}\left( \begin{array}{c} \varvec{x}\\ \varvec{y} \end{array}\right) -\lambda _{2}\left( \begin{array}{c} \varvec{x}\\ \varvec{y} \end{array}\right) \\= & {} \left( \begin{array}{c} \left( \varvec{L}_{1}-\lambda _{2}\varvec{I}\right) \varvec{x}\\ -\varvec{C}\varvec{x}+\left( \varvec{L}_{2}+\varvec{D_{C}}\right) \varvec{y}-\lambda _{2}\varvec{y} \end{array}\right) . \end{aligned}$$

As \(\varvec{L}_{1}-\lambda _{2}\varvec{I}\) is regular, we have \(\varvec{x}=\varvec{0}\). The second component then yields that \(\varvec{y}\) is a right eigenvector of \(\varvec{L}_{2}+\varvec{D_{C}}\). \(\square \)
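The block structure of these eigenvectors is easy to confirm numerically. The sketch below builds a small hypothetical example (a well-connected master, a weakly driven slave) in which \(\lambda _2\) indeed comes from \(\varvec{L}_2+\varvec{D_C}\), and compares the computed eigenvectors with the closed form (12).

```python
import numpy as np

# Hypothetical example: master = undirected triangle with weight 2, slave =
# undirected link with weight 1, weak cutset of weight 0.1 from master node 1
# to slave node 1.
W1 = 2 * (np.ones((3, 3)) - np.eye(3))
L1 = np.diag(W1.sum(axis=1)) - W1                  # eigenvalues {0, 6, 6}
W2 = np.array([[0.0, 1.0], [1.0, 0.0]])
L2 = np.diag(W2.sum(axis=1)) - W2                  # eigenvalues {0, 2}
C = np.array([[0.1, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
D_C = np.diag(C.sum(axis=1))
L_W = np.block([[L1, np.zeros((3, 2))], [-C, L2 + D_C]])

evals, V = np.linalg.eig(L_W)
order = np.argsort(evals.real)
lam2 = evals[order[1]].real                        # spectral gap, here from L2 + D_C
x_y = V[:, order[1]]                               # right eigenvector (x, y)
evals_T, U = np.linalg.eig(L_W.T)
v_w = U[:, np.argmin(np.abs(evals_T - lam2))]      # left eigenvector (v, w)

print(x_y[:3])                                     # ~ (0, 0, 0), as in Lemma 3
w = v_w[3:]
print(v_w[:3], w @ C @ np.linalg.inv(L1 - lam2 * np.eye(3)))   # both blocks coincide
```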

Proof of Theorem 1

We will use throughout the proof that the smallest eigenvalue of \(\varvec{L}_2+\varvec{D_C}\) is simple, real and positive by Lemma 2.

Ad (i). Let the nonnegative matrix \(\varepsilon \varvec{\Delta }\) be a small perturbation of the cutset, so the corresponding Laplacian writes

$$\begin{aligned} \varvec{L}_p\left( \varepsilon \varvec{\Delta }\right) =\left( \begin{array}{cc} \varvec{L}_{1} &{} \varvec{0}\\ -\varvec{C}-\varepsilon \varvec{\Delta } &{} \varvec{L}_{2}+\varvec{D}_{\varvec{C}}+\varepsilon \varvec{D}_{\varvec{\Delta }} \end{array}\right) . \end{aligned}$$

By assumption \(\lambda _2\) is an eigenvalue of \(\varvec{L}_1\). As the smallest eigenvalue of \(\varvec{L}_2+\varvec{D_C}\) is simple, we can apply Lemma 1 and obtain the following formula for the perturbed smallest eigenvalue \(\mu _1(\varepsilon )\) of \(\varvec{L}_2+\varvec{D_C}+\varepsilon \varvec{D_{\Delta }}\)

$$\begin{aligned} \mu _1^{\prime }(0) = \frac{\varvec{w}^T\varvec{D_{\Delta }y}}{\varvec{w}^T\varvec{y}}>0. \end{aligned}$$
(14)

The positivity holds true because by Lemma 2, left and right eigenvectors \(\varvec{w}^T, \varvec{y}\) of \(\varvec{L}_2+\varvec{D_C}\) are positive, and at least one entry of \(\varvec{D_{\Delta }}\) is positive. So, we have \(\mathfrak {R}(\lambda _2)<\mu _1(0)<\mu _1(\varepsilon )\), i.e. the spectral gap of the whole network is still given by \(\lambda _2\). Now, the perturbed matrix \(\varvec{L}_2+\varvec{D_C}+\varepsilon \varvec{D_{\Delta }}\) is of the same form as \(\varvec{L}_2+\varvec{D_C}\). Hence, the above reasoning can be applied repeatedly in order to obtain the desired result.

Ad (ii). Let again \(\varepsilon \varvec{\Delta }\) be a small perturbation in direction of the cutset and let us write the perturbed Laplacian \(\varvec{L_p}\left( \varepsilon \varvec{\Delta } \right) \) as

$$\begin{aligned} \varvec{L_p}\left( \varepsilon \varvec{\Delta }\right)= & {} \varvec{L_W}+\varepsilon \left( \begin{array}{cc} \varvec{0} &{} \varvec{0}\\ -\varvec{\Delta } &{} \varvec{D}_{\varvec{\Delta }} \end{array}\right) . \end{aligned}$$

Using Lemma 1, we have for the spectral gap of the perturbed system

$$\begin{aligned} s(\varvec{\Delta }) =\frac{\varvec{w}^T\left( \varvec{D_{\Delta }}\varvec{y}-\varvec{\Delta }\varvec{x}\right) }{\left( \varvec{v}^T,\varvec{w}^T\right) \left( \begin{array}{c} \varvec{x}\\ \varvec{y} \end{array}\right) } \end{aligned}$$
(15)

where \(\left( \varvec{v},\varvec{w}\right) \) and \(\left( \varvec{x},\varvec{y}\right) \) are the eigenvectors of \(\varvec{L_W}\). Now, from Lemma 3, we have \(\varvec{x}=\varvec{0}\) and so we obtain

$$\begin{aligned} s(\varvec{\Delta })=\frac{\varvec{w}^T\varvec{D_{\Delta }}\varvec{y}}{\varvec{w}^T\varvec{y}}. \end{aligned}$$
(16)

By assumption \(\varvec{\Delta }\) and therefore \(\varvec{D_{\Delta }}\) is nonnegative. Furthermore, Lemma 2 shows that \(\varvec{w}\) and \(\varvec{y}\) are positive, so \(s(\varvec{\Delta })\) is positive. By the same reasoning as in (i), we can perform such small perturbations repeatedly in order to obtain the result for any structural perturbation in direction of the cutset with arbitrarily large entries.

Ad (iii)(a). For a small perturbation \(\varepsilon \varvec{\Delta }\) in opposite direction of the cutset, the perturbed Laplacian writes as

$$\begin{aligned} \varvec{L}_{p}\left( \varepsilon \varvec{\Delta }\right) =\left( \begin{array}{cc} \varvec{L}_{1}+\varvec{D}_{\varepsilon \Delta } &{} -\varepsilon \varvec{\Delta }\\ -\varvec{C} &{} \varvec{L}_{2}+\varvec{D}_{C} \end{array}\right) . \end{aligned}$$
(17)

Using Lemma 1 and 3 yields

$$\begin{aligned} s(\varvec{\Delta })=-\frac{ \varvec{w}^T \varvec{M (\Delta )y} }{\varvec{w}^T \varvec{y} }, \end{aligned}$$
(18)

where \(\varvec{w}\) and \(\varvec{y}\) are left and right eigenvectors of \(\varvec{L}_{2}+\varvec{D_C}\) and

$$\begin{aligned} \varvec{M (\Delta )}=\varvec{C}(\varvec{L}_{1}-\lambda _{2}\varvec{I})^{-1}\varvec{\Delta }. \end{aligned}$$
(19)

Now, assume we can find a \(\varvec{\Delta }\) such that \(\varvec{\Delta y}=\varvec{1}\). Then, we would have

$$\begin{aligned} \varvec{M (\Delta ) y}= & {} \varvec{C}(\varvec{L}_{1}-\lambda _{2}\varvec{I})^{-1}\varvec{1}\\= & {} -\frac{1}{\lambda _2}\varvec{C}\varvec{1}. \end{aligned}$$

Now, by Lemma 2, the eigenvectors \(\varvec{w}\) and \(\varvec{y}\) are positive and \(\varvec{C}\) is nonnegative with at least one positive entry. Hence,

$$\begin{aligned} s(\varvec{\Delta })= & {} \frac{1}{\lambda _2}\frac{\varvec{w}^T\varvec{C1}}{\varvec{w}^T\varvec{y}}\\> & {} 0. \end{aligned}$$

So it remains to show that there exists a \(\varvec{\Delta }\) such that \(\varvec{\Delta y}=\varvec{1}\). By Lemma 2, \(\varvec{y}\) is a positive vector, so for any fixed \(1\le k\le m \), we can choose \(\varvec{\Delta }_{ik}=\frac{1}{y_k}\) for \(1\le i \le n\) and zero elsewhere to obtain \(\varvec{\Delta y}=\varvec{1}\).
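The construction used at the end of this argument is explicit: all the weight of \(\varvec{\Delta }\) is placed on a single column, i.e. on links emanating from one fixed node of the slave component. A minimal sketch, with a hypothetical positive eigenvector \(\varvec{y}\):

```python
import numpy as np

def delta_with_unit_image(y, n, k=0):
    """Nonnegative Delta in R^{n x m} with Delta @ y = (1, ..., 1), using only
    column k, i.e. links starting at the k-th node of the slave component."""
    Delta = np.zeros((n, y.shape[0]))
    Delta[:, k] = 1.0 / y[k]        # well defined since y > 0 by Lemma 2
    return Delta

y = np.array([0.5, 2.0])            # hypothetical positive right eigenvector of L_2 + D_C
Delta = delta_with_unit_image(y, n=3, k=1)
print(Delta @ y)                     # [1. 1. 1.]
```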

Ad (iii)(b). Here, we first use a result proved in Poignard et al. (2018) on the structure of Laplacian spectra in the case of strongly connected digraphs. Such graphs admit a spanning diverging tree, and therefore, by Theorem 6.6 in Poignard et al. (2018), we have that for a generic choice of the nonzero weights of \(\varvec{L}_1\), the spectrum of this matrix is simple. Under this genericity assumption, we can thus suppose that \(\varvec{L}_1\) is diagonalizable.

Then, let us consider a vector \(\varvec{\Delta y}\) (with nonnegative entries) decomposed in the basis of eigenvectors \(\left( \varvec{1},\varvec{X}_2,\ldots ,\varvec{X}_n\right) \) of \(\varvec{L}_1\):

$$\begin{aligned} \varvec{\Delta y}=\beta _1\varvec{1}+ \sum _{k=2}^n \beta _k \varvec{X}_k, \end{aligned}$$
(20)

with the numbers \(\beta _i\) being possibly in \(\mathbb {C}\). Such a decomposition gives the relation:

$$\begin{aligned} -\left( \varvec{L}_1-\lambda _2 \varvec{I}\right) ^{-1}\varvec{\Delta y}=\dfrac{\beta _1}{\lambda _2}\varvec{1}-\sum _{k=2}^n \dfrac{\beta _k}{\alpha _k-\lambda _2}\varvec{X}_k, \end{aligned}$$
(21)

where the numbers \(\alpha _k\) denote the eigenvalues of \(\varvec{L}_1\) sorted in increasing order with respect to their real part (so that \(\alpha _1=0\)). Notice that the fractions in this expression are well defined, since by assumption we have \(\mathfrak {R}\left( \alpha _k\right) >\lambda _2\) for any \(k\ge 2\).

Let us consider \(\varvec{\Delta y}=\varvec{e}_k\), where \(\varvec{e}_k\) denotes the k-th vector of the canonical basis of \({\mathbb {R}}^n\). We first remark that the corresponding values \(\beta _1\left( \varvec{e}_k\right) \) in the decomposition (20) satisfy the following relation

$$\begin{aligned} \varvec{1}=1\cdot \varvec{1}= \sum _{k=1}^n\beta _1\left( \varvec{e}_k\right) \varvec{1}+\sum _{k=1}^n\sum _{j=2}^n\beta _j(\varvec{e}_k)\varvec{X}_j \end{aligned}$$

which directly gives us the relation \(\sum _{k=1}^n\beta _1\left( \varvec{e}_k\right) =1\).

So, at least one of these values, say \(\beta _1\left( \varvec{e}_{k_0}\right) \), must be positive. Consequently, for any nonnegative matrix \(\varvec{\Delta }\) with \(\varvec{\Delta y}=\varvec{e}_{k_0}\) and \(\lambda _2\) small enough, we get that the right-hand side in Eq. (21) is positive and hence \(s(\varvec{\Delta })\) from Eq. (18) must be positive. In other words, since the terms in the sum in Eq. (21) depend only on \(\varvec{L}_1\) and \(\varvec{e}_{k_0}=\varvec{\Delta y}\), there exists a constant \(\delta (\varvec{L}_1,\varvec{e}_{k_0})\) such that if \(0<\lambda _2<\delta (\varvec{L}_1,\varvec{e}_{k_0})\) then \(s(\varvec{\Delta })>0\). To conclude, it suffices to consider the set \(\mathcal {A}\) of integers \(1\le k\le n\) such that \(\beta _1\left( \varvec{e}_k\right) >0\) and to set

$$\begin{aligned} \delta (\varvec{L}_1)=\min _{k \in \mathcal {A}}\delta (\varvec{L}_1,\varvec{e}_{k}). \end{aligned}$$

Since \(\mathcal {A}\) contains \(k_0\), it is nonempty and thus \(\delta (\varvec{L}_1)\) exists and is positive. Now consider the structural perturbations \(\varvec{\Delta }\) with one link in opposite direction of the cutset for which there exists an integer k in \(\mathcal {A}\) such that \(\varvec{\Delta y}=\varvec{e}_k\), i.e. the structural perturbations with only one link in opposite direction of the cutset ending at a node k belonging to \(\mathcal {A}\). Such structural perturbations exist (since the vector \(\varvec{y}\) is positive), and for any such \(\varvec{\Delta }\) we have \(s(\varvec{\Delta })>0\) provided \(0<\lambda _2<\delta (\varvec{L}_1)\).

Ad (iii)(c). As in (b), assume again that \(\varvec{L}_1\) is diagonalizable. If, moreover, it has zero column sums, then the basis of eigenvectors \(\left( \varvec{1},\varvec{X}_2,\ldots ,\varvec{X}_n\right) \) of \(\varvec{L}_1\) satisfies

$$\begin{aligned} \forall k\ge 2,\,\,\sum _{i=1}^n {X_k}_{(i)}=0. \end{aligned}$$

Indeed, this can be seen directly by multiplying each eigenvector equation \(\varvec{L}_1\varvec{X}_k=\alpha _k\varvec{X}_k\) on the left by the row vector \((1,\ldots ,1)\).

As a result, we must have \(\beta _1 \left( \varvec{e}_k\right) =\frac{1}{n}\) for any \(1\le k \le n\). Now consider any nonnegative matrix \(\varvec{\Delta }\) and let \(z_i\) denote the ith entry of the nonnegative vector \(\varvec{z}:=\varvec{\Delta y}\) in the canonical basis \((\varvec{e}_1,\ldots ,\varvec{e}_n)\). Then we have

$$\begin{aligned} -\left( \varvec{L}_1-\lambda _2 \varvec{I}\right) ^{-1}\varvec{\Delta y}&=\sum _{i=1}^nz_i\left[ \dfrac{\beta _1(\varvec{e}_i)}{\lambda _2}\varvec{1}-\sum _{k=2}^n \dfrac{\beta _k(\varvec{e}_i)}{\alpha _k-\lambda _2}\varvec{X}_k\right] \\&=\sum _{i=1}^nz_i\left[ \dfrac{1}{n\,\lambda _2}\varvec{1}-\sum _{k=2}^n \dfrac{\beta _k(\varvec{e}_i)}{\alpha _k-\lambda _2}\varvec{X}_k\right] . \end{aligned}$$

Since \(\varvec{y}\) is positive by Lemma 2 and \(\varvec{\Delta }\) is nonnegative and nonzero, the entries \(z_i\) are nonnegative and at least one of them is positive. Hence, to get \(s(\varvec{\Delta })>0\) it suffices that all the terms in brackets are positive vectors. For this, it suffices that \(\lambda _2\) is small enough compared to \(\frac{1}{n}\) and compared to the sums \(\sum _{k=2}^n \frac{\beta _k(\varvec{e}_i)}{\alpha _k-\lambda _2}\varvec{X}_k\). Since the family \((\beta _k(\varvec{e}_i))_{\begin{array}{c} 2\le k\le n\\ 1\le i\le n\\ \end{array}}\) is finite, we get again (as in Ad (iii)(b)) the existence of a constant \(\delta (\varvec{L}_1)>0\) such that, if \(0<\lambda _2<\delta (\varvec{L}_1)\), then for any structural perturbation \(\varvec{\Delta }\) in opposite direction of the cutset, we have \(s(\varvec{\Delta })>0\).

Ad (iv). As in (b) and (c), we can suppose \(\varvec{L}_1\) is diagonalizable. Since the entries of the cutset \(\varvec{C}\) are nonnegative, in virtue of Eq. (18) it suffices to show that there is a \(\varvec{\Delta }\) such that some entry of \((\varvec{L}_1 - \lambda _2 \varvec{I} )^{-1} \varvec{\Delta } \varvec{y}\) is positive. Assume \(\varvec{L}_1\) admits a positive eigenvalue \(\alpha _k\); then any eigenvector of \(\varvec{L}_1\) associated with \(\alpha _k\) is real. Let us choose one such eigenvector \(\varvec{X}_k\): if one entry of \(\varvec{X}_k\) is negative, we define

$$\begin{aligned} \mathcal {G}_{-}=\{1\le i\le n: {\varvec{X}_k}_{(i)}<0\} \text{ and } \mathcal {G}_{+}=\{1\le i\le n: {\varvec{X}_k}_{(i)}>0\} \end{aligned}$$
(22)

and consider

$$\begin{aligned} \beta _m=\max \{-{\varvec{X}_k}_{(i)}, i \in \mathcal {G}_{-} \} \text{ and } \beta _M = \max \{ {\varvec{X}_k}_{(i)}, i \in \{ 1, \dots , n\} \} , \end{aligned}$$

where we set \(\beta _m = 0\) if the set \(\mathcal {G}_{-}\) is empty (resp. \(\beta _M = 0\) if \(\mathcal {G}_{+}\) is empty). Notice that in view of the genericity properties, the eigenvector \(\varvec{X}_k\) has no zero entry. Moreover, we consider \(\varvec{\Delta y} = \beta _m \varvec{1} + \varvec{X}_k\). In this way \(\varvec{\Delta y}\) is nonnegative, and there is a nonnegative \(\varvec{\Delta }\) that solves this equation. Thus

$$\begin{aligned} (\varvec{L}_1 - \lambda _2 \varvec{I})^{-1} \varvec{\Delta }\varvec{y} = \frac{1}{\lambda _2}\left[ - \beta _m \varvec{1} + \frac{1}{\frac{\alpha _k}{\lambda _2} - 1} \varvec{X}_k\right] . \end{aligned}$$

Assume that \(\beta _M\) is attained in the ith entry, so we obtain that if

$$\begin{aligned} 0<\alpha _k< \left( \frac{\beta _M}{\beta _m} +1 \right) \lambda _2, \end{aligned}$$

then \({(\varvec{L}_1 - \lambda _2 \varvec{I})^{-1} \varvec{\Delta y}}_{(i)}>0.\) Therefore, for a cutset \(\varvec{C}\) connecting to this entry, we have \(s(\varvec{\Delta })\le 0\), as desired.

If all entries of \(\varvec{X}_k\) are nonnegative, then \(\varvec{\Delta y} = \varvec{X}_k\). This yields

$$\begin{aligned} (\varvec{L}_1 - \lambda _2 \varvec{I})^{-1} \varvec{\Delta }\varvec{y} = \left[ \frac{1}{\alpha _k - \lambda _2} \varvec{X}_k\right] , \end{aligned}$$

and any \(\alpha _k > \lambda _2\) suffices from which we get this time that for any choice of the cutset \(\varvec{C}\) we have \(s(\varvec{\Delta })<0\). \(\square \)

In item (iv) of this theorem, the choice of the cutset \(\varvec{C}\) for which we hinder synchronization is not that sharp. Indeed, suppose only one entry of \(\varvec{X}_k\) is negative. Then, we can apply the same reasoning to the vector \(-\varvec{X}_k\), for which \(n-1\) entries will be nonnegative. In this case, the suitable cutsets \(\varvec{C}\) will be more numerous.

Illustration of Item (iv) (Hindrance of Synchronization). Consider the directed network in Fig. 1 (without the added links). Assume that all connections in the master network have strength \(1/2< w<1\) and the connections in the slave network have strength 1. Then, the spectrum of the network can be decomposed as

$$\begin{aligned} \sigma (L_W) = \{ 0, 2w, 3w \} \cup \{ 1, 3 \}. \end{aligned}$$

So, the spectral gap \(\lambda _2 = 1\) belongs to \(\sigma (\varvec{L}_2 + \varvec{D}_{\varvec{C}})\). With the notation from the proof above, we have \(\alpha _2 = 2w\) and the corresponding eigenvector of \(\varvec{L}_1\) is given by \(\varvec{X}_2 = (-1 , 1 , -1)\). Also, the right eigenvector corresponding to \(\lambda _2\) of \(\varvec{L}_2 + \varvec{D}_{\varvec{C}}\) is \(\varvec{y} = (1, 1)\). Hence, considering

$$\begin{aligned} \varvec{\Delta y} = \beta _m \varvec{1} + \varvec{X}_2 = (0, 2, 0), \end{aligned}$$

this equation can be solved by introducing a single link from any node of the slave component to any node of the master component. However, in view of the cutset \(\varvec{C}\) that starts from node 2 of the master component, only connections ending at node 2 can give a contribution to \(s(\varvec{\Delta })\). Hence, we can choose

$$\begin{aligned} \varvec{\Delta } = \left( \begin{array}{c@{\quad }c} 0 &{} 0 \\ 2 &{} 0 \\ 0 &{} 0 \end{array} \right) , \end{aligned}$$

that is, a single link from node 4 to node 2 as in Fig. 1 will cause hindrance of synchronization since \(s(\varvec{\Delta }) < 0\).
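A sketch for checking such examples numerically is given below: it evaluates \(s(\varvec{\Delta })\) via Eq. (18) for given blocks \(\varvec{L}_1\), \(\varvec{L}_2\), \(\varvec{C}\) and \(\varvec{\Delta }\), and can be compared against the finite-difference `change_rate` helper from Sect. 3.1 as a consistency check. The function makes no assumption beyond Lemma 2; the concrete adjacency data of Fig. 1 are not reproduced here and would have to be supplied by the reader.

```python
import numpy as np

def s_opposite(L1, L2, C, Delta):
    """Evaluate s(Delta) for a perturbation in opposite direction of the cutset
    via Eq. (18), assuming the spectral gap is the minimal eigenvalue of L2 + D_C."""
    A = L2 + np.diag(C.sum(axis=1))
    evals, V = np.linalg.eig(A)
    k = np.argmin(evals.real)
    lam2 = evals[k].real
    y = V[:, k].real
    evals_T, U = np.linalg.eig(A.T)
    w = U[:, np.argmin(np.abs(evals_T - evals[k]))].real
    if y.sum() < 0:                  # fix the overall sign so that y and w are positive
        y = -y
    if w.sum() < 0:
        w = -w
    n = L1.shape[0]
    M = C @ np.linalg.solve(L1 - lam2 * np.eye(n), Delta)   # M(Delta) of Eq. (19)
    return -(w @ M @ y) / (w @ y)
```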

More generally, the eigenvector \(\varvec{X}_k\) provides a partition of the nodes of the master component into the sets

$$\begin{aligned} \mathcal {G}_{-}=\{1\le i\le n: {\varvec{X}_k}_{(i)}<0\} \text{ and } \mathcal {G}_{+}=\{1\le i\le n: {\varvec{X}_k}_{(i)}>0\}. \end{aligned}$$

When the cutset emanates only from nodes of a single set \(\mathcal {G}_{-}\) or \(\mathcal {G}_{+}\), then it might be possible to hinder synchronization. This suggests that to improve synchrony, it is best to drive the slave component by mixing inputs from both \(\mathcal {G}_{-}\) and \(\mathcal {G}_{+}\).

5 Discussion

In this paper, we have investigated the effect of structural perturbations on the transverse stability of the synchronization manifold in diffusively coupled networks. Establishing a connection between topological properties of a network and its synchronizability has been a challenge for the last few decades. So far, most of the existing literature focuses on establishing correlations supported by numerical simulations. Here, we present a first step towards proving rigorous results for both undirected and directed networks.

For directed networks, we have investigated the behaviour of a network when its cutset is perturbed. There is only one scenario we did not investigate here: when the spectral gap is an eigenvalue of \(\varvec{L}_1\), the effect of a perturbation in opposite direction of the cutset cannot be determined within the framework presented above. It is of course possible to write down a term similar to the one in Eq. (19). However, in this case it involves left and right eigenvectors of \(\varvec{L}_1\). One would thus need to investigate eigenvectors of Laplacians of strongly connected digraphs, and more precisely the signs of their entries. To our knowledge, there have been no attempts to do so yet.

Even more involved is the question of whether there exists a classification of links according to their dynamical impact in strongly connected networks. To our knowledge, no results have been obtained for the general case so far either. This is also due to the fact that there have been few attempts to extend the approaches for undirected graphs due to Fiedler (see Fiedler 1973) to directed graphs. As shown here, related results would substantially improve our understanding of the dynamical impact of a link in directed networks.