1 Introduction

Many complex systems can be modeled as networks. Informally, a network is a collection of objects, referred to as nodes or vertices, that are connected to each other in some fashion; the connections are referred to as edges. The edges may be directed or undirected, and may be equipped with positive weights that correspond to their importance. The nature of the nodes, edges, and weights depends on the application. Some modeling situations require more than one kind of node or more than one type of edge.

Multilayer networks are networks that consist of different kinds of edges and possibly different types of nodes. This kind of network arises when one seeks to model a complex system that contains connections and objects with different properties. For instance, when modeling train and bus connections in a country, the train routes and bus routes define edges with distinct properties, and the train and bus stations may make up nodes with distinct properties. The connections between a train station and an adjacent bus station give rise to yet another kind of edge, along which travelers walk. Edge weights may be chosen proportional to the number of travelers along an edge, proportional to the distance between the nodes that the edge connects, or proportional to the cost of traveling along an edge. Whether it is meaningful to distinguish between different kinds of edges and nodes, and to use edge weights, depends on the nature and purpose of the network model.

It is often of interest to determine the ease of communication between nodes in a network, as well as how important a node is in some well-defined sense. Also, it is desirable to be able to assess the sensitivity of the measure of communication between the nodes to perturbations in the edge weights. For instance, if the nodes represent cities, and the edges represent roads between the cities, with edge weights proportional to the amount of traffic on each road, then one may be interested in which road(s) should be widened or made narrower to increase or reduce, respectively, communication in the network the most. The available data may be contaminated by measurement errors. We are then interested in how sensitive to errors in the data our choice of road(s) to widen or make narrower is.

The investigation of the importance of nodes and edges, as well as of the sensitivity of the communicability within a network to changes in the edge weights, has received considerable attention in the literature for networks with only one kind of nodes and edges; see, e.g., [4, 5, 8, 9, 10, 11, 12, 17, 19] and references therein. Several of the techniques discussed evaluate the exponential of the adjacency matrix of the network, or the exponential of the adjacency matrix determined by the line graph associated with the given network. The present paper extends the communicability and sensitivity analysis in [8, 19] to multilayer networks. Since multilayer networks typically have a large number of nodes and edges, we focus on techniques that are well suited for large-scale networks.

We consider multilayer networks that are represented by graphs that share the same set of vertices \(V_{N}=\{1,2,\dots ,N\}\) and have edges both within a layer and between layers. We will simply refer to this kind of networks as multilayer networks. Nice recent discussions on multilayer networks are provided by Bergermann and Stoll [3], Cipolla et al. [6], and Tudisco et al. [22]. De Domenico et al. [7] describe how multilayer networks with L layers can be modeled by a fourth-order tensor and introduce a supra-adjacency matrix \(B\in \mathbb {R}^{NL\times NL}\) for the representation of such networks. In detail, let \(A^{(\ell )}=[w_{ij}^{(\ell )}]_{i,j=1}^{N}\in \mathbb {R}^{N\times N}\) be the non-negative adjacency matrix for the graph in layer \(\ell\), for \(\ell =1,2,\dots ,L\). Thus, the entry \(w_{ij}^{(\ell )}\geq 0\) is the “weight” of the edge between node i and node j in layer \(\ell\). If the graph is “unweighted”, then all nonzero entries of \(A^{(\ell )}\) are set to one. The matrix \(B\in \mathbb {R}^{NL\times NL}\) is a block matrix with N × N blocks. The \(\ell\)th diagonal block is the adjacency matrix \(A^{(\ell )}\in \mathbb {R}^{N\times N}\) for layer \(\ell\), for \(\ell =1,2,\dots ,L\); the off-diagonal N × N block in position \((\ell_{1},\ell_{2})\), with \(1\leq \ell_{1},\ell_{2}\leq L\) and \(\ell_{1}\neq \ell_{2}\), represents the inter-layer connections between the layers \(\ell_{1}\) and \(\ell_{2}\); see Section 4 for details.

We may consider B an adjacency matrix for a monolayer network with NL nodes, and assume that B is irreducible. This is equivalent to the graph associated with B being strongly connected; see, e.g., [13]. Hence, the Perron-Frobenius theory applies, from which it follows that B has a unique eigenvalue ρ > 0 of largest magnitude (the Perron root) and that the associated right and left eigenvectors, x and y, respectively, can be normalized to be of unit Euclidean norm with all components positive. These normalized eigenvectors are commonly referred to as the right and left Perron vectors, respectively. Thus,

$$B \mathbf{x}=\rho \mathbf{x}, \qquad \mathbf{y}^{T} B=\rho \mathbf{y}^{T}.$$
(1)

We will assume throughout this paper that the Perron vectors x and y have been normalized in the stated manner.
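For concreteness, the triple (ρ, x, y) of a small dense supra-adjacency matrix can be computed as in the following Python sketch (ours, not part of the paper; it assumes NumPy/SciPy and a dense array B). For large networks one would instead use an iterative method such as the two-sided Arnoldi algorithm mentioned in Section 5.

```python
import numpy as np
from scipy.linalg import eig

def perron_data(B):
    # Perron root and unit right/left Perron vectors of an
    # irreducible non-negative matrix B (dense, small-scale sketch).
    w, VL, VR = eig(B, left=True, right=True)
    k = np.argmax(np.abs(w))      # Perron root: eigenvalue of largest magnitude
    rho = w[k].real
    x = np.abs(VR[:, k].real)     # right Perron vector, entries > 0
    y = np.abs(VL[:, k].real)     # left Perron vector, entries > 0
    return rho, x / np.linalg.norm(x), y / np.linalg.norm(y)
```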

Following [8], we introduce the Perron communicability in the multilayer network,

$$C^{\text{PN}}(B)=\exp_{0}(\rho)\mathbf{1}_{NL}^{T}\mathbf{y}\mathbf{x}^{T}\mathbf{1}_{NL}=\exp_{0}(\rho)\left(\sum\limits_{j=1}^{NL}y_{j}\right) \left(\sum\limits_{j=1}^{NL}x_{j}\right),$$
(2)

where

$$\exp_{0}(t)=\exp(t)-1,\quad \mathbf{x}=[x_{1},x_{2},\dots,x_{NL}]^{T},\quad \mathbf{y}=[y_{1},y_{2},\dots,y_{NL}]^{T},$$

and \(\mathbf {1}_{NL}\in \mathbb {R}^{NL}\) denotes the vector with all entries equal to one. For a general adjacency matrix \(B\in \mathbb {R}^{NL\times NL}\) associated with a monolayer network with NL nodes, the above measure is analogous to, but distinct from, the total network communicability

$$C^{\text{TN}}(B)=\mathbf{1}_{NL}^{T}\exp(B)\mathbf{1}_{NL},$$

introduced by Benzi and Klymko [1]. The latter is related to the “size” of the matrix \(\exp (B)\), while the measure (2) is determined by the Wilkinson perturbation \(\mathbf{y}\mathbf{x}^{T}\) discussed in Section 2, which provides a worst-case perturbation of the Perron root under a small perturbation of B. We use the modified exponential function \(\exp _{0}(M)\) in (2) instead of the exponential, because the constant term of the Maclaurin series of \(\exp (M)\) has no natural interpretation in the context of network modeling. We note that \(C^{\text{PN}}(M)\) is easy to apply and cheaper to compute than \(C^{\text{TN}}(M)\) and \(C_{0}^{\text {TN}}(M):=\mathbf {1}_{NL}^{T}\exp _{0}(M)\mathbf {1}_{NL}\) for monolayer networks with many nodes or layers, i.e., when NL is large [8].
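Both measures can be evaluated as in the following sketch (ours; it assumes the hypothetical perron_data helper above and a matrix B small enough for dense scipy.linalg.expm — for large NL the action of exp(B) on a vector would be approximated by Krylov methods instead).

```python
import numpy as np
from scipy.linalg import expm

def perron_communicability(rho, x, y):
    # C^PN(B) = exp_0(rho) (1^T y)(x^T 1); cf. (2).
    return np.expm1(rho) * y.sum() * x.sum()

def total_communicability(B):
    # C^TN(B) = 1^T exp(B) 1; dense expm is only viable for modest NL.
    one = np.ones(B.shape[0])
    return one @ expm(B) @ one
```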

Due to the normalization of the Perron vectors x and y in (1), we have

$$1\leq\sum\limits_{j=1}^{NL} x_{j}\leq\sqrt{NL},\qquad 1\leq\sum\limits_{j=1}^{NL} y_{j}\leq\sqrt{NL}.$$

Therefore, for the multilayer network associated with B, one has

$$\exp_{0}(\rho)\leq C^{\text{PN}}(B)\leq NL\exp_{0}(\rho).$$
(3)

Typically, \(\exp _{0}(\rho )\gg NL\). It then follows that the quantity \(\exp _{0}(\rho )\) is a fairly accurate indicator of the Perron communicability of the graph represented by B in the sense that it suffices to consider \(\exp _{0}(\rho )\) to determine whether the Perron communicability of a network is large or small. The right-hand side bound in (3) will be sharpened slightly in Proposition 2 below.

Following the approach in [7], we form the leading eigentensors \(Y\in \mathbb {R}^{N\times L}\) and \(X\in \mathbb {R}^{N\times L}\) for the multilayer network associated with B by reshaping the Perron vectors y and x, respectively. Thus, the first column of the matrix Y is made up of the first N components of the vector y, the second column of Y consists of the next N components of the vector y, etc. The joint eigenvector centrality of node i in layer \(\ell\) is given by the entry in position \((i,\ell)\) of Y. The rows of Y represent the eigenvector versatility of the nodes. Moreover, the (scalar) versatility of node i is given by

$$\nu_{i}=(Y \mathbf{1}_{L})_{i},\qquad i=1,2,\dots,N.$$
(4)

The vector 1L may be replaced by some other vector in \(\mathbb {R}^{L}\) with non-negative entries if another weighting of the columns of Y is desired.
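In NumPy terms, the reshaping and the versatility (4) read as follows (an illustrative fragment; x and y are the Perron vectors from the earlier sketch, and N, L are the network's sizes).

```python
import numpy as np

def eigentensors(x, y, N, L):
    # Column-major ('F') order puts the first N entries of the vector
    # into column 1, the next N entries into column 2, and so on.
    X = x.reshape((N, L), order='F')
    Y = y.reshape((N, L), order='F')
    nu = Y @ np.ones(L)   # scalar versatilities nu_i; cf. (4)
    return X, Y, nu
```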

Remark 1

The concepts of hub and authority communicability were introduced by Kleinberg [14] for graphs that are defined by an adjacency matrix. An extension to multi-relational networks that are based on tensors is described by Li et al. [15]. We can define analogous concepts for tensors by using the Perron communicability. If we replace the matrix B in (1) by BBT, then we obtain analogously to (2) the Perron hub communicability

$$C^{\text{PN}}(BB^{T})=\exp_{0}(\rho_{BB^{T}})\mathbf{1}_{NL}^{T}\mathbf{x}\mathbf{x}^{T}\mathbf{1}_{NL},$$

where \(\rho _{BB^{T}}\) is the Perron root of \(BB^{T}\) and x is the Perron vector of \(BB^{T}\); since \(BB^{T}\) is symmetric, its left and right Perron vectors coincide. Similarly, if we replace the matrix B in (1) by BTB, then we obtain the Perron authority communicability

$$C^{\text{PN}}(B^{T}B)=\exp_{0}(\rho_{B^{T}B})\mathbf{1}_{NL}^{T}\mathbf{x}\mathbf{x}^{T}\mathbf{1}_{NL},$$

where \(\rho _{B^{T}B}=\rho _{BB^{T}}\) is the Perron root of \(B^{T}B\) and x is the Perron vector of \(B^{T}B\).

We turn to special multilayer networks in which nodes in different layers are identified with each other. Thus, there are no edges between different nodes in different layers; the only edges that connect different layers are edges between a node and its copy in other layers. Hence, in the supra-adjacency matrix \(B\in \mathbb {R}^{NL\times NL}\) all off-diagonal entries in all off-diagonal blocks are zero.

We will refer to these kinds of networks as multiplex networks. They can be represented by a third-order tensor. The graph for layer \(\ell\) is associated with the non-negative adjacency matrix \(A^{(\ell )}\in \mathbb {R}^{N\times N}\), \(\ell =1,2,\dots ,L\), and a mode-1 unfolding of the third-order tensor that represents the network yields an L-vector of these adjacency matrices:

$${\mathcal{A}}=[A^{(1)},A^{(2)},\dots,A^{(L)}]\in\mathbb{R}^{N\times NL}.$$
(5)

The supra-adjacency matrix \(B\in \mathbb {R}^{NL\times NL}\) for the multiplex network associated with the matrix \({\mathcal {A}}\) in (5) has the diagonal blocks \(A^{(\ell )}\), \(\ell =1,2,\dots ,L\), and every N × N off-diagonal block is the identity matrix \(I_{N}\in \mathbb {R}^{N\times N}\); see, e.g., [7]. Hence, the coupling is diagonal and uniform. One may introduce a parameter γ ≥ 0 that determines how strongly the layers influence each other. This yields the matrix

$$B:=B(\gamma)=\text{diag}[A^{(1)},A^{(2)},\dots,A^{(L)}]+ \gamma(\mathbf{1}_{L}{\mathbf{1}_{L}^{T}}\otimes I_{N}-I_{NL}),$$
(6)

where ⊗ denotes the Kronecker product; see [3].
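For small multiplexes, the matrix (6) can be assembled explicitly; the following sketch (ours, assuming NumPy/SciPy and a Python list A_list of the L layer matrices) mirrors the Kronecker structure.

```python
import numpy as np
from scipy.linalg import block_diag

def supra_adjacency(A_list, gamma):
    # B(gamma) = diag[A^(1),...,A^(L)] + gamma*(1_L 1_L^T kron I_N - I_NL); cf. (6).
    L = len(A_list)
    N = A_list[0].shape[0]
    coupling = np.kron(np.ones((L, L)), np.eye(N)) - np.eye(N * L)
    return block_diag(*A_list) + gamma * coupling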

Due to the potentially large sizes of the matrices B in (1) and (6), one typically computes their right and left Perron vectors by an iterative method, which only requires the evaluation of matrix-vector products with the matrices B and BT. Clearly, one does not have to store B, but only \({\mathcal {A}}\) in (5), to evaluate matrix-vector products with the matrix B in (6) and its transpose.
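A matrix-free variant might look as follows (again our sketch, not the paper's code): the Kronecker term in (6) acts on a vector by replicating row sums of its N × L reshaping, so B is never formed.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

def supra_operator(A_list, gamma):
    # v -> B(gamma) v using only the layer matrices A^(l); cf. (6).
    L, N = len(A_list), A_list[0].shape[0]

    def mv(v):
        V = v.reshape((N, L), order='F')   # column l holds the layer-l part of v
        W = np.column_stack([A_list[l] @ V[:, l] for l in range(L)])
        # (1_L 1_L^T kron I_N - I_NL) v: replicate the row sums, subtract v.
        W += gamma * (V.sum(axis=1, keepdims=True) - V)
        return W.reshape(-1, order='F')

    return LinearOperator((N * L, N * L), matvec=mv)

# Usage sketch (As a list of layer matrices, hypothetical):
# rho, x = eigs(supra_operator(As, 0.5), k=1, which='LM')
```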

Remark 2

If one is interested in the Perron hub or authority communicability of the network, then the matrices \(A^{(\ell )}\) in (5) should be replaced by \(A^{(\ell )}(A^{(\ell )})^{T}\) or \((A^{(\ell )})^{T}A^{(\ell )}\), respectively, for \(\ell =1,2,\dots ,L\).

Following [21, Definition 3.5], we introduce for future reference the L-dimensional vectors of the marginal layer Y-centralities and the marginal layer X-centralities

$$\mathbf{c}_{Y} = Y^{T}\mathbf{1}_{N}\quad\text{and}\quad\mathbf{c}_{X} = X^{T}\mathbf{1}_{N},$$
(7)

respectively.
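Continuing the eigentensor sketch above (X and Y from the hypothetical eigentensors helper), the marginal layer centralities (7) are simple column sums:

```python
import numpy as np

def marginal_layer_centralities(X, Y):
    # c_Y = Y^T 1_N and c_X = X^T 1_N; cf. (7).
    N = X.shape[0]
    return Y.T @ np.ones(N), X.T @ np.ones(N)
```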

It is the purpose of the present paper to investigate the Perron network communicability of multilayer networks that can be represented by a supra-adjacency matrix \(B\in \mathbb {R}^{NL\times NL}\), as well as the special case of multiplex networks that are represented by the matrix \({\mathcal {A}}\in \mathbb {R}^{N\times NL}\) in (5). We also are interested in the sensitivity of the communicability to errors or changes in the entries of the supra-adjacency matrix B and in the entries of the matrices A() in (5) in the case of a multiplex network. The particular structure of B in (6) for multiplex networks will be exploited.

The organization of this paper is as follows. The Wilkinson perturbation for a supra-adjacency matrix is defined in Section 2. This perturbation forms the basis for our sensitivity analysis of multilayer networks. Section 3 discusses some properties of the Perron and total network communicabilities. A sensitivity analysis for multilayer networks based on the Wilkinson perturbation is presented in Section 4. Both Sections 3 and 4 first discuss multilayer networks that can be defined by general supra-adjacency matrices, and subsequently describe simplifications that ensue for multiplex networks that can be defined by the supra-adjacency matrix B in (6). Section 5 presents a few computed examples, and Section 6 contains concluding remarks.

2 Wilkinson perturbation for supra-adjacency matrices

Let \(B\in \mathbb {R}^{NL\times NL}\) be the supra-adjacency matrix in (1). We assume that B is irreducible. Let ρ > 0 be the Perron root of B, and let x and y be the associated right and left normalized Perron vectors. Thus, all entries of x and y are positive, and \(\|\mathbf{x}\|_{2}=\|\mathbf{y}\|_{2}=1\). Throughout this paper \(\|\cdot\|_{2}\) denotes the Euclidean vector norm or the spectral matrix norm, and \(\|\cdot\|_{F}\) stands for the Frobenius norm. The vectors x and y are uniquely determined.

Let \(E\in \mathbb {R}^{NL\times NL}\) be a non-negative matrix such that \(\|E\|_{2}=1\), and let ε > 0 be a small constant. Denote the Perron root of B + εE by ρ + δρ. Then

$$\delta\rho=\varepsilon\frac{\mathbf{y}^{T}E\mathbf{x}}{\mathbf{y}^{T}\mathbf{x}}+O(\varepsilon^{2});$$
(8)

see [16]. Moreover,

$$\frac{\mathbf{y}^{T} E \mathbf{x}} {\mathbf{y}^{T}\mathbf{x}}= \frac{|\mathbf{y}^{T} E \mathbf{x}|} {\mathbf{y}^{T}\mathbf{x}}\leq \frac{\|\mathbf{y}\|_{2}\|E\|_{2}\|\mathbf{x}\|_{2}} {\mathbf{y}^{T}\mathbf{x}}= \frac{1}{\cos\theta},$$
(9)

where 𝜃 is the angle between x and y. The quantity \(1/\cos \theta\) is referred to as the condition number of ρ and denoted by κ(ρ); see Wilkinson [23, Section 2]. Note that when B is symmetric, we have x = y and, hence, 𝜃 = 0. In this situation ρ is well-conditioned. Equality in (9) is achieved for the Wilkinson perturbation

$$E=\mathbf{y}\mathbf{x}^{T}\in\mathbb{R}^{NL \times NL},$$
(10)

which we will refer to as W. For E = W, the perturbation (8) of the Perron root is δρ = εκ(ρ) + O(ε2). We observe that all the above statements hold true if the spectral norm is everywhere replaced by the Frobenius norm.
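The first-order prediction δρ ≈ εκ(ρ) can be checked numerically as in the following sketch (ours; it reuses the hypothetical perron_data helper from Section 1 and a dense matrix B).

```python
import numpy as np

def wilkinson_check(B, eps=1e-3):
    # Verify delta_rho ≈ eps * kappa(rho) for the Wilkinson perturbation (10).
    rho, x, y = perron_data(B)          # helper from the sketch in Section 1
    kappa = 1.0 / (y @ x)               # condition number 1/cos(theta); cf. (9)
    W = np.outer(y, x)                  # Wilkinson perturbation, ||W||_2 = 1
    rho_pert, _, _ = perron_data(B + eps * W)
    return rho_pert - rho, eps * kappa  # should agree up to O(eps^2)
```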

3 Some properties of the Perron and total network communicabilities

This section discusses a few properties of the Perron communicability and how it relates to the total network communicability.

Proposition 1

$$\begin{array}{@{}rcl@{}} C^{\text{PN}}(B)=\exp_{0}(\rho)\mathbf{c}_{Y}^{T}\mathbf{c}_{X}, \end{array}$$
(11)

where cX is the vector of the marginal layer X-centralities and cY is the vector of the marginal layer Y-centralities in (7).

Proof

The proof follows from (2) by observing that

$$\mathbf{1}_{NL}^{T} \mathbf{y}\mathbf{x}^{T} \mathbf{1}_{NL}= \mathbf{1}_{N}^{T} Y X^{T} \mathbf{1}_{N} =\mathbf{c}_{Y}^{T}\mathbf{c}_{X}.$$

Remark 3

When the network is undirected, one has, according to the definitions (7), that cX = cY, because x = y. This gives, by (11), the symmetric Perron communicability

$$\begin{array}{@{}rcl@{}} C^{\text{PN sym}}(B)=\exp_{0}(\rho)\|\mathbf{c}_{Y}\|_{2}^{2}. \end{array}$$

Proposition 2

$$C^{\text{PN}}(B)\leq NL \exp_{0}(\rho)\cos \phi,$$

where ϕ is the angle between the vector cY of the marginal layer Y-centralities and the vector cX of the marginal layer X-centralities in (7).

Proof

One has

$$\mathbf{c}_{Y}^{T}\mathbf{c}_{X}=\|\mathbf{c}_{X} \|_{2}\|\mathbf{c}_{Y} \|_{2}\cos \phi,$$

where ϕ is the angle between cY and cX. Let ∥⋅∥1 denote the vector 1-norm. It is evident that

$$\|\mathbf{c}_{X} \|_{1}=\sum\limits_{j=1}^{NL} x_{j} =\|\mathbf{x} \|_{1},\quad \|\mathbf{c}_{Y} \|_{1}=\sum\limits_{j=1}^{NL} y_{j} =\|\mathbf{y} \|_{1}.$$

Since

$$\|\mathbf{c}_{X}\|_{2}\leq \|\mathbf{c}_{X}\|_{1}=\|\mathbf{x}\|_ 1\leq \sqrt{NL}\|\mathbf{x}\|_{2},\quad \|\mathbf{c}_{Y}\|_{2}\leq \|\mathbf{c}_{Y}\|_{1}=\|\mathbf{y}\|_ 1\leq \sqrt{NL}\|\mathbf{y}\|_{2},$$

we have the bound

$$\|\mathbf{c}_{X} \|_{2}\|\mathbf{c}_{Y} \|_{2}\leq NL \|\mathbf{x}\|_{2}\|\mathbf{y}\|_{2}=NL,$$

which completes the proof by using (11).

Remark 4

When the network is undirected, by Remark 3, Proposition 2 reads

$$C^{\text{PN sym}}(B)\leq NL \exp_{0}(\rho),$$

which is the same bound as (3).

Matrix function-based communicability measures have been generalized in [3] to the case of layer-coupled multiplex networks that can be represented by a supra-adjacency matrix B of the form (6), i.e., by \({\mathcal {A}}\) defined by (5). Following the argument in [8], assume that the Perron root ρ of a supra-adjacency matrix B of the form (6) is significantly larger than the magnitude of its other eigenvalues. Then

$$C_{0}^{\text{TN}}({\mathcal{A}})\approx\kappa(\rho)C^{\text{PN}}({\mathcal{A}}),$$

where \(C_{0}^{\text {TN}}({\mathcal {A}})=\mathbf {1}_{NL}^{T}\exp _{0}(B)\mathbf {1}_{NL}\) and \(C^{\text {PN}}({\mathcal {A}})\) refers to the Perron network communicability (2) when B is of the form (6). Thus, the multiplex total network communicability depends on the conditioning of the Perron root.

Remark 5

It is straightforward to see that if the network represented by the matrix B of the form (6) is undirected, and the Perron root ρ is significantly larger than the magnitude of the other eigenvalues of B, then one has

$$C_{0}^{\text{TN sym}}({\mathcal{A}})\approx C^{\text{PN sym}}({\mathcal{A}}).$$

Indeed, the Perron vectors x and y coincide so that κ(ρ) = 1.

4 Multilayer network Perron root sensitivity

Let the supra-adjacency matrix \(B\in \mathbb {R}^{NL\times NL}\) be associated with an L-layer network as described above. Then an edge from node i in layer k to node j in layer \(\ell\), with \(i,j\in \{1,2,\dots ,N\}\), ij, and \(k,\ell \in \{1,2,\dots ,L\}\), is associated with the (i,j)th entry \(w_{ij}^{(k,\ell )}>0\) of the \((k,\ell)\)th block of order N × N of the matrix B.

Consider increasing the weight \(w_{ij}^{(k,\ell )}\) of an existing edge by ε > 0 or introducing a new edge from node i in layer k to node j in layer \(\ell\) with weight ε > 0. This corresponds to perturbing the supra-adjacency matrix B by the matrix εE, where the matrix \(E\in \mathbb {R}^{NL\times NL}\) has entries zero everywhere, except for the entry one in position (i,j) in the block \((k,\ell)\). It follows from (8) that the impact on the Perron root of this perturbation is

$$\delta\rho=\varepsilon\kappa(\rho) {y_{N(k-1)+i} x_{N(\ell-1)+j}}+O(\varepsilon^{2}).$$

The notion of multilayer network Perron root sensitivity with respect to the direction \((i,k)\rightarrow(j,\ell)\), defined by

$$S^{\text{PR}}_{i, j, k, \ell}(B):=\kappa(\rho) y_{N(k-1)+i } x_{N(\ell-1)+j },$$
(12)

is helpful for determining which edge(s) to insert in, or remove from, a multilayer network.
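The sensitivity matrix and its largest entry can be computed as in the following sketch (ours, reusing the hypothetical perron_data helper; N is the number of nodes per layer).

```python
import numpy as np

def perron_sensitivity(B):
    # S^PR(B) = kappa(rho) * y x^T; its (N(k-1)+i, N(l-1)+j) entry is (12).
    rho, x, y = perron_data(B)          # helper from the sketch in Section 1
    return np.outer(y, x) / (y @ x)

def top_direction(S, N):
    # Largest-sensitivity direction as a 1-based quadruple (i, j, k, l).
    p, q = np.unravel_index(np.argmax(S), S.shape)
    k, i = divmod(p, N)                 # p = N*k + i with 0-based i, k
    l, j = divmod(q, N)
    return i + 1, j + 1, k + 1, l + 1
```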

Remark 6

Notice that the largest entries of x and y are strictly smaller than 1; hence, the multilayer network Perron root sensitivity (12) with respect to any direction is less than κ(ρ). Indeed, x and y are unit vectors with positive entries so that, if, e.g., \(x_{N(\ell-1)+j}=1\), this would imply that \(x_{k}=0\) for all \(k\neq N(\ell-1)+j\), which is not possible.

We also introduce the multilayer network Perron root sensitivity matrix associated with B, denoted by SPR(B), whose entries are given by the quantities \(S^{\text {PR}}_{i, j, k, \ell }(B)\). The following result holds true.

Proposition 3

The multilayer Perron root sensitivity matrix is given by

$$S^{\text{PR}}(B)= \kappa(\rho)W\in\mathbb{R}^{NL\times NL},$$
(13)

where W is the Wilkinson perturbation defined by (10).

Proof

The proof follows from (12) by observing that

$$S^{\text{PR}}_{i, j, k, \ell}(B)=\kappa(\rho) W_{N(k-1)+i , N(\ell-1)+j},$$

with W = yxT.

Remark 7

Notice that both the spectral norm and the Frobenius norm of the multilayer network Perron root sensitivity matrix are equal to the condition number of the Perron root. Moreover, the Perron communicability (2) reads

$$C^{\text{PN}}(B)=\frac{\exp_{0}(\rho)}{\kappa(\rho)} \mathbf{1}_{NL}^{T} S^{\text{PR}}(B)\mathbf{1}_{NL}.$$

Remark 8

Following [19, Eqs. (2.1)-(2.2)], the spectral impact of each existing edge in B can be analyzed by means of the matrix

$$-\frac{1}{\rho}B\circ S^{\text{PR}}(B)\in\mathbb{R}^{NL\times NL},$$

where ○ denotes the Hadamard product.

The exponential of the spectral radius of the graph associated with B often is a fairly accurate relative measure of the Perron network communicability of the graph; cf. (3). If we would like to modify the graph by adding an edge that increases the Perron network communicability as much as possible, then we should choose the indices i, j, k, and \(\ell\) for the new edge so that

$$x_{N(\ell-1)+j}=\max_{1\leq q\leq NL}x_{q}, \qquad y_{N(k-1)+i}=\max_{1\leq q\leq NL}y_{q}.$$

We turn to the removal of an edge, with the aim of simplifying the graph without affecting the Perron network communicability much. We therefore would like to choose the indices \(1\leq \hat {\imath },\hat {\jmath }\leq N\) and \(1\leq \hat {k},\hat {\ell }\leq L\) such that \(w_{\hat {\imath },\hat {\jmath }}^{(\hat {k},\hat {\ell })}\) is positive and

$$y_{N(\hat{k}-1)+\hat{\imath}} x_{N(\hat{\ell}-1)+\hat{\jmath}}= \min_{\substack{1\leq i,j\leq N\\1\leq k,\ell\leq L\\w_{ij}^{(k,\ell)}>0}} y_{N(k-1)+i} x_{N(\ell-1)+j}.$$

A way to determine such an index quadruple \(\{\hat {\imath },\hat {\jmath },\hat {k},\hat {\ell }\}\) is to first order the products yixj, 1 ≤ i,jNL in increasing order. This yields a sequence of index pairs \(\{i_{q},j_{q}\}_{q=1}^{N^{2}L^{2}}\) such that

$$y_{i_{q}}x_{j_{q}}\leq y_{i_{q+1}}x_{j_{q+1}}\qquad\forall~1\leq q<N^{2}L^{2}.$$

Then determine the first index pair \(\{i_{\hat {q}},j_{\hat {q}}\}\) in this sequence such that \(w_{\hat {\imath },\hat {\jmath }}^{(\hat {k},\hat {\ell })}>0\), where

$$i_{\hat{q}}=N(\hat{k}-1)+\hat{\imath},\qquad j_{\hat{q}}=N(\hat{\ell}-1)+\hat{\jmath}$$

with \(1\leq \hat {\imath },\hat {\jmath }\leq N\) and \(1\leq \hat {k},\hat {\ell }\leq L\).

We remark that the perturbation bound (8) is only valid for ε of sufficiently small magnitude. Nevertheless, it is useful for choosing which edge(s) to remove to simplify a multilayer network. This is illustrated in Section 5. It may be desirable that the graph obtained after removing an edge is connected. The connectedness has to be verified separately.
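A vectorized variant of the edge selection just described is sketched below (ours; B is the dense supra-adjacency matrix and S the sensitivity matrix from the earlier sketch). It avoids explicitly sorting all N²L² products; the connectedness of the reduced graph still has to be checked separately.

```python
import numpy as np

def weakest_existing_edge(B, S):
    # Existing edge (entry of B positive) with minimal Perron root
    # sensitivity; non-edges are masked out before taking the argmin.
    masked = np.where(B > 0, S, np.inf)
    p, q = np.unravel_index(np.argmin(masked), masked.shape)
    return p, q   # 0-based position in B
```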

Remark 9

Notice that when the network is undirected, it may be meaningful to require that the perturbation of the network also be symmetric. Thus, instead of considering the network sensitivity (12) with regard to the direction \((i,k)\rightarrow(j,\ell)\), we investigate the sensitivity of the network with regard to perturbations in the directions \((i,k)\rightarrow(j,\ell)\) and \((j,\ell)\rightarrow(i,k)\). This results in the expression

$$\begin{array}{@{}rcl@{}} S^{\text{PR sym}}_{i, j, k, \ell}(B)&:=& \kappa(\rho) (y_{N(k-1)+i} x_{N(\ell-1)+j}+y_{N(\ell-1)+j} x_{N(k-1)+i})\\ &=& 2x_{N(k-1)+i} x_{N(\ell-1)+j}, \end{array}$$

where we have used that x = y. This expression is analogous to (12).

We conclude this section with a discussion on multiplex networks. In such a network, an edge from node i to node j in layer \(\ell\), with \(i,j\in \{1,2,\dots ,N\}\), ij, and \(\ell \in \{1,2,\dots ,L\}\), is associated with the entry \(w_{ij}^{(\ell )}\geq 0\) of the adjacency matrix \(A^{(\ell )}\). Increasing the weight \(w_{ij}^{(\ell )}>0\) of an existing edge by ε > 0, or introducing a new edge by setting a zero weight \(w_{ij}^{(\ell )}\) to ε > 0, means perturbing \({\mathcal {A}}\) in (5) by \(\varepsilon {\mathcal P}\), where

$${\mathcal P}=[O_{N},\dots,O_{N},P_{ij}^{(\ell)},O_{N},\dots,O_{N}]\in\mathbb{R}^{N\times NL}\quad\text{with} \quad P_{ij}^{(\ell)}=\mathbf{e}_{i}\mathbf{e}_{j}^{T}\in\mathbb{R}^{N\times N}.$$
(14)

Here \(O_{N}\in \mathbb {R}^{N\times N}\) denotes the zero matrix. The perturbation \(\varepsilon {\mathcal P}\) corresponds to perturbing the supra-adjacency matrix B by an NL × NL block matrix with all zero N × N blocks, except for the \(\ell\)th diagonal block, in which the (i,j)th entry is set equal to ε.

Introduce the multiplex Perron root sensitivity \({S}^{\text {PR}}_{i, j, \ell }({\mathcal {A}})\) with respect to the direction (i,j) in layer \(\ell\),

$${S}^{\text{PR}}_{i, j, \ell}({\mathcal{A}}):=\kappa(\rho) {y_{N(\ell-1)+i } x_{N(\ell-1)+j }},$$

which is analogous to the quantity (12) for more general multilayer networks. Thus, if \({\mathcal P}\) is defined by (14) and \({\mathcal {A}}\) by (5), one has from (8) that \(\delta \rho \approx \varepsilon {S}^{\text {PR}}_{i, j, \ell }({\mathcal {A}})\). Analogously, consider reducing the (i,j)th entry \(w_{ij}^{(\ell )}>0\) of the adjacency matrix \(A^{(\ell )}\) by ε, and assume that 0 < ε ≪ 1 and \(\varepsilon <w_{ij}^{(\ell )}\). Then the modified network associated with the tensor \({\mathcal {A}}-\varepsilon {\mathcal P}\) is non-negative and connected if the network associated with \({\mathcal {A}}\) has these properties, and \(\delta \rho \approx -\varepsilon {S}^{\text {PR}}_{i, j, \ell }({\mathcal {A}})\).

Moreover, as shown in Remark 9, when considering an undirected multiplex network, we obtain the expression

$${S}^{\text{PR sym}}_{i, j, \ell}({\mathcal{A}}):=2 {x_{N(\ell-1)+i } x_{N(\ell-1)+j }}.$$

Recall that the Perron root sensitivity matrix (13) for general multilayer networks depends on the Wilkinson perturbation \(W\in {\mathbb R}^{NL\times NL}\) of the supra-adjacency matrix B as well as on the condition number κ(ρ). By assuming that B is of the type in (6), the results in the following subsection will lead to analogous properties of the multiplex Perron root sensitivity matrix \({S}^{\text {PR}}({\mathcal {A}})\), whose nonvanishing entries are given by the quantities \({S}^{\text {PR}}_{i, j, \ell }({\mathcal {A}})\).

4.1 Exploiting the structure of multiplex networks

Consider the cone \({\mathcal D}\) of all non-negative block-diagonal matrices in \({\mathbb R}^{NL\times NL}\) with L blocks in \({\mathbb R}^{N\times N}\) and let \(M|_{\mathcal D}\) denote the matrix in \({\mathcal D}\) that is closest to a given matrix \(M\in {\mathbb R}^{NL\times NL}\) with respect to the Frobenius norm. It is straightforward to verify that \(M|_{\mathcal D}\) is obtained by replacing all the entries outside the block-diagonal structure by zero.

Let \(E\in {\mathcal D}\) be such that ∥EF = 1, and let ε > 0 be a small constant. Then

$$\frac{\mathbf{y}^{T} E \mathbf{x}} {\mathbf{y}^{T}\mathbf{x}}= \frac{|\mathbf{y}^{T} E \mathbf{x}|} {\mathbf{y}^{T}\mathbf{x}}\leq \frac{\|\mathbf{y}\|_{2}\|\mathbf{y}\mathbf{x}^{T}|_{\mathcal D}\|_{F}\|\mathbf{x}\|_{2}} {\mathbf{y}^{T}\mathbf{x}}= \frac{\|\mathbf{y}\mathbf{x}^{T}|_{\mathcal D}\|_{F}}{\mathbf{y}^{T}\mathbf{x}},$$
(15)

with equality for the \({\mathcal D}\)-structured analogue of the Wilkinson perturbation

$$E=\frac{\mathbf{y}\mathbf{x}^{T}|_{\mathcal D}}{\|\mathbf{y}\mathbf{x}^{T}|_{\mathcal D}\|_{F}};$$
(16)

see [18]. The quantity

$$\frac{\|\mathbf{y}\mathbf{x}^{T}|_{\mathcal D}\|_{F}}{\mathbf{y}^{T}\mathbf{x}} = \kappa(\rho)\|\mathbf{y}\mathbf{x}^{T}|_{\mathcal D}\|_{F}$$

will be referred to as the \({\mathcal D}\)-structured condition number of ρ and denoted by \(\kappa _{{\mathcal D}}(\rho )\). For E in (16), the perturbation (8) of the Perron root is \(\delta \rho =\varepsilon \kappa _{{\mathcal D}}(\rho )+O(\varepsilon ^{2})\).
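A sketch of \(\kappa _{{\mathcal D}}(\rho )\) under the same assumptions as before (dense B, the hypothetical perron_data helper) follows; the projection onto \({\mathcal D}\) is a Hadamard product with the block-diagonal pattern.

```python
import numpy as np

def structured_condition_number(B, N, L):
    # kappa_D(rho) = || (y x^T)|_D ||_F / (y^T x); cf. (15)-(16).
    rho, x, y = perron_data(B)                   # helper from Section 1 sketch
    mask = np.kron(np.eye(L), np.ones((N, N)))   # block-diagonal pattern of D
    WD = np.outer(y, x) * mask                   # projection (y x^T)|_D
    return np.linalg.norm(WD, 'fro') / (y @ x)
```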

Thus, the \({\mathcal D}\)-structured analogue of the Wilkinson perturbation is the maximal perturbation for the Perron root ρ of a supra-adjacency matrix of the type (6) induced by a \({\mathcal D}\)-structured perturbation. The following result holds.

Proposition 4

The multiplex Perron root sensitivity matrix is given by

$${S}^{\text{PR}}({\mathcal{A}})= \kappa(\rho)W|_{\mathcal{D}}\in\mathbb{R}^{NL\times NL} ,$$

where W is the Wilkinson perturbation defined by (10) and \({\mathcal D}\) is the cone of all non-negative block-diagonal matrices in \({\mathbb R}^{NL\times NL}\) with L blocks in \({\mathbb R}^{N\times N}\).

Proof

In the multiplex network associated with the matrix \({\mathcal {A}}\) in (5) and represented by the matrix B in the form (6), the parameter γ, which yields the weight of the inter-layer edges, i.e., the influence of the layers on each other, is determined a priori by the model. Thus, \({S}^{\text {PR}}({\mathcal {A}})\in {\mathcal D}\), because admissible perturbations only affect intra-layer edges. Hence, one obtains from Proposition 3 that the multiplex Perron root sensitivity matrix is just the projection into \({\mathcal D}\) of (13), obtained by replacing all the entries of W outside the block-diagonal structure by zero. This concludes the proof. □

Analogously to (13), the multiplex Perron root sensitivity matrix is the product of the maximal admissible perturbation and the relevant condition number of the Perron root. Thus, \({S}^{\text {PR}}({\mathcal {A}})\) is given by the product of the \({\mathcal D}\)-structured condition number of ρ, \(\kappa _{{\mathcal D}}(\rho )\), and the \({\mathcal D}\)-structured analogue of the Wilkinson perturbation W:

$${S}^{\text{PR}}({\mathcal{A}})= \kappa(\rho) \|W|_{\mathcal D}\|_{F}\frac{W|_{\mathcal D}} {\|W|_{\mathcal D}\|_{F}}.$$

Hence, the Frobenius norm of the multiplex Perron root sensitivity matrix is equal to the structured condition number \(\kappa _{{\mathcal D}}(\rho )\) of the Perron root; see Remark 7 for the general case of a multilayer network.

The above argument quantitatively shows that the Perron communicability in multiplexes is less sensitive, both component-wise and norm-wise, than the Perron communicability in more general multilayer networks.

Remark 10

Following the argument in Remark 7, we define the effective Perron communicability in a multiplex network,

$$C^{\text{PN}}({\mathcal{A}})=\frac{\exp_{0}(\rho)}{\kappa(\rho)} \mathbf{1}_{NL}^{T} S^{\text{PR}}({\mathcal{A}})\mathbf{1}_{NL}.$$

Moreover, observing that

$$\mathbf{1}_{NL}^{T} S^{\text{PR}}({\mathcal{A}})\mathbf{1}_{NL}\leq NL\|S^{\text{PR}}({\mathcal{A}})\|_{F} = NL \kappa_{\mathcal D}(\rho),$$

we obtain the upper bound

$$C^{\text{PN}}({\mathcal{A}})\leq NL\exp_{0}(\rho)\|W|_{\mathcal D}\|_{F},$$

which is sharper than (3).

We conclude this subsection by representing the multiplex Perron root sensitivity matrix in the equivalent compact form

$${S}^{\text{PR}}({\mathcal{A}})= \kappa(\rho){\mathcal W} ,$$

where

$${\mathcal W}:=[W^{(1)},W^{(2)},\dots,W^{(L)}]\in\mathbb{R}^{N\times NL}.$$
(17)

Here \(W^{(\ell )}\in \mathbb {R}^{N\times N}\) is constructed by multiplying the \(\ell\)th column of Y by the \(\ell\)th row of \(X^{T}\), for \(\ell =1,2,\dots ,L\), where the matrices \(X,Y\in \mathbb {R}^{N\times L}\) are determined by reshaping the right and left Perron vectors x and y of B; see Section 1 for the definition of X and Y.

Remark 11

Analogously to Remark 8, we note that the spectral impact of each existing edge in \(\mathcal {A}\) can be studied by means of

$$-\frac{1}{\rho}{\mathcal{A}}\circ {S}^{\text{PR}}({\mathcal{A}});$$

cf. [19, Eqs. (2.1)-(2.2)].

4.2 Exploiting the sparsity structure of multiplex networks

When considering perturbations of existing edges, we take into account the projection of the Wilkinson perturbation W into the cone \({\mathcal S}\) of all matrices in \({\mathcal D}\) with the same sparsity structure as \(\text {diag}[A^{(1)},A^{(2)},\dots ,A^{(L)}]\). The argument that led to the structured results (15) and (16) holds true for any (further) sparsity structure of the matrix \(\text {diag}[A^{(1)},A^{(2)},\dots ,A^{(L)}]\). Moreover, \(\kappa _{\mathcal S}(\rho )\leq \kappa _{\mathcal D}(\rho )\leq \kappa (\rho )\). One has the following result for the multiplex Perron root structured sensitivity matrix \({S}^{\text {PR struct}}({\mathcal {A}})\), whose nonvanishing entries are given by the quantities \({S}^{\text {PR}}_{i, j, \ell }({\mathcal {A}})\) that correspond to the positive entries of B.

Proposition 5

The multiplex Perron root structured sensitivity matrix is given by

$${S}^{\text{PR struct}}({\mathcal{A}})= \kappa(\rho)W|_{\mathcal{S}}\in\mathbb{R}^{NL\times NL} ,$$

where W is the Wilkinson perturbation defined by (10) and \({\mathcal S}\) is the cone of all non-negative block-diagonal matrices in \({\mathbb R}^{NL\times NL}\) with L blocks in \({\mathbb R}^{N\times N}\) having the same sparsity structure as the diagonal block matrices of the supra-adjacency matrix B in (6) that represents the multiplex.

Proof

As for Proposition 4, the proof follows by observing that the multiplex Perron root structured sensitivity matrix \({S}^{\text {PR struct}}({\mathcal {A}})\) consists of the projection into \({\mathcal S}\) of W, because only perturbations of existing intra-edges are admissible. □

We have the following component-wise and norm-wise inequalities:

$${S}^{\text{PR struct}}({\mathcal{A}})\leq{S}^{\text{PR}}({\mathcal{A}}),$$
$$\|{S}^{\text{PR struct}}({\mathcal{A}})\|_{F}\leq\|{S}^{\text{PR}}({\mathcal{A}})\|_{F}.$$

Remark 12

Following the argument in Remark 10, we are in a position to introduce the notion of structured Perron communicability in a multiplex network,

$$C^{\text{PN struct}}({\mathcal{A}})=\frac{\exp_{0}(\rho)}{\kappa(\rho)} \mathbf{1}_{NL}^{T} S^{\text{PR struct}}({\mathcal{A}})\mathbf{1}_{NL},$$

and obtain by using

$$\mathbf{1}_{NL}^{T} S^{\text{PR struct}}({\mathcal{A}})\mathbf{1}_{NL}\leq NL\|S^{\text{PR struct}}({\mathcal{A}})\|_{F} = NL \kappa_{\mathcal S}(\rho)$$

the sharper upper bound

$$C^{\text{PN struct}}({\mathcal{A}})\leq NL\exp_{0}(\rho)\|W|_{\mathcal S}\|_{F}\leq NL\exp_{0}(\rho)\|W|_{\mathcal D}\|_{F}.$$

Finally, one may alternatively represent \({S}^{\text {PR struct}}({\mathcal {A}})\) as

$${S}^{\text{PR struct}}({\mathcal{A}})= \kappa(\rho){\mathcal W}|_{\mathcal S} ,$$

where \({\mathcal W}|_{\mathcal S}\) is obtained from \({\mathcal W}\) in (17) by projecting each matrix \(W^{(\ell )}\) into the cone \({\mathcal S}^{(\ell )}\) of all non-negative matrices in \(\mathbb {R}^{N\times N}\) having the same sparsity structure as the matrix \(A^{(\ell )}\), for \(\ell =1,2,\dots ,L\).
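Under the same assumptions as in the earlier sketches (eigentensors X, Y, the layer list A_list, and the condition number kappa all computed as before), the nonzero blocks of \({S}^{\text {PR struct}}({\mathcal {A}})\) can be assembled layer by layer:

```python
import numpy as np

def structured_sensitivity_blocks(X, Y, A_list, kappa):
    # Blocks of S^PR struct(A) = kappa(rho) * W|_S; cf. (17) and Section 4.2.
    # W^(l) = Y[:, l] X[:, l]^T, masked by the sparsity pattern of A^(l).
    return [kappa * np.outer(Y[:, l], X[:, l]) * (A_list[l] > 0)
            for l in range(len(A_list))]
```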

4.3 Symmetry patterns of multiplexes

Let the network be represented by a symmetric supra-adjacency matrix B of the type (6). Applying the arguments in the preceding subsections to the cone of all the symmetric matrices in \({\mathcal D}\) [all the symmetric matrices in \({\mathcal S}\)] leads to the same structured analogue of the Wilkinson perturbation as \(W|_{\mathcal D}\) [as \(W|_{\mathcal S}\)]. Indeed, as the network is undirected, the right and left Perron vectors coincide, so that the Wilkinson perturbation W = yxT = yyT is a symmetric matrix.

5 Computed examples

This section presents some examples to illustrate the performance of the methods discussed above. The computations were carried out using Matlab R2015b. For small networks, the Perron root and the left and right Perron vectors can easily be computed by using the Matlab function eig. For large-scale networks these quantities can be computed by the two-sided Arnoldi algorithm, which was introduced by Ruhe [20] and has been improved by Zwaan and Hochstenbach [24]. Specifically, we used the function eig in Examples 1 and 2, and the two-sided Arnoldi algorithm in Example 3.

5.1 Example 1: A small synthetic multilayer network

We construct a small directed unweighted general multilayer network with N = 4 nodes in each layer and L = 3 layers, illustrated in Fig. 1. The network is represented by a supra-adjacency matrix \(B\in \mathbb {R}^{12\times12}\), whose 4 × 4 diagonal blocks are adjacency matrices that represent the graphs of each layer. The off-diagonal blocks represent edges that connect nodes in different layers. This results in a nonsymmetric matrix B, whose Perron root is ρ(B) = 2.3471; the condition number of the Perron root is κ(ρ(B)) = 1.0248.

Fig. 1

Example 1: Layers are presented from left to right in the order \(\ell =1\), \(\ell =2\), and \(\ell =3\). The edges connecting nodes from the same layer are marked in black. The edges connecting nodes from different layers are marked in red

Let ε = 0.3 and let W denote the Wilkinson perturbation (10). Then ρ(B + εW) = 2.6512. Thus, the perturbation εW of B increases the spectral radius by 0.3041, as can be expected since εκ(ρ(B)) = 0.3074. If we replace the matrix W by the matrix of all ones, normalized to be of unit Frobenius norm, then the spectral radius increases by only 0.2561. Clearly, this is not an accurate estimate of the actual worst-case sensitivity of ρ(B) to perturbations.

The largest Perron root sensitivity is \(S^{\text {PR}}_{2,4,3,2}(B)=0.2241\); cf. (12). This suggests that increasing the weight of the edge connecting node 2 in layer 3 and node 4 in layer 2 results in a relatively large change in the Perron root.

In general, we expect the Perron root to increase more when introducing new edges or increasing edge weights that correspond to the largest entries of the Perron root sensitivity matrix SPR(B) than when introducing randomly chosen edges or increasing randomly chosen edge weights. Table 1 confirms this for Example 1. Similarly, we expect a smaller decrease in the Perron root when decreasing the weights that correspond to the smallest entries of the matrix SPR than when decreasing random weights. Table 2 confirms this for Example 1.

Table 1 Example 1: The four largest entries of the Perron root sensitivity matrix, Perron roots for the supra-adjacency matrix obtained by increasing/introducing the weights \(w_{i,j}^{(k,\ell )}\), and Perron roots for the supra-adjacency matrix obtained by increasing/introducing the weight of random edges by ε = 0.3
Table 2 Example 1: The four smallest entries of the Perron root sensitivity matrix, Perron roots for the supra-adjacency matrix obtained by decreasing the weights \(w_{i,j}^{(k,\ell )}\), and Perron roots corresponding to decreasing the weight of random edges by ε = 0.3

The smallest entries of the matrix SPR(B) also give the candidate edges to remove in order to simplify the network. However, we have to check the connectedness of the network after removal of an edge. Let \(\hat B\) denote the supra-adjacency matrix obtained by removing the edge (1,1)→(4,1), which connects node 1 in layer 1 to node 4 in layer 1. Then \(\rho (\hat B)=2.3270\). Therefore, this removal decreases the Perron root only by about 2 ⋅ 10− 2. Thus, the network represented by the supra-adjacency matrix B can be simplified by removing the edge (1,1)→(4,1) without a significant impact on the Perron network communicability. The graph obtained after removal of this edge is connected.

5.2 Example 2: The Scotland Yard data set

This example considers the Scotland Yard transportation network created by the authors of [3]. The network can be downloaded from [2]. It consists of N = 199 nodes representing public transport stops in the city of London and L = 4 layers that represent different modes of transportation: boat, underground, bus, and taxi. The edges are weighted and undirected. More precisely, the edges in the layer that represents travel by taxi all have weight one. A taxi ride is defined as a trip by taxi between two adjacent nodes in the taxi layer; a taxi ride along k edges is considered k taxi rides. The weights of edges in the boat, underground, and bus layers are chosen to be equal to the minimal number of taxi rides required to travel between the same nodes.

The Perron root of the supra-adjacency matrix B is ρ(B) = 17.6055, and its condition number is κ(ρ(B)) = 1. Let ε = 0.3 and let W be the Wilkinson perturbation (10). Then ρ(B + εW) = 17.9055. Thus, the spectral radius increases by 0.3. This can be expected since εκ(ρ(B)) = 0.3. If we replace the matrix W by the matrix of all ones, normalized to be of unit Frobenius norm, then the spectral radius increases by only 0.006. This is not an accurate estimate of the actual worst-case sensitivity of ρ(B) to perturbations.

The largest entry of the Perron root sensitivity matrix is \(S^{\text{PR}}_{89,67,2,2}(B)=0.2407\). Increasing the weight of the edge \(e_{89,67,2,2}\) that connects the nodes 89 and 67 in layer 2 typically results in a larger increase in the Perron root than increasing the weight of a randomly chosen edge. For instance, when increasing the weight of the edge \(e_{89,67,2,2}\) by 0.3, the Perron root increases by 0.1458; see Table 3 for illustrations.

Table 3 Example 2: The three largest entries of the Perron root sensitivity matrix and Perron roots for the supra-adjacency matrix obtained by increasing the weights \(w_{i,j}^{(k,\ell )}\) by ε = 0.3, and Perron roots corresponding to same increase for random edges

We also note that the Perron root ρ(B) does not change significantly when setting the entry (162,560) of B to zero. This models the removal of the edge that connects node 162 in layer 1 to node 162 in layer 3 in the network. This edge corresponds to the smallest entry of the Perron root sensitivity matrix \(S^{\text {PR}}_{162,162,1,3}(B)=3.2279\cdot 10^{-15}\).

Now consider perturbations of existing edges. We compute the multiplex Perron root structured sensitivity matrix \(S^{\text {PR struct}}({\mathcal {A}})\) and compare the changes in the Perron root when increasing the weights of existing edges according to the largest entries of \(S^{\text {PR struct}}({\mathcal {A}})\) and increasing the weights of randomly chosen existing edges. This is illustrated by Table 4. As expected, the Perron root changed the most when considering edges associated with a large entry in the matrix \(S^{\text {PR struct}}({\mathcal {A}})\).

Table 4 Example 2: Sensitivity of the Perron root to structured increase of weights by ε = 0.3

Finally, we note that the Perron root of the network is not very sensitive to removal of edges that correspond to the smallest entries of the matrix \(S^{\text {PR struct}}({\mathcal {A}})\); see Table 5.

Table 5 Example 2: Sensitivity of the Perron root to removal of edges

5.3 Example 3: The European airlines data set

The European airlines data set consists of 450 nodes that represent European airports and has 37 layers that represent different airlines operating in Europe. Each edge represents a flight between airports. There are 3588 edges. The network can be represented by a supra-adjacency matrix B (6), where the block-diagonal matrices contain ones if an airline offers a flight between the two corresponding airports, and zeros otherwise. Each off-diagonal block is the identity matrix; this reflects the effort required to change airlines for connecting flights. The network can be downloaded from [2].

Similarly to Taylor et al. [21], we only include N = 417 nodes from the largest connected component of the network. This component defines the supra-adjacency matrix B. Its largest eigenvalue is ρ(B) = 38.3714 and κ(ρ(B)) = 1. Let ε = 0.3 and let W be the Wilkinson perturbation. Then ρ(B + εW) = 38.6714. Thus, the spectral radius increases by 0.3, as expected since εκ(ρ(B)) = 0.3.

If we replace the matrix W by the matrix of all ones, normalized to be of unit Frobenius norm, then the spectral radius increases by only 0.091.

The smallest entry of the Perron root sensitivity matrix is \(\displaystyle S^{\text {PR}}_{202,202,31,28}(B)=5.1845\cdot 10^{-13}\). This suggests that the cost of changing from the Czech airline to the Niki airline at Valan Airport can be avoided without influencing the communicability of the network.

The two largest entries of the Perron root sensitivity matrix are \(S^{\text{PR}}_{38,2,1,1}(B)=0.0040\) and \(S^{\text{PR}}_{157,2,1,1}(B)=0.0034\). This indicates that the Perron root may be increased the most by increasing the number of flights operated by the Lufthansa airline between the Munich and Frankfurt am Main airports and between the Düsseldorf and Frankfurt am Main airports.

Finally, we consider structured perturbations. Table 6 shows significant changes in the Perron root when increasing the weights \(w^{(\ell)}_{i,j}\) corresponding to the largest entries of the multiplex Perron root structured sensitivity matrix \(S^{\text {PR struct}}({\mathcal {A}})\), compared to increasing weights of random existing edges by ε = 0.3. On the other hand, removing random edges decreases the Perron root more than removing edges that correspond to the smallest entries of \(S^{\text {PR struct}}({\mathcal {A}})\); see Table 7.

Table 6 Example 3: Sensitivity of the Perron root to structured increase of weights by ε = 0.3
Table 7 Example 3: Sensitivity of the Perron root to structured removal of edges

We conclude that the Perron communicability of the European airlines network is not very sensitive to the removal of flights operated by the Wideroe airline between several airports, while increasing the number of flights operated by the Lufthansa airline would increase the communicability of the network significantly.

5.4 Example 4: General multilayer network

We consider an example of a general multilayer network, where interactions are allowed between different nodes in different layers. The network has 160 nodes, 6 layers, and 148 edges that may be directed. The network can be downloaded from https://github.com/wjj0301/Multiplex-Networks.

The Perron root of the supra-adjacency matrix B associated with the network, and its condition number are ρ(B) = 8.1324 and κ(ρ(B)) = 1.3277, respectively. Let ε = 0.3 and let W be the Wilkinson perturbation. Then the Perron root of B + εW is 0.3990 larger than ρ(B). This can be expected since εκ(ρ(B)) = 0.3983. The largest entry of the Perron root sensitivity matrix is \(S^{\text {PR}}_{6,24,1,1}(B)=0.3389\). Increasing the weight of the edge connecting node 6 and node 24 in layer 1 by 0.3 increases the Perron root by 0.0998.

We used ε = 0.3 in all computed examples. The conclusions drawn would have been the same if instead ε = 0.1 were used.

6 Conclusion

This paper investigates the communicability of multilayer networks by introducing the concept of Perron communicability for this kind of networks. The communicability is measured by the Perron root of the supra-adjacency matrix associated with the network. The Perron vectors of this matrix help to determine which edge weights to increase or reduce in order to increase or reduce, respectively, the Perron communicability the most. Our analysis also addresses the question of which edges can be removed without changing the Perron communicability much.