
Deep diving into the S&P Europe 350 index network and its reaction to COVID-19


In this paper, we analyse the dynamic partial correlation network of the constituent stocks of the S&P Europe 350 index. We focus on global parameters such as the radius, which is rarely used in the financial networks literature, together with the diameter and distance parameters. With these global parameters, we sharpen the boundaries of the strength that an economic shock should exert to trigger a cascade effect on the network. In addition, we analyse homophilic profiles, which is quite new in the financial networks literature, and find highly homophilic relationships among companies when grouping firms by country and industry. We also calculate local parameters such as the degree, closeness, betweenness, eigenvector, and harmonic centralities to gauge the importance of each company in different respects, such as the strength of the relationships with its neighbourhood and its location in the network. Finally, we analyse a network substructure by introducing the concept of the skeleton of a dynamic network. This subnetwork allows us to study the stability of the relations among constituents and to detect a significant increase in these stable connections during the COVID-19 pandemic.


The global financial crisis of 2007–2008 encouraged researchers to adopt an interdisciplinary approach to studying systemic risk in the financial sector in order to understand and model it. Caccioli, Barucca, and Kobayashi [13] delve into this topic in a survey that focuses mainly on network analysis. The interest in understanding the topology of financial networks arose from the need to anticipate how they react when hit by economic shocks and the possible consequences that such shocks entail. There are many ways of approaching this study and many methodologies. Huynh, Foglia, and Doukas [33] concentrate on tail risk in the Eurozone and, working with the log-returns of the stocks, identify the entities that act as issuers and receivers of risk in a directed network. Huynh et al. [34] explain how investor sentiment impacts the stock market, while Ambros et al. [7] show how the media affected the volatility and returns of the stock market during the COVID-19 pandemic. Goodell and Huynh [31] show how inside information can affect the expected behaviour of the stock market and change expectations, indicating that a privileged circle reacts similarly. Alternatively, Xie, Wang, and Huynh [58] analyse the stock market's reactions to intermittent lockdowns using tick returns.

This paper aims to analyse the topology of the network derived from the interrelationships between the stocks that constitute the S&P Europe 350 index, considering adjusted closing prices from January 2016 to September 2020. This index contains 350 blue-chip companies from 16 developed European countries. These companies can be considered “too big to fail” and are likely to have the most resilient connections, those that would survive a crisis. We especially want to know which firms are the most central in a dynamic network set-up, how the connectedness of the graph evolves under the influence of the pandemic shock, and whether the network links follow a homophilic behaviour. To capture the effect of world economic trends on these stock prices, we use the Morgan Stanley Capital International World (MSCI World) index as the common factor.

In general, network analysis of financial networks has primarily focused on a handful of graph parameters, like the diameter, the average path length, and various centrality measures (Anufriev and Panchenko [8], Diebold and Yılmaz [18], and Kuzubaş, Ömercikoğlu, and Saltoğlu [42], to mention some). Two main topics studied in a network are connectivity and centrality. To study different vertex characteristics, we compute five centralities (degree, closeness, harmonic, betweenness, and eigenvector). We keep our focus on network and local connectivity: network connectivity is related to the number of edges, while local connectivity is related to the number of adjacent neighbours.

We use the consistent dynamic conditional correlation GARCH model (cDCC-GARCH), the multivariate model presented by Aielli [2]. Following the same theoretical approach as Eratalay and Vladimirov [24] and Anufriev and Panchenko [8], we obtain the partial correlationFootnote 1 network by applying the Gaussian Graphical Model (GGM) algorithm. Then we obtain global and local measurements of the network to identify which companies are most sensitive to external changes given the system's structure. For this, we rely on Demirer et al. [16] and Kuzubaş, Ömercikoğlu, and Saltoğlu [42] for the betweenness and closeness centralities.

In addition to the diameter and average path length, we calculate the radius of the partial correlation network. With these complementary measures, we can enhance our understanding of the topology of the network. Assuming that a shock has a single node as an entry point from which it will spread throughout the network, the diameter and radius can be interpreted as the minimum force a shock should have to ensure its propagation all over the network. Diameter is useful when the entry point is unknown, while radius is used when the entry point can be selected. On the other hand, the average path length shows the average force needed for shock transmission between any pair of vertices.

We construct a homophilic profile, measuring the tendency of the network's edges to form bonds between similar nodes. We find a direct relationship between the partial correlations and the proportion of homophilic edges, which gives us a clearer perspective on the underlying network structure. This homophilic profiling is a novel approach because, despite being a well-known topic in the social sciences,Footnote 2 homophily has barely been mentioned in the financial networks literature, with exceptions such as Elliott, Hazell, and Georg [22] and Barigozzi and Brownlees [9]. Moreover, based on the daily network pictures, we capture the system's dynamics by introducing the concept of the skeleton of a dynamic network, which may be used as a forecast-enhancing tool or interpreted as a measure of shock strength. Analysing this new substructure, we find that during the COVID-19 pandemic there was an increase in the number of stable relationships. Relatedly, Millington and Niranjan [47] explore the concept of similarity as an indicator of how much nodes resemble each other through some structural property (such as neighbourhoods or paths) and find an increase in this indicator in times of turbulence. Our results suggest that investors would find it harder to build a diversified portfolio, and they reinforce the idea that the returns of more firms tend to react similarly. However, given the construction of the skeleton, we cannot rule out that the increase partly reflects opposite reactions among stocks, which would suit the appetite of investors looking to balance their profits.

To sum up, we studied two kinds of parameters: global (radius, diameter, average distance) and local (degree, closeness, harmonic, betweenness, and eigenvector centralities). Moreover, we developed a homophilic profile by industry and country. We introduced the definition of the skeleton of a dynamic network, which results from collecting the resilient edges over time. This paper focuses on the methodology to obtain and analyse some of the most representative global and local centrality measures of a network, allowing us to map the topology of the network under study. These measures could serve as input in systemic risk studies and could be complemented with more information such as the risk profile of each firm and its balance sheet, among others.

What remains of this work is structured as follows. In Section 2, we review the literature on network analysis and financial networks. In Section 3, we describe the data under study. In Section 4, we present the methodology, covering financial econometrics and network analysis. In Section 5, we analyse the results, and in Section 6, we conclude.

Literature review

By analysing centralities, central banks can identify Global Systemically Important Institutions (G-SIIs), which can help regulate them, as already suggested in several other studies. For instance, the work of Martinez-Jaramillo et al. [44] bases a large part of its analysis on the topology of the interbank network, creating a measure of centrality composed of the closeness, betweenness, and degree centralities (the latter being called strength). Kuzubaş, Ömercikoğlu, and Saltoğlu [42] take as an example the Turkish crisis of 2000 and, in addition to the degree, closeness, and betweenness centralities, calculate the Bonacich centrality. These two studies describe the interbank network.

Several more articles compute centralities, focusing mainly on degree and eigenvector, such as Millington and Niranjan [46] and Anufriev and Panchenko [8]; Iori and Mantegna [35], who add the average distance to their analysis; and Billio et al. [12], who calculate proximity and eigenvector centralities.

Network analysis

During the 1960s and 1970s, several mathematical and statistical tools started to be used by social scientists to get a better understanding of the structure and behaviour of social networks (Milgram, [45], Zachary, [59], Killworth and Bernard, [40]). While the statistical tools are used to obtain quantitative results, the mathematical devices borrowed from graph theory allow us to discover and visualise the underlying structure of the studied data.

In the late 20th and early 21st century, with the seminal works of Albert, Jeong, and Barabási [3], Faloutsos, Faloutsos, and Faloutsos [25], and Watts and Strogatz [54], among others, the above-mentioned set of tools, combined with the growing availability of information to the general public and the increased computational power to analyse big data sets, led to the creation of network theory as a discipline in its own right. Since then, this type of research has been applied to a wide variety of topics, such as genomics, epidemics, cybersecurity, communication, financial markets, social interactions, linguistics, and more (Lewis [43], Keeling and Eames [38], Solé et al. [52]).

The primary strength of network analysis lies in the fact that it incorporates a multidisciplinary approach drawing on a range of theories, from social sciences, such as economics, to exact sciences, such as biology. A great amount of detail about this can be found in Jackson [36], who suggests that all that is needed for this approach is to identify agents and the relationships that connect them: for instance, using the labour market to understand searching and matching models, or using social networks to analyse human behaviour.

Financial networks

The financial network is one example of a complex system: there are many actors (financial institutions, among which mainly interbank connections have been studied) and a vast number of interrelations among them. Caccioli, Barucca, and Kobayashi [13] delve into systemic risk, utilising network analysis as their primary tool.

The application of network theory to financial networks has shown that high connectivity can produce one of two effects when a disruption to the system occurs—absorption (Allen and Gale [5], Freixas, Parigi, and Rochet [28]) or contagion (Gai and Kapadia [29], Elliott, Golub, and Jackson [21]). If the disruption to the system is minor and within a certain threshold, the connectivity of the network helps to alleviate the shock, which can be interpreted as absorption. However, if the disruption exceeds the threshold, instead of softening the impact, the interconnections augment the spread of it, as shown in Acemoglu, Ozdaglar, and Tahbaz-Salehi [1].

The relationships in a network can be direct or indirect. One example of a direct network is the interbank market, where the relationship is the trade of currency executed directly by the banks (Allen and Babus [4]). Other examples are Wang et al. [53], who use various S&P 500 financial institutions to construct an extreme risk spillover network, and Karkowska and Urjasz [37], who focus on the volatility spillovers from post-communist countries to global bond markets.

In our case, the relationship is indirect and describes how the behaviour of one company can lead to a response in the behaviour of others. As an analogy, imagine a waltz in which the couples are the firms: there are several couples, they may or may not know each other, but they all dance taking into account the movements of the other couples.

We derive this relationship from the partial correlation matrix. This method has been widely applied and modified; to mention some examples, Eratalay and Vladimirov [24], Kenett et al. [39], and Anufriev and Panchenko [8] follow this approach, while Iori and Mantegna [35] write a compendium of several studies and their different applications, all with the idea of understanding in greater depth how a network reacts to disruption.

Many studies of financial systemic risk based on network theory, developed since 2007, consider a worldwide assortment of components. Diebold and Yilmaz [17] assess equity stocks of developed and emerging countries, Anufriev and Panchenko [8] consider the Australian market, and Diebold and Yilmaz [19] compare the US and European contexts. Pereira et al. [50] consider world stock exchanges and analyse them over time using a multiscale network to detect changes between pre- and post-crisis periods. Furthermore, Barros Pereira et al. [10] examine the evolution of 14 countries of the European stock market over almost 30 years, using a motif-synchronisation method to analyse the stability of the relations.


We use the constituent stocks of the S&P Europe 350 index, which is made up of 350 blue-chip companies from 16 developed European countries. This index provides a significant sample of the European stock market, which is why we take it as the basis for this study, whose main focus is the methodology of studying financial networks.

The S&P Europe 350 index components, along with their market capitalisations and tickers, were provided directly by Standard and Poor's, with figures from December 2019. We use the provided data to gather the daily adjusted closing price history from January 2014 to September 2020 from Yahoo Finance.Footnote 3 We also use the returns of the Morgan Stanley Capital International World (MSCI World) index, collected for the same dates and from the same source.

From the raw data received, we synchronised the time periods and removed the series for which there were fewer observations. Also, if a company had preferred and common stocks, we removed the preferred stocks from our list to avoid contamination of the results with the evident strong correlation. After these adjustments, we had the price data of 331 firms from S&P Europe 350. We considered the time period from January 2016 to September 2020 for stocks in the S&P Europe 350 and for the MSCI World index, which gave us 1,202 price observations for each series.

For all firms, we calculated their log-returns and after that we treated the data with a generalised Hampel filter. Using a 20-day moving window, on average 0.42% of the data was identified as outliers, which were replaced by the local medians in the corresponding window.Footnote 4 Details about this method can be found in Pearson et al. [49]. From this point forward, we use this outlier filtered return data.
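As an illustration of this filtering step, a generalised Hampel filter can be sketched as follows (a minimal sketch, not the exact implementation used in this paper; the centred window handling and the three-scaled-MAD threshold are assumptions):

```python
import numpy as np

def hampel_filter(x, window=20, n_sigmas=3):
    """Replace outliers with the local median of a centred rolling window.

    A point is flagged when it deviates from the window median by more than
    n_sigmas times the scaled MAD (1.4826 * MAD approximates the standard
    deviation under normality).
    """
    x = np.asarray(x, dtype=float)
    filtered = x.copy()
    k = window // 2
    n_outliers = 0
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        med = np.median(x[lo:hi])
        mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            filtered[i] = med       # replace outlier by the local median
            n_outliers += 1
    return filtered, n_outliers
```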

The COVID-19 pandemic started to become evident in Europe by the end of February 2020 (Plümper and Neumayer [51]). In Fig. 1, we can observe a significant increase in the index volatility, a consistent reaction to the pandemic shock. Given that our sample has 331 firms with 1,201 return observations each, we use box plots to summarise the descriptive statistics. From Fig. 2, we can notice that the returns lie around zero, with a standard deviation of around two. On average, returns are slightly negatively skewed, but for some series the skewness is less than minus one, implying that their distributions are highly negatively skewed. The average kurtosis is around nine, with many outliers above 20, suggesting leptokurtic distributions for all series.

Fig. 1

S&P Europe 350 Index Returns from January 2016 to September 2020. By the beginning of March 2020, we can notice a sudden increase in the volatility. Source: Authors’ calculations

Fig. 2

Descriptive statistics of the S&P Europe 350 index returns from January 2016 to September 2020. Source: Authors’ calculations


The methodology is divided into two main parts: the econometric approach and the network theory approach.

Econometrical analysis

The econometric analysis is based mainly on the work of Eratalay and Vladimirov [24]. Instead of the unobservable factor in their model, we consider the Morgan Stanley Capital International World (MSCI World) index as a common observable factor.Footnote 5 We include this common observable factor because omitting it would bring about spurious connections in the network (see the discussion in Barigozzi and Brownlees [9] and Eratalay and Vladimirov [24]). We chose the MSCI World as an indicator of the general trend in the behaviour of developed economies worldwide.

A return series \(r_t\) can be modelled as:

$$\begin{aligned} r_t = {\mathbb {E}}_t(r_t \mid I_{t-1}) + \sqrt{{\mathbb {V}}ar_t(r_t \mid I_{t-1})} \varepsilon _t \end{aligned}$$

where \(E_t(r_t|I_{t-1})\) is the conditional mean, \(Var_t(r_t|I_{t-1})\) is the conditional variance, and the \(\varepsilon _t\) is the standardised disturbance such that \(\varepsilon _t \sim N(0,1)\). The conditional mean and the conditional variance are functions of the information up to \(t-1\), denoted by \(I_{t-1}\).

Conditional mean

For modelling the return vector, we will use a vector autoregressive model, VAR(1).

$$\begin{aligned} r_t = \mu + {\varvec{\Phi }} r_{t-1} + {\varvec{\Theta }} r_{t-1}^M + \eta _t \end{aligned}$$

where \(\mu\) is an \(n \times 1\) column vector representing the intercept; \({\varvec{\Phi }}\) and \({\varvec{\Theta }}\) are \(n \times n\) parameter matrices on the one-period lagged returns of the S&P Europe 350 stocks and of the MSCI World index, respectively. In particular, \({\varvec{\Theta }}\) is a diagonal matrix. For each series i, \(\eta _{t,i}\) is the error term, represented by a random process with mean zero and variance \(h_{t,i}\), such that \(\eta _{t,i} = \sqrt{h_{t,i}} \varepsilon _{t,i}\), where \(\varepsilon _{t,i}\) are the standardised errors.

Conditional variance

Let us denote the conditional mean and the conditional variance of series i as \(\mu _{t,i}\) and \(h_{t,i}\), respectively. Therefore, the error term \(\eta _{t,i}\) can be expressed as:

$$\begin{aligned} \eta _{t,i} = r_{t,i} - \mu _{t,i} = \sqrt{h_{t,i}} \varepsilon _{t,i} \text {, where } \eta _{t,i} \sim N(0,h_{t,i}) \end{aligned}$$

For each time series i, the conditional variance of the error term can be represented as a GARCH(1,1):

$$\begin{aligned} h_{t+1,i}& = \omega _i + \alpha _i (r_{t,i}- \mu _{t,i})^2 + \beta _i h_{t,i} \nonumber \\& = \omega _i + \alpha _i h_{t,i} \varepsilon _{t,i}^2 + \beta _i h_{t,i} \nonumber \\& = \omega _i + \alpha _i \eta _{t,i}^2 + \beta _i h_{t,i} \end{aligned}$$

where the parameters \(\omega _i > 0\), \(\alpha _i \ge 0\), \(\beta _i \ge 0\) and \(\alpha _i + \beta _i <1\), hence each \(h_{t,i}\) process is stationary.
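The GARCH(1,1) recursion above can be sketched as follows (a sketch with illustrative names; initialising \(h_{0,i}\) at the unconditional variance \(\omega_i/(1-\alpha_i-\beta_i)\) is a common choice, assumed here):

```python
import numpy as np

def garch11_variance(eta, omega, alpha, beta):
    """Conditional variance recursion h_{t+1} = omega + alpha*eta_t^2 + beta*h_t.

    h_0 is initialised at the unconditional variance omega / (1 - alpha - beta),
    which exists because stationarity requires alpha + beta < 1.
    """
    assert omega > 0 and alpha >= 0 and beta >= 0 and alpha + beta < 1
    h = np.empty(len(eta) + 1)
    h[0] = omega / (1.0 - alpha - beta)
    for t in range(len(eta)):
        h[t + 1] = omega + alpha * eta[t] ** 2 + beta * h[t]
    return h
```

Positivity of the parameters guarantees \(h_{t,i} > 0\) for every t.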

In matrix representation, we can write \(r_t \mid I_{t-1} \sim N(\mu _t,\mathbf {H_t})\) and \(\varepsilon _t \sim N(0,\mathbf{I }_n)\), with \({\mathbf {H}}_t = {\mathbb {V}}ar(r_t\mid I_{t-1}) = {\mathbb {V}}ar(\eta _t\mid I_{t-1})\) and \(r_t = \mu _t + \mathbf {H_t}^{1/2} \varepsilon _t\). Here \(\mathbf {H_t}\) is the conditional variance-covariance matrix, and it can be decomposed as:

$$\begin{aligned} {\mathbf {H}}_t \,=\, & {\mathbf {D}}_t {\mathbf {R}}_t {\mathbf {D}}_t \end{aligned}$$
$$\begin{aligned} {\mathbf {D}}_t \,=\, & \text {diag}\{\sqrt{h_{t,i}}\} \end{aligned}$$

where \({\mathbf {H}}_t\) depends on \({\mathbf {R}}_t\), the conditional correlation matrix, and \({\mathbf {D}}_t\), a diagonal matrix of the standard deviations.

Dynamic conditional correlations

In this section, we discuss \({\mathbf {R}}_t\), the matrix of conditional correlations. Each of its elements is in the interval \([-1,1]\) and, according to (5), \({\mathbf {R}}_t\) should be positive definite in order for \({\mathbf {H}}_t\) to be positive definite as well.

We follow the consistent dynamic conditional correlation (cDCC) model of Aielli [2]:

$$\begin{aligned} {\mathbf {R}}_t \,=\, & {\mathbf {Q}}_t^{*-1} {\mathbf {Q}}_t {\mathbf {Q}}_t^{*-1} \end{aligned}$$
$$\begin{aligned} {\mathbf {Q}}_t^{*-1} \,=\, & \begin{bmatrix} 1/ \sqrt{q_{11t}} & 0 & \ldots & 0 \\ 0 & 1/\sqrt{q_{22t}} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1/\sqrt{q_{nnt}} \end{bmatrix} \end{aligned}$$
$$\begin{aligned} {\mathbf {Q}}_t \,=\, & (1- \theta - \kappa ) {\bar{{\mathbf {Q}}}} + \theta \{ {\mathbf {Q}}_{t-1}^{*} \varepsilon _{t-1} \varepsilon _{t-1}' {\mathbf {Q}}_{t-1}^{*} \}+ \kappa {\mathbf {Q}}_{t-1} \end{aligned}$$

where using \(\varepsilon ^*_t = {\mathbf {Q}}_t^{*} \varepsilon _{t}\) and \(\varepsilon ^{*'}_t = \varepsilon _{t}' {\mathbf {Q}}_t^{*}\), we can simplify the previous equation:

$$\begin{aligned} {\mathbf {Q}}_t \,=\, & (1- \theta - \kappa ) {\bar{{\mathbf {Q}}}} + \theta \{ \varepsilon ^*_{t-1} \varepsilon ^{*'}_{t-1} \}+ \kappa {\mathbf {Q}}_{t-1} \end{aligned}$$
$$\begin{aligned} {\bar{{\mathbf {Q}}}} \,=\, & {\mathbb {C}}ov(\varepsilon ^*_t) = {\mathbb {E}}(\varepsilon ^*_t \varepsilon ^{*'}_t) \end{aligned}$$

where \(\kappa \ge 0\) and \(\theta \ge 0\) are scalars satisfying \(\kappa + \theta < 1\), and \({\bar{{\mathbf {Q}}}}\) represents the unconditional covariance of the standardised disturbances, also known as the long-run covariance matrix; in this work, it is replaced by the sample covariance of the residuals \(\varepsilon ^*_t\). This is called the variance targeting approach (see Engle [23] for details).
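A minimal sketch of the cDCC recursion follows, under the simplifying assumption that \({\bar{{\mathbf {Q}}}}\) is targeted with the sample covariance of the standardised residuals; the estimation machinery of Aielli [2] is not reproduced, and the function name is illustrative:

```python
import numpy as np

def cdcc_correlations(eps, theta, kappa):
    """cDCC recursion: R_t = Q*_t^{-1} Q_t Q*_t^{-1}, with
    Q_t = (1-theta-kappa) Qbar + theta eps*_{t-1} eps*'_{t-1} + kappa Q_{t-1},
    where eps*_t = Q*_t eps_t and Q*_t = diag(sqrt(diag(Q_t))).

    eps is a (T, n) array of standardised disturbances; Qbar is targeted
    with their sample covariance (a simplification for this sketch).
    """
    T, n = eps.shape
    Qbar = np.cov(eps, rowvar=False)
    Q = Qbar.copy()
    R = np.empty((T, n, n))
    for t in range(T):
        qs = np.sqrt(np.diag(Q))
        Qstar_inv = np.diag(1.0 / qs)
        R[t] = Qstar_inv @ Q @ Qstar_inv          # unit-diagonal correlation matrix
        eps_star = qs * eps[t]                    # Q*_t eps_t
        Q = (1 - theta - kappa) * Qbar + theta * np.outer(eps_star, eps_star) + kappa * Q
    return R
```

Since each update of \({\mathbf {Q}}_t\) is a positively weighted sum of positive semi-definite matrices, every \({\mathbf {R}}_t\) is a valid correlation matrix.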

The estimation for the conditional mean, conditional variance and conditional correlation parameters is realised using the three-step estimation following the Eratalay and Vladimirov [24] path. The resulting quasi-maximum likelihood estimators are consistent and asymptotically normal.Footnote 6

Network analysis

Once we have the conditional correlation matrix, we compute the partial correlation matrix using the GGM algorithm. From this partial correlation matrix, we construct our network, where each vertex will represent a firm, and the strength of the correlation between them will be represented by edges.
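A standard way to obtain the partial correlations, consistent with the GGM approach, goes through the precision matrix, i.e. the inverse of the correlation matrix. A sketch, assuming \({\mathbf {R}}_t\) is positive definite:

```python
import numpy as np

def partial_correlations(R):
    """Partial correlation matrix from a positive definite correlation
    matrix R, via its inverse (the precision matrix Omega):
    p_ij = -omega_ij / sqrt(omega_ii * omega_jj), with p_ii set to 1.
    """
    omega = np.linalg.inv(R)
    d = 1.0 / np.sqrt(np.diag(omega))
    P = -omega * np.outer(d, d)        # rescale and flip the sign of off-diagonals
    np.fill_diagonal(P, 1.0)
    return P
```

For three variables, this reproduces the textbook identity \(p_{12\cdot 3} = (r_{12} - r_{13} r_{23}) / \sqrt{(1-r_{13}^2)(1-r_{23}^2)}\).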

It should be noted that the range of partial correlations is \([-1,1]\); that is, there are negative and positive values, leading to data distortion or data loss in some instances (e.g., when adding values). For this reason, we take into account the following cases throughout this work:

  • Net data, the original partial correlation values, positive and negative.

  • Absolute data; that is, the absolute value of original partial correlation.

  • Positive data; that is, only positive values within the partial correlation.

In addition, each partial correlation matrix will also be a symmetric arrangement, and it will correspond to the adjacency matrix of its respective network. We will consider an edge in all cases except when \(a_{ij} = 0\), which means that there is no linear interdependence between i and j.

Formally, a graph or network, denoted by G, is an ordered pair of disjoint sets (V(G), E(G)), where V(G) is a non-empty set of vertices or nodes, and E(G) is the set of edges or links, where each edge is an unordered pair of distinct vertices \(\{i,j\}\) simply denoted as ij.Footnote 7 Whenever two nodes i and j form a link ij, it is said that they are adjacent with each other, and that they are neighbors.

The simplest parameters of a network G are its number of vertices, called the order of G and denoted by N, and its number of edges, called the size of G and denoted by m(G).

The most usual way to visually represent a graph is a diagram where each node is represented by a point or small circle and an edge is represented by a line that connects its end-vertices without crossing over any other vertex. Any unweighted graph of N vertices can be represented by a \(N \times N\) matrix \({\mathbf {A}}\), called its adjacency matrix, where the entry \(a_{ij}\) of \({\mathbf {A}}\) is equal to 1 if there is an edge between the nodes i and j, or otherwise \(a_{ij} = 0\).

When modelling some practical problems, we could assign a real number w(ij) to every link ij, representing its weight.Footnote 8 In such a case, graph G together with the collection of weights on its edges is called a weighted graph, and we can add this extra information into the adjacency matrix of G, so instead of 0’s and 1’s we have that \(a_{ij} = w(ij)\). This allows us to present in the adjacency matrix not only the existence of a relation between the end vertices of a link, but also take into account some characteristic that allows us to quantitatively differentiate between links, depending on the context.

In fact, there is a one-to-one correspondence between symmetric matrices and weighted graphs, which allows us to define a network from any such matrix. In our case, the partial correlation matrices will play the role of the adjacency matrices of our graphs, where each value represents how close the co-movement of two firms is after controlling for the correlations with other firms, and how similar their behaviour is over time.Footnote 9

This way, the weight w(ij) of the link ij will be equal to the partial correlation between the two corresponding firms (Fig. 3).

Fig. 3

Weighted graph G and its adjacency matrix \({\mathbf {A}}\)

In addition, in any network, a path between vertices i and j is a sequence of distinct vertices \(x_0,x_1,\dots ,x_k\), where \(i = x_0\) and \(j = x_k\), such that each pair of consecutive vertices \(x_l\) and \(x_{l+1}\) forms an edge in the network. For unweighted graphs, the integer k represents the length of such a path; that is, the number of edges contained in the path, while for weighted networks the length of the path is the sum of the weights of its edges. Any shortest path connecting i and j is called a geodesic, and its length is called the distance between its end vertices, denoted by d(ij). In other words, the distance between two vertices is the minimum length that separates one node from the other. If there is no path connecting two nodes, the distance between them is defined as infinite.

Before continuing, we first need to highlight an important aspect of a distance metric. Distance is a value that represents how closely related two objects are in the following way: the lower the value, the closer those objects are.Footnote 10 In contrast, the higher the partial correlation between two firms, the more related they are. Therefore, it is necessary to reverse the order of the partial correlations so the respective new values can be handled like a proper distance metric (Opsahl, Agneessens, and Skvoretz, [48]), where lower values correspond to closeness. For this reason, we will use the inverse of the weight for each link whenever we calculate lengths and distances; in other words, a new weight \(w^*(ij) = [w(ij)]^{-1}\) is assigned to each edge when computing any distance-related measure in the network.

From here, three relevant graph parameters are directly derived. First, the average path length of graph G, denoted by \({\overline{d}}(G)\), is defined as the average distance between every pair of nodes in the network; that is,

$$\begin{aligned} {\overline{d}}(G)& = \frac{1}{\left( {\begin{array}{c}N\\ 2\end{array}}\right) } \sum _{i \ne j} d(i,j). \end{aligned}$$

Second, the radius of G, denoted by \(\text {rad}(G)\), is the minimum value k such that there is a node whose distance to any other node is at most k. Finally, the diameter of G, denoted by \(\text {diam}(G)\), is the maximum distance between any two nodes in the graph. Clearly, \(\text {rad}(G) \le \text {diam}(G)\) and \({\overline{d}}(G) \le \text {diam}(G)\)Footnote 11 hold.

The radius and diameter tell us the minimum and maximum distance respectively that we expect to cover from one random node to reach all the other nodes (further details in A.1). In other words, they help us set boundaries that measure the distance a shock should transit to propagate over the entire network despite its starting point.
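These global parameters can be computed from the all-pairs distances on the inverse weights \(w^*(ij) = 1/w(ij)\). A sketch using the Floyd-Warshall algorithm (function names are illustrative; strictly positive weights and a connected network are assumed):

```python
import numpy as np

def distance_matrix(W):
    """All-pairs geodesic distances on a weighted network, using inverse
    weights 1/w(ij) as edge lengths (Floyd-Warshall). Absent edges
    (w = 0) are assigned infinite length."""
    N = W.shape[0]
    D = np.full((N, N), np.inf)
    mask = W > 0
    D[mask] = 1.0 / W[mask]
    np.fill_diagonal(D, 0.0)
    for k in range(N):                      # relax paths through intermediate node k
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def global_parameters(D):
    """Average path length, radius, and diameter from the distance matrix."""
    N = D.shape[0]
    avg = D[np.triu_indices(N, 1)].mean()   # mean over unordered pairs
    ecc = D.max(axis=1)                     # eccentricity of each node
    return avg, ecc.min(), ecc.max()        # radius = min ecc, diameter = max ecc
```

Note that a strong partial correlation (large w) yields a short edge, so highly correlated firms are close in the distance sense.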

It is worth mentioning that there are some graphs on which a proper distance cannot be defined. When defining a distance on a network, we are implicitly looking at an optimisation problem where we want to find the shortest or cheapest way to move between any pair of nodes. We are guaranteed to find a solution to this problem, and hence define a distance, provided that all weights assigned to the edges are positive.

Unfortunately, when dealing with negative weights, this task cannot be fulfilled whenever there is a negative cycle: a sequence of distinct vertices \(C = x_1,x_2, \dots , x_k\) such that every pair of consecutive nodes forms an edge, \(x_1x_k\) is also an edge, and \(w(C) < 0\). In such a case, the minimisation problem has no solution, since any path connected to this negative cycle can be made cheaper and cheaper by looping inside the cycle indefinitely. On the bright side, although some algorithms (like Dijkstra's) are not designed to handle negative weights and may return incorrect distances, there are others that can determine whether there is a negative cycle, namely the Bellman-Ford algorithm (Wu and Chao [57]).
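A sketch of such a check, relaxing edges from a virtual source placed at distance zero from every node (for an undirected graph, note that any single negative edge already forms a negative two-node cycle, which this check also detects):

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection on an undirected weighted
    graph given as (i, j, w) triples over nodes 0..n-1. After n-1 rounds
    of relaxation, any further improvement certifies a negative cycle."""
    dist = [0.0] * n                        # virtual source at distance 0 to all
    arcs = [(i, j, w) for i, j, w in edges] + [(j, i, w) for i, j, w in edges]
    for _ in range(n - 1):
        for i, j, w in arcs:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    return any(dist[i] + w < dist[j] for i, j, w in arcs)
```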


Centrality measures are tools that allow us to quantify the importance or influence that a vertex has over the network as a whole or in a locally delimited region.

For unweighted graphs, the degree centrality of vertex i, denoted by \(C_D(i)\), is the number of neighbours that such a node has, while for weighted graphs the degree centrality of i is the sum of the weights of all the edges incident to i;Footnote 12 in other words,

$$\begin{aligned} C_D(i) = \sum _{j} w(ij). \end{aligned}$$

This measure evaluates how strong the local connectivity or influence of each node individually is.
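For a weighted adjacency matrix, this centrality is simply a row sum; a sketch:

```python
import numpy as np

def degree_centrality(W):
    """Weighted degree (strength) of each node: C_D(i) = sum_j w(ij),
    i.e. the row sums of the weighted adjacency matrix."""
    return W.sum(axis=1)
```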

The Closeness centrality of a node is defined as the inverse of the sum of its distances to all other nodes in the network; that is

$$\begin{aligned} C_C(i) = \left[ \sum _{j\ne i} d(i,j) \right] ^{-1} = \frac{1}{\sum _{j\ne i} d(i,j)}. \end{aligned}$$

Since this value is at most equal to \(\frac{1}{N-1}\), then the normalised closeness centrality of the node i is

$$\begin{aligned} C_C^*(i) = (N-1) C_C(i). \end{aligned}$$

On the same note, the harmonic centrality of a vertex is defined as

$$\begin{aligned} C_H(i) = \sum _{j \ne i} \frac{1}{d(i,j)}, \end{aligned}$$

where \(1/d(i,j) = 0\) if the distance between i and j is infinite. The normalized harmonic centrality of a node is

$$\begin{aligned} C_H^*(i) = \frac{1}{N-1}C_H(i). \end{aligned}$$

Both closeness and harmonic centralities measure how close a node is to all remaining nodes and have quite similar behaviour. The main difference between them is that closeness centrality is not defined for disconnected graphs while harmonic centrality is. Both normalised versions lie in the real interval [0, 1], where the closer these values are to 1, the closer the respective vertex is to the others.
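Both normalised measures follow directly from a distance matrix; a sketch (in the harmonic case, \(1/\infty\) is taken as zero, so disconnected pairs contribute nothing, while closeness degenerates to zero on disconnected graphs):

```python
import numpy as np

def closeness_harmonic(D):
    """Normalised closeness and harmonic centralities from a matrix D of
    pairwise distances:
    C_C*(i) = (N-1) / sum_j d(i,j),  C_H*(i) = (1/(N-1)) sum_j 1/d(i,j)."""
    N = D.shape[0]
    off = ~np.eye(N, dtype=bool)
    closeness = (N - 1) / D.sum(axis=1)    # d(i,i) = 0 contributes nothing
    inv = np.zeros_like(D)
    inv[off] = 1.0 / D[off]                # 1/inf = 0 handles disconnected pairs
    harmonic = inv.sum(axis=1) / (N - 1)
    return closeness, harmonic
```

On a three-node path with unit edge lengths, the middle node attains the maximum value of 1 under both measures.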

Alternatively, the betweenness centrality of a node is defined as

$$\begin{aligned} C_B(i) = \sum _{s \ne i \ne t} \frac{\sigma _{st}(i)}{\sigma _{st}}, \end{aligned}$$

where \(\sigma _{st}\) denotes the number of distinct geodesics from s to t, and \(\sigma _{st}(i)\) is the number of those geodesics that contain node i. The normalised betweenness centrality of a node is

$$\begin{aligned} C_B^*(i) = \frac{2}{(N-1)(N-2)}C_B(i). \end{aligned}$$

In this case, we measure the importance of node i given its location within the topology of the network. In a sense, we are quantifying how essential i is to the connectivity of any pair of the remaining nodes i.e. if i acts (or not) as a bridge that connects the other members of the graph.
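
A brute-force sketch directly from this definition, in plain Python; it is fine only for toy graphs (production code would use Brandes' algorithm), and the star graph below is invented for illustration:

```python
# Betweenness from the definition: C_B(i) = sum over (s, t) of
# sigma_st(i) / sigma_st, counting geodesics by brute force.
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """All geodesics from s to t via level-by-level path enumeration."""
    paths, frontier = [], [[s]]
    while frontier and not paths:
        nxt = []
        for p in frontier:
            for v in adj[p[-1]]:
                if v == t:
                    paths.append(p + [v])       # reached t at this level
                elif v not in p:
                    nxt.append(p + [v])         # extend simple path
        frontier = nxt
    return paths

def betweenness(adj, i):
    total = 0.0
    for s, t in combinations(adj, 2):
        if s == i or t == i:
            continue
        geos = all_shortest_paths(adj, s, t)
        if geos:
            # fraction of geodesics passing through i as an interior node
            total += sum(i in p[1:-1] for p in geos) / len(geos)
    return total

# Star graph: hub "H" with leaves A, B, C -> H lies on every geodesic
star = {"H": ["A", "B", "C"], "A": ["H"], "B": ["H"], "C": ["H"]}
print(betweenness(star, "H"))   # 3.0: pairs (A,B), (A,C), (B,C)
print(betweenness(star, "A"))   # 0.0
```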

Given the adjacency matrix of the network, \({\mathbf {A}}\), and its largest eigenvalue, \(\lambda\), the eigenvector centrality of vertex i, denoted by \(C_E(i)\), is the i-th entry of the eigenvector \({\mathbf {x}}\), which is the unique solution to the equation

$$\begin{aligned} \mathbf {Ax} = \lambda {\mathbf {x}} \end{aligned}$$

such that \({\mathbf {x}}\) has only positive entries and \({\mathbf {x}}^\top {\mathbf {x}} = 1\). Hence \(C_E(i) = x_i\), where \({\mathbf {x}}^\top = (x_1\ x_2\ \cdots \ x_N)\). According to eigenvector centrality, a node is important in the network if its neighbours are important.
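
A sketch of this computation via power iteration in plain Python; the 4-node adjacency matrix is invented, and for a connected, non-bipartite, non-negative network the Perron-Frobenius theorem guarantees a positive leading eigenvector to which the iteration converges:

```python
# Eigenvector centrality by power iteration: x_{k+1} = A x_k / ||A x_k||.
def eigenvector_centrality(A, iters=200):
    n = len(A)
    x = [1.0 / n] * n                    # uniform start vector
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]        # keep x^T x = 1
    return x

# "Paw" graph: triangle 0-1-2 plus a pendant vertex 3 attached to 0
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
c = eigenvector_centrality(A)
print(c)  # node 0 is most central; its neighbours 1 and 2 tie; 3 is last
```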


When analysing a network, one can wonder whether certain attributes of the vertices, or their combination, play a role in the existence of edges, or the lack thereof, within the network. For instance, in social networks, friendships generally tend to be established between people with similar characteristics (gender, age, beliefs, spoken language, etc.). By contrast, on a dance floor, couples are prone to form between persons of the opposite gender. We can detect such behaviour by measuring what is called homophily: assessing whether there is a bias (in favour of or against) in the number of links between nodes with similar characteristics.

To measure any network’s bias in the distribution of edges towards one or more regions, we have to compare the relative number of edges inside such regions against the whole graph. Given the network G and disjoint subsets of vertices \(X_1,X_2,\dots ,X_k\) of sizes \(n_1,n_2,\dots ,n_k\), respectively, we first compute the maximum possible number of edges with both ends in the same subset \(X_i\), which is \(\left( {\begin{array}{c}n_i\\ 2\end{array}}\right)\) for each i. Then, we sum all of these values and divide the result by the maximum number of edges of the whole network, \(\left( {\begin{array}{c}N\\ 2\end{array}}\right)\). This quotient is called the baseline homophily ratio of the network G and is denoted by \(hr^*(G)\); in other words:

$$\begin{aligned} hr^*(G) = \left( {\begin{array}{c}N\\ 2\end{array}}\right) ^{-1} \sum _{i = 1}^k \left( {\begin{array}{c}n_i\\ 2\end{array}}\right) = \sum _{i = 1}^k \frac{n_i(n_i-1)}{N(N-1)}. \end{aligned}$$

Next, we compute the homophily ratio of the network G, denoted by hr(G), which is the quotient of the total number of edges whose ends are both in the same subset \(X_i\) to the total number of edges in the network; that is:

$$\begin{aligned} hr(G) = \sum _{i = 1}^k\frac{m_i}{m(G)}, \end{aligned}$$

where \(m_i\) is the number of links with both ends in \(X_i\).

When a network is constructed in such a way that each link has the same probability of forming regardless of the attributes of its end vertices, it is fair to expect both ratios to be quite close.Footnote 13 So, whenever the homophily ratio is significantly greater than its baseline, G is called homophilic, and when it is significantly lower, G is said to be heterophilic.Footnote 14 For example, in Fig. 4 we can see two networks with opposite homophilic behaviour. In both cases, the subsets of vertices considered are the same and coloured red, blue, and green, respectively, so the baseline homophily is equal to \(2/7 = 0.29\) for the two networks. On the other hand, the homophily ratios are \(5/7 = 0.71\) and \(3/19 = 0.16\) for the left and right networks, respectively. In other words, in the network on the left, nodes tend to create links within the groups, while in the network on the right, this tendency occurs between nodes of different groups.

Fig. 4

Examples of homophilic and heterophilic networks. In both cases three subsets of vertices are considered and marked with different colors
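
The two ratios above can be computed directly from their definitions; a minimal sketch in plain Python with invented group labels and edges (`homophily_ratios` is our own helper):

```python
# Baseline homophily ratio hr*(G) and homophily ratio hr(G).
def comb2(n):
    return n * (n - 1) // 2              # binomial coefficient C(n, 2)

def homophily_ratios(groups, edges):
    """groups: {node: label}; edges: iterable of (u, v) pairs.
    Returns (hr*, hr)."""
    sizes = {}
    for label in groups.values():
        sizes[label] = sizes.get(label, 0) + 1
    N = len(groups)
    baseline = sum(comb2(n) for n in sizes.values()) / comb2(N)
    edges = list(edges)
    within = sum(groups[u] == groups[v] for u, v in edges)
    return baseline, within / len(edges)

groups = {1: "red", 2: "red", 3: "blue", 4: "blue", 5: "blue", 6: "green"}
links = [(1, 2), (3, 4), (3, 5), (4, 5), (1, 3), (2, 6)]
hr_star, hr = homophily_ratios(groups, links)
print(hr_star)   # (1 + 3 + 0) / 15 = 4/15
print(hr)        # 4 of 6 edges are within-group -> hr > hr*: homophilic
```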

Network skeleton

To better understand and analyse a complex system, we often use different networks to represent the state of the system at different points in time, so in the end we have a collection of networks that enables us to study the evolution of the system over time. Taking that into account, we define a dynamic network as an ordered sequence of networks defined over the same set of vertices.Footnote 15 When working with weighted networks, we can interpret the weight of each link at a given moment as the strength of the relationship it represents at that particular point in time, and, no matter how strong, some of these relations tend to appear and disappear over time. In contrast, another critical aspect of any link is its resilience, which does not depend on its weight; instead, we look for edges whose presence is constant over time, leading us to the following definitions.

Fig. 5

Skeleton of a dynamic network

In a dynamic network, an edge is resilient if it appears in the network at every point during the studied period; that is, in every network of the sequence. The set of all resilient edges, together with their corresponding vertices, forms a network called the skeleton of the dynamic network. When dealing with weighted networks, we define the weight of each edge in the skeleton as the mean of the corresponding weights in the dynamic network sequence. Figure 5 shows a dynamic network sequence labelled by day, and the respective network skeleton. The weights of the edges are calculated as explained above.
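
The construction can be sketched in a few lines of plain Python; the snapshot dictionaries below are invented toy data, whereas real snapshots would be the daily partial-correlation networks:

```python
# Skeleton of a dynamic network: keep only edges present in every
# snapshot; the skeleton weight is the mean of the daily weights.
def skeleton(snapshots):
    """snapshots: list of {frozenset({u, v}): weight} dicts.
    Returns the resilient edges with their mean weights."""
    common = set(snapshots[0])
    for snap in snapshots[1:]:
        common &= set(snap)              # an edge must appear every day
    return {e: sum(s[e] for s in snapshots) / len(snapshots) for e in common}

day1 = {frozenset({"A", "B"}): 0.4, frozenset({"B", "C"}): 0.2}
day2 = {frozenset({"A", "B"}): 0.6, frozenset({"A", "C"}): 0.1}
day3 = {frozenset({"A", "B"}): 0.5, frozenset({"B", "C"}): 0.3}
print(skeleton([day1, day2, day3]))   # only A-B survives, mean weight 0.5
```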

Results and analysis

From the cDCC-GARCH model, and after applying the GGM, we obtained partial correlation matrices related to 1,201 days. From here, we can construct 1,201 individual networks, one per day; this grants us a broader scope for depicting the behaviour of the dynamic network over time. In addition, we analysed the period around the Covid-19 pandemic, where we considered four stages, Sans-Covid, Pre-Covid, During-Covid and Post-Covid. The corresponding periods are from January 2016 to October 2019, November 2019 to February 2020, March to June 2020, and July to September 2020, which throughout this paper we will refer to as Sans,Footnote 16 Pre, Dur, and Post, respectively.

For a better visualization, understanding and interpretation of each network, we set the partial correlations in the interval (\(-\,\)0.0558, 0.0558) to zero. The cut-off value 0.0558 corresponds to a 10% significance level in Fisher’s test for the significance of partial correlations (see Fisher, [26]).
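
As a rough illustration of where such a cut-off comes from (not the paper's exact computation, whose effective degrees of freedom are not restated here): under Fisher's z-transform, \(\operatorname{arctanh}(r)\sqrt{df}\) is approximately standard normal, so the significance threshold for \(|r|\) is recovered by inverting the transform. The df value below is a placeholder chosen only to land near 0.0558.

```python
# Fisher-transform significance cut-off for a correlation coefficient.
import math

def fisher_cutoff(df, z_crit=1.6449):
    """Smallest |r| significant under Fisher's z-test; z_crit = 1.6449 is
    the standard normal 95th percentile (two-sided test at the 10% level)."""
    return math.tanh(z_crit / math.sqrt(df))

# placeholder df: with df near 867 the cut-off comes out close to 0.0558
print(round(fisher_cutoff(867), 4))
```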

While calculating the distances in the network, we encountered negative cycles when using the net data, which makes it impossible to measure distances. To avoid these negative cycles, it is necessary to consider only positive or absolute weights when calculating any distance-related parameter (radius, diameter, average distance, betweenness, closeness, and harmonic centralities).

Fig. 6

Weights of Positive and Negative Edges. Source: Authors’ calculations

Global measures

A first glimpse into the network structure can be obtained by analysing the number of edges and their weights (Table 1). Over the 1,201 days, the mean number of edges in the network was 13,227, and the count always stayed between 22.6% and 24.7% of the total possible edges (54,615).

Table 1 Edge weight and edge count

It is worth noticing that the proportion of positive weighted edges to the total is remarkably stable, since it remained around 54.7% during the whole period, deviating by no more than 0.57%, which implies that the numbers of negative and positive edges are closely related. This relation extends to their weights, where positive edges represent 56.8% of the total, with a maximum deviation of 0.62%. The negative and positive edges almost mirror each other, as shown in Fig. 6, where we plot the aggregate weights over time.

Fig. 7

Partial correlation distribution. On the right side, subfigures zoom in on the tails of the distribution: above, the left tail, where the maximum negative value is \(-\,\)0.24; below, the right tail, where the maximum positive value is 0.68. Source: Authors’ calculations

In Fig. 7, we can observe that almost half of the relations in each network are negative; in fact, the largest negative value is \(-\,\)0.24. The proportion of negative weights affects the net weights, since they counterbalance the strength of instability phenomena. Moreover, Fig. 10 shows that the positive weights and the absolute values of the weights have similar behaviour, just on a different scale.

On the other hand, we can observe a meaningful drop in the average path length before the beginning of the Pre period. This decline had been gradual since May 2018 and reached its lowest value in February 2019. Again, in the Dur period, there is a sudden increase followed by a sudden decay in the length of the shortest paths, as shown in Figs. 11 and 12. This behaviour suggests that although there was no increase in connectedness, there was an alternation in the intensity of the existing relationships. In the network of positive values, we find no visible change in the behaviour of the radius and diameter over time. In the network of absolute values, specifically for the radius, a more pronounced peak is perceived just inside the Dur dates.

On average, the positive and absolute networks have an average distance, radius, and diameter of 16.7, 20.8, and 25.8, and of 18.5, 23.3, and 29.2, respectively. We notice in Table 2 that the radius is greater than the average distance in every case. This is important, given that the radius is the minimum distance that must be travelled from the best-located vertex to cover the network. Therefore, for this network, the radius and diameter are needed to determine such boundaries; in addition to the average distance, these parameters give us a broader description of the network’s topology.

Table 2 Global measures

Local measures

To analyse the centralities of the dynamic networks (with positive and absolute weights), we took as a basis the average centrality per day of the degree, closeness, harmonic, betweenness, and eigenvectorFootnote 17 centralities. In the case of degree centrality, we also calculated the net value.

We considered the mean of each centrality measure by industry, obtaining 11 centrality values for each industry. The industry with the highest value of each measure constitutes the Top 1 by industry; we applied the same procedure to obtain the Top 1 by country. From the Top 1 centralities by industry, shown in Table 3, three industries stand out: Computers & Peripherals and Office Electronics (THQ) for the net and positive degree centralities; Semiconductors & Semiconductor Equipment (SEM) for both harmonic centralities; and Paper & Forest Products (FRP) for both betweenness centralities.

In the case of the Top 1 by country, in Table 3, Spain excels in seven centrality measures (\(C_E^{abs}\), \(C_E^{+}\), \(C_D^{abs}\), \(C_D^{+}\), \(C_C^{+}\), \(C_H^{abs}\) and \(C_H^{+}\)), hosting 7 of the 11 firms with the highest centralities.

Table 3 Top 1 centralities, by industry and country

Considering the absolute and positive networks, from the Top 20 of the highest centralities,Footnote 18 only nine and seven firms, respectively, transmitted positive and negative effects simultaneously (see Table 4). Of these, only three, STERV.HE, CABK.MC and SSE.L, appear in all eleven tables simultaneously.

Table 4 Simultaneous effects of centralities in the Top 20

Taking into account market capitalization by industry, the twelve most capitalised industries represent 59.8% of the capitalization and constitute 45.9% of the firms (Table 26). By country, the United Kingdom, France (Tables 27, 28, 29), Switzerland, and Germany represent 70.7% of the market capitalization and host 62.2% of the firms (Table 30). We can notice that, in both partitions, the countries or industries with the highest centralities are not precisely the most capitalised.

On the other hand, when analysing the network’s connectedness again by its constituents, the United Kingdom’s connections remained unaffected in number and strength by the pandemic. France and Germany show a slight increase in the number and strength of connections in the Pre and Dur periods. Austria was the country that strengthened its relations the most, although it has only one connection. We present these results in Table 31.

In addition, we observe in Table 31 that all but two countries, Ireland and Luxembourg, have a standardised number of edges greater than the average per day for the whole network, 24.2%. This is a clear indication of homophilic behaviour. Therefore, we reviewed the number of connections between industries (see Table 32). We took 12 industries, representing 50% of the index constituents, and noticed the same behaviour.


To generate the homophily profile, we established an increasing sequence of cut-offs to obtain the links representing the stronger relations between firms. It is worth mentioning that these cut-offs are applied to the absolute value of the edge weight; for instance, two links with weights 0.4 and \(-\,\)0.4 represent equally strong relations, but not of the same kind. Since the homophily ratio and profile only take into account the magnitude of the links, the net and absolute networks yield the same homophilic representation, regardless of the subsets of nodes considered. Moreover, we know that the partial correlations lie in the interval \([-0.24,0.68]\); therefore, for cut-offs greater than \(|-0.24|\), the positive network also coincides with the net and absolute ones. We studied homophily over two distinct partitions of the vertex set of the network: by country and by industry. In both cases, we calculated the homophily ratio for each of the 1,201 days of the period.

Dividing the firms by country, we obtain a homophily baseline of 0.125 and the homophily ratios exhibited in Table 5. Not only does each homophily ratio exceed the baseline, but the ratio also increases in each network as we restrict attention to stronger edges. Hence, once we reach a cut-off of 0.45, every existing link in every daily network is between firms belonging to the same country.Footnote 19

Table 5 Homophily ratios by country

Considering the division of firms by industry, in Table 6, we have a baseline homophily of 0.028 and, as in the previous case, all homophily ratios are above the baseline. Again, as the strength of the links considered increases, so does the homophily, reaching full homophily in every daily network with a cut-off of 0.55.

Table 6 Homophily ratios by industry

As a result, we found that the stronger relations tend to be established between firms that belong to the same country and industry. This finding can also be observed visually in Figs. 13 and 14, where most of these strong connections are within sectors or within countries.Footnote 20


We consider the skeletons of each data type encompassing the whole time frame. We also construct the skeletons for each of the Covid-related periods (Whole, Sans, Pre, Dur, and Post) to examine whether there is further evidence of the impact of the pandemic on the topology of the network.

Table 7 Daily Networks—Edge Statistics

When looking into the daily networks’ average statistics (Table 7), we notice no particular change in their number of edges or total weight.

Since the Pre and Dur periods each comprise exactly 84 days, we divided the Sans period into 84-day intervals (from March 2016 to February 2020). We computed the mean, standard deviation, minimum, and maximum over the first twelve uniformly divided periods and compared these against the values of the Dur skeleton (Table 8): the measures of the Dur period lie above the maximum or below the minimum observed in the previous periods. In particular, the edge count and weight of the Dur period are higher than the corresponding maxima of the other periods, while all its other measures are lower than the respective minima, with a single exception: the diameter of the absolute data.

Table 8 84-Day Skeletons—Global Measures

So, even though there is no remarkable change in the edge count and weight of the overall network (Table 7), it is noteworthy that the number of resilient edges in the Dur period is over 14% higher than the maximum over the previous 84-day skeleton intervals (Table 8). This implies that the number of relations did not substantially change, but their stability increased.

While studying the centralities of the skeletons corresponding to the Covid periods, we observed two types of behaviour: the rankings of the degree and eigenvector centralities were not very stable, while those of the closeness, harmonic, and betweenness centralities were quite stable during all periods.

As we can see in Table 9, no firm appears simultaneously in the top 20 for the three types of data. When we consider the top 30 rankings, one firm, CABK.MC, achieves simultaneous occurrence; its net degree centralities are 1.24, 1.32, 1.5, 1.74, and 1.62 for the Total, Sans, Pre, Dur and Post periods, respectively.

Table 9 Simultaneous Top 20 (Degree Centrality)

In contrast, when considering all types of data available for the eigenvector centrality in Table 10, three firms appear simultaneously in the top 20 rankings, CABK.MC, CFR.SW, and DGE.L.

Table 10 Simultaneous Top 20 (Eigenvector Centrality)

Note that CABK.MC appears simultaneously in the degree and eigenvector centrality rankings (positive and absolute networks), which makes it one of the most influential firms in the skeleton network.

In contrast, five firms, BBVA.MC, CABK.MC, CFR.SW, GLE.PA and SSE.L, appear in the top ten of the closeness centrality ranking for every period and every data type (see Table 11).

For the harmonic centrality, six firms consistently appear in all top ten rankings, namely CFR.SW, BBVA.MC, CABK.MC, GLE.PA, STERV.HE and UPM.HE (Table 12). Moreover, BBVA.MC, CABK.MC, CFR.SW, CSGN.SW, and STERV.HE are always present in the top ten of betweenness centrality, regardless of data type and period (Table 13). Thus, three firms, BBVA.MC, CABK.MC, and CFR.SW, appear in the top ten of three centralities for every skeleton by period.

Table 11 Simultaneous Top 10 (Closeness Centrality)
Table 12 Simultaneous Top 10 (Harmonic Centrality)
Table 13 Simultaneous Top 10 (Betweenness Centrality)

Finally, as in the case of the daily networks in Sect. 5.3, we observed that the stronger ties in the network behave homophilically, since the homophily ratios are in every instance greater than the respective baselines of 0.125 for countries and 0.028 for industries. When taking different thresholds for edge strength, we observe that the homophily ratio increases as the cut-off increases (see Figs. 15 and 16). Moreover, comparing the homophily ratios of the skeletons (Table 14) and the daily networks (Tables 5 and 6), we observed that the skeletons always have greater homophily ratios than the mean of their respective daily networks. When considering the partition by industries, the homophily in the skeletons exceeds the maximum homophily of the daily networks for each cut-off. Therefore, we can say that resilient edges tend to be more homophilic; in other words, stable relations are more likely to form between firms sharing the same country and industry.Footnote 21

Table 14 Homophily ratios over the skeletons


In this paper, we analysed the topology of the network derived from the relationships among the firms constituting the S&P Europe 350 index, using their adjusted closing prices from January 2016 to September 2020. For this, we calculated local and global parameters of the network. What distinguishes this work from similar papers in the literature is the focus on the homophily profile of the network and on the resilience of the connections, i.e. the network skeleton. The analysis of centralities was carried out through two approaches: first considering the daily networks, and second using the skeletons, i.e. the most resilient relations. In the first approach, only three firms were found simultaneously in the top 20 of each of the eleven centralities calculated, so these are the firms that best transmitted positive and negative effects during the whole period: Scottish & Southern Energy (SSE.L), CaixaBank (CABK.MC), and Stora Enso OYJ R (STERV.HE). They belong to the Electric Utilities, Banks, and Paper & Forest Products industries and are located in the United Kingdom, Spain, and Finland, respectively. In the second approach, the top 20 rankings of the degree and eigenvector centralities showed little simultaneity across data types, indicating a lack of stability; at the same time, the closeness, harmonic, and betweenness rankings were quite stable during all periods, and three firms managed to appear simultaneously in each top 10 ranking. These firms are Banco Bilbao Vizcaya Argentaria S.A. (BBVA.MC) from Spain, CaixaBank (CABK.MC) from Spain, and Compagnie Financière Richemont S.A. (CFR.SW) from Switzerland. The first two are from the Banks industry and the third from Textiles, Apparel & Luxury Goods.

The degree and eigenvector centralities of the companies identify the most influential ones, which are very likely the ones that guide the direction in which the network will move in the event of an economic shock. Closeness centrality, in turn, helps to see which companies will transmit a new trend faster, while betweenness centrality helps to locate entities in key positions that act as intermediaries, making their connections necessary links for the transmission of a shock. These inputs help complement a company’s risk profile. By ranking the entities with the highest values in each category, we find the most influential entities, pointing out the systemically risky ones. Overall, with this information, policymakers can identify, and pay special attention to if necessary, the sectors whose policies need strengthening or loosening. This information is useful not only as a macroprudential policy instrument but also as a micro-level tool, since it can help companies make better networking decisions to diminish their systemic risk.

Moreover, using the 84-day skeleton construction, we detected an increase of 20% in the number of resilient relationships during the Covid-19 pandemic, while the total number of edges did not show a similar change. However, we could not conclude whether there was a significant change, either in the number of edges or in the centrality values, over time.

The financial network turned out to be highly homophilic, and in fact, a direct relationship between the partial correlation coefficient and the homophilic ratio was discovered, where the stronger relations tend to be established between firms that belong to the same country and industry. On the same note, homophily ratios of the skeletons proved to be greater than in the daily networks, which suggests resilient relations have a larger proclivity to be homophilic than unstable ones. Homophily and resilience could provide very useful insights for investors in terms of hedging their portfolios. For example, stocks that are homophilic to their sectors would experience large losses when these sectors receive negative shocks. Since the resilient relations are more likely to be homophilic, portfolios including such stocks may take time to recover from such losses.

This paper can be extended in multiple ways. Although the average distance, radius, and diameter help us better understand the force a shock needs to exert, and the distance it needs to travel, to trigger a cascade effect over a network, the fact that the radius is always greater than the average distance in this case makes us wonder whether an analysis of average eccentricities would be more useful for systemic risk analysis than the average distance. In addition, estimating the clustering coefficient could help measure the density of the neighbourhoods of the vertices and of the graph, complementing the topological analysis. Furthermore, the skeleton could be generalised to allow flexibility in the absence of connections. Finally, we considered an undirected network, which prevents us from deriving the causality of the relationships; investigating causality would be fruitful for a better understanding of the network and its reaction in the event of systemic risk.


  1. In our work, partial correlations help measure the degree of association between every pair of firms, removing the effect of a third firm; in other words, eliminating third effects or confounders. For example, the stock market returns of two very distant countries may look highly correlated. However, this correlation is mainly due to the fact that both countries follow the US stock market closely. When the effect of the US stock market is “partialled out”, the true correlation between these two stock markets might be close to zero. Some discussion on partial correlations can be found in Anufriev and Panchenko [8] and Eratalay and Vladimirov [24]. A closely related work is Kenett et al. [39], which explains partial correlations of stock returns.

  2. Homophily has been deeply explored since the 1920s up to our days and in different fields, such as segregation, health, and learning, to mention a few (Wellman [55], Currarini, Jackson, and Pin [15], Flatt, Agimi, and Albert [27], Golub and Jackson [30]).

  3. September 2020 is chosen because this is when the data were collected. This date was also convenient since it also corresponds to the period before pharmaceutical intervention via vaccinations started in December 2020.

  4. For this sample size, 0.42% corresponds to four or five observations. The maximum percentage of outliers was 1.8%, while the median was 0.41%. The percentage was above 1% for only four firms. Removing these outliers has a similar effect on the residuals as including dummy variables for them in the conditional mean equation; please see also Kiraci [41]. Since our attention is on the conditional correlation dynamics of these residuals, and not on these outlier dummy variables, we believe this is acceptable.

  5. Given the cross-sectional size of our data, the model with an unobservable factor would be very parameter intensive and infeasible.

  6. Discussion and examples of such three step estimation can also be found in Bauwens, Laurent, and Rombouts [11], Carnero and Eratalay [14], Almeida, Hotta, and Ruiz [6].

  7. Although edges that go from one vertex to itself (called loops) can be defined, they have no useful interpretation within the scope of this study.

  8. For instance, such values could represent the cost of communicating or the distance between two locations, or the flow capacity in a transportation network, or the strength of the relationship between the elements etc.

  9. Notice that, since the adjacency matrix is symmetrical, we cannot infer any causality within the network. Rather it presents the contemporaneous reactions of stock returns to different financial or economic shocks.

  10. To get into the mathematical theory behind metric spaces, please see Willard [56].

  11. The radius and average path length cannot be related to an inequality, since there are graphs whose radius is greater than, or less than, or equal to the average path length. See Fig. 8.

  12. Graph theorists refer to the degree centrality in unweighted graphs simply as degree and in weighted graphs as the weight of the vertex. The existence of such a solution is guaranteed by the Perron–Frobenius Theorem, see Horn and Johnson [32].

  13. Clearly both will differ, so a statistical significance test is often used to quantify how significant their difference is. In our case, we will not use such a test since we will focus on how the difference of the homophily ratios is related to the strength of the relations of the network by considering a sequence of increasing cut-offs to the weight of the edges.

  14. Sometimes referred to as inverse homophily (Easley and Kleinberg [20]).

  15. In general, the number of vertices is not set from the beginning since vertices can pop in and out of existence depending on the analysed phenomenon; in our case, the set is fixed as we consider the same collection of firms for the whole period under study.

  16. The word ‘sans’ is a preposition (also a noun) that means ‘without’.

  17. The obtained net partial correlation matrices with cut-off are not positive definite for all periods, and the obtained eigenvector centralities present positive and negative values, which does not allow us to rank the firms according to their influence on the network.

  18. The comprehensive Top 20 highest centralities are in Tables: 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, and 25.

  19. Recall that using Fisher’s transformation we applied a cut-off of 0.0558 from the beginning; hence, the first cut-off in Tables 5 and 6 corresponds to all the edges in the studied networks.

  20. A cut-off value equal to 0.3 was applied in these networks, i.e., only links between firms whose partial correlation was greater than or equal to 0.3 were drawn. In each figure, there are networks for the Pre, Dur, and Post periods where the colour of a node corresponds to the country or industry that it belongs to, respectively.

  21. Notice that this is a network derived from the relations of the stock returns. In this context, an edge is formed between two stocks because they reacted similarly or oppositely to some news. Whether there is trade or some other exchange between these firms is outside of the focus of this paper.


\(C_C^{+}(i)\) :

Positive closeness centrality of vertex i

\(C_D^{abs}(i)\) :

Absolute degree centrality of vertex i

\(C_D^{net}(i)\) :

Net degree centrality of vertex i

\(C_D^{+}(i)\) :

Positive degree centrality of vertex i

\(C_E^{abs}(i)\) :

Absolute eigenvector centrality of vertex i

\(C_E^{+}(i)\) :

Positive eigenvector centrality of vertex i

\(C_H^{abs}(i)\) :

Absolute harmonic centrality of vertex i

\(C_H^{+}(i)\) :

Positive harmonic centrality of vertex i


\(d(i,j)\) :

Distance from node i to node j

\({\overline{d}}(G)\) :

Average path length or average distance of graph G

\(\text {diam}(G)\) :

Diameter of graph G


\(hr(G)\) :

Homophily ratio of graph G

\(hr^*(G)\) :

Homophily baseline ratio of graph G


\(m(G)\) :

Number of edges of the network G

N :

Number of vertices of the network

\(\text {rad}(G)\) :

Radius of graph G


\(w(ij)\) :

Weight of the edge ij


\(w(G)\) :

Weight of the graph G


  1. Acemoglu, Daron, Ozdaglar, Asuman, & Tahbaz-Salehi, Alireza. (2015). Systemic risk and stability in financial networks. American Economic Review, 105(2), 564–608.


  2. Aielli, Gian Piero. (2013). Dynamic conditional correlation: on properties and estimation. Journal of Business & Economic Statistics, 31(3), 282–299.


  3. Albert, Réka., Jeong, Hawoong, & Barabási, Albert-László. (1999). Diameter of the world-wide web. Nature, 401(6749), 130–131.


  4. Allen, Franklin, & Babus, Ana. (2009). Networks in finance. In The network challenge: Strategy, profit, and risk in an interlinked world, p. 367.

  5. Allen, Franklin, & Gale, Douglas. (2000). Financial contagion. Journal of political economy, 108(1), 1–33.


  6. de Almeida, Daniel, Hotta, Luiz K., & Ruiz, Esther. (2018). MGARCH models: Trade-off between feasibility and flexibility. International Journal of Forecasting, 34(1), 45–63.


  7. Ambros, Maximilian, et al. (2021). COVID-19 pandemic news and stock market reaction during the onset of the crisis: evidence from high-frequency data. Applied Economics Letters, 28(19), 1686–1689.


  8. Anufriev, Mikhail, & Panchenko, Valentyn. (2015). Connecting the dots: Econometric methods for uncovering networks with an application to the Australian financial institutions. Journal of Banking & Finance, 61, S241–S255.


  9. Barigozzi, Matteo, & Brownlees, Christian. (2019). Nets: Network estimation for time series. Journal of Applied Econometrics, 34(3), 347–364.


  10. Barros Pereira, Hernane Borges de, et al. (2022). Network dynamic and stability on European Union. Physica A: Statistical Mechanics and its Applications, 587, 126532.

  11. Bauwens, Luc, Laurent, Sébastien., & Rombouts, Jeroen VK. (2006). Multivariate GARCH models: a survey. Journal of applied econometrics, 21(1), 79–109.


  12. Billio, Monica, et al. (2012). Econometric measures of connectedness and systemic risk in the finance and insurance sectors. Journal of financial economics, 104(3), 535–559.


  13. Caccioli, Fabio, Barucca, Paolo, & Kobayashi, Teruyoshi. (2018). Network models of financial systemic risk: a review. Journal of Computational Social Science, 1(1), 81–114.


  14. Carnero, M Angeles, & Eratalay, M Hakan. (2014). Estimating VAR-MGARCH models in multiple steps. Studies in Nonlinear Dynamics & Econometrics, 18(3), 339–365.


  15. Currarini, Sergio, Jackson, Matthew O., & Pin, Paolo. (2009). An economic model of friendship: Homophily, minorities, and segregation. Econometrica, 77(4), 1003–1045.


  16. Demirer, Mert, et al. (2018). Estimating global bank network connectedness. Journal of Applied Econometrics, 33(1), 1–15.


  17. Diebold, Francis X., & Yilmaz, Kamil. (2009). Measuring financial asset return and volatility spillovers, with application to global equity markets. The Economic Journal, 119(534), 158–171.


  18. Diebold, Francis X., & Yılmaz, Kamil. (2014). On the network topology of variance decompositions: Measuring the connectedness of financial firms. Journal of Econometrics, 182(1), 119–134.


  19. Diebold, Francis X., & Yilmaz, Kamil. (2015). Trans-Atlantic equity volatility connectedness: US and European financial institutions, 2004–2014. Journal of Financial Econometrics, 14(1), 81–127.


  20. Easley, D., & Kleinberg, J. (2010). Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press. ISBN: 9781139490306.

  21. Elliott, Matthew, Golub, Benjamin, & Jackson, Matthew O. (2014). Financial networks and contagion. American Economic Review, 104(10), 3115–53.


  22. Elliott, Matthew, Hazell, Jonathon, & Georg, Co-Pierre. (2020). Systemic risk-shifting in financial networks. Available at SSRN 2658249.

  23. Engle, Robert. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics, 20(3), 339–350.


  24. Eratalay, M Hakan, & Vladimirov, Evgenii V. (2020). Mapping the stocks in MICEX: Who is central in the Moscow Stock Exchange? Economics of Transition and Institutional Change, 28(4), 581–620.


  25. Faloutsos, Michalis, Faloutsos, Petros, & Faloutsos, Christos. (1999). On power-law relationships of the internet topology. ACM SIGCOMM computer communication review, 29(4), 251–262.


  26. Fisher, Ronald Aylmer. (1924). The distribution of the partial correlation co-efficient.

  27. Flatt, Jason D., Agimi, Yll, & Albert, Steve M. (2012). Homophily and health behavior in social networks of older adults. Family & Community Health, 35(4), 312.

  28. Freixas, Xavier, Parigi, Bruno M., & Rochet, Jean-Charles. (2000). Systemic risk, interbank relations, and liquidity provision by the central bank. Journal of Money, Credit and Banking, 611–638.

  29. Gai, Prasanna, & Kapadia, Sujit. (2010). Contagion in financial networks. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(2120), 2401–2423.


  30. Golub, Benjamin, & Jackson, Matthew O. (2012). How homophily affects the speed of learning and best-response dynamics. The Quarterly Journal of Economics, 127(3), 1287–1338.


  31. Goodell, John W., & Huynh, Toan Luu Duc. (2020). Did Congress trade ahead? Considering the reaction of US industries to COVID-19. Finance Research Letters, 36, 101578.


  32. Horn, Roger A., & Johnson, Charles R. (2012). Matrix analysis. Cambridge University Press.


  33. Huynh, Toan Luu Duc., Foglia, Matteo, & Doukas, John A. (2022). COVID-19 and tail-event driven network risk in the eurozone. Finance Research Letters, 44, 102070.


  34. Huynh, Toan Luu Duc., et al. (2021). Feverish sentiment and global equity markets during the COVID-19 pandemic. Journal of Economic Behavior & Organization, 188, 1088–1108.


  35. Iori, Giulia, & Mantegna, Rosario N. (2018). “Empirical analyses of networks in finance”. In: Handbook of Computational Economics. Vol. 4. Elsevier, pp. 637–685.

  36. Jackson, Matthew O. (2011). “An overview of social networks and economic applications”. In: Handbook of social economics. Vol. 1. Elsevier, pp. 511–585.

  37. Karkowska, Renata, & Urjasz, Szczepan. (2021). Connectedness structures of sovereign bond markets in Central and Eastern Europe. International Review of Financial Analysis, 74, 101644.


  38. Keeling, Matt J., & Eames, Ken TD. (2005). Networks and epidemic models. Journal of the Royal Society Interface, 2(4), 295–307.


  39. Kenett, Dror Y., et al. (2010). Dominating clasp of the financial sector revealed by partial correlation analysis of the stock market. PloS one, 5(12), e15032.


  40. Killworth, Peter D., & Bernard, H Russell. (1978). The reversal small-world experiment. Social networks, 1(2), 159–192.


  41. Kiraci, Arzdar. (2013). Confirmation, Correction and Improvement for Outlier Validation using Dummy Variables. International Econometric Review, 5(2), 43–52.


  42. Kuzubaş, Tolga Umut, Ömercikoğlu, Inci, & Saltoğlu, Burak. (2014). Network centrality measures and systemic risk: An application to the Turkish financial crisis. Physica A: Statistical Mechanics and its Applications, 405, 203–215.


  43. Lewis, Ted G. (2011). Network science: Theory and applications. Wiley.


  44. Martinez-Jaramillo, Serafin, et al. (2014). An empirical study of the Mexican banking system’s network and its implications for systemic risk. Journal of Economic Dynamics and Control, 40, 242–265.


  45. Milgram, Stanley. (1967). The small world problem. Psychology today, 2(1), 60–67.


  46. Millington, Tristan, & Niranjan, Mahesan. (2020). Partial correlation financial networks. Applied Network Science, 5(1), 1–19.


  47. Millington, Tristan, & Niranjan, Mahesan. (2021). Stability and similarity in financial networks—How do they change in times of turbulence? Physica A: Statistical Mechanics and its Applications, 574, 126016.

  48. Opsahl, Tore, Agneessens, Filip, & Skvoretz, John. (2010). Node centrality in weighted networks: Generalizing degree and shortest paths. Social networks, 32(3), 245–251.


  49. Pearson, Ronald K., et al. (2015). The class of generalized Hampel filters. In 2015 23rd European Signal Processing Conference (EUSIPCO) (pp. 2501–2505). IEEE.

  50. Pereira, Eder Johnson de Area Leão, et al. (2019). Multiscale network for 20 stock markets using DCCA. Physica A: Statistical Mechanics and its Applications, 529, 121542.

  51. Plümper, Thomas, & Neumayer, Eric. (2020). Lockdown policies and the dynamics of the first wave of the Sars-CoV-2 pandemic in Europe. Journal of European Public Policy, 1–21.

  52. Solé, Ricard V., et al. (2010). Language networks: Their structure, function, and evolution. Complexity, 15(6), 20–26.


  53. Wang, Gang-Jin., et al. (2017). Extreme risk spillover network: application to financial institutions. Quantitative Finance, 17(9), 1417–1433.


  54. Watts, Duncan J., & Strogatz, Steven H. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442.

  55. Wellman, Beth. (1926). The school child’s choice of companions. The Journal of Educational Research, 14(2), 126–132.


  56. Willard, Stephen. (2012). General topology. Courier Corporation.


  57. Wu, B.Y., & Chao, K.M. (2004). Spanning Trees and Optimization Problems. Discrete Mathematics and Its Applications. CRC Press. ISBN: 9780203497289.

  58. Xie, Lijuan, Wang, Mei, Huynh, Toan Luu Duc. (2021). “Trust and the stock market reaction to lockdown and reopening announcements: A cross-country evidence”. In: Finance Research Letters, p. 102361.

  59. Zachary, Wayne W. (1977). An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33(4), 452–473.



Author information



Corresponding author

Correspondence to Ariana Paola Cortés Ángel.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The financial support of the GrowInPro project (Horizon 2020, grant agreement No. 822781) is gratefully acknowledged. We are very grateful to Luca Alfieri, Jesús Alva Samos and four anonymous referees for their very useful comments and suggestions.



Diameter, radius and average path length

The graphs shown below are examples in which the radius and the average distance satisfy different inequalities. In each of them, the top vertex can reach any other vertex in at most \(\text {rad}(G_i)\) steps for \(i = 1,2,3\).

$$\begin{aligned} 1 \,=\, & \text {rad}(G_1) < {\overline{d}}(G_1) = 1.1 \\ 2 \,=\, & \text {rad}(G_2) > {\overline{d}}(G_2) = 1.5 \\ 2 \,=\, & \text {rad}(G_3) = {\overline{d}}(G_3) = 2 \end{aligned}$$
Fig. 8
figure 8

Graphs in which the radius and the average distance satisfy different order relations
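These three global measures can be computed directly. The sketch below uses networkx on a star graph, which is only an illustration and not one of the graphs \(G_1\)–\(G_3\) in Fig. 8:

```python
import networkx as nx

# A star: the hub reaches every leaf in one step (radius 1),
# while two leaves need two steps to reach each other (diameter 2).
G = nx.star_graph(5)  # hub 0 plus leaves 1..5

print(nx.radius(G))    # 1
print(nx.diameter(G))  # 2
print(nx.average_shortest_path_length(G))  # 5/3, strictly between them
```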

Radius and diameter bound how strong a shock must be to guarantee that its effects reach every vertex. Since the node that will be the epicenter of a future shock is unknown, we should expect the shock to start at a random vertex. If the shock's reach exceeds the diameter, then every node will be affected, regardless of which one is the epicenter. When the strength of the shock is lower, it may fail to spread throughout the network, depending on which node is hit first. This is where the radius comes in handy: it is the minimum reach that guarantees the existence of a vertex from which the shock can travel to all other vertices.

For example, in the graph depicted below, a shock with the strength to travel a distance of 3 must hit the center vertex in order to spread over the entire network. If the shock hits any other node first, some nodes will ‘escape’ its effects. By contrast, if the shock can travel a distance of 6 (or more), then any vertex can serve as the epicenter from which the shock reaches the whole network (Fig. 9).

Fig. 9
figure 9

Red vertex is the epicenter of the shock in each case. All other vertices are labelled with their distance to the red vertex
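As a stand-in for the graph of Fig. 9 (which is not reproduced here), assume for illustration a simple path of seven vertices: its central vertex has eccentricity 3 (the radius), while its endpoints have eccentricity 6 (the diameter), matching the two shock scenarios above:

```python
import networkx as nx

# Hypothetical stand-in for Fig. 9: a path 0-1-2-3-4-5-6.
P = nx.path_graph(7)
ecc = nx.eccentricity(P)  # farthest distance from each vertex

print(ecc[3], nx.radius(P))    # 3 3 : a shock of reach 3 from the center covers P
print(ecc[0], nx.diameter(P))  # 6 6 : reach 6 covers P from any vertex
```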

Tables and figures

Tables and figures appear in this section in the same order in which they are mentioned in the main text.

From Section 5.1

Fig. 10
figure 10

Weights over time. Notice there is no change in the behaviour of net weight, positive weight, and absolute weight in the Covid-related periods. Source: Authors’ calculations

Fig. 11
figure 11

Global measures over time. Diameter, radius, average distance, and the normalised number of edges, where positive values are considered. Source: Authors’ calculations

Fig. 12
figure 12

Global measures over time. Diameter, radius, average distance, and the normalised number of edges, where absolute values are considered. Notice that the normalised number of edges is the same for the net scenario. Source: Authors’ calculations

From Section 5.2

Table 15 Average net degree centrality (\(C_D^{net}\)), 2016–2020
Table 16 Average absolute degree centrality (\(C_D^{abs}\)), 2016–2020
Table 17 Average positive degree centrality (\(C_D^{+}\)), 2016–2020
Table 18 Average absolute closeness centrality (\(C_C^{abs}\)), 2016–2020
Table 19 Average positive closeness centrality (\(C_C^{+}\)), 2016–2020
Table 20 Average absolute harmonic centrality (\(C_H^{abs}\)), 2016–2020
Table 21 Average positive harmonic centrality (\(C_H^{+}\)), 2016–2020
Table 22 Average absolute eigenvector centrality (\(C_E^{abs}\)), 2016–2020
Table 23 Average positive eigenvector centrality (\(C_E^{+}\)), 2016–2020
Table 24 Average absolute betweenness centrality (\(C_B^{abs}\)), 2016–2020
Table 25 Average positive betweenness centrality (\(C_B^{+}\)), 2016–2020
Table 26 Average degree centralities, analysis by industry, 2016–2020. Part I
Table 27 Average degree centralities, analysis by industry, 2016–2020. Part II
Table 28 Average degree centralities, analysis by industry, 2016–2020. Part III
Table 29 Average degree centralities, analysis by industry, 2016–2020. Part IV
Table 30 Average degree centralities, analysis by country, 2016–2020
Table 31 Network description by country
Table 32 Normalised number of edges per industry
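The net, absolute, and positive variants that distinguish Tables 15–17 can be illustrated with degree centrality on a toy signed-weight matrix. The values below are invented; the paper's centralities are of course computed on the estimated partial-correlation networks:

```python
import numpy as np

# Signed partial-correlation weights for a hypothetical 3-firm network.
W = np.array([[ 0.0,  0.4, -0.2],
              [ 0.4,  0.0,  0.1],
              [-0.2,  0.1,  0.0]])

deg_net = W.sum(axis=1)                    # C_D^net: signed sum of weights
deg_abs = np.abs(W).sum(axis=1)            # C_D^abs: sum of |w_ij|
deg_pos = np.clip(W, 0, None).sum(axis=1)  # C_D^+  : positive weights only

print(deg_net)  # ≈ [ 0.2, 0.5, -0.1]
print(deg_abs)  # ≈ [ 0.6, 0.5,  0.3]
print(deg_pos)  # ≈ [ 0.4, 0.5,  0.1]
```

The same net/absolute/positive split carries over to the closeness, betweenness, eigenvector, and harmonic centralities tabulated above.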

From Section 5.3

Fig. 13
figure 13

Partial correlation networks coloured by country. For this picture, only edges whose weight is greater than or equal to 0.3 are considered, so the net, absolute, and positive networks coincide; that common network is depicted here. Source: Authors’ calculations

Fig. 14
figure 14

Partial correlation networks coloured by industry. For this picture, only edges whose weight is greater than or equal to 0.3 are considered, so the net, absolute, and positive networks coincide; that common network is depicted here. Source: Authors’ calculations

From Section 5.4

Fig. 15
figure 15

Homophily by country in the net skeleton; each subfigure was drawn using a different cut-off value k, yielding the homophily ratio hr. Source: Authors’ calculations

Fig. 16
figure 16

Homophily by sector in the net skeleton; each subfigure was drawn using a different cut-off value k, yielding the homophily ratio hr. Source: Authors’ calculations
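A minimal sketch of a homophily ratio and its baseline. This uses one common definition (the observed share of same-attribute edges versus the random-mixing share \(\sum _k p_k^2\)) together with invented toy data, and may differ in detail from the paper's \(h(G)\) and \(h^*(G)\):

```python
import networkx as nx
from collections import Counter

def homophily_ratio(G, attr):
    """Observed share of edges joining same-attribute nodes."""
    same = sum(1 for u, v in G.edges() if G.nodes[u][attr] == G.nodes[v][attr])
    return same / G.number_of_edges()

def homophily_baseline(G, attr):
    """Expected same-attribute share under random mixing: sum_k p_k^2."""
    counts = Counter(nx.get_node_attributes(G, attr).values())
    n = G.number_of_nodes()
    return sum((c / n) ** 2 for c in counts.values())

# Toy network: four firms in two countries (hypothetical data)
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")])
nx.set_node_attributes(G, {"A": "DE", "B": "DE", "C": "FR", "D": "FR"}, "country")

print(homophily_ratio(G, "country"))     # 0.5 : 2 of the 4 edges are same-country
print(homophily_baseline(G, "country"))  # 0.5 = (1/2)^2 + (1/2)^2
```

A ratio above the baseline indicates homophilic mixing; here the two coincide, so the toy network shows no homophily.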

Tickers, countries and industries

See Tables 33, 34, 35, 36, 37, 38, 39, 40, 41.

Table 33 Firms Part I
Table 34 Firms Part II
Table 35 Firms Part III
Table 36 Firms Part IV
Table 37 Firms Part V
Table 38 Firms Part VI
Table 39 Firms Part VII
Table 40 Countries
Table 41 Industries


About this article


Cite this article

Cortés Ángel, A.P., Eratalay, M.H. Deep diving into the S&P Europe 350 index network and its reaction to COVID-19. J Comput Soc Sc (2022).



Keywords

  • Financial networks
  • Centralities
  • Homophily
  • Multivariate GARCH
  • Networks connectivity
  • Gaussian graphical model
  • Covid-19

JEL Classification

  • C32
  • C58
  • G15