1 Introduction

Social network analysis has been a popular research topic over the last couple of decades, thanks to the unprecedented amount of internet data available, which details ties between people or objects all over the world, and the increasing power of computers to process such data. Many models have been developed to characterise and/or generate networks in various ways. One well-known class of models in the statistical literature is the exponential random graph model (ERGM), in which the probability mass function on the graph space is proportional to the exponential of a linear combination of graph statistics; see, for example, Snijders (2002). The Bernoulli random graph (BRG), in which any two nodes have the same probability of being connected, independent of any other pair of nodes, is a special case of an ERGM. Although the choice of graph statistics allows an ERGM to encompass networks with different characteristics, in general ERGMs do not describe real-life networks well; see, for example, Snijders (2002) and Hunter et al. (2008).

Instead of characterising a network by graph statistics, such as the total degree, the configuration model considers the sequence of individual degrees; see, for example, Newman (2010, Chapter 13). Each node is assigned a number of half-edges according to its degree, and the half-edges are paired at random to connect the nodes. Despite its simple rule of network generation, the configuration model may produce multiple edges or self-loops, which might not occur in real-life networks. Also, the whole network is not guaranteed to be connected. Moreover, even though the individual degrees may be flexibly modelled by a degree distribution, they are not completely independent, as they have to sum to an even integer.

One prominent feature of social networks in real life is that they are scale-free, which means that the degree distribution follows a power law (approximately); see, for example, Albert et al. (1999, 2000), and Stumpf et al. (2005). The preferential attachment (PA) model by Barabási and Albert (1999) is one widely known model (Newman 2010, Chapter 14) that generates such a network with a few parameters and a simple rule. Other models also exist that characterise either the degree distribution, for instance the small-world model by Watts and Strogatz (1998), or other aspects such as how clustered the nodes in the network are (Vázquez et al. 2002).

While the majority of the network models focus on the topology of the network, some models are developed to describe the dynamics within the network, in particular how fast information spreads with respect to the structure of the network. As spreading rumours or computer viruses through connections in a social network is similar to spreading a disease through real-life contacts to create an epidemic, most of these models incorporate certain compartment models from epidemiology. For instance, the susceptible-infectious-recovered (SIR) model splits the population into three compartments according to the stage of the disease of each individual. A susceptible individual becomes infectious upon contact with an infectious individual and recovers after a random period. Traditionally, the infectious period and the contacts made by an infected individual are assumed to follow an exponential distribution and a homogeneous Poisson process, respectively. While these assumptions may be unrealistic for real-life data, they are useful as the epidemic process is then Markovian. The dynamics of compartment sizes over time can usually be characterised by a small number of parameters in the rate matrix, which is used to obtain the transition probabilities through Kolmogorov's equations; see, for example, Wilkinson (2011, Section 5.4). While other kinds of compartment models can be formulated in a similar way, some models depart from the Markovian assumptions and will be discussed later. For more details on the SIR model and its variants, see, for example, Andersson and Britton (2000).
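To illustrate the Markovian dynamics described above, the homogeneous-mixing SIR model with exponential infectious periods and Poisson-process contacts can be simulated exactly with the Gillespie algorithm. The sketch below is illustrative, not taken from any of the cited works; `beta` and `gamma_rec` are placeholder names for the infection and recovery rate parameters.

```python
import random

def gillespie_sir(n_sus, n_inf, beta, gamma_rec, t_max, rng=random):
    """Exact simulation of the Markovian homogeneous-mixing SIR model.
    Infection events occur at rate beta*S*I/N, recoveries at rate gamma_rec*I."""
    s, i, r = n_sus, n_inf, 0
    n = s + i + r
    t, path = 0.0, [(0.0, s, i, r)]
    while i > 0 and t < t_max:
        rate_inf = beta * s * i / n
        rate_rec = gamma_rec * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)            # exponential waiting time
        if rng.random() < rate_inf / total:    # choose event type
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
        path.append((t, s, i, r))
    return path
```

Each step draws an exponential waiting time from the total event rate and then picks an infection or recovery in proportion to its rate, which is exactly the continuous-time Markov chain the rate matrix encodes.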

Often implicitly assumed in such compartment models is that the population mixes homogeneously, that is, each individual can interact uniformly with all other individuals in the community they belong to. However, this is not the case when it comes to network epidemics, as one can only infect and be infected by their neighbours in the network, and the collection of neighbours differs from individual to individual. Therefore, modelling an epidemic on a structured population requires relaxing the homogeneous mixing assumption. Instead of assuming the same set of values for the parameters governing the dynamics, one approach is to apply a separate set of parameter values to, for example, each individual or all individuals with the same degree. Such an approach focuses on the modelling side and is dominant in the physics literature. A comprehensive review is provided by Pastor-Satorras et al. (2015).

Our work on network epidemic modelling is motivated by a data set from App Movement, an online platform that enables communities to propose and design community-commissioned mobile applications (Garbett et al. 2016). The process of generating the application starts with a community creating a campaign page and sharing it via online social networks. If we view an individual having seen a campaign and in turn promoting it as being “infected” (and “infectious” simultaneously), then the process of sharing a campaign can be compared to spreading a real-life virus to create an epidemic. The main difference is that such an infectious individual cannot infect just anyone in the population, but only those connected to them on the social networks. For one campaign, the cumulative count of infected and the network of infected users are plotted in Figs. 1 and 2, respectively. The former deviates from the typical S shape of a homogeneous mixing epidemic, while the latter displays star-like structures and long paths, which are typical features of real-life networks. It should be noted that this does not represent the complete underlying network \(\mathcal {G}\), which is usually unknown.

Fig. 1

Cumulative count of infected for an epidemic of a campaign. The time points where the count increments are the infection times \(\mathbf {I}\)

Fig. 2

Network representation of the transmission tree \(\mathcal {P}\) for the same campaign epidemic as shown in Fig. 1

Owing to differences in the data to which they are applied, as well as an inclination towards inference, epidemic models in the statistics literature contrast starkly with the classical compartment model, and not only with respect to the network issue. First, to accommodate heterogeneities in mixing, Ball et al. (1997) and Britton et al. (2011) proposed models which incorporate two levels and three levels of mixing, respectively. Each individual belongs to both the global level and one or more local levels, such as household, school or workplace, and homogeneous mixing is assumed to take place at each level but with a separate rate. Such models are prompted by data with detailed information on the local level structures each individual belongs to, such as the 1861 Hagelloch measles outbreak data analysed by Britton et al. (2011). Second, some SIR models and their variants relax the assumption that the infectious period follows the exponential distribution, essentially rendering the epidemic process non-Markovian. For instance, Streftaris and Gibson (2002) used the Weibull distribution, while Neal and Roberts (2005) and Groendyke et al. (2012) used the Gamma distribution. In general, the compartment dynamics cannot then be represented by a simple differential equation. Third, information is often missing in epidemic datasets, such as the infection times and, if a network structure is assumed, the actual network itself. Therefore, models are developed with a view to inferring these missing data, usually by Markov chain Monte Carlo (MCMC) algorithms. Examples of models which impose a network structure include Britton and O’Neill (2002), Neal and Roberts (2005), Ray and Marzouk (2008) and Groendyke et al. (2011). In the data considered by these authors, no covariates exist to inform whether two individuals are neighbours in the network, and the edge inclusion probability parameter is assumed to be the same for any two individuals in the network.
Essentially the underlying network is a BRG, which yields a Binomial (or approximately Poisson) degree distribution. Such a network model seems unrealistic for our App Movement data, compared to a model that generates a scale-free network or utilises a power law type degree distribution.

In view of the differences in objectives and applications shown above, we propose a network epidemic model as an attempt to narrow the gap in the literature. We focus on a susceptible-infectious (SI) model, in which the epidemic process takes place on a network which is assumed to be built from the PA model, thus deviating from a BRG. When it comes to inference, the data contain the infection times and potentially the transmission tree, while the underlying network is unknown and therefore treated as latent variables. We aim at simultaneously inferring the infection rate parameter, the parameters governing the degree distribution, and the latent structure of the network, in terms of the posterior edge inclusion probabilities, by using an MCMC algorithm. While the choice of the SI model is due to the data in hand, we believe the model structure and algorithm introduced can be extended to other compartment models.

The rest of the article is divided as follows. The latent network epidemic model is introduced in Sect. 2. Its likelihood and its associated MCMC algorithm are derived in Sect. 3. They are then applied to two sets of simulated data in Sect. 4, and a set of real online commissioning data in Sect. 5. Section 6 concludes the article.

2 Model

In this section, we introduce the latent network SI epidemic model. Describing the formation of the network and the epidemic separately will facilitate the derivation of the likelihood in the next section. The notation and definitions are kept similar to those in Britton and O’Neill (2002) and Groendyke et al. (2011).

Consider an epidemic in a closed population of size m. Let \(\mathbf {I} = (I_1, I_2, \ldots , I_m)\) denote the ordered vector of infection times, where \(I_i\) is the infection time of individual i, and \(I_i \le I_j\) for any \(i<j\). We assume that the first individual is the only initial infected individual. In order to have a temporal point of reference, only the times of \(m-1\) infections will be random, and so we define \({\tilde{\mathbf {I}}}=\mathbf {I}-I_1=({\tilde{I}}_1=0,{\tilde{I}}_2,\ldots ,{\tilde{I}}_m)\) for convenience. We also assume that the observation period is long enough to include all infections.

Next, consider the undirected random graph \(\mathcal {G}\) of m nodes which represents the social structure of the population, in which node i represents the \(i{\text {th}}\) individual. Using the adjacency matrix representation, if individuals i and j are socially connected, we write \(\mathcal {G}_{ij}=1\) and call them neighbours of each other, and \(\mathcal {G}_{ij}=0\) otherwise. In this sense, \(\mathcal {G}_{ij}\) can be interpreted as a potential edge between i and j. We also assume symmetry in social connections and that each individual is not self-connected, that is, \(\mathcal {G}_{ij} = \mathcal {G}_{ji}\) and \(\mathcal {G}_{ii}=0\), respectively, for \(1\le i,j\le m\).

To characterise \(\mathcal {G}\), we use a modified version of the PA model by Barabási and Albert (1999), which generates a network by sequentially adding nodes to it. This requires an order in which the nodes enter the network, which is not necessarily the same as the epidemic order. Therefore, we define a vector random variable of the network order, denoted by \(\varvec{\sigma }=(\sigma _1,\sigma _2,\ldots ,\sigma _m)\), whose support is all m! possible permutations of \(\{1,2,\ldots ,m\}\). Node \(\sigma _i~(1\le i\le m)\), labelled by the epidemic order, is the \(i{\text {th}}\) node that enters the network. This order is mainly for the sake of characterisation using the PA model; the network is assumed to have formed before the epidemic takes place, and to remain unchanged throughout the course of the epidemic. Such an assumption is reasonable because the timescale of an epidemic is usually much smaller than that of network formation, the process of which is described next.

2.1 Sequence of new edges

Initially, there are two nodes \(\sigma _1\) and \(\sigma _2\), which are connected, i.e. \(\mathcal {G}_{\sigma _1\sigma _2}=1\). When node \(\sigma _i~(3\le i\le m)\) enters the network, it connects to \(X_i\) existing nodes, where \(X_i\) follows a censored Poisson distribution with parameter \(\mu \) and support \(\{1,2,\ldots ,i-1\}\), that is,

$$\begin{aligned} \Pr (X_i=x)=\left\{ \begin{array}{l@{\quad }l} e^{-\mu }(1+\mu ), &{} x=1, \\ \displaystyle \frac{e^{-\mu }\mu ^x}{x!}, &{} x=2,3,\ldots ,i-2, \\ \displaystyle \sum _{z=i-1}^\infty \frac{e^{-\mu }\mu ^z}{z!}, &{} x=i-1. \end{array} \right. \end{aligned}$$
(1)

Independence is assumed between \(X_i\) and \(X_j\) for \(i\ne j\). We model the number of new edges as a random variable because using a constant number of new edges, denoted by \(\mu _0\), as in the original model by Barabási and Albert (1999), fixes the total number of edges at \((m-2)\mu _0+1\) and makes the model too restrictive. Empirically this censored distribution performs better than a truncated Poisson distribution in terms of identifying \(\mu \).
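For concreteness, the censored distribution in (1) can be evaluated and sampled as follows. This is an illustrative Python sketch, not the authors' implementation: the sampler draws an ordinary Poisson variate by inversion and censors it into \(\{1,\ldots ,i-1\}\), which moves the mass at 0 onto 1 and lumps the upper tail at \(i-1\), exactly as in (1).

```python
import math
import random

def censored_poisson_pmf(x, mu, i):
    """PMF of Eq. (1): Poisson(mu) with mass at 0 moved to 1
    and the tail >= i-1 lumped at i-1."""
    if x == 1:
        return math.exp(-mu) * (1 + mu)
    if 2 <= x <= i - 2:
        return math.exp(-mu) * mu**x / math.factorial(x)
    if x == i - 1:
        # tail mass: one minus everything below i-1
        return 1.0 - sum(censored_poisson_pmf(k, mu, i) for k in range(1, i - 1))
    return 0.0

def sample_censored_poisson(mu, i, rng=random):
    """Draw X_i by censoring an ordinary Poisson draw into {1, ..., i-1}."""
    u, x, p = rng.random(), 0, math.exp(-mu)
    cdf = p
    while cdf < u and p > 0:   # inverse-CDF Poisson draw
        x += 1
        p *= mu / x
        cdf += p
    return min(max(x, 1), i - 1)
```

The clamping `min(max(x, 1), i - 1)` is precisely the censoring: a draw of 0 becomes 1, and any draw of at least \(i-1\) becomes \(i-1\).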

2.2 Attaching edges to nodes

When node \(\sigma _i\) joins the network, according to the original PA rule, an existing node \(\sigma _j~(1\le j<i)\) gets connected to node \(\sigma _i\), that is, \(\mathcal {G}_{\sigma _i\sigma _j}=1\), with probability proportional to its current degree \(\sum _{k=1}^{i-1}\mathcal {G}_{\sigma _k\sigma _j}\). To allow the degree of PA to vary, we let this probability be a mixture of the current degree and how recently the node joined the network. To be more specific, the process of choosing \(x_i\) nodes is equivalent to obtaining a weighted random sample without replacement from \(\{1,2,\ldots ,i-1\}\), with the weight assigned to node \(\sigma _j\) equal to \(w_j\), where

$$\begin{aligned} w_j=(1-\gamma )\frac{\sum _{k=1}^{i-1}\mathcal {G}_{\sigma _k\sigma _j}}{~\sum _{l=1}^{i-1}\sum _{k=1}^{i-1}\mathcal {G}_{\sigma _k\sigma _l}~}+\gamma \frac{j}{~\sum _{l=1}^{i-1}l~}, \end{aligned}$$
(2)

where \(\gamma \in [0,1]\) can be seen as the parameter governing the degree of PA. When \(\gamma =0\), this reduces to the original PA rule. As \(\gamma \) increases, more weight is given to later nodes, and the inequality in the degrees of the nodes is reduced. Such inequality reduction is facilitated by assigning weights according to how recently the nodes joined the network, rather than equal weights, in the non-PA component. Note, however, that even in the extreme case where \(\gamma =1\), where the degree distribution is unimodal and closer to symmetric, the model does not reduce to a BRG, whose degree distribution is Binomial with parameters \((m-1,p)\), where p is the edge inclusion probability, but provides a crude approximation to it.
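One attachment step under (2) can be sketched as follows, assuming 0-based indexing and an adjacency matrix `G` indexed by the epidemic labels, with `order` holding \(\varvec{\sigma }\) (illustrative code, not the authors' implementation):

```python
import random

def attachment_weights(G, order, i, gamma):
    """Weights w_j of Eq. (2) for the i-1 existing nodes (0-based j = 0..i-2).
    order[k] is the (epidemic-labelled) node that entered the network (k+1)-th."""
    degrees = [sum(G[order[k]][order[j]] for k in range(i - 1)) for j in range(i - 1)]
    total_deg = sum(degrees)         # twice the current number of edges
    total_rank = sum(range(1, i))    # 1 + 2 + ... + (i-1)
    return [(1 - gamma) * degrees[j] / total_deg + gamma * (j + 1) / total_rank
            for j in range(i - 1)]

def choose_targets(weights, x, rng=random):
    """Weighted random sample of x distinct existing nodes, without replacement."""
    idx, w, chosen = list(range(len(weights))), list(weights), []
    for _ in range(x):
        pick = rng.choices(range(len(idx)), weights=w, k=1)[0]
        chosen.append(idx.pop(pick))
        w.pop(pick)
    return chosen
```

Sampling without replacement is done sequentially, renormalising the remaining weights after each pick, which matches the conditional-weight product appearing later in (6).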

2.3 Constructing the epidemic

The Markovian epidemic process is constructed as follows. At time 0, the whole population is susceptible except individual 1, who is infected. Once infected at time \({\tilde{I}}_i\), individual i makes infectious contacts with its neighbours (according to \(\mathcal {G}\)) at points of a homogeneous Poisson process with rate \(\beta \sum _{j=1}^m\mathcal {G}_{ij}\), and stays infected until the end of the observation period. The random transmission tree \(\mathcal {P}\), with the same nodes as \(\mathcal {G}\) and whose root is the node labelled 1, can be constructed simultaneously. If individual i makes infectious contact at some time \(t_0\) (governed by the aforementioned Poisson process) with susceptible neighbour j, we write \(\mathcal {P}_{ij}=1\), again using the adjacency matrix representation. This implies \({\tilde{I}}_j=t_0\), and \(\mathcal {P}_{ji}=0\) as individual i cannot be re-infected. Also, \(\mathcal {P}_{ij}=1\) implicitly implies that \(\mathcal {G}_{ij}=(\mathcal {G}_{ji}=)1\), as the epidemic can only spread through social connections, i.e. the edges in \(\mathcal {G}\). Finally, we assume \(\mathcal {P}_{ii}=0\) as an individual cannot infect themselves.
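This construction can be simulated directly. Since each infected individual contacts its neighbours at total rate \(\beta \) times its degree, an equivalent formulation is that every infectious–susceptible edge transmits independently at rate \(\beta \), which lends itself to a Gillespie-style simulation (a sketch, not the authors' code):

```python
import random

def simulate_si(G, beta, rng=random):
    """Simulate the SI epidemic on adjacency matrix G, starting from node 0.
    Each edge between an infected and a susceptible node fires at rate beta.
    Returns infection times I (I[0] = 0) and the transmission tree P."""
    m = len(G)
    I = [None] * m
    I[0] = 0.0
    P = [[0] * m for _ in range(m)]
    infected, t = {0}, 0.0
    while len(infected) < m:
        # all (infectious, susceptible) pairs joined by an edge in G
        active = [(i, j) for i in infected for j in range(m)
                  if G[i][j] == 1 and j not in infected]
        if not active:      # epidemic dies out if G is disconnected
            break
        t += rng.expovariate(beta * len(active))
        i, j = rng.choice(active)
        I[j], P[i][j] = t, 1
        infected.add(j)
    return I, P
```

The recomputation of `active` at every step is wasteful for large networks but keeps the sketch transparent; the key point is that \(\mathcal {P}\) records, for each newly infected node, the single neighbour whose contact caused the infection.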

3 Likelihood and inference

We proceed to compute the likelihood, denoted by L, as a function of \(\beta \), \(\mu \), \(\gamma \) and \(\varvec{\sigma }\). We assume both \(\mathcal {G}\) and \(\mathcal {P}\) are given because, as argued by Britton and O’Neill (2002) and Groendyke et al. (2011), it is easier to condition on \(\mathcal {G}\) and \(\mathcal {P}\) in order to calculate L, and, if they are unobserved, to include them as latent variables in the inference procedure. Two conditional independence assumptions need to be noted. Because of the Markovian nature of the epidemic, \(\mathcal {P}\) and \({\tilde{\mathbf {I}}}\) are independent given \(\mathcal {G}\). It is also common that the data \((\{{\tilde{\mathbf {I}}},\mathcal {P}\})\) and (a subset of) the parameters \((\mu ,\gamma ,\varvec{\sigma })\) are independent a priori, given \(\mathcal {G}\), when models are formulated by centred parameterisations (Papaspiliopoulos et al. 2003). Therefore, the likelihood can be broken down into the following components:

$$\begin{aligned}&L:=L(\beta ,\mu ,\gamma ,\varvec{\sigma })\nonumber \\&\quad =\pi (\mathcal {P},{\tilde{\mathbf {I}}},\mathcal {G}|\beta ,\mu ,\gamma ,\varvec{\sigma })\nonumber \\&\quad =\pi (\mathcal {P},{\tilde{\mathbf {I}}}|\mathcal {G},\beta ,\mu ,\gamma ,\varvec{\sigma })\times \pi (\mathcal {G}|\beta ,\mu ,\gamma ,\varvec{\sigma })\nonumber \\&\quad =\pi (\mathcal {P}|\mathcal {G})\times \pi ({\tilde{\mathbf {I}}}|\mathcal {G},\beta )\times \pi (\mathcal {G}|\mu ,\gamma ,\varvec{\sigma }). \end{aligned}$$
(3)

The dropping of any unrelated quantities can be explained by how the network and the epidemic are constructed in Sect. 2, and is demonstrated in the derivations of each component in Appendix A, the results of which are given below:

$$\begin{aligned} \pi (\mathcal {P}|\mathcal {G})= & {} \prod _{j=2}^{m}\mathbf {1}\left\{ \sum _{i=1}^{j-1}\mathcal {P}_{ij}=1\right\} \left( \sum _{i=1}^{j-1}\mathcal {G}_{ij}\right) ^{-1}\nonumber \\&\quad \times \prod _{1\le i<j\le m}(1-\mathcal {P}_{ji})\mathbf {1}\left\{ \mathcal {P}_{ij}\le \mathcal {G}_{ij}\right\} , \end{aligned}$$
(4)
$$\begin{aligned} \pi ({\tilde{\mathbf {I}}}|\mathcal {G},\beta )= & {} \beta ^{m-1}\exp \left( -\beta \sum _{1\le i<j\le m}\mathcal {G}_{ij}({\tilde{I}}_j-{\tilde{I}}_i)\right) ,\end{aligned}$$
(5)
$$\begin{aligned} \pi (\mathcal {G}|\mu ,\gamma ,\varvec{\sigma })= & {} { e^{-(m-2)\mu }\mu ^{|\mathcal {G}|-1}(1+\mu )^{\sum _{i=3}^{m}\varvec{1}\{\sum _{j=1}^{i-1}\mathcal {G}_{\sigma _i\sigma _j}=1\}}} \nonumber \\&\quad \times \prod _{i=3}^m \left[ \sum _{z=i-1}^{\infty }\frac{\mu ^z}{z!}\big /\frac{\mu ^{i-1}}{(i-1)!}\right] ^{\varvec{1}\{\sum _{j=1}^{i-1}\mathcal {G}_{\sigma _i\sigma _j}=i-1\}} \nonumber \\&\quad \times \prod _{i=3}^m\left[ w_1^{\mathcal {G}_{\sigma _i\sigma _1}}\prod _{j=2}^{i-1}\left( \frac{w_j}{1-\sum _{k=1}^{j-1}w_k}\right) ^{\mathcal {G}_{\sigma _i\sigma _j}}\right] . \end{aligned}$$
(6)

We can proceed to inference because the likelihood (3) can be expressed explicitly as the product of (4)–(6). However, this complete likelihood is only useful for inference when \(\mathcal {G}\) (and \(\mathcal {P}\)) is given or known, which is usually not the case in real-life applications. As each of the \(\left( {\begin{array}{c}m\\ 2\end{array}}\right) \) potential edges is a binary random variable, integrating \(\mathcal {G}\) out does not seem feasible as we would have to average over all \(2^{\left( {\begin{array}{c}m\\ 2\end{array}}\right) }\) possibilities. Also, unlike the scalar parameters \((\beta ,\mu ,\gamma )\), the support of \(\varvec{\sigma }\) is the permutation space of \(\{1,2,\ldots ,m\}\), and it is not meaningful to calculate a point estimate of \(\varvec{\sigma }\) and quantify its uncertainty using a frequentist approach. It is therefore natural to consider Bayesian inference, in which \(\mathcal {G}\) is treated as a set of latent variables, the posterior probabilities of which are to be computed simultaneously with those of the model parameters. It is sensible to assume the infection rate \(\beta \), which relates to the intrinsic properties of the disease, to be independent a priori of \(\mu \) and \(\gamma \), which relate to the properties of the network. We assign the following independent and vaguely informative priors:

$$\begin{aligned}&\beta \sim \text {Gamma}(a_{\beta }=1,b_{\beta }=0.001),\nonumber \\&\mu \sim \text {Gamma}(a_{\mu }=1,b_{\mu }=0.001),\nonumber \\&\gamma \sim U[0,1],\nonumber \\&\pi (\varvec{\sigma })=(m!)^{-1}\varvec{1}\{\varvec{\sigma }\text { is a permutation of }\{1,2,\ldots ,m\}\}, \end{aligned}$$
(7)

where \(a/b\) is the mean of a random variable \(X\sim \text {Gamma}(a,b)\). By Bayes’ theorem, we have

$$\begin{aligned}&\pi (\mathcal {G},\beta ,\mu ,\gamma ,\varvec{\sigma }|\mathcal {P},{\tilde{\mathbf {I}}}) \propto \pi (\mathcal {P},{\tilde{\mathbf {I}}},\mathcal {G},\beta ,\mu ,\gamma ,\varvec{\sigma }) \end{aligned}$$
(8)
$$\begin{aligned}&\quad =\pi (\mathcal {P},{\tilde{\mathbf {I}}},\mathcal {G}|\beta ,\mu ,\gamma ,\varvec{\sigma }) \times \pi (\beta ,\mu ,\gamma ,\varvec{\sigma })\nonumber \\&\quad =\pi (\mathcal {P}|\mathcal {G})\times \pi ({\tilde{\mathbf {I}}}|\mathcal {G},\beta )\times \pi (\mathcal {G}|\mu ,\gamma ,\varvec{\sigma })\nonumber \\&\qquad \times \pi (\beta )\pi (\mu )\pi (\gamma )\pi (\varvec{\sigma }) \end{aligned}$$
(9)

As the posterior density, up to a proportionality constant, can be obtained explicitly as the product of (4)–(7), a natural candidate for inference is MCMC. We use a component-wise Metropolis-within-Gibbs (MWG) algorithm, described in detail in Appendix B, in which each of the parameters \((\beta , \mu , \gamma , \varvec{\sigma })\) is sampled conditional on the other three parameters and the whole of \(\mathcal {G}\), while each potential edge of \(\mathcal {G}\) is sampled conditional on all parameters and other potential edges of \(\mathcal {G}\).
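A single random walk Metropolis step within such a sweep can be sketched generically as below; `log_target` stands in for the log of the product of (4)–(7) as a function of the parameter being updated, say \(\mu \) (illustrative code, not the algorithm of Appendix B):

```python
import math
import random

def rwm_update(mu, log_target, sd, rng=random):
    """One random walk Metropolis step for a positive scalar parameter,
    with a Gaussian proposal of standard deviation sd.
    Returns (new value, whether the proposal was accepted)."""
    prop = rng.gauss(mu, sd)
    if prop <= 0:                  # outside the support: reject immediately
        return mu, False
    log_ratio = log_target(prop) - log_target(mu)
    if math.log(rng.random()) < log_ratio:
        return prop, True
    return mu, False
```

Note also that, combining (5) with the Gamma prior in (7), the full conditional of \(\beta \) is Gamma\((a_{\beta }+m-1,\,b_{\beta }+\sum _{1\le i<j\le m}\mathcal {G}_{ij}({\tilde{I}}_j-{\tilde{I}}_i))\), so \(\beta \) admits a direct Gibbs draw; whether Appendix B exploits this conjugacy is not stated here and the Metropolis step above is only a generic stand-in for the non-conjugate updates.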

4 Simulation study

A simulation study is carried out to examine if the inference algorithm in Appendix B can recover the true values of the parameters used to simulate from the model in Sect. 2. Specifically, we set \(m=\) 70 and consider all combinations of the following true values: \(\gamma = 0,0.2,0.5,0.8,1, \beta = 0.4\), and \(\mu = 4,6,8,10\). For each of the 20 combinations, we first simulate the PA network and then simulate the epidemic on the network. Because of how we construct and simulate from the model, we have complete information on the underlying graph \(\mathcal {G}\), the transmission tree \(\mathcal {P}\), and the infection times \({\tilde{\mathbf {I}}}\). When \(\mathcal {G}\) is given together with \(\mathcal {P}\) and \({\tilde{\mathbf {I}}}\), the MCMC algorithm only needs to be applied to \(\beta \), \(\mu \), \(\gamma \) and \(\varvec{\sigma }\), and it successfully recovers each of the three scalar parameters. Also, the posterior correlations between \(\beta \) and \(\mu \) and between \(\beta \) and \(\gamma \) are both close to zero, which makes sense because of the independence conditional on \(\mathcal {G}\), according to (9). However, we should focus on how good the algorithm is at inferring \(\mathcal {G}\) given \(\mathcal {P}\) and \({\tilde{\mathbf {I}}}\) only, because \(\mathcal {G}\) is usually unknown in real-life data, while \(\mathcal {P}\) being known is motivated by the data set in Sect. 5. Therefore, the complete MCMC algorithm for \(\beta \), \(\mu \), \(\gamma \), \(\varvec{\sigma }\) and \(\mathcal {G}\) is applied to the same set of simulated data for each parameter combination.

A chain of 20,000 iterations (no thinning) is obtained, with the first 10,000 iterations discarded as burn-in. For \(\mu \) and \(\gamma \), a random walk Metropolis (RWM) step with a Gaussian proposal is used, and this step is modified into an adaptive one during burn-in in order to tune the proposal standard deviation, using the method outlined by, for example, Xiang and Neal (2014, Section 3). The algorithm in general fails to correctly identify any of the three scalar parameters. The true value of \(\gamma \) is plotted against its posterior distribution at different true values of \(\mu \) in Fig. 3. The absence of significant linear correlation between the posterior mean of \(\gamma \) and its true value, and the huge degree of uncertainty at all true values considered suggest that it is difficult to identify \(\gamma \) in particular. Similar to Fig. 3, for different true values of \(\mu \), we also plot the true value of \(\mu \) against its posterior distribution in Fig. 4, which suggests that, intriguingly, the latter does not depend on the former. This is still the case if we further fix \(\gamma \) to its true value.
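The burn-in adaptation of the proposal standard deviation can be sketched with a generic batch scheme that nudges the scale towards a target acceptance rate with diminishing steps. This is a common adaptive MCMC recipe and only a stand-in: the paper follows the method of Xiang and Neal (2014, Section 3), whose details may differ.

```python
import math

def adapt_proposal_sd(sd, acc_count, batch_size, batch_index, target=0.234):
    """Batch-wise adaptation of an RWM proposal sd during burn-in:
    increase log(sd) if the batch acceptance rate exceeds the target,
    decrease it otherwise, with diminishing step sizes."""
    delta = min(0.05, 1.0 / math.sqrt(batch_index + 1))
    if acc_count / batch_size > target:
        return sd * math.exp(delta)
    return sd * math.exp(-delta)
```

The diminishing `delta` ensures the adaptation settles down, after which the proposal is frozen for the sampling phase.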

Fig. 3

True value of \(\gamma \) against its posterior distribution (horizontal line) and mean (dot), at different true values of \(\mu \). The dashed line is the line \(y=x\)

Fig. 4

True value of \(\mu \), slightly adjusted for visibility, against its posterior distribution (horizontal line) and mean (symbol), at different true values of \(\gamma \), which is estimated (circle) or fixed (triangle) to its true value in the MCMC algorithm

The identifiability issue prompts us to consider dropping or fixing at least one of \(\beta \), \(\mu \) and \(\gamma \). While \(\beta \) and \(\mu \) are essential to characterise the epidemic and the network, respectively, leaving \(\gamma \) out means we do not allow the degree of preferential attachment to vary. Therefore, \(\gamma \) is fixed at 0, which reduces the network model to the original PA model, and will not be estimated.

A second simulation study is carried out, this time with m allowed to take different values, namely \(m= 30,50,70\), combined with the following true values: \(\beta =\) 0.4 and \(\mu = 4,6,8,10\). For each parameter combination, we also allow different proportions of \(\mathcal {G}\) in the simulated data to be known in addition to \(\mathcal {P}\). A proportion of 0 means only \(\mathcal {P}\) is known, while a proportion of 1 means both \(\mathcal {P}\) and \(\mathcal {G}\) are given. The true value of \(\mu \) against its posterior distribution is plotted for different combinations of m and proportions of \(\mathcal {G}\) in Fig. 5. The posterior of \(\mu \) again shows no correlation with its true value in the first row, which corresponds to no \(\mathcal {G}\) given at all, but it converges towards its true value as the proportion goes to 1. Also, it is now possible to recover the true value of \(\mu \) with, say, even a quarter of the potential edges of \(\mathcal {G}\) known in addition to \(\mathcal {P}\).

Fig. 5

True value of \(\mu \) against its posterior distribution (horizontal line) and mean (dot), at different combinations of m and proportions of \(\mathcal {G}\) known additional to those implied by \(\mathcal {P}\). The dashed line is the line \(y=x\)

Rather than looking at the identifiability of one parameter alone, we can investigate the product \(\alpha :=\beta \times \mu ^{*}\), where \(\mu ^{*}:=\mu +e^{-\mu }\); its posterior can be obtained post-inference. Plotting the true value of \(\alpha \) against its posterior (not shown) in a similar way to Fig. 5 reveals that it is identifiable regardless of its true value, m, or the proportion of \(\mathcal {G}\) given. The introduction of \(\mu ^{*}\) is due to the fact that the mean of the distribution in (1) is approximately \(\mu +e^{-\mu }\) when i is large. As \(\alpha \) is the product of the (unscaled) epidemic rate and the average network connectedness, we can interpret it as the network-scaled epidemic rate. Epidemics on two different networks are comparable through this parameter if the two networks have similar values of \(\mu ^{*}\).
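The approximation \(\mathrm {E}[X_i]\approx \mu +e^{-\mu }\) can be checked directly from (1): for large i the lumping of the upper tail at \(i-1\) is negligible, so

```latex
\begin{aligned}
\mathrm{E}[X_i] &\approx 1\cdot e^{-\mu}(1+\mu) + \sum_{x=2}^{\infty} x\,\frac{e^{-\mu}\mu^{x}}{x!}\\
&= e^{-\mu}(1+\mu) + \mu\bigl(1-e^{-\mu}\bigr)\\
&= \mu + e^{-\mu},
\end{aligned}
```

using \(\sum _{x\ge 1} x\,e^{-\mu }\mu ^x/x! = \mu \), whose \(x=1\) term, \(\mu e^{-\mu }\), is removed in the second line.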

Such findings regarding \(\alpha \) can be explained by looking at the results of one parameter combination in detail. The joint posterior density in Fig. 6 displays an inverse relationship between \(\beta \) and \(\mu ^{*}\) when no \(\mathcal {G}\) is given, echoing the findings of Britton and O’Neill (2002), who showed an inverse relationship between \(\beta \) and the edge inclusion probability parameter p in their BRG model and argued that “the model parameterisation permits different explanations of the same outcome”. This means that we cannot simultaneously identify the parameters that characterise the epidemic rate and the network connectedness, and being able to identify one relative to the other is the best we can do.

Fig. 6

Joint posterior density plot of \(\beta \) and \(\mu ^{*}\) at selected proportions of \(\mathcal {G}\) given, for a simulated data set with \(m=50\) and true values of the parameters \((\beta , \mu , \gamma )=(0.4, 6.0, 0.0)\). The black dashed curve is the line \(xy=2.401\), the product of the true values of \(\beta \) and \(\mu ^{*}\). As \(\mu ^{*}=\mu +e^{-\mu }\) is bounded below by 1, the contours where the proportion of \(\mathcal {G}\) is 0 are truncated

5 Application

Before applying the proposed model to its data set introduced in Sect. 1, we shall describe App Movement in detail. This platform removes the resource constraints around mobile application development by providing an automated process of designing and developing mobile applications. The process begins with the support phase, whereby a community creates a campaign page in response to a community need and engages the community in supporting the concept through promoting and sharing the campaign on online social networks. When the target of 150 members supporting the campaign within 14 days has been met, ensuring an active user base, the campaign proceeds to the design phase, in which ideas regarding the design of the mobile application are voted on. Once supporters have cast their votes, the platform incorporates the highest rated design decisions and automatically generates the mobile application. Since its launch in February 2015, App Movement has been adopted by over 50,000 users supporting 111 campaigns, 20 of which have been successful in reaching their target number of supporters, with 18 generated mobile applications currently available in the Apple App Store and Google Play Store, for iOS and Android devices, respectively.

The design of logging the usage of App Movement enables us to convert the data into a format suitable for modelling and fitting. To illustrate this, assume that user 1 shares a campaign page, with uniform resource locator (URL) A, on a certain social network. When user 2, who is connected to user 1 and has never viewed the campaign page, clicks on URL A, a new URL B unique to user 2, directing to a page with the same contents as the campaign, is created. Any subsequent visits to the same page by users 1 and 2 will be redirected to the same URLs A and B, respectively. Therefore, within each campaign, there is a one-to-one relationship between the URLs and the users. We can say that user 1 infects user 2 at the time when URL B is created. Similarly, the users associated with the URLs created by clicking URL B can be said to be infected by user 2. This process is illustrated in the flow diagram in Fig. 7. By carrying out this process of connecting users with those who infected them until we reach, in tree terminology, the root and all the leaves, we end up with both the transmission tree and the infection times of the epidemic of the campaign sharing. The inference outlined in Sect. 3 can then be carried out.
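The reconstruction of \((\mathcal {P},{\tilde{\mathbf {I}}})\) from such logs can be sketched as below. The record format `(child_url, parent_url, creation_time)` and all field names are illustrative assumptions, not App Movement's actual logging schema; the root campaign URL is assumed to carry `parent_url = None`.

```python
def build_transmission_data(records):
    """Reconstruct infection times I and transmission tree P from URL logs.
    Each record is (child_url, parent_url, creation_time); the root URL has
    parent_url None. Users are labelled 0..m-1 in epidemic (creation) order."""
    records = sorted(records, key=lambda r: r[2])   # epidemic order, root first
    label = {url: k for k, (url, _, _) in enumerate(records)}
    m = len(records)
    P = [[0] * m for _ in range(m)]
    I = [t for (_, _, t) in records]
    for child, parent, _ in records:
        if parent is not None:
            P[label[parent]][label[child]] = 1      # parent infected child
    return I, P
```

Because each URL is unique to one user within a campaign, the URL-to-user map is one-to-one, and following `parent_url` links recovers exactly the tree described in the text.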

Fig. 7

Flow diagram of App Movement sharing. A straight arrow represents the generation of the labelled URL when the user visits the campaign page, while each solid-dashed curved arrow pair represents the click of the labelled URL and the direction of infection, respectively

Fig. 8

Traceplots and posterior densities of \(\beta \) and \(\mu \) of the PA model for the epidemic shown in Figs. 1 and 2

The model is fitted to each of ten campaign epidemics separately, assuming for simplicity that they have no influence on each other, with m ranging from 334 to 402. Each campaign corresponds to a different proposed application. The inference algorithm is used with \(\gamma \) fixed at 0. For each epidemic, 5 chains of length 2000 (no thinning) are obtained, after the first 1000 iterations are discarded as burn-in, during which the proposal standard deviation for \(\mu \) is tuned. The traceplots and posterior densities of \(\beta \) and \(\mu \) are plotted in Fig. 8 for the model fit to the epidemic visualised in Figs. 1 and 2. The acceptance rate for \(\mu \) is 0.269 and is similar for the other 9 epidemics considered. The posterior means and standard deviations of \(\beta \), \(\mu \) and \(\alpha \) for all epidemics are reported in Table 1, along with the correlation between \(\beta \) and \(\mu ^{*}=\mu +e^{-\mu }\), which is modest but consistently negative. For any parameter \(\theta \), we denote by E\((\theta |\mathcal {P},{\tilde{\mathbf {I}}})\) its posterior mean. We can see that E\((\alpha |\mathcal {P},{\tilde{\mathbf {I}}})\) does not depend on m and differs substantially across the epidemics. Together with the fact that its correlation with \(\mu \) (or \(\mu ^{*}\)) is modest (not shown), \(\alpha \) can be seen to be successfully identified.
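Tuning a proposal standard deviation during burn-in, as done for \(\mu \) above, is typically implemented by nudging the standard deviation towards a target acceptance rate. The following sketch illustrates one such scheme for a generic random-walk Metropolis update; the adaptation rule and the standard normal target density are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def tuned_rw_metropolis(log_post, x0, n_burn=1000, n_keep=2000, target=0.25):
    """Random-walk Metropolis whose proposal sd is adapted during burn-in
    (illustrative scheme; n_burn is assumed to be a multiple of 100)."""
    x, lp = x0, log_post(x0)
    sd, accepted, kept = 1.0, 0, []
    for i in range(n_burn + n_keep):
        prop = x + random.gauss(0.0, sd)
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        if i < n_burn:
            if (i + 1) % 100 == 0:             # adapt every 100 iterations
                rate = accepted / 100
                sd *= math.exp(rate - target)  # grow sd if accepting too often
                accepted = 0                   # also resets at end of burn-in
        else:
            kept.append(x)
    # acceptance rate over the kept (post-burn-in) iterations
    return kept, accepted / n_keep

random.seed(1)
# Illustrative target: standard normal log-density for the parameter
chain, acc = tuned_rw_metropolis(lambda m: -0.5 * m * m, 0.0)
```

After burn-in the standard deviation is held fixed, so the kept draws come from a valid (non-adaptive) Markov chain, matching the practice of discarding the tuning phase.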

Table 1 Posterior mean (SD) and correlation of the scalar parameters in the PA model fitted to ten different campaign epidemics

Model comparison or selection is difficult here because the BRG model by Britton and O’Neill (2002) is not nested in our proposed model, even when \(\gamma \) is treated as a parameter. Nevertheless, we fit the BRG model to the same campaign epidemics, focusing on the parameter estimates to examine its goodness-of-fit. The posterior means and standard deviations of the parameters are reported in Table 2, which shows that E\((p|\mathcal {P},{\tilde{\mathbf {I}}})\) is of the same magnitude across all the epidemics. For the epidemic singled out above, with \(m=\) 335, the average degree is E\((p|\mathcal {P},{\tilde{\mathbf {I}}})\times (m-1)=\) 2.158, meaning that each user is on average connected to slightly more than two other users, yet the most infectious user has infected 18 other users. If we take E\((p|\mathcal {P},{\tilde{\mathbf {I}}})\) as the true value of p and simulate a BRG, the probability that one particular user is connected to at least 18 users is \(1.453\times 10^{-11}\). Combining these two quantities with the independence of potential edges, we can see that it is very unlikely that a BRG generated in this way will be connected, let alone overlay \(\mathcal {P}\). On the other hand, the network construction described in Sect. 2 ensures that the PA network generated is always connected. Finally, contrary to the clear inverse relationship between \(\beta \) and p reported in Britton and O’Neill (2002) for both simulated and real-life data, the joint posterior of \((\beta ,p)\) can be well approximated by a bivariate Gaussian distribution for all epidemics reported here. Together with the fact that the correlations are small (last column of Table 2), \(\beta \) and p can be said to be close to independent a posteriori. This suggests that the presence of \(\mathcal {P}\) actually makes p (and \(\beta \)) identifiable, but the estimate of the successfully identified p now reveals a poor fit of the BRG model to our data.
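The tail probability quoted above is a direct binomial calculation: under the BRG, a given user's degree is Binomial\((m-1,p)\). The sketch below backs out p from the stated average degree of 2.158; the exact value obtained depends on the precision of the posterior mean used, so only the order of magnitude should be compared.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), summed directly."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

m = 335                       # size of the epidemic singled out in the text
mean_degree = 2.158           # stated average degree E(p|.) * (m - 1)
p_hat = mean_degree / (m - 1) # implied posterior mean of p
tail = binom_tail(m - 1, p_hat, 18)  # P(a given user has degree >= 18)
print(f"{tail:.3e}")          # of the order 1e-11, matching the text
```

With 335 users, even the expected number of users of degree 18 or more is of the order \(10^{-9}\), which is why observing an infector of 18 others is so implausible under the BRG.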

Table 2 Posterior mean (SD) and correlation of the parameters in the BRG model fitted to ten different campaign epidemics

While the values of \(E(p|\mathcal {P},{\tilde{\mathbf {I}}})\) in Table 2 are low, those of E\((\beta |\mathcal {P},{\tilde{\mathbf {I}}})\) are similar to their PA counterparts in Table 1, but are unusually low compared to real-life epidemics. This is because, while real-life epidemics usually span days [see, for example, Britton and O’Neill (2002) and Neal and Roberts (2005)], the campaign epidemics spanned weeks (see the time scale of Fig. 1). Of the ten epidemics reported here, epidemics 1 and 6 had the longest and shortest durations, 187.371 and 36.7 days, respectively, which explains why their values of \(E(\beta |\mathcal {P},{\tilde{\mathbf {I}}})\) lie at opposite extremes of those reported in Table 1.

Using the posterior of \(\beta \) and \(\mu \), we can simulate the network epidemic and obtain the predictive distribution of the cumulative counts over time, whose 95% predictive intervals (PIs) are overlaid on the observed data in Fig. 9. While the early period of the epidemic lies within the 95% PI, the slower periods of infection towards the end of the observation period are more difficult to reproduce.
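A single draw from such a predictive distribution amounts to generating a network and running the SI epidemic on it. The sketch below uses plain Barabási–Albert attachment in place of the paper's \((\mu ,\gamma )\)-parametrised construction, and an event-driven (Gillespie) simulation of the Markovian SI dynamics; the network size, attachment parameter and \(\beta \) are illustrative, not posterior draws.

```python
import random

def ba_network(n, m0=2):
    """Barabási-Albert-style preferential attachment: each new node links to
    m0 existing nodes chosen with probability proportional to degree."""
    neighbours = {0: {1}, 1: {0}}
    degree_list = [0, 1]          # one entry per half-edge
    for v in range(2, n):
        neighbours[v] = set()
        chosen = set()
        while len(chosen) < min(m0, v):
            chosen.add(random.choice(degree_list))  # degree-proportional pick
        for u in chosen:
            neighbours[v].add(u)
            neighbours[u].add(v)
            degree_list += [u, v]
    return neighbours

def si_epidemic(neigh, beta):
    """Gillespie simulation of a Markovian SI epidemic started at node 0;
    returns the infection time of every node (the network is connected)."""
    t, infected, times = 0.0, {0}, {0: 0.0}
    while len(infected) < len(neigh):
        # each susceptible-infected edge transmits at rate beta
        si_edges = [(i, s) for i in infected
                    for s in neigh[i] if s not in infected]
        t += random.expovariate(beta * len(si_edges))
        _, s = random.choice(si_edges)   # all SI edges equally likely to fire
        infected.add(s)
        times[s] = t
    return times

random.seed(7)
times = si_epidemic(ba_network(300), beta=0.05)
counts = sorted(times.values())  # cumulative count reaches i+1 at counts[i]
```

Repeating this for many posterior draws of \((\beta ,\mu )\) and taking pointwise quantiles of the cumulative counts yields predictive intervals of the kind shown in Fig. 9.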

Fig. 9

Cumulative count of infected for the campaign epidemic shown in Fig. 1, overlaid by 95% predictive intervals of simulated cumulative counts over time (red dashed lines)

6 Discussion

We have described a network epidemic model that combines the SI epidemic model with the PA network model, with inference carried out by MCMC since the likelihood can be computed explicitly. The results of two simulation studies suggest dropping one parameter in order to make the network-scaled epidemic rate parameter identifiable. The model and inference algorithm have been successfully applied to ten different “sharing” epidemics from a set of online community commissioning data. The results suggest that, for epidemics taking place on social networks, the PA model is a better alternative for network generation than the BRG model.

Several modifications can potentially make the model more useful. First of all, information on the average connectedness or the degree distribution of social networks can be solicited beforehand, so that an informative prior can be assigned to \(\mu \) and/or \(\gamma \), at least one of which can then be identified. Given the vast amount of data about social networks such as Facebook and Twitter freely available on the Internet, it should be possible to obtain such information.

Another way of gaining information about the parameters, particularly for the App Movement data, is to combine the epidemics in the modelling and inference. As users on the social network are usually involved in more than one epidemic, we can pool several epidemics that share users and build a larger underlying network \(\mathcal {G}\) comprising all the users involved, which is guaranteed to be connected. As a user may be infected in one epidemic but not another, each of the epidemics may then be incomplete. While the likelihood calculations, for example (5), may differ slightly, and each epidemic has a different rate parameter (whether network scaled or not), the inference procedure is essentially the same, and only one set of parameters \((\mu , \gamma )\) governs the network generation. Borrowing strength across epidemics in this way utilises more of the information available in the data, and results in parameters being better identified or more precisely estimated.

The assumption that the epidemic is Markovian given the network is simplistic, and has been shown to be inadequate for real-life epidemics. One way to relax it is to use an alternative distribution for the infectious period, such as the Gamma distribution used by, for example, Xiang and Neal (2014); this is not applicable here, however, because the SI model is used instead of the more popular SIR model, or any compartment model in which being infectious is not the final state. Instead of using a single epidemic rate \(\beta \) for all infections, which implies exponentially distributed interarrival times, one can use a different rate \(\beta _i\) for the infection of individual i, where \(\beta _i\) is drawn from a certain probability distribution. This approach to modelling non-Markovian processes, proposed by Masuda and Rocha (in press), can be applied to any kind of compartment model and can encompass a range of interarrival time distributions, simply by choosing a different probability distribution from which \(\beta _i\) is drawn. For the inference, the interarrival time distribution parameter(s) and each \(\beta _i\), given all other epidemic rate parameters, would be updated individually, on top of the existing parameters and latent variables. Given that most of the computational time lies in updating the potential edges of \(\mathcal {G}\) one by one, this should not add much to the computational burden. On the other hand, including users who have not seen or joined the campaign, to correct for the potential overestimation of \(\beta \), would add substantially to the computational burden, because the number of users not infected by a particular campaign is vast compared to the number infected.
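The effect of drawing individual rates \(\beta _i\) can be seen in a small simulation: mixing an exponential waiting time over a Gamma-distributed rate yields a Lomax (Pareto type II) interarrival distribution, which is heavier-tailed than any fixed-rate exponential. The Gamma shape and rate below are illustrative, chosen so that both schemes have mean rate 1.

```python
import random

def interarrival_fixed(beta):
    """Waiting time under a single shared rate: Exponential(beta)."""
    return random.expovariate(beta)

def interarrival_mixed(shape, rate):
    """Draw an individual rate beta_i ~ Gamma(shape, rate), then an
    exponential waiting time at that rate; marginally this is Lomax,
    with survival function (rate / (rate + t))**shape."""
    beta_i = random.gammavariate(shape, 1.0 / rate)  # scale = 1 / rate
    return random.expovariate(beta_i)

random.seed(3)
n = 100_000
fixed = [interarrival_fixed(1.0) for _ in range(n)]
mixed = [interarrival_mixed(2.0, 2.0) for _ in range(n)]  # E(beta_i) = 1 too
f_tail = sum(t > 5 for t in fixed) / n   # roughly exp(-5), about 0.007
m_tail = sum(t > 5 for t in mixed) / n   # roughly (2/7)**2, about 0.08
```

Despite identical mean rates, the mixed scheme produces an order of magnitude more very long waits, illustrating how this construction departs from the Markovian model without changing the compartment structure.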