A network epidemic model for online community commissioning data

A statistical model assuming a preferential attachment network, which is generated by adding nodes sequentially according to a few simple rules, usually describes real-life networks better than one assuming, for example, a Bernoulli random graph, in which any two nodes have the same probability of being connected. Therefore, to study the propagation of “infection” across a social network, we propose a network epidemic model that combines a stochastic epidemic model with a preferential attachment model. A simulation study based on the associated Markov chain Monte Carlo algorithm reveals an identifiability issue with the model parameters. Finally, the network epidemic model is applied to a set of online commissioning data.


Introduction
Social network analysis has been a popular research topic over the last couple of decades, thanks to the unprecedentedly large amount of internet data available, which details ties between people or objects all over the world, and the increasing power of computers to deal with such data. Many models have been developed to characterise and/or generate networks in various ways. One well-known class of models in the statistical literature is the exponential random graph model (ERGM), in which the probability mass function on the graph space is proportional to the exponential of a linear combination of graph statistics; see, for example, Snijders (2002).
The Bernoulli random graph (BRG), in which any two nodes have the same probability of being connected, independently of any other pair of nodes, is a special case of an ERGM. Although the choice of graph statistics allows an ERGM to encompass networks with different characteristics, in general ERGMs do not describe real-life networks well; see, for example, Snijders (2002) and Hunter et al. (2008).
Instead of characterising a network by graph statistics, such as the total degree, the configuration model considers the sequence of individual degrees; see, for example, Newman (2010), Chapter 13. Each node is assigned a number of half-edges according to its degree, and the half-edges are paired at random to connect the nodes. Despite its simple rule of network generation, the configuration model may contain multiple edges or self-connecting nodes, which might not occur in real-life networks. Also, the whole network is not guaranteed to be connected.
Moreover, even though the individual degrees may be flexibly modelled by a degree distribution, they are not completely independent as they have to sum to an even integer.
One prominent feature of social networks in real life is that they are scale-free, meaning that the degree distribution (approximately) follows a power law; see, for example, Albert, Jeong and Barabási (1999, 2000) and Stumpf, Wiuf and May (2005). The preferential attachment (PA) model by Barabási and Albert (1999) is one widely known model (Newman, 2010, Chapter 14) that generates such a network with a few parameters and a simple rule. Other models exist that characterise either the degree distribution, for instance the small-world model by Watts and Strogatz (1998), or other aspects such as how clustered the nodes in the network are (Vázquez et al., 2002).
While the majority of network models focus on the topology of the network, some models are developed to describe the dynamics within the network, in particular how fast information spreads with respect to the structure of the network. As spreading rumours or computer viruses through connections in a social network is similar to spreading a disease through real-life contacts to create an epidemic, most of these models incorporate certain compartment models from epidemiology. For instance, the Susceptible-Infectious-Recovered (SIR) model splits the population into three compartments according to the stage of the disease of each individual. A susceptible individual becomes infectious upon contact with an infectious individual, and recovers after a random period. Traditionally, the infectious period and the contacts made by an infected individual are assumed to follow an exponential distribution and a homogeneous Poisson process, respectively.
While these assumptions may be unrealistic for real-life data, they are useful because they make the epidemic process Markovian. The dynamics of the compartment sizes over time can usually be characterised by a small number of parameters in the rate matrix, which is used to obtain the transition probabilities through Kolmogorov's equations; see, for example, Wilkinson (2011), Section 5.4. While other kinds of compartment models can be formulated in a similar way, some models depart from the Markovian assumptions; these will be discussed later. For more details on the SIR model and its variants, see, for example, Andersson and Britton (2000).
Often implicitly assumed in such compartment models is that the epidemic is homogeneous mixing, that is, each individual can interact uniformly with all other individuals in the community he/she belongs to. However, this is not the case for network epidemics, as one can only infect and be infected by one's neighbours in the network, and the collection of neighbours differs from individual to individual. Therefore, modelling an epidemic on a structured population requires relaxing the homogeneous mixing assumption. Instead of assuming the same set of values for the parameters governing the dynamics, one approach is to apply a separate set of parameter values to, for example, each individual or all individuals with the same degree. Such an approach focuses on the modelling side, and is dominant in the physics literature. A comprehensive review is provided by Pastor-Satorras, Castellano, Van Mieghem and Vespignani (2015).
Our work on network epidemic modelling is motivated by a data set from App Movement (https://app-movement.com), an online platform that enables communities to propose and design community-commissioned mobile applications (Garbett, Comber, Jenkins and Olivier, 2016). The process of generating the application starts with a community creating a campaign page and sharing it via online social networks. If we view an individual who has seen a campaign and in turn promotes it as being "infected" (and "infectious" simultaneously), then the process of sharing a campaign can be compared to spreading a real-life virus to create an epidemic. The main difference is that such an infectious individual cannot potentially infect anyone in the population, but only those connected to them on the social networks. For one campaign, the cumulative count of infected users and the network of infected users are plotted in Figures 1 and 2, respectively. The former deviates from the typical S shape of a homogeneous mixing epidemic, while the latter displays star-like structures and long paths, which are typical features of real-life networks. It should be noted that this does not represent the complete underlying network G, which is usually unknown. Due to the differences in the data being applied to, as well as the inclination towards
inference, epidemic models in the statistics literature provide a stark contrast with the classical compartment model, not only with respect to the network issue. First, to accommodate heterogeneities in mixing, Ball et al. (1997) and Britton et al. (2011) proposed models which incorporate two levels and three levels of mixing, respectively. Each individual belongs to both the global level and one or more local levels, such as a household, school or workplace, and homogeneous mixing is assumed to take place at each level but with a separate rate. Such models are prompted by data with detailed information on the local-level structures each individual belongs to, such as the 1861 Hagelloch measles outbreak data analysed by Britton et al. (2011). Second, some SIR models and their variants relax the assumption that the infectious period follows the exponential distribution, rendering the epidemic process non-Markovian. For instance, Streftaris and Gibson (2002) used the Weibull distribution, while Neal and Roberts (2005) and Groendyke, Welch and Hunter (2012) used the Gamma distribution. In general, the compartment dynamics then cannot be represented by a simple differential equation. Third, information is often missing in epidemic data sets, such as the infection times and, if a network structure is assumed, the actual network itself. Therefore, models are developed with a view to inferring these missing data, usually by Markov chain Monte Carlo (MCMC) algorithms. Examples of models which impose a network structure include Britton and O'Neill (2002), Neal and Roberts (2005), Ray and Marzouk (2008) and Groendyke, Welch and Hunter (2011). In the data considered by these authors, no covariates exist to inform whether two individuals are neighbours in the network, and the edge inclusion probability parameter is assumed to be the same for any two individuals in the network. Essentially the underlying network is a BRG, which yields a Binomial (or approximately Poisson) degree distribution. Such a network model seems unrealistic for our App Movement data, compared to a model that generates a scale-free network or utilises a power-law-type degree distribution.
In view of the differences in objectives and applications shown above, we propose a network epidemic model as an attempt to narrow the gap in the literature. We focus on a Susceptible-Infectious (SI) model, in which the epidemic process takes place on a network which is assumed to be built from the PA model, thus deviating from a BRG. When it comes to inference, the data contain the infection times and potentially the transmission tree, while the underlying network is unknown and therefore treated as latent variables. We aim at simultaneously inferring the infection rate parameter, the parameters governing the degree distribution, and the latent structure of the network, in terms of the posterior edge inclusion probabilities, by using an MCMC algorithm. While the choice of the SI model is due to the data in hand, we believe the model structure and algorithm introduced can be extended to other compartment models.
The rest of the article is organised as follows. The latent network epidemic model is introduced in Section 2. Its likelihood and the associated MCMC algorithm are derived in Section 3. They are then applied to two sets of simulated data in Section 4, and a set of real online commissioning data in Section 5. Section 6 concludes the article.

Model
In this section we introduce the latent network SI epidemic model. Describing the formation of the network and the epidemic separately will facilitate the derivation of the likelihood in the next section. The notation and definitions are kept similar to those in Britton and O'Neill (2002) and Groendyke et al. (2011).
Consider an epidemic in a closed population of size m. Let I = (I_1, I_2, ..., I_m) denote the ordered vector of infection times, where I_i is the infection time of individual i, and I_i ≤ I_j for any i < j.
We assume that the first individual is the only initial infected individual. In order to have a temporal point of reference, only the times of the m − 1 subsequent infections are random, and so we define Ĩ = I − I_1 = (Ĩ_1 = 0, Ĩ_2, ..., Ĩ_m) for convenience. We also assume that the observation period is long enough to include all infections.
Next, consider the undirected random graph G of m nodes which represents the social structure of the population, in which node i represents the i-th individual. Using the adjacency matrix representation, if individuals i and j are socially connected, we write G_ij = 1 and call them neighbours of each other, and G_ij = 0 otherwise. In this sense G_ij can be interpreted as a potential edge between i and j. We also assume symmetry in social connections (G_ij = G_ji) and that no individual is socially connected to him/herself (G_ii = 0).

To characterise G, we use a modified version of the PA model by Barabási and Albert (1999), which generates a network by sequentially adding nodes into it. This requires an order in which the nodes enter the network, which is not necessarily the same as the epidemic order. Therefore, we define a vector random variable of the network order, denoted by σ = (σ_1, σ_2, ..., σ_m), whose support is all m! possible permutations of {1, 2, ..., m}. Node σ_i (1 ≤ i ≤ m), labelled by the epidemic order, is the i-th node that enters the network. Such an order is mainly for the sake of characterisation using the PA model; the network is assumed to have formed before the epidemic takes place, and to remain unchanged throughout the course of the epidemic. Such an assumption is reasonable because the timescale of an epidemic is usually much smaller than that of network formation, the process of which is described next.

Sequence of new edges
Initially, there are two nodes σ_1 and σ_2 which are connected, i.e. G_σ1σ2 = 1. When node σ_i (3 ≤ i ≤ m) enters the network, it connects to X_i existing nodes, where X_i follows a censored Poisson distribution with parameter µ and support {1, 2, ..., i − 1}; that is, with Y ~ Poisson(µ),

X_i = min(max(Y, 1), i − 1), (1)

so that the probability mass below 1 is absorbed at 1 and that above i − 1 is absorbed at i − 1. Independence is assumed between X_i and X_j if i ≠ j. We model the number of new edges as a random variable because using a constant number of new edges, denoted by µ_0, as in the original model by Barabási and Albert (1999), fixes the total number of edges to (m − 2)µ_0 + 1 and makes the model too restrictive. Empirically this censored distribution performs better than a truncated Poisson distribution in terms of identifying µ.
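
As a concrete illustration, the number of new edges can be sampled by drawing a Poisson(µ) variate and censoring it to the support {1, ..., i − 1}. This is a minimal sketch, not the paper's code; the censoring (rather than truncating) form is consistent with the approximate mean µ + e^(−µ) used later in the article.

```python
import math
import random

def sample_num_new_edges(mu, i, rng=random):
    """Sample X_i, the number of new edges brought by the i-th node
    (i >= 3), as a Poisson(mu) draw censored to {1, ..., i - 1}."""
    # Draw Y ~ Poisson(mu) by inversion of the CDF.
    u, y, p = rng.random(), 0, math.exp(-mu)
    cdf = p
    while cdf < u:
        y += 1
        p *= mu / y
        cdf += p
    # Censoring: mass below 1 piles up at 1, mass above i-1 at i-1.
    return min(max(y, 1), i - 1)
```

For large i the upper bound rarely binds, and the mean of the censored draw is approximately µ + P(Y = 0) = µ + e^(−µ), the quantity denoted µ* later in the paper.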

Attaching edges to nodes
When node σ_i joins the network, according to the original PA rule, an existing node σ_j (1 ≤ j < i) gets connected to node σ_i, that is, G_σiσj = 1, with probability proportional to its current degree. To allow the degree of PA to vary, we allow this probability to be a mixture of the current degree and how recently the node has joined the network. To be more specific, the process of choosing the x_i nodes is equivalent to obtaining a weighted random sample without replacement from {1, 2, ..., i − 1}, with the weight w_j assigned to node σ_j given by (2), where γ ∈ [0, 1] can be seen as the parameter governing the degree of PA. When γ = 0, this reduces to the original PA rule. As γ increases, more weight is given to later nodes, and the inequality in the degrees of the nodes is reduced. Such inequality reduction is facilitated by assigning weights according to how recently the nodes joined the network, rather than equal weights, in the non-PA component. Note, however, that even in the extreme case γ = 1, where the degree distribution is unimodal and closer to symmetric, the model does not reduce to a BRG, whose degree distribution is Binomial with parameters (m − 1, p), where p is the edge inclusion probability, but only provides a crude approximation to it.
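
The attachment step can be sketched as weighted sampling without replacement. Since the exact weight formula (2) is not reproduced here, the weight below, w_j ∝ (1 − γ)·deg(σ_j) + γ·j, is a hypothetical stand-in chosen only to match the qualitative description: γ = 0 recovers pure preferential attachment, and larger γ favours more recently joined nodes.

```python
import random

def choose_targets(degrees, gamma, x, rng=random):
    """Choose x existing nodes by weighted sampling without replacement.

    degrees: current degrees of the existing nodes, indexed by entry
             order j = 0, ..., i-2.
    gamma:   mixing parameter in [0, 1]; gamma = 0 is pure PA.
    The weight formula is a hypothetical stand-in for equation (2).
    """
    n = len(degrees)
    w = [(1 - gamma) * degrees[j] + gamma * (j + 1) for j in range(n)]
    chosen = []
    for _ in range(min(x, n)):
        total = sum(w)
        u = rng.random() * total
        for j in range(n):
            u -= w[j]
            if u <= 0 and w[j] > 0:
                chosen.append(j)
                w[j] = 0.0  # remove from further draws (no replacement)
                break
        else:
            # numerical fallback: pick the last node still available
            j = max(k for k in range(n) if w[k] > 0)
            chosen.append(j)
            w[j] = 0.0
    return chosen
```

Setting a node's weight to zero after selection implements the "without replacement" renormalisation that makes the exact likelihood of this step expensive, as discussed in Appendix A.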

Constructing the epidemic
The Markovian epidemic process is constructed as follows. At time 0, the whole population is susceptible except individual 1, who is infected. Once infected at time Ĩ_i, individual i makes infectious contacts with its neighbours (according to G) at the points of a homogeneous Poisson process with rate β Σ_{j=1}^m G_ij, and stays infected until the end of the observation period. The random transmission tree P, with the same nodes as G and whose root is the node labelled 1, can be constructed simultaneously. If individual i makes infectious contact at an arbitrary time t_0 (governed by the aforementioned Poisson process) with a susceptible neighbour j, we write P_ij = 1, again using the adjacency matrix representation. This implies Ĩ_j = t_0, and P_ji = 0 as individual i cannot be re-infected. Moreover, P_ij = 1 implicitly implies that G_ij (= G_ji) = 1, as the epidemic can only spread through social connections, i.e. the edges in G. We also assume P_ii = 0, as an individual cannot be infected by themselves.
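
A minimal Gillespie-style sketch of this SI process on a given graph: because individual i contacts its neighbours at total rate β × degree(i), each susceptible-infectious edge transmits at rate β, so the time to the next infection is exponential with rate β times the current number of such edges. The function name and data structures are illustrative, not from the paper.

```python
import random

def simulate_si(adj, beta, rng=random):
    """Simulate the SI epidemic on an undirected graph.

    adj:  adjacency lists {node: set(neighbours)}; node 1 is the
          initial infected individual.
    beta: per-edge infection rate.
    Returns the infection times (I_tilde, with time 0 for node 1) and
    the transmission tree P as a dict child -> infecting parent.
    """
    t = 0.0
    times = {1: 0.0}
    tree = {}
    # Current susceptible-infectious edges, as (infectious, susceptible).
    si_edges = [(1, j) for j in adj[1]]
    while si_edges:
        rate = beta * len(si_edges)
        t += rng.expovariate(rate)      # waiting time to next infection
        i, j = rng.choice(si_edges)     # transmitting edge, chosen uniformly
        times[j] = t
        tree[j] = i                     # record P_ij = 1
        # j is now infectious: drop edges into j, add j's susceptible edges.
        si_edges = [(a, b) for (a, b) in si_edges if b != j]
        si_edges += [(j, k) for k in adj[j] if k not in times]
    return times, tree
```

On a connected graph the loop terminates once everyone is infected, mirroring the assumption that the observation period includes all infections.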

Likelihood and inference
We proceed to compute the likelihood, denoted by L, as a function of β, µ, γ and σ. We assume both G and P are given because, as argued by Britton and O'Neill (2002) and Groendyke et al. (2011), it is easier to condition on G and P in order to calculate L, and, if they are unobserved, to include them as latent variables in the inference procedure. Two conditional independence assumptions need to be noted. Because of the Markovian nature of the epidemic, P and Ĩ are independent given G. It is also common for the data ({Ĩ, P}) and (a subset of) the parameters (µ, γ, σ) to be independent a priori, given G, when models are formulated by centred parameterisations (Papaspiliopoulos, Roberts and Sköld, 2003). Therefore, the likelihood can be broken down into the components in (3). The dropping of any unrelated quantities can be explained by how the network and the epidemic are constructed in Section 2, and is demonstrated in the derivations of each component in Appendix A, the results of which are given in (4)-(6). We can proceed to inference because the likelihood (3) can be expressed explicitly as the product of (4)-(6). However, this complete likelihood is only useful for inference when G (and P) is given or known, which is usually not the case in real-life applications. As each of the m(m − 1)/2 potential edges is a binary random variable, integrating G out does not seem feasible, as we would have to average over all 2^{m(m−1)/2} possibilities. Also, unlike the scalar parameters (β, µ, γ), the support of σ is the permutation space of {1, 2, ..., m}. It is not meaningful to calculate a point estimate of σ and quantify its uncertainty using a frequentist approach. It is therefore quite natural to consider Bayesian inference, in which G is treated as latent variables whose posterior probabilities are computed simultaneously with those of the model parameters. It is sensible to assume the infection rate β, which relates to the intrinsic properties of the disease, to be independent a priori of µ and γ, which relate to the properties of the network. We assign the independent and vaguely informative priors in (7), where a/b is the mean of a random variable X ~ Gamma(a, b). By Bayes' theorem, the posterior density, up to a proportionality constant, can be obtained explicitly as the product of (4)-(7), so a natural candidate for inference is MCMC. We use a component-wise Metropolis-within-Gibbs (MWG) algorithm, described in detail in Appendix B, in which each of the parameters (β, µ, γ, σ) is sampled conditional on the other three parameters and the whole of G, while each potential edge of G is sampled conditional on all parameters and the other potential edges of G.

Simulation study
A simulation study is carried out to examine whether the inference algorithm in Appendix B can recover the true values of the parameters used to simulate from the model in Section 2. Specifically, we set m = 70 and consider all combinations of the following true values: γ = 0, 0.2, 0.5, 0.8, 1, β = 0.4, and µ = 4, 6, 8, 10. For each of the 20 combinations, we first simulate the PA network, then simulate the epidemic on the network. Because of how we construct and simulate from the model, we have complete information on the underlying graph G, the transmission tree P, and the infection times Ĩ. When G is given together with P and Ĩ, the MCMC algorithm only needs to be applied to β, µ, γ and σ, and it successfully recovers each of the three scalar parameters.
Also, the posterior correlations between β and µ and between β and γ are both close to zero, which makes sense because of the independence conditional on G, according to (9). However, we should focus on how good the algorithm is at inferring G given P and Ĩ only, because G is usually unknown in real-life data, while P being known is motivated by the data set in Section 5. Therefore, the complete MCMC algorithm for β, µ, γ, σ and G is applied to the same set of simulated data for each parameter combination.
A chain of 20000 iterations (no thinning) is obtained, with the first 10000 iterations discarded as burn-in. For µ and γ, a random walk Metropolis (RWM) step with a Gaussian proposal is used, and this step is made adaptive during burn-in in order to tune the proposal standard deviation, using the method outlined by, for example, Xiang and Neal (2014), Section 3. The algorithm in general fails to correctly identify any of the three scalar parameters. The true value of γ is plotted against its posterior distribution at different true values of µ in Figure 3. The absence of significant linear correlation between the posterior mean of γ and its true value, and the large degree of uncertainty at all true values considered, suggest that γ in particular is difficult to identify. Similarly to Figure 3, for different true values of µ we also plot the true value of µ against its posterior distribution in Figure 4, which suggests that, intriguingly, the latter does not depend on the former. This is still the case if we further fix γ to its true value.
The identifiability issue prompts us to consider dropping or fixing at least one of β, µ and γ. While β and µ are essential to characterise the epidemic and the network, respectively, leaving γ out means we no longer allow the degree of preferential attachment to vary. Therefore γ is fixed at 0, which is equivalent to reducing the network model to the original PA model, and will not be estimated.
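
The adaptive random walk Metropolis step mentioned above can be sketched generically as follows. The Robbins-Monro-style tuning of the log proposal standard deviation towards a target acceptance rate is one common variant of such adaptation, shown here for a univariate parameter with a placeholder log-posterior; it is an illustrative sketch rather than the exact scheme of Xiang and Neal (2014).

```python
import math
import random

def adaptive_rwm(logpost, x0, n_iter=20000, burn_in=10000,
                 target=0.234, rng=random):
    """Random walk Metropolis with a Gaussian proposal whose log
    standard deviation is tuned during burn-in towards a target
    acceptance rate, then held fixed."""
    x, lp = x0, logpost(x0)
    log_sd, chain = 0.0, []
    for it in range(n_iter):
        prop = x + rng.gauss(0.0, math.exp(log_sd))
        lp_prop = logpost(prop)
        accepted = rng.random() < math.exp(min(0.0, lp_prop - lp))
        if accepted:
            x, lp = prop, lp_prop
        if it < burn_in:
            # Nudge the step size up after acceptances and down after
            # rejections, with a vanishing adaptation weight.
            log_sd += ((1.0 if accepted else 0.0) - target) / (it + 1)
        else:
            chain.append(x)
    return chain
```

Freezing the proposal standard deviation after burn-in keeps the post-burn-in chain a valid (non-adaptive) Metropolis sampler.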
In Figure 5, the posterior shows no correlation with the true value in the first row, which corresponds to no G being given at all, but it converges towards the true value as the proportion of G given approaches 1. Also, it is now possible to recover the true value of µ when even, say, a quarter of the potential edges of G are given in addition to P.

[Figure 5: panels for combinations of µ = 8, 10; γ = 0, 0.2, 0.5, 0.8, 1; and m = 30, 50, 70.]

Rather than looking at the identifiability of one parameter alone, we can investigate the product α := β × µ*, where µ* := µ + e^{−µ}, the posterior of which can be obtained after inference. Plotting the true value of α against its posterior (not shown), in a similar way to Figure 5, reveals that it is identifiable regardless of its true value, m, or the proportion of G given. The introduction of µ* is due to the fact that the mean of the distribution in (1) is approximately µ + e^{−µ} when i is large.
As α is the product of the (unscaled) epidemic rate and the average network connectedness, we can interpret it as the network-scaled epidemic rate. Epidemics on two different networks are comparable through this parameter if the two networks have similar values of µ*. Such findings regarding α can be explained by looking at the results of one parameter combination in detail. The joint posterior density in Figure 6 displays an inverse relationship between β and µ* when no G is given, echoing the findings of Britton and O'Neill (2002), who showed an inverse relationship between β and the edge inclusion probability parameter p in their BRG model, and argued that "the model parameterisation permits different explanations of the same outcome". This means that we cannot simultaneously identify the parameters that characterise the epidemic rate and the network connectedness, respectively; being able to identify one relative to the other is as good as we can do.
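
Since α is a deterministic function of β and µ, its posterior draws can be computed after the MCMC run directly from the stored chains, for example:

```python
import math

def alpha_posterior(beta_chain, mu_chain):
    """Posterior draws of the network-scaled epidemic rate
    alpha = beta * mu_star, with mu_star = mu + exp(-mu), computed
    pointwise from paired MCMC draws of beta and mu."""
    return [b * (m + math.exp(-m)) for b, m in zip(beta_chain, mu_chain)]
```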

Application
Before applying the proposed model to the data set introduced in Section 1, we shall describe App Movement in more detail. The platform removes the resource constraints around mobile application development by providing an automated process for designing and developing mobile applications. The process begins with the Support Phase, whereby a community creates a campaign page in response to a community need and engages the community in supporting the concept by promoting and sharing the campaign on online social networks. When the target of 150 members supporting the campaign within 14 days has been met, ensuring an active user base, the campaign proceeds to the Design Phase, in which ideas regarding the design of the mobile application are voted on. Once supporters have cast their votes, the platform incorporates the highest rated design decisions and automatically generates the mobile application. Since its launch in February 2015, App Movement has been adopted by over 50,000 users supporting 111 campaigns, 20 of which have been successful in reaching their target number of supporters, with 18 generated mobile applications currently available in the Google Play Store and the Apple App Store, for Android and iOS devices, respectively.
The design of logging the usage of App Movement enables us to convert the data into a format suitable for modelling and fitting. To illustrate this, assume that user 1 shares a campaign page, with uniform resource locator (URL) A, on a certain social network. When user 2, who is connected to user 1 and has never viewed the campaign page, clicks on URL A, a new URL B unique to user 2, directing to a page with the same content as the campaign, is created. Any subsequent visits to the same page by users 1 and 2 will be redirected to URLs A and B, respectively. Therefore, within each campaign, there is a one-to-one relationship between the URLs and the users. We can say that user 1 infects user 2 at the time when URL B is created. Similarly, the users associated with the URLs created by clicking URL B can be said to be infected by user 2. This process is illustrated in the flow diagram in Figure 7. By carrying out this process of connecting users with those who infected them until we reach, in tree terminology, the root and all the leaves, we end up with both the transmission tree and the infection times of the campaign-sharing epidemic. The inference outlined in Section 3 can then be carried out. The model is fitted to each of ten campaign epidemics separately, assuming for simplicity that they have no influence on each other, with m ranging from 334 to 402. Each campaign corresponds to a different proposed application. The inference algorithm is used with γ fixed at 0. For each epidemic, 5 chains of length 2000 (no thinning) are obtained, with the first 1000 iterations discarded as burn-in, during which the proposal standard deviation for µ is tuned. The traceplots and posterior densities of β and µ are plotted in Figure 8, for the model fit to the epidemic visualised in Figures 1 and 2.
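
As a sketch of this conversion, assuming a hypothetical log format of (url, parent_url, user, time) records (the actual App Movement schema is not reproduced here), the transmission tree and relative infection times can be recovered in a single pass:

```python
def build_epidemic(records):
    """Convert sharing logs into infection times and a transmission tree.

    Each record is a hypothetical tuple (url, parent_url, user, time):
    clicking parent_url created the new unique page url for user at the
    given time; the seed record has parent_url None.  Returns infection
    times relative to the first infection (I_tilde) and the transmission
    tree as a child -> infecting-parent dict.
    """
    by_url = {url: (user, t, parent) for url, parent, user, t in records}
    times, tree = {}, {}
    for user, t, parent in by_url.values():
        times[user] = t
        if parent is not None:
            tree[user] = by_url[parent][0]  # owner of the clicked URL
    t0 = min(times.values())                # time of the first infection
    return {u: t - t0 for u, t in times.items()}, tree
```

The one-to-one URL-user relationship within a campaign is what makes the `by_url` lookup well defined.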
The acceptance rate for µ is 0.269, and is similar for the other 9 epidemics considered. For any parameter θ, we denote by E(θ|P, Ĩ) its posterior mean. The posterior means and standard deviations of β, µ and α for all epidemics are reported in Table 1. Also reported is the correlation between β and µ* = µ + e^{−µ}, which is modest but consistently negative. We can see that E(α|P, Ĩ) does not depend on m and differs significantly across the epidemics. Combined with the fact that the correlation with µ (or µ*) is modest (not shown), α can be seen to be successfully identified.

Model comparison or selection is difficult here because the BRG model by Britton and O'Neill (2002) is not nested in our proposed model, even when γ is treated as a parameter. Nevertheless, we fit the BRG model to the same campaign epidemics, focusing on the parameter results to examine its goodness-of-fit. The posterior means and standard deviations of the parameters are reported in Table 2, which shows that E(p|P, Ĩ) is of the same magnitude across all the epidemics.
For the singled-out epidemic with m = 335, compared to the average degree E(p|P, Ĩ) × (m − 1) = 2.158, which means each user is on average connected to slightly more than two other users, the most infectious user has infected 18 other users. If we use E(p|P, Ĩ) as the true value of p and simulate a BRG, the probability that one particular user is connected to at least 18 users is 1.453 × 10^{−11}. Combining these two quantities with the independence of potential edges, we can see that it is very unlikely that a BRG generated in this way will be connected, let alone overlay P.
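
The quoted tail probability can be reproduced directly from the Binomial degree distribution of a BRG; the helper below sums the pmf in log space (via `lgamma`) to avoid underflow.

```python
import math

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), with each pmf term computed
    in log space to avoid underflow."""
    total = 0.0
    for x in range(k, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(x + 1)
                   - math.lgamma(n - x + 1)
                   + x * math.log(p) + (n - x) * math.log(1 - p))
        total += math.exp(log_pmf)
    return total

# Degree of one node in a BRG with m = 335 and mean degree 2.158:
print(binom_tail(334, 2.158 / 334, 18))  # ~1.45e-11, as quoted above
```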
On the other hand, the network construction described in Section 2 ensures that the PA network generated is always connected. Finally, contrary to the clear inverse relationship between β and p reported by Britton and O'Neill (2002) for both simulated and real-life data, the joint posterior of (β, p) can be well approximated by a bivariate Gaussian distribution, for all epidemics reported here. Combined with the fact that the correlations are small (last column of Table 2), β and p can be said to be close to independent a posteriori. This suggests that the presence of P actually makes p (and β) identifiable, but the estimate of the successfully identified p now shows a poor fit of the BRG model to our data.
While the values of E(p|P, Ĩ) in Table 2 are low, those of E(β|P, Ĩ) are similar to their PA counterparts in Table 1, but are unusually high compared to real-life epidemics. This is because, while real-life epidemics usually spanned days (see, for example, Britton and O'Neill (2002) and Neal and Roberts (2005)), the campaign epidemics spanned weeks (see the time scale of Figure 1). Out of the ten epidemics reported here, epidemics 1 and 6 spanned the longest and the shortest, with periods of 187.371 and 36.7 days, respectively, which explains why their respective values of E(β|P, Ĩ) lie at opposite extremes among those reported in Table 1.
Using the posteriors of β and µ, we can simulate the network epidemic and obtain the predictive distribution of the cumulative counts over time, whose 95% predictive intervals (PI) overlay the observed data in Figure 9. While the early period of the epidemic lies within the 95% PI, the slower periods of infection towards the end of the observation period are more difficult to reproduce.

Discussion
We have described a network epidemic model which combines the SI epidemic model with the PA network model, with inference carried out by MCMC, as the likelihood can be computed explicitly. The results of two simulation studies suggest dropping one parameter in order to make the network-scaled epidemic rate parameter identifiable. The model and inference algorithm are successfully applied to ten different "sharing" epidemics from a set of online community commissioning data. The results suggest that the PA model is a better alternative for network generation than the BRG model for data sets of epidemics taking place on social networks.
Several modifications can potentially make the model more useful. First of all, information on the average connectedness or the degree distribution of social networks can be solicited beforehand, so that an informative prior can be assigned to µ and/or γ, at least one of which can then be identified. Given the vast amount of data about social networks such as Facebook and Twitter freely available on the Internet, such information should be possible to obtain.
Another way of gaining information about the parameters, particularly for the App Movement data, is to combine the epidemics in the modelling and inference. As users on the social network are usually involved in more than one epidemic, we can pool several epidemics which overlap in their users, and build a larger underlying network G comprising all the users involved, which is guaranteed to be connected. As a user may be infected in one epidemic but not another, each of the epidemics may then be incomplete. While the likelihood calculations, for example (5), may differ slightly, and each epidemic has a different rate parameter (whether network-scaled or not), the inference procedure is basically the same, and only one set of parameters (µ, γ) is used to govern the network generation. Borrowing strength from other epidemics in this way will utilise more of the information available in the data, and result in parameters being better identified or more precisely estimated.
That the epidemic is Markovian given the network is a simplistic assumption, which has been shown to be inadequate for real-life epidemics. To relax this assumption, one can use alternative distributions for the infectious period, such as the Gamma distribution, which is used by, for example, Xiang and Neal (2014). This is not applicable here, however, because the SI model is used instead of the more popular SIR model, or any compartment model in which being infectious is not the final state. Instead of using a single epidemic rate β for all the infections, which implies that the interarrival times are exponentially distributed, one can use a different rate β_i for the infection of individual i, where β_i is drawn from a certain probability distribution. This approach to modelling non-Markovian processes, proposed by Masuda and Rocha (2017), can be applied to any kind of compartment model, and can encompass a range of interarrival time distributions simply by choosing a different probability distribution from which β_i is drawn. When it comes to the inference, the interarrival time distribution parameter(s) and each β_i, given all other epidemic rate parameters, will be updated individually, on top of the existing parameters and latent variables. Given that most of the computational time lies in updating the potential edges of G one by one, this should not add much to the computational burden. On the other hand, including users who have not seen or joined the campaign, to correct for the potential overestimation of β, will add to the computational burden, because the number of users not infected by a particular campaign is vast compared to the number infected.
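A minimal sketch of the individual-rate idea: each individual draws their own rate β_i and, given β_i, an exponential waiting time, so the marginal interarrival distribution is no longer exponential. The Gamma distribution and the parameter values here are illustrative assumptions for the sketch, not a prescription from Masuda and Rocha (2017).

```python
import random

def draw_waiting_time(rng, mean_rate=0.4, shape=2.0):
    """Draw one interarrival time under a heterogeneous-rate SI model.

    beta_i ~ Gamma(shape, scale = mean_rate / shape), so E[beta_i] = mean_rate;
    given beta_i the waiting time is Exponential(beta_i), but marginally the
    waiting times are heavier-tailed than a single-rate exponential.
    """
    beta_i = rng.gammavariate(shape, mean_rate / shape)
    return rng.expovariate(beta_i)

rng = random.Random(1)
times = [draw_waiting_time(rng) for _ in range(10000)]
print(len(times))
```

Choosing a different distribution for β_i (e.g. lognormal) changes the marginal interarrival distribution without altering the conditional Markov structure, which is what makes the approach compatible with any compartment model.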

A Likelihood derivation
This appendix derives each component of the likelihood, in reverse order of (3).
First, the quantity π(G|µ, γ, σ) can be divided into two components, corresponding to the processes described in Sections 2.1 and 2.2, respectively. To derive the contribution of the independent sequence of random numbers of new edges to π(G|µ, γ, σ), we first establish its relationship with G. Specifically, for 3 ≤ i ≤ m, X_i is the sum of the first i − 1 elements of the This makes sense, as the sum of new edges is equal to the total number of edges minus one. Using (1), we can calculate the likelihood of the sequence of random numbers of new edges, given in (10), where 1{A} is the indicator function of event A, and |G| :=

For the process of attaching edges to nodes, it is straightforward to compute the likelihood using (2). However, because of the nature of weighted sampling without replacement, we have to, for each i, calculate the probability conditional on each of the X_i! permutations of the selected nodes, and then average over all X_i! probabilities to arrive at the likelihood. As calculating the exact likelihood in this way is not computationally feasible, because the factorial grows faster than the exponential function, we approximate the likelihood based on one permutation of weighted sampling without replacement instead. The contribution of the new edges brought by node i is given below, where w_j is given by (2). Therefore, the likelihood of the process of adding new edges is given in (11). Multiplying (10) and (11) gives the expression of π(G|µ, γ, σ) in (6).

Second, π(Ĩ|G, β) contains contributions from the m − 1 infections and from the connections through which no infections occurred, which is the same as (5), and confirms that the infection times are independent of the transmission tree, as the infection mechanism is a Poisson process (Britton and O'Neill, 2002); hence the dropping of P in the function argument.
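As a concrete illustration of the one-permutation approximation, the following sketch computes the probability of a particular ordered draw under weighted sampling without replacement. The node labels and weights are hypothetical, and `weights` stands in for the attachment weights w_j of (2).

```python
def ordered_wswor_prob(weights, order):
    """Probability of drawing the nodes in `order`, in that order, by
    weighted sampling without replacement: at each step the chance of
    picking a node is its weight over the total weight still remaining.
    This is the contribution of one permutation; the exact likelihood
    would average such probabilities over all X_i! permutations.
    """
    remaining = sum(weights.values())
    prob = 1.0
    for node in order:
        prob *= weights[node] / remaining
        remaining -= weights[node]
    return prob

# Hypothetical attachment weights for three existing nodes.
w = {"u": 2.0, "v": 1.0, "t": 1.0}
p = ordered_wswor_prob(w, ["u", "v"])
print(p)  # (2/4) * (1/2) = 0.25
```

Evaluating this product for a single permutation costs O(X_i) per node, whereas averaging over all X_i! permutations is infeasible, which is the motivation for the approximation in the text.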
Third, the contribution to the likelihood by P is straightforward, as π(P|G) "is the uniform distribution on the set of all possible infection pathways" (Britton and O'Neill, 2002). We have to ensure that nobody infects the initial infected (individual 1), and that, for each individual j of the remaining m − 1 individuals, there is one and only one neighbour who is infected prior to individual j.

Sampling γ: The parameter γ ends up in L_2(G; σ, γ) in the likelihood. We propose γ* from a symmetrical proposal q(·|γ) and accept γ* with the corresponding Metropolis–Hastings probability.

Sampling σ: To update the ordering as a whole, we propose σ*, which is accepted with the analogous Metropolis–Hastings probability. This requires a symmetrical proposal on the permutation space. Specifically, we use the "random walk by insertion" method of Bezáková, Kalai and Santhanam (2006). Two indices i and j are first sampled with replacement from {1, 2, . . ., m} uniformly. Without loss of generality, assume that i < j. While the current ordering is σ = (σ_1, . . ., σ_m), the proposed ordering is σ* = (σ_1, . . ., σ_{i−1}, σ_{i+1}, . . ., σ_{j−1}, σ_j, σ_i, σ_{j+1}, . . ., σ_m).
The intuition is that the ith card of a deck of cards is taken out and inserted in the jth position.
As (i, j) and (j, i) have the same probability of being sampled in their particular orders, the proposal is symmetrical. This method is, according to Bezáková et al. (2006), more efficient than the random swap method, in which an arbitrary pair of adjacent indices is picked, and a swap between them produces the proposed ordering.
Theoretical properties are not yet clear enough to provide guidelines on optimising the number of random insertions in each MCMC iteration. As the majority of the computation time per iteration is taken up by updating all potential edges of G individually, which will be described below, we simply propose to update the ordering m times in each iteration, so that each index will on average be picked and inserted once. It should be noted, however, that an index can change its position even if it is not selected, as long as its position lies between i and j inclusive.
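A minimal sketch of the random-walk-by-insertion proposal described above. Indices here are 0-based, and the Metropolis–Hastings accept/reject step is only indicated by a comment.

```python
import random

def propose_insertion(sigma, rng):
    """Random-walk-by-insertion proposal (Bezáková et al., 2006):
    sample indices i and j uniformly with replacement, take out the
    element at position i and insert it at position j. The move is
    symmetrical on the permutation space."""
    sigma = list(sigma)           # copy; do not mutate the current state
    m = len(sigma)
    i = rng.randrange(m)
    j = rng.randrange(m)
    card = sigma.pop(i)           # take the "card" out of the deck...
    sigma.insert(j, card)         # ...and reinsert it at position j
    return sigma

rng = random.Random(0)
sigma = [1, 2, 3, 4, 5]
# Update the ordering m times per iteration, as proposed in the text.
for _ in range(len(sigma)):
    proposal = propose_insertion(sigma, rng)
    # ...accept or reject with the Metropolis-Hastings probability...
    sigma = proposal              # sketch only: always accept
print(sorted(sigma))              # still a permutation of [1, 2, 3, 4, 5]
```

Because (i, j) and (j, i) are equally likely, the probability of proposing σ* from σ equals that of proposing σ from σ*, so no proposal ratio enters the acceptance probability.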
Because of the difference between the network ordering and the epidemic ordering, for each pair (i, j) we proceed to sample G_{σ_i σ_j} instead of G_ij. This poses no problem in practice, as we go through all combinations of (i, j) satisfying 1 ≤ i < j ≤ m. Because of the one-to-one relationship between (i, j) and (σ_i, σ_j) given σ, eventually all the potential edges will be updated. For notational convenience, we also define s = min(σ_i, σ_j) and t = max(σ_i, σ_j), which implies Ĩ_t > Ĩ_s.
If P_st = 1, as mentioned in Section 2.3 and implied by (4), the four equivalent quantities, namely G_{σ_i σ_j}, G_{σ_j σ_i}, G_st and G_ts, are equal to 1 with posterior probability 1, regardless of all other parameters and G_{−st}. Therefore, we shall only consider π(G_st | P_st = 0, G_{−st}, · · · ) in detail.
Before doing so, we observe that β can be integrated out of the joint posterior in (8), which is achieved by substituting (5) and the prior of β in (7) into (9), followed by integration with respect to β.
The posterior distribution π(G_st | P_st = 0, G_{−st}, · · · ) can then be obtained directly. The interpretation of the right-hand expression in the two lines is as follows. The numerator and the denominator correspond to the network likelihood and the transmission tree likelihood, respectively, under the corresponding scenario of whether nodes σ_i and σ_j are network neighbours, which also affects the summation in (12), hence the difference between the two lines in the bracketed term. It can be seen that the computational burden lies in computing the network likelihood. Assume that we are going to update G_st, whose current value is x ∈ {0, 1}, and that we have retained the likelihood under G = G^x_{−st} after updating the previous potential edge.
We now have to calculate the network likelihood under the alternative scenario G = G^{1−x}_{−st}. The summations involved in (10) and (11) make it impossible to factorise the network likelihood, which therefore requires the whole of G^{1−x}_{−st} to compute.
Sampling P: Although the inference algorithm above assumes the transmission tree is part of the data, it can be extended to include sampling of P if the tree is unknown, similar to how G is treated as latent variables and inferred. As mentioned in Section 3, P follows a uniform distribution on all possible infection pathways given G, and is thus independent of how G is generated in the first place. As the same Gibbs step for sampling P described by Britton and O'Neill (2002) can therefore be used, it will not be repeated here.

Figure 1: Cumulative count of infected for an epidemic of a campaign. The time points where the count increments are the infection times I.

Figure 2: Network representation of the transmission tree P for the same campaign epidemic as shown in Figure 1.
A second simulation study is carried out, this time with m allowed to take different values, namely m = 30, 50, 70, combined with the following true values: β = 0.4 and µ = 4, 6, 8, 10. For each parameter combination, we also allow different proportions of G in the simulated data to be known in addition to P. A proportion of 0 means only P is known, while a proportion of 1 means both P and G are given. The true value of µ against its posterior distribution is plotted for different combinations of m and proportions of G in Figure 5. The posterior of µ again shows

Figure 3: True value of γ against its posterior distribution (horizontal line) and mean (dot), at different true values of µ. The dashed line is the line y = x.

Figure 4: True value of µ, slightly adjusted for visibility, against its posterior distribution (horizontal line) and mean (symbol), at different true values of γ, which is estimated (circle) or fixed (triangle) to its true value in the MCMC algorithm.

Figure 6:
Figure 5: True value of µ against its posterior distribution (horizontal line) and mean (dot), at different combinations of m and proportions of G known in addition to those implied by P. The dashed line is the line y = x.

Figure 8:
Figure 7: Flow diagram of App Movement sharing. A straight arrow represents the generation of the labelled URL when the user visits the campaign page, while each solid-dashed curved arrow pair represents the click of the labelled URL and the direction of infection, respectively.

Figure 9: Cumulative count of infected for the campaign epidemic shown in Figure 1, overlaid by 95% predictive intervals of simulated cumulative counts over time (red dashed lines).

Table 1: Posterior mean (standard deviation) and correlation of the scalar parameters in the PA model fitted to ten different campaign epidemics.

Table 2: Posterior mean (standard deviation) and correlation of the parameters in the BRG model fitted to ten different campaign epidemics.