# Spreading dynamics on complex networks: a general stochastic approach

## Abstract

Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (*e.g.*, epidemics propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.

### Keywords

Spreading dynamics · Complex networks · Stochastic processes · Contact networks · Epidemics · Markov processes

### Mathematics Subject Classification (2000)

93A30 · 05C82 · 60J28 · 92D25 · 92D30

## 1 Introduction

Mathematical modelling has proven a valuable tool when addressing population dynamics problems, both for public health and ecological issues. The increased availability of powerful computer resources has facilitated the use of agent-based models and other complex modelling approaches, all accounting for numerous parameters and assumptions (Auchincloss and Diez Roux 2008; McLane et al. 2011; Broeck et al. 2011). Our confidence in these models may increase when they are shown to agree with empirical observations and/or with previously accepted models. However, when discrepancies appear, the complexity of these computer programs may conceal the effects of underlying assumptions, making it difficult to isolate the source of disagreement. While analytical approaches offer more insights on the underlying assumptions, their use is often restricted to simpler interaction structures and/or dynamics.

The purpose of this paper is to systematically model the global behaviour of stochastic systems composed of numerous elements interacting in a complex way. “Complex” here implies that interactions among the elements follow some nontrivial patterns that are neither perfectly regular nor completely random, as often seen in real-world systems. “Stochastic” implies that the system may not be completely predictable to an observer and that a probabilistic solution is sought.

We present (Sect. 2) a general modelling scheme where network theory (Barrat et al. 2008; Boccaletti et al. 2006; Durrett 2007; Newman 2010) accounts for the interactions between the elements of the system and where a birth-death Markov process (Gardiner 2004) models the stochastic dynamics. Since a tremendous amount of information may be required to store the state of the whole system, we seek the part of this information that is important for the problem at hand and then approximate the dynamics by tracking only this limited subset. Part of the discarded data may still affect, albeit weakly, the behaviour of the system. We fill this knowledge gap by inferring the missing information such that it is consistent both with the information we follow and any other prior information that is available to us.

An important part of this paper (Sect. 3) provides explicit examples of these general ideas. Keeping in mind that the generality of our approach is in the mathematical description and not in the studied cases, we focus our study on spreading processes such as the propagation of infectious diseases on contact networks (Keeling and Eames 2005; Bansal et al. 2007; Danon et al. 2011; House and Keeling 2011; Sharkey 2011; Taylor et al. 2012). Specifically, our examples correspond to either one of two standard epidemic models, the susceptible-infectious-susceptible (SIS) and the susceptible-infectious-removed (SIR) dynamics, for which we also adopt the relevant vocabulary. While our first examples study simpler cases, facilitating the understanding of our systematic method, the later models show how the same approach applies to more complex interaction structures.

We then compare and analyse the results of these examples (Sect. 4). This reveals some general considerations for both the accuracy and the complexity of our modelling approach. We find that treating the inferences of missing information explicitly helps systematize the model development and highlights numerous possibilities for future developments. An important simplification occurs for SIR spreading processes and related dynamics, leading to an *exact* model with a small number of dynamical variables.

We conclude (Sect. 5) on how our general approach may be applied to population dynamics in general, as well as to other spreading processes such as the cascading extinctions of species in food webs (Rezende et al. 2007; Bascompte and Stouffer 2009; Dunne and Williams 2009). Returning to the problem of understanding the source of discrepancies in complex models, we explain how *modelling these models* with our method could help identifying important assumptions and isolating the source of disagreement. Mathematical details and further generalizations are presented in two separate Appendices.

## 2 General modelling scheme

This section is not concerned with any specific model per se, but instead seeks to answer a “meta-modelling” question: given limited resources, how can one design a simple, yet reliable, model? Specific examples are deferred to Sect. 3.

### 2.1 Levels of abstraction

- Level I: The *real-world system* that we desire to model. Examples include the propagation of infections in populations and/or the formation and dissolution of groups and/or partnerships. Such systems may be very complex, and we definitely do not know everything about them. We are particularly interested in a subset of these unknowns, which we will hereafter refer to as *open questions* (*e.g.*, “How does the population structure affect the spread of infections?”).
- Level II: A (possibly hypothetical) agent-based Monte Carlo computer simulation, which we call the *full system*. Since we do not know everything about Level I, obtaining Level II clearly requires some assumptions and/or approximations. Nonetheless, we *assume* that running this simulation on an infinitely powerful computer would provide some (perhaps partial) answers to our open questions. In this sense, Level II *approximately reproduces* the behaviour of Level I: understanding Level II provides some understanding of Level I.
- Level III: A *simplified system* which requires much less computational resources than the Level II full system. In practice, we do not have access to an infinitely powerful computer, and regular computers may not be sufficient for our needs (which may include, for instance, time-consuming sensitivity analysis for parameter estimation). To obtain Level III, we must endeavour to extract the relevant parts of Level II and remove everything else. We say that Level III *approximately reproduces* the behaviour of Level II if it (approximately) provides the same answers to the Level I open questions.

Obtaining a reliable Level II description of a given Level I system is *outside the scope of this article*, meaning that, given a complex system (Level I) to analyse, it is left to the experts in the appropriate field to provide a reliable Level II.

Another point of importance is that there is no a priori guarantee that we will figure out the correct “relevant parts” to obtain a simple Level III model that approximately reproduces the behaviour of the Level II simulations. In fact, it is not even guaranteed that such a simplification exists. Nonetheless, we may a posteriori test whether or not the approach was successful by directly comparing the outcomes predicted by the Level II simulations to those of the Level III model.

For the purpose of such comparison, the examples presented in Sect. 3 have been chosen simple enough for us to actually perform the Level II simulations. Ultimately, we would want to acquire enough confidence in our Level III models to directly address the Level I open questions, hence skipping the costly Level II simulations. We also identify some factors influencing the success of our approach in Sect. 4.

### 2.2 States, rules, and priors

- \(Z(t)\): The *state of the full system* at time \(t\). This corresponds to any form of storage available to the Level II simulations (*e.g.*, heap, stack, registers, hard drive). Further details concerning this object are delayed to Sect. 2.3.
- \(V\): The *rules of the full system*. This corresponds to the program itself: given the state \(Z(t)\) at time \(t\), \(V\) tells us towards which state \(Z(t+{{\mathrm{d}}}t)\) we should update the memory after an infinitesimal time step \({{\mathrm{d}}}t\). It is easy to show that such a simulation respects the Markov property: knowing the current state \(Z(t)\) of the computer’s memory, none of the past states may further affect our probability estimate for its future state.

- \(X(t)\): The *state of the simplified system* at time \(t\). We will typically be interested in \(X(t)\) that are *much smaller* than \(Z(t)\). Hence, knowledge of \(X(t)\) typically conveys only *partial* information on \(Z(t)\). As a trivial example, if \(Z(t) \in \{ 0, 1 \}^N\) is a sequence of \(N\) zeros or ones, then we could choose \(X(t) \in \{ 0, 1, 2, \ldots , N \}\) as specifying the total number of ones in this sequence. Further details concerning this object are delayed to Sect. 2.4.
- \(W\): The *rules of the simplified system*. Given the state \(X(t)\) at time \(t\), \(W\) provides a probability distribution for \(X(t+{{\mathrm{d}}}t)\) after an infinitesimal time step \({{\mathrm{d}}}t\). Although \(V\) was *shown* to respect the Markov property, we *choose* \(W\) to also be Markovian.
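As a toy illustration of the relationship between \(Z(t)\) and \(X(t)\) in the trivial example above, the projection from a full binary state to the count of ones can be sketched as follows (a sketch; the function names are ours, not the paper's notation):

```python
import random

def full_state(N, rng):
    """A Level II state Z(t): a sequence of N zeros and ones."""
    return [rng.randint(0, 1) for _ in range(N)]

def simplified_state(Z):
    """The Level III state X(t): the total number of ones in Z(t).
    Many different Z(t) map to the same X(t), so information is lost."""
    return sum(Z)

rng = random.Random(42)
Z = full_state(10, rng)
X = simplified_state(Z)   # X lies in {0, 1, ..., N}
```

The many-to-one nature of `simplified_state` is exactly what makes the inference step below necessary.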

We want the simplified system to approximately reproduce the behaviour of \(Z(t)\). Since \(X(t)\) may only convey partial information on \(Z(t)\), a crucial step is to assess the likelihood of each Level II state by using as much information as is available to us. For this purpose, we define one more object.

- \(Y\): The *prior information* that can be used in a Bayesian inference of \(Z(t)\) from \(X(t)\). Indeed, not all \(Z(t)\) may be equally likely, and some constraints may even *forbid* large subsets of them. In the previous trivial example, \(X(t)\) specifies the total number of ones in \(Z(t) \in \{ 0, 1 \}^N\), and our best guess may thus be to assign equal probability to every \(Z(t)\) satisfying this constraint (and probability zero to the violating cases). However, complications may occur: if some mechanism in \(V\) disfavours the formation of long sequences of \(0\) in \(Z(t)\), then it may be necessary to account for this when translating \(V\) to \(W\). Such specifications are also done through \(Y\).

- \({\mathbb {P}}\!\left( {Z(t)}\vert {X(t),Y}\right) \) is the probability for the Level II system to be in state \(Z(t)\) at time \(t\), given that the Level III system is in state \(X(t)\) at time \(t\), and provided the prior information \(Y\);
- \({\mathbb {P}}\!\left( {Z(t)}\vert {Y}\right) \) is the probability for the Level II system to be in state \(Z(t)\) at time \(t\), given that we only know the prior information \(Y\);
- \({\mathbb {P}}\!\left( {X(t)}\vert {Z(t),Y}\right) \) is the probability for the Level III system to be in state \(X(t)\) at time \(t\), given that the Level II system is in state \(Z(t)\) at time \(t\), and provided the prior information \(Y\); and
- \({\mathbb {P}}\!\left( {X(t)}\vert {Y}\right) \) is the probability for the Level III system to be in state \(X(t)\) at time \(t\), given that we only know the prior information \(Y\).

1. Use \({\mathbb {P}}\!\left( {Z(t)}\vert {X(t),Y}\right) \) to infer, from the available information (*i.e.*, \(X(t)\) and \(Y\)), the distribution of Level II states \(Z(t)\) that is currently predicted by our Level III model.
2. Use the Level II rules \(V\) to obtain the updated distribution of Level II states \(Z(t+{{\mathrm{d}}}t)\).
3. Use \({\mathbb {P}}\!\left( {X(t+{{\mathrm{d}}}t)}\vert {Z(t+{{\mathrm{d}}}t),Y}\right) \) to translate \(Z(t+{{\mathrm{d}}}t)\) back to a distribution of Level III states, \(X(t+{{\mathrm{d}}}t)\).
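The three steps above can be sketched as a Monte Carlo update loop (a schematic sketch; `infer`, `evolve`, and `project` are our own stand-ins for the three conditional distributions):

```python
def simplified_update(X, Y, infer, evolve, project, n_samples=1000):
    """Estimate the distribution of X(t+dt) from X(t):
    1. infer a Level II state Z(t) from X(t) and the prior Y;
    2. evolve Z(t) by one time step with the full rules V;
    3. project Z(t+dt) back down to a Level III state X(t+dt)."""
    counts = {}
    for _ in range(n_samples):
        Z = infer(X, Y)
        Z_next = evolve(Z)
        X_next = project(Z_next)
        counts[X_next] = counts.get(X_next, 0) + 1
    return {x: c / n_samples for x, c in counts.items()}
```

In practice, the birth-death formalism of Sect. 2.5 replaces this brute-force sampling with transition rates on \(X(t)\) directly.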

Up to now, our discussion has remained very general—too general in fact to be of much practical use. Since we are primarily interested in systems composed of many elements interacting through complex patterns, it is natural to reconsider the previous quantities in terms of networks.

### 2.3 Networks

A *network* (graph) is a collection of *nodes* (vertices) and *links* (edges) (Barrat et al. 2008; Boccaletti et al. 2006; Durrett 2007; Newman 2010). Nodes model the elements of a system; links join nodes pairwise to represent interactions between the corresponding elements. Two nodes sharing a link are said to be *neighbours* and the *degree* of a node is its number of neighbours. The part of a link that is attached to a node is called a *stub*: there are two stubs per link and each node is attached to a number of stubs equal to its degree. A link with both ends leading to the same node is called a *self-loop* and *repeated links* occur when more than one link join the same two nodes.
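These structural notions are straightforward to compute from an edge list; a minimal sketch (the edge-list representation is our choice):

```python
from collections import Counter

def degrees(edges):
    """Degree of every node; a self-loop contributes two stubs
    (hence 2) to the degree of its node."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def self_loops(edges):
    """Links whose two stubs attach to the same node."""
    return [e for e in edges if e[0] == e[1]]

def repeated_links(edges):
    """Pairs of nodes joined by more than one link."""
    multiplicity = Counter(frozenset(e) for e in edges if e[0] != e[1])
    return [set(pair) for pair, m in multiplicity.items() if m > 1]
```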

Links may (or may not) be directed: an *undirected* link represents a bidirectional and symmetric interaction between the two linked nodes, while a *directed* link represents interactions that are either unidirectional or asymmetrical. A network that has only undirected links is said to be *undirected*, one that has only directed links is *directed* and one that has both is *semi-directed*.

There are systems for which specifying the state \(Z(t)\) exactly amounts to specifying the network structure. However, most systems are not purely structural: they are composed of elements that, by themselves, require additional information to be properly characterized. Hence, we assign to each node (resp. link) a *node state* (resp. *link state*) that specifies the intrinsic properties of the corresponding element (resp. interaction) in the system. There is a total of \(\mathcal {N}\) discrete node states (resp. \(\mathcal {L}\) discrete link states) and the intrinsic state \(\nu \) of a given node (resp. \(\ell \) of a given link) may change in time to any of these accessible values; the special case \({\mathcal {N}} = 1\) (resp. \({\mathcal {L}} = 1\)) corresponds to the situation where every node (resp. link) remains intrinsically identical during the whole process. At any given time \(t\), the state \(Z(t)\) of the full system is specified by the intrinsic state of all of its components (nodes and links) in addition to the network structure governing their interactions. While each node (or link) may be in only one intrinsic state at any given time, different pieces of information may be encoded in this single state.

### 2.4 Motifs

Specifying the complete structure of a complex network requires a tremendous amount of information. Since we want the state \(X(t)\) of a simplified system to be of manageable size, approximations have to be made. A convenient way to do so, and one that has proven to give good results in the past (House et al. 2009; Karrer and Newman 2010; Gleeson 2011; Marceau et al. 2010; Hébert-Dufresne et al. 2010; Marceau et al. 2011; Noël et al. 2012), is to specify the network structure through its building blocks.

A network *motif* is a pattern that may appear a number of times in the network. For example, two linked nodes form a *pair motif* while three nodes all neighbours of one another form a *triangle motif*. Motifs may encode intrinsic node states or other relevant information; further details and examples are provided throughout Sect. 3 as well as in Appendix A.

We define the *state vector* \(\mathbf{x}(t) \in {\mathbb {N}}_0^d\) of a system as a vector of \(d\) natural numbers (including zero) specifying how many times different motifs appear in the network at a given time \(t\). We may perceive these \(d\) tracked motifs as building blocks that should be attached together to form a network structure (see Fig. 2), and we now specify the simplified system in these terms. Hence, we enumerate the available building blocks with the simplified system state \(X(t) = \mathbf{x}(t)\), whereas the prior information \(Y\) specifies how such blocks may be attached. There will usually be numerous valid ways to attach the blocks, some more probable than others. Given the available information, the resulting distribution is our best estimate for \({\mathbb {P}}\!\left( {Z(t)}\vert {\mathbf{x}(t), Y}\right) \).

By judiciously choosing the motifs enumerated in \(\mathbf{x}(t)\) and by specifying informative prior information \(Y\), one may hope for this probability distribution to be densely localized around the “real” value of \(Z\) in the full system. This mapping can then be used to convert the rules \(V\) of the full system to the rules \(W\) of the new simplified one. We approach this problem from the perspective of birth-death Markov processes.

### 2.5 Birth-death stochastic processes

In a *birth-death* process (Gardiner 2004; Van Kampen 2007), the elements composing a system may be destroyed (death) while new ones may be created (birth). It is therefore natural to state the rules \(W\) of our simplified system in those terms: any change in the state vector \(\mathbf{x}(t)\) may be perceived as an event where motifs are created and/or destroyed.

Quantitatively, a *forward transition event of type*\(j\) takes the system from state \(\mathbf{x}(t)\) to state \(\mathbf{x}(t+{{\mathrm{d}}}t) = \mathbf{x}(t) + \mathbf{r}^j\) and has probability \(q_{j}^+\!\bigl ({\mathbf{x}(t)},{Y}\bigr ) {{\mathrm{d}}}t\) to occur during the time interval \([t,t+{{\mathrm{d}}}t)\). Similarly, a *backward transition event of type*\(j\) takes the system from state \(\mathbf{x}(t)\) to state \(\mathbf{x}(t+{{\mathrm{d}}}t) = \mathbf{x}(t) - \mathbf{r}^j\) and has probability \(q_{j}^-\!\bigl ({\mathbf{x}(t)},{Y}\bigr ) {{\mathrm{d}}}t\) to occur during the same time interval.
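These rates define a standard continuous-time Markov jump process, which can be simulated directly with the Gillespie algorithm; a generic sketch (the interface is our own, with each forward or backward event type given as a shift vector and a rate function):

```python
import math
import random

def gillespie_step(x, events, rng):
    """Advance a birth-death process by one event.
    x: current state vector (tuple); events: list of (shift, rate_fn)
    pairs, one per transition type. Returns (new_state, waiting_time)."""
    rates = [rate_fn(x) for _, rate_fn in events]
    total = sum(rates)
    if total == 0.0:                      # absorbing state: nothing can fire
        return x, math.inf
    tau = rng.expovariate(total)          # exponential waiting time
    pick = rng.uniform(0.0, total)        # choose an event proportionally to its rate
    for (shift, _), rate in zip(events, rates):
        pick -= rate
        if pick <= 0.0:
            return tuple(xi + ri for xi, ri in zip(x, shift)), tau
    return x, tau                          # numerical safety fallback
```

For the SIS and SIR examples of Sect. 3, the shift vectors \(\mathbf{r}^j\) and rates \(q_j^{\pm }\) plug directly into `events`.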

The elements \(r_i^j\) of the *shift vector* \(\mathbf{r}^j \in {\mathbb {Z}}^d\) together with the *rate functions* \(q_{j}^+\!\bigl ({\mathbf{x}(t)},{Y}\bigr )\) and \(q_{j}^-\!\bigl ({\mathbf{x}(t)},{Y}\bigr )\) thus completely define the rules \(W\) governing the simplified system. This Markov process is summarized in the master equation

Solving the model specified by \(W,\; \mathbf{x}(t)\) and \(Y\) thus amounts to obtaining \({\mathbb {P}}\!\left( {\mathbf{x}(t)}\vert {Y}\right) \) by solving Eq. (2). However, such an approach is often impractical due to the large number of accessible states for \(\mathbf{x}(t)\). In the rest of this section, we consider an approximation that greatly simplifies this problem by assuming that \({\mathbb {P}}\!\left( {\mathbf{x}(t)}\vert {Y}\right) \) is a multi-dimensional Gaussian probability distribution. Note that the following introduces no new mathematics, and we will thus heavily rely on the textbook (Gardiner 2004) to simplify the discussion. Our starting point, Eq. (2), amounts to (Gardiner (2004), equation (7.5.9)).

The *drift vector* \(\mathbf{a}\bigl ( \mathbf{x}(t) \bigr )\) has the same dimension as \(\mathbf{x}(t)\) and is composed of the elements \(a_i\bigl (\mathbf{x}(t)\bigr ) = \sum _j r_i^j \bigl [ q_{j}^+\!\bigl ({\mathbf{x}(t)},{Y}\bigr ) - q_{j}^-\!\bigl ({\mathbf{x}(t)},{Y}\bigr ) \bigr ]\).

The *vector Wiener process* \(\mathbf{W}(t)\) need not be of the same dimension as \(\mathbf{x}(t)\), so the *stochastic weight matrix* \(\widetilde{\mathsf{B}}\bigl (\mathbf{x}(t)\bigr )\), of elements \(\widetilde{B}_{i,(j,\pm )}\bigl (\mathbf{x}(t)\bigr ) = r_i^j \bigl [ q_{j}^{\pm }\!\bigl ({\mathbf{x}(t)},{Y}\bigr ) \bigr ]^{1/2}\), is in general rectangular.


The Gaussian solution is expressed in terms of the *evolution matrix* \(\mathsf{A}(t,t')\) and the *diffusion matrix* \(\mathsf{B}\bigl (\mathbf{x}(t)\bigr )\) \(\bigl [\)of elements \(B_{ii'}\bigl (\mathbf{x}(t)\bigr )\)\(\bigr ]\).

The *covariance matrix* \(\mathsf{C}(t)\) of \(\mathbf{x}(t)\) for a deterministic initial condition is provided by (Gardiner (2004), equation (4.4.85))

This approximation is accurate when the *actual* solution of (2) is close to a \(d\)-dimensional Gaussian.

Although many other tools are available for the analysis of stochastic systems, the simplicity, the generality and the straightforwardness of the Gaussian approximation (4) make it an instrument of choice that will be used extensively in this article.

## 3 Application to spreading dynamics

Without prejudice to the generality of Sect. 2, we now focus our study on spreading processes. An epidemiological terminology is used: whatever propagates among neighbouring nodes, be it desirable or not, is called an *infection*. We find that the basic SIS and SIR epidemiological models, both to be defined shortly, require little prior knowledge on the part of the reader while being sufficiently complex for the needs of the present study. For similar reasons, we assume that links are undirected and intrinsically identical (*i.e.*, \({\mathcal {L}} = 1\)); see Appendix A for discussions concerning directed links and more than one intrinsic link state.

At a given time \(t\), each node of an *SIS model* may be in either of \({\mathcal {N}} = 2\) accessible intrinsic node states: *Susceptible* (not carrying the infection) or *Infectious* (carrying the infection). The full system state \(Z(t)\) hence specifies each node’s intrinsic state together with the complete structure of the network. The rules \(V\) are simple: during any time interval \([t,t+{{\mathrm{d}}}t)\), each infectious node may recover (*i.e.*, it becomes susceptible) with probability \(\alpha \, {{\mathrm{d}}}t\) and, for each of its susceptible neighbours, has probability \(\beta \, {{\mathrm{d}}}t\) to transmit the infection (*i.e.*, the neighbour becomes infectious).
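A Level II simulation of these rules can be sketched by discretising time in steps of `dt` (a sketch, not the authors' code; synchronous updating over one step is our simplification and is only accurate for small `dt`):

```python
import random

def sis_step(states, neighbours, alpha, beta, dt, rng):
    """One time step [t, t+dt) of SIS dynamics.
    states: dict node -> 'S' or 'I'; neighbours: dict node -> neighbour list.
    Each infectious node recovers with probability alpha*dt and infects
    each susceptible neighbour with probability beta*dt."""
    new = dict(states)
    for node, state in states.items():
        if state != 'I':
            continue
        if rng.random() < alpha * dt:
            new[node] = 'S'                       # recovery
        for nb in neighbours[node]:
            if states[nb] == 'S' and rng.random() < beta * dt:
                new[nb] = 'I'                     # transmission
    return new
```

Replacing recovery (`'S'`) by removal (`'R'`) and skipping removed nodes yields the SIR variant described below.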

In addition to the susceptible and infectious intrinsic states, the nodes of an *SIR model* may also be *Removed* (once had the infection and can neither acquire nor transmit it ever again) and there are thus \({\mathcal {N}} = 3\) accessible intrinsic node states. The rules \(V\) are the same as for the SIS model with respect to infection (*i.e.*, infectious nodes transmit to their susceptible neighbours with probability \(\beta \, {{\mathrm{d}}}t\)), but recovery is replaced by removal (*i.e.*, infectious nodes become removed with probability \(\alpha \, {{\mathrm{d}}}t\)).

The remainder of this section studies how different choices of state vector \(\mathbf{x}(t)\) and prior information \(Y\) translate into the rules \(W\) of the simplified system. Each case corresponds to a different model where \(W\) is defined through a set of equations whose tags all share the same numeral [*e.g.*, (5a)–(5f)]. We thus define numerous different \(\mathbf{x}(t)\), \(\mathbf{r}\bigl (\mathbf{x}(t)\bigr ),\; q_{j}^+\!\bigl ({\mathbf{x}(t)},{Y}\bigr ),\;q_{j}^-\!\bigl ({\mathbf{x}(t)},{Y}\bigr )\) etc., which must all be understood within the scope of their respective equation tag numeral. Although figures present results concomitantly with the specification of the corresponding models, all discussions are delayed to Sect. 4.

### 3.1 Pair-based SIS model

Section 2.4 defined a pair motif as two linked nodes. Since the nodes of an SIS model are either susceptible or infectious, there are three possibilities for pair motifs: two linked susceptible nodes (noted \(S{-}S\)), two linked infectious nodes (noted \(I{-}I\)) and a susceptible node linked to an infectious one (noted \(S{-}I\)). Two nodes involved in a pair motif may have other neighbours.

Pair motifs are often used in conjunction with *node motifs*: the trivial structure that is one node. In the SIS model, there are two possibilities for a node motif: susceptible nodes (noted \(S\)) and infectious nodes (noted \(I\)). A state vector \(\mathbf{x}(t)\) based on both node and pair motifs would thus be composed of five elements enumerating the amount of times each motif appears in the network: \(x_S(t)\), \(x_I(t)\), \(x_{S-S}(t)\), \(x_{S-I}(t)\) and \(x_{I-I}(t)\). However, additional assumptions about the structure of the network may cause some of these quantities to be redundant.
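Extracting this five-element state vector from a network snapshot is a simple count (a sketch in our own representation; node states and an undirected edge list are assumed):

```python
def motif_counts(states, edges):
    """State vector entries: node motifs (S, I) and pair motifs
    (S-S, S-I, I-I) counted from a network snapshot."""
    x = {'S': 0, 'I': 0, 'S-S': 0, 'S-I': 0, 'I-I': 0}
    for state in states.values():
        x[state] += 1
    for u, v in edges:
        key = '-'.join(sorted((states[u], states[v])))  # 'I-I', 'I-S' or 'S-S'
        x['S-I' if key == 'I-S' else key] += 1
    return x
```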

#### 3.1.1 Degree-regular random network

We first consider SIS dynamics on a *regular random network* of size \(N\): there are \(N\) nodes in the network which all have \(\kappa \) neighbours (degree \(\kappa \)), and each such neighbour is chosen at random.

Such a network must respect the structural constraints \(x_S(t) = N - x_I(t),\; x_{S-S}(t) = \frac{1}{2} \bigl [ \kappa x_S(t) - x_{S-I}(t) \bigr ]\) and \(x_{I-I}(t) = \frac{1}{2} \bigl [ \kappa x_I(t) - x_{S-I}(t) \bigr ]\). Hence, with the prior information \(Y\) specifying \(N\) and \(\kappa \), the state vector

In those terms, the rules \(V\) specify that an infection has probability \(\beta \, x_{S-I}(t)\, {{\mathrm{d}}}t\) to occur during the time interval \([t,t+{{\mathrm{d}}}t)\) while a recovery has probability \(\alpha \, x_I(t)\, {{\mathrm{d}}}t\) to occur. Clearly, an infection translates to the destruction of an \(S\) motif and the creation of a new \(I\) one, and a recovery corresponds to the inverse process. However, pair motifs are also affected by such transitions since the affected node had neighbours. Hence, the effect on \(\mathbf{x}(t)\) of the infection or recovery of a node depends on some information that is not directly tracked by \(\mathbf{x}(t)\)—*i.e.*, the states of the infected or recovered node’s neighbours—and we thus have to *infer* this information from the available data.

In order to facilitate this inference, we define the *first neighbourhood motif*\(S \varGamma _{1}({k_S,k_I})\) as a susceptible node that has \(k_S\) susceptible neighbours and \(k_I\) infectious neighbours. Similarly, the motif \(I \varGamma _{1}({k_S,k_I})\) corresponds to an infectious node with \(k_S\) susceptible neighbours and \(k_I\) infectious ones. In both cases, we qualify as *central* the node whose neighbours are explicitly stated. The other nodes of the first neighbourhood motif, *i.e.*, the neighbours of the central node, may have other neighbours of their own.


This approximately corresponds to randomly drawing the neighbours (*i.e.*, \(\kappa \) independent samplings) of the central infectious node among the pair motifs \(S{-}I\) and \(I{-}I\) {*i.e.*, each sampled neighbour has probability \([x_{S-I}(t)]/[\kappa \, x_I(t)]\) to be susceptible}.

#### 3.1.2 Erdős-Rényi network

In an Erdős–Rényi network of \(N\) nodes and \(M\) links, fewer structural constraints apply [*i.e.*, \(x_S(t) = N - x_I(t)\) and \(x_{S-S}(t) = M - x_{S-I}(t) - x_{I-I}(t)\)] and a state vector of three elements suffices

### 3.2 First neighbourhood SIS model

We consider a full model (\(V\) and \(Z(t)\)) for SIS dynamics on a *configuration model* (CM) network: given a sequence \(\{n_0, n_1, n_2, \ldots \}\), links are randomly assigned between nodes such that, for each degree \(\kappa \), there are \(n_\kappa \) nodes of degree \(\kappa \). In a computer simulation, we create \(n_\kappa \) nodes with \(\kappa \) stubs for each possible \(\kappa \) and then randomly pair stubs to form links. No particular mechanism is used to prevent the formation of repeated links and self-loops: this simplifies the analytical treatment and has little effect when the network size is sufficiently large.
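The stub-pairing construction just described can be sketched as follows (self-loops and repeated links are allowed, as in the text; the function name is ours):

```python
import random

def configuration_model(n_seq, rng):
    """Build a CM network from a degree sequence: n_seq[k] gives the
    number of nodes of degree k. Returns the resulting edge list."""
    stubs = []
    node = 0
    for k, n_k in enumerate(n_seq):
        for _ in range(n_k):
            stubs.extend([node] * k)   # this node carries k stubs
            node += 1
    if len(stubs) % 2 != 0:
        raise ValueError("the total number of stubs must be even")
    rng.shuffle(stubs)
    # pair consecutive stubs of the shuffled list to form links
    return list(zip(stubs[0::2], stubs[1::2]))
```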

Although one could generalize the principle of pair motifs to account for the degree of a node (see Appendix A.5), we here prefer to handle the heterogeneity in node degree by enumerating every possible first neighbourhood motif in the state vector

where, *e.g.*, the prior information \(Y\) states that no node has a degree greater than \(\mathcal {K}\).

For the same reasons that models tracking node and pair motifs (Sect. 3.1) had their transition events defined in terms of first neighbourhood motifs, the transition events are here defined in terms of *second neighbourhood motifs*: a central node, its neighbours and the neighbours of those neighbours. In the same way that we note \(\nu \varGamma _{1}({\mathbf{k}})\) the first neighbourhood motif formed by a state \(\nu \) central node with neighbourhood specified by \(\mathbf{k}\), we note \(\nu \varGamma _2({\mathbf{K}})\) the second neighbourhood motif formed by a state \(\nu \) central node with neighbourhood specified by \(\mathbf{K}\).

The elements of \(\mathbf{K}\) may be indexed with first neighbourhood motifs: the central node has \(K_{\nu '\varGamma _{1}({\mathbf{k}'})}\) state \(\nu '\) neighbours whose other neighbours (*i.e.*, excluding the central node) are specified by \(\mathbf{k}'\). Hence, the second neighbourhood motif

is noted \(S\varGamma _2({\mathbf{K}})\) with all elements of \(\mathbf{K}\) zero except for \(K_{S\varGamma _{1}({0,0})} = 1,\; K_{I\varGamma _{1}({0,1})} = 1\) and \(K_{S\varGamma _{1}({1,1})} = 2\). Note that the central node of this second neighbourhood motif is also the central node of the first neighbourhood motif \(S\varGamma _{1}({3,1})\). In general, we note \(\nu \widetilde{\varGamma }_{1}({\mathbf{K}})\) the first neighbourhood motif that shares the same central node as the second neighbourhood motif \(\nu \varGamma _2({\mathbf{K}})\).

We digress further to introduce the *unit vector* notation \(\widehat{\mathbf {e}}_m\) where \(m\) represents a motif; all the elements of this vector are zero except for the \(m\)th, which is one. The total number of elements in \(\widehat{\mathbf {e}}_m\) should be clear from the context. As a concrete example, the right hand side of (5b) could be noted \(\widehat{\mathbf {e}}_I + (n-2j)\widehat{\mathbf {e}}_{S-I}\).

*i.e.*, two) while \(\widehat{\mathbf {e}}_{\nu \varGamma _{1}({\mathbf{k}})}\) has the same dimension as \(\mathbf{x}\). Sums are taken over all the accessible values of \(\nu \) and \(\mathbf{k}\).

### 3.3 First neighbourhood SIR model

As in Sect. 3.2, we consider a full network model where the network structure is specified solely by the degree of its nodes. However, this time we consider SIR epidemiological dynamics: the accessible node states are \(\nu \in \{ S, I, R \}\), infection is the same as in SIS but recovery is replaced by removal (see the introduction of Sect. 3 for details).

### 3.4 First neighbourhood on-the-fly SIR model

We define the motif \(\nu \Lambda _{1}({\kappa })\) as a state \(\nu \) node whose \(\kappa \) neighbours’ states are *unknown to us*. This last statement is important: were we to learn the state of one of these neighbours, it would cease to be a \(\nu \Lambda _{1}({\kappa })\) motif and instead become a \(\nu \Lambda _{1}({\kappa -1})\) one. As usual, the state vector tracks the number of such motifs

This expression *exactly* gives the probability for an unknown neighbour of the central node of \(\nu '\Lambda _{1}({\kappa '})\) to be the central node of \(\nu \Lambda _{1}({\kappa })\). Note that the Kronecker deltas \(\small {\left( \delta _{ii'} = \left\{ \begin{array}{ll} 1 & i = i' \\ 0 & i \ne i' \end{array} \right. \right) }\) in the numerator and the \(-1\) in the denominator both account for the stub of \(\nu '\Lambda _{1}({\kappa '})\) that we are pairing with a random stub.

A typical computer simulation would first build the network and then perform the SIR propagation dynamics on this network. However, we do not want to store the network structure for later consultation, which would require additional space in \(\mathbf{x}(t)\). Instead, we delay the network construction, leaving the stubs unpaired, and start the propagation dynamics right away. Only when the state of an unknown neighbour is required do we pair the corresponding stub with a randomly selected one, hence building the network *on-the-fly*. Since the knowledge of which stubs have been matched will be lost in the future, this information must only be required at the very moment it is obtained if we want the resulting dynamics to *exactly* reproduce the behaviour of the full system.

We thus take a different, although equivalent, perspective on the infection dynamics where each link is “probed” at most once. Instead of considering a probability \(\beta \, dt\) of infection for each *susceptible* neighbour of infectious nodes, we consider the same probability for each of their *unknown* neighbours. Only when this probability returns true do we inquire about the state of the neighbour, whose state changes to infectious if and only if it was previously susceptible. In either case, we have learned the neighbours of two nodes (*i.e.*, the infectious node and its neighbour) and we must update the state vector accordingly.
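The on-the-fly pairing just described can be sketched with a pool of unpaired stubs from which a partner is drawn only at the moment a neighbour's state must be revealed (a minimal sketch; the data structure and names are our own):

```python
import random

class OnTheFlyStubs:
    """Pool of unpaired stubs; the network is built link by link,
    only when a neighbour must be revealed."""
    def __init__(self, degrees, rng):
        self.rng = rng
        # one entry per stub: node `n` of degree k appears k times
        self.stubs = [node for node, k in degrees.items() for _ in range(k)]

    def probe(self, node):
        """Pair one stub of `node` with a uniformly random remaining stub
        and return the neighbour revealed by this pairing."""
        self.stubs.remove(node)                 # consume one stub of `node`
        i = self.rng.randrange(len(self.stubs))
        return self.stubs.pop(i)                # partner may even be `node` (self-loop)
```

Because a pairing is made exactly when the probe occurs and then forgotten, the resulting trajectory has the same distribution as building the whole CM network first.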

The on-the-fly model *exactly* reproduces the behaviour of the full system through the solution of (2). Since Eqs. (4) are only approximations of (2), results obtained through these relationships are only approximate (Figs. 8, 9). This model may be solved analytically for the mean value (see Appendix B) and the results are in agreement with Volz (2008), Miller (2010). This is a generalization to the case \(\alpha \ne 0\) of the model presented in Noël et al. (2012). Further details are discussed in Sect. 4.4. We note that conceptually similar ideas have been presented in a deterministic context (Ball and Neal 2008) and, more recently, in Decreusefond et al. (2012) as a tool for a mathematically rigorous proof that a specific heterogeneous mean field model (Volz 2008) holds in the limit of large network size.

## 4 Discussion

We now take a retrospective look at the results presented in Sect. 3 and obtain from these special examples general considerations concerning our modelling approach.

### 4.1 Accuracy of the results

One of the aims of this paper is to obtain simplified models that accurately reproduce the behaviour of complex systems. Since approximations are usually involved, it is to be expected that the results of the simplified model only agree with those of the full system over some range of parameters, where the approximations were valid.

The parameters used in Figs. 4, 5, 6, 7, 8, 9 were chosen in order to investigate the limits of our approximations: while there is no perfect correspondence between the results of the full and simplified systems, their agreement is probably sufficient for both qualitative and quantitative applications. We distinguish between two categories of approximations: those inherent to the use of Eq. (4) and those due to the imperfect representation of \(Z(t)\) through \(\mathbf{x}(t)\) and \(Y\).

#### 4.1.1 Gaussian approximation

Since (2) and (9) define a system that *exactly* reproduces the behaviour of the corresponding full system, any discrepancy in Fig. 8 must originate from the use of the Gaussian approximation (4). An important requirement for this approximation to be valid is that the size \(N\) of the system must be large.

Figures 4, 5, 6, 7, 8, 9 all use networks of size \(N = 1000\). As a rule of thumb, we found that (4) performs better for networks of at least a few hundred nodes, which is the case for many relevant real-world systems. Note that, for very small systems (tens of nodes), one could also directly and completely solve (2).
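For intuition on what solving (4) involves, here is a one-variable caricature: a *well-mixed* SIS process (not the networked models of Sect. 3), whose mean and variance evolve under the standard linear-noise (Gaussian) equations. The rates, parameter values and function names are illustrative assumptions of ours.

```python
def sis_mean_variance(N=1000, beta=2.0, alpha=1.0, m0=20.0, dt=0.01, t_max=20.0):
    """Euler integration of the Gaussian approximation for well-mixed SIS:
    m is the mean number of infectious nodes, v its variance.
    Forward rate  q+ = beta * m * (N - m) / N   (infection)
    Backward rate q- = alpha * m                (recovery)
    dm/dt = q+ - q-,   dv/dt = 2 f'(m) v + q+ + q-  with f = q+ - q-."""
    m, v = m0, 0.0
    for _ in range(int(t_max / dt)):
        qp = beta * m * (N - m) / N          # forward transition rate
        qm = alpha * m                       # backward transition rate
        jac = beta * (N - 2 * m) / N - alpha  # derivative of the drift f
        m += dt * (qp - qm)
        v += dt * (2 * jac * v + qp + qm)
    return m, v
```

With these parameters the mean settles at \(N(1-\alpha /\beta ) = 500\); at that point the forward and backward rates are equal but their *sum* drives the variance, which relaxes to \((q^+ + q^-)/2\alpha = 500\), illustrating footnote 5: the two rates cancel in the deterministic part yet accumulate in the stochastic part.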

While a large network size \(N\) is required to justify treating the elements of \(\mathbf{x}(t)\) as real numbers, other phenomena may affect the validity of this approximation. For example, when the initial conditions are such that there is a single infectious node, the continuous approximation fails to account for the probability that this node recovers (or is removed) before transmitting the infection to any of its neighbours. Figures 4, 5, 6, 7, 8, 9 circumvent this problem by using an initial condition with \(20\) infectious nodes: the probability for all of them to recover (or to be removed) before transmitting the infection is very low.

It is worth noting that the plateaux seen on Figs. 4, 6, 7 and 8 reflect different dynamical behaviours for the SIS and SIR systems. Indeed, while the total number of removed nodes reaches a maximum in the SIR system because there are no infectious left to recover, the steady state observed at later times for our SIS models corresponds to a constant flow of recoveries and new infections. In the former case, the approximation errors made at earlier times accumulate. In the latter, the exact path taken to attain equilibrium is of lesser importance and errors do not accumulate the same way.

#### 4.1.2 Representation approximation

In general, the simplified system will not exactly reproduce the behaviour of the full system, even when using (2) instead of (4). This is the case of all our SIS models: while some of the discrepancy seen in Figs. 4, 5, 6, 7 is explained by the Gaussian approximation, the imperfect representation of \(Z\) also contributes to the error.

Part of the problem can be understood as our failure to consider the correlation between the neighbours of a node and the time elapsed since this node entered its present intrinsic state. For example, the neighbours of a susceptible node that has just recovered (*i.e.*, it was infectious a moment ago) may differ markedly from those of a susceptible node that recovered a long time ago, while being similar to those of a node that is still infectious. Hence, one could hope to improve these SIS models through changes in \(Y\) alone (*i.e.*, with the same \(\mathbf{x}(t)\)): first estimate the probability distribution for the time elapsed since each node last changed state and then infer the neighbourhoods accordingly. An alternative that could be simpler to implement, at the cost of increasing the size of \(\mathbf{x}(t)\), would be to track more exhaustive motifs (*e.g.*, second neighbourhoods instead of first ones in Sect. 3.2).

However, there are more intricate consequences to the recovery of infectious nodes on a structure that is fixed in time: if at some point all the nodes of the same component (*i.e.*, a connected subnetwork that is disconnected from the rest of the network) are susceptible at the same time, then none of them may ever become infectious again. The connectivity of a network is strongly affected by the average degree of its nodes: our parameters correspond to an average degree of \(5\) for Figs. 4, 5, 6 (average degree of a neighbour also \(5\)) and of \(3\) for Fig. 7 (average degree of a neighbour \(\approx 3.23\)). When using smaller parameter values, this component-induced discrepancy becomes much larger since the simplified model then overestimates the number of infectious nodes. One could take the components into account by solving independent systems for each component (and merging the results afterwards) or by a clever adaptation of the inference process (see Sect. 4.6 for possible directions). Note that these effects are usually much less important when the network structure changes over time.
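The component effect can be made concrete with a short experiment: pair all stubs of a configuration-model degree sequence up front and measure component sizes with a union-find structure. This is an illustrative sketch of ours, not part of the models above; the degree sequence in the usage example mixes degree-\(1\) and degree-\(3\) nodes so that a giant component coexists with many small ones.

```python
import random

def component_sizes(degrees, seed=0):
    """Randomly pair all stubs of a configuration-model degree sequence and
    return the connected-component sizes, largest first (union-find)."""
    rng = random.Random(seed)
    n = len(degrees)
    stubs = [i for i, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    parent = list(range(n))
    def find(a):                       # root lookup with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in zip(stubs[::2], stubs[1::2]):   # random stub matching
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
```

Every node landing outside the largest component is unreachable from an outbreak seeded inside it, which is precisely the information the simplified SIS models above fail to use.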

### 4.2 Pair-based models

Compared to the other models presented in Sect. 3, the two pair-based models of Sect. 3.1 use very small state vectors (*i.e.*, two or three elements). This is an important advantage of pair-based models in general: there are usually far fewer pair and node motifs than, *e.g.*, first neighbourhood ones, and tracking them thus requires a much smaller \(\mathbf{x}(t)\).

Although we limited our study of pair-based models to regular and Erdős-Rényi networks, more complex network structures could also be considered. In the same way that (5) and (6) differ mostly by their inference terms, obtaining good inference from the little information stored in \(\mathbf{x}(t)\) is probably the principal challenge behind general and accurate pair-based stochastic models.

However, non-stochastic pair-based models are already possible on nontrivial network structures for SIR dynamics or, more generally, for processes such that a change in the state of one neighbour of a node can be treated as independent of that of another neighbour (SIS fails this assumption) (Miller et al. 2011). Knowing (in \(Y\)) that a system behaves in this manner greatly simplifies the inference process, and this is the main reason for the success of the SIR pair-based model for the evolution of mean values on CM networks that is presented in Volz (2008), Miller (2010). Whether or not the same approach may be used to obtain stochastic results is an open question.

### 4.3 First (and higher) neighbourhood models

By contrast, sufficiently accurate inference terms for first neighbourhood models are often straightforward to obtain. Although (7e) may be difficult to appreciate at first sight, it is the only inference term used in both Sects. 3.2 and 3.3. In fact, (7e) may well be the only inference term needed for generic first neighbourhood models on CM network structures.

Although first neighbourhood motifs are a “natural language” for expressing dynamics taking place on CM networks, they could also be used in the presence of other complex structures. This may be done through changes in \(\mathbf{x}(t)\) and/or \(Y\); see Appendix A for details.

The generality and ease of design of first-neighbourhood models come at a cost: the state vector \(\mathbf{x}(t)\) is typically much larger than it would be in an equivalent pair-based model. How large \(\mathbf{x}(t)\) is depends strongly on the maximal node degree present in the network and on the total number of accessible intrinsic node states (see Appendix A for details). For typical values of these quantities, this does not cause major problems for the evaluation of the mean: numerically solving (4b) requires an acceptable amount of resources even for an \(\mathbf{x}(t)\) of dimension \(10^6\), and (4a) may often be simplified (*i.e.*, summed analytically).
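As a rough illustration of this growth, suppose (hypothetically; the actual motif definitions are in Appendix A) that each first-neighbourhood motif records the central node's intrinsic state together with the number of its \(k \le k_{\max }\) neighbours in each of the intrinsic states. A stars-and-bars count then gives the dimension of \(\mathbf{x}(t)\):

```python
from math import comb

def n_first_neighbourhood_motifs(n_states, k_max):
    """Count motifs of the form (central state; number of neighbours in each
    of n_states states), for all degrees k = 0..k_max.  For each k the
    neighbour counts are a weak composition of k into n_states parts
    (stars and bars), hence C(k + n_states - 1, n_states - 1) choices."""
    return n_states * sum(comb(k + n_states - 1, n_states - 1)
                          for k in range(k_max + 1))
```

Under this assumption, three intrinsic states (*e.g.*, S, I, R) with \(k_{\max } = 100\) already yield about half a million motifs, consistent with the \(10^6\) figure quoted above.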

However, evaluating the covariance matrix using (4e) may cause problems: unless analytical simplifications are possible, solving this system scales as the square of the number of elements in \(\mathbf{x}(t)\). Future developments may decrease this bottleneck effect of the covariance matrix; see Sect. 4.5 for details. In any case, the size of \(\mathbf{x}(t)\) may be decreased by “coarse graining” the number of links between the central node and its neighbours; see Appendix A.7 for details.

### 4.4 On-the-fly models

The on-the-fly model presented in Sect. 3.4 for SIR dynamics on CM networks *exactly* reproduces the behaviour of the full system. This is even more remarkable when one considers that the size of the state vector in the on-the-fly model is much smaller than in the alternative first neighbourhood model of Sect. 3.3. The reasons behind the success of the on-the-fly approach are similar to those discussed in Sect. 4.2 for the pair models presented in Miller et al. (2011), Volz (2008), Miller (2010): it is encoded in \(Y\) that, for each link, we need to simultaneously know the states of the two nodes joined by that link *at most once* (Noël et al. 2012).

The inference term (7e) is of “general purpose” in the sense that its \(Y\) does not provide information on the dynamical properties of the system, but only on how the motifs in \(\mathbf{x}(t)\) may be interconnected. This is why both (7) and (8) rely on (7e).

However, the inference terms of (9) have a specific character: \(Y\) contains information about (9) itself. Any change to the dynamics implies changes in the inference terms, with no guarantee that an acceptable solution exists. In fact, (9) was *designed* with this problem in mind. In other words, we obtained a simple and reliable model at the cost of “pre-computations” in the design process. Of all the possibilities in model space, the information acquired by singling out this specific one is what compensates for the reduced size of the state vector. The same could be said of the deterministic SIR pair-based model on CM networks presented in Volz (2008), Miller (2010).

By contrast with the case discussed in Sect. 4.3, the small size of the state vector here allows for evaluations of the covariance matrix through (4e), even when relatively high degree nodes are present. Alternatively, one may take advantage of the fact that, even for more complicated dynamics, the state vector of on-the-fly models can remain of manageable size for mean values calculations; see the introduction to Appendix A for the concrete example of Marceau et al. (2011).

### 4.5 Complicated states versus complex assumptions

Section 4.4 revealed an unexpected depth to \(Y\): one may achieve models of similar levels of accuracy by trading off complexity in the assumptions for a reduction in the size of the state vector \(\mathbf{x}(t)\). As an extreme example, if \(Y\) already gives the full behaviour of the system, then there is no need for tracking any information in \(\mathbf{x}(t)\). Without reaching such extremes, our on-the-fly model and the deterministic SIR pair-based model presented in Volz (2008), Miller (2010) both demonstrate the benefits of investing some time in the assumptions of our models.

While these examples required case-by-case analysis, one may benefit from the same realization in a general context: a first simplified model (\(W,\; \mathbf{x}(t)\) and \(Y\)) may generate the assumptions \(Y'\) of a different simplified model (\(W',\; \mathbf{x}'(t)\) and \(Y'\)). For example, when some dynamical process (*e.g.*, SIS or SIR) occurs on a network whose structure changes in time independently of this dynamics, one could obtain a first model for the structure alone and then feed the results to a second model handling the remaining dynamics. Even more generally, one could compensate for the higher computational requirements of (4e) by first solving (4b) on an elaborate model and then feeding the resulting mean values to a simpler model for the sole purpose of estimating the covariance matrix.

### 4.6 Additional inference tools

While we introduced \(Y\) as a direct application of Bayes’ rule, we have now seen that useful assumptions may be obtained by other means, including the solution of another system of the form (2). The next step in this direction would be to improve our inference process using alternative tools and models available to network science.

For example, branching processes (Newman et al. 2001) may be used to infer information concerning the connectivity and the components of the network structure. As discussed in Sect. 4.1.2, this point was a major shortcoming of our SIS models. This approach is even more interesting given the recently developed tools (Allard et al. 2009; Karrer and Newman 2010; Allard et al. 2012) that are particularly compatible with the motifs and intrinsic node state approach presented in this paper.
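As an illustration, the standard branching-process calculation of Newman et al. (2001) obtains the giant-component fraction of a CM network from its degree distribution alone, exactly the kind of structural information the inference process could exploit. The sketch below uses our own function names; \(u\) is the probability that following a random link leads to a finite branch.

```python
def giant_component_fraction(pk, tol=1e-12, max_iter=10000):
    """Branching-process estimate of the giant-component fraction for a
    configuration-model network with degree distribution pk (dict k -> p_k).
    Iterates the excess-degree fixed point u = G1(u), then a node lies
    outside the giant component iff all of its branches are finite."""
    mean_k = sum(k * p for k, p in pk.items())
    u = 0.5                                   # initial guess in (0, 1)
    for _ in range(max_iter):
        new_u = sum(k * p * u ** (k - 1) for k, p in pk.items()) / mean_k
        if abs(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    return 1.0 - sum(p * u ** k for k, p in pk.items())
```

For instance, a half-and-half mixture of degree-\(1\) and degree-\(3\) nodes gives a giant component covering \(22/27 \approx 81\,\%\) of the network, while a network made only of degree-\(1\) nodes (dimers) has none; the remaining nodes are exactly those an outbreak can never reach.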

Another tool of considerable interest is exponential random networks (Park and Newman 2004). Indeed, these maximum entropy methods can simplify inferences that would otherwise have been prohibitively complex. Once again, this approach may be generalized to different kinds of motifs and intrinsic node states (Rogers 2011).

## 5 Conclusion: general applicability

Although the examples of Sect. 3 focus on simple SIS and SIR dynamics, any specificity that could be modelled through a standard epidemiological compartmental model may a priori be considered by our approach: genders, age groups, vaccination, incubation period, disease phases, *etc*. Furthermore, population dynamics considerations may be accounted for in a straightforward manner. Assuming first neighbourhood motifs, births and deaths of individuals correspond to events adding and removing motifs, respectively. Similarly, changes in interaction patterns amount to events replacing the affected motifs by new ones. In fact, from the model’s perspective, there is no important distinction between a change in the interaction structure of the population and a change in the node states: both are events affecting motifs.

Beyond the propagation of infections or parasites, an additional class of spreading processes is particularly relevant to population biology: the cascading extinctions of species in food webs (Rezende et al. 2007; Bascompte and Stouffer 2009; Dunne and Williams 2009). Our formalism is applicable to such problems by representing each species as a node and by using links to indicate feeding connections between species. While such cascades may require different rules than those of SIS or SIR dynamics, the general approach may still be adapted to this specific application. This illustrates that our approach is potentially applicable to any dynamics as long as the relevant information can be encoded through the structure of a network and the intrinsic state of its components.

The generality of our systematic approach and the fact that its assumptions are explicitly stated suggests that it could be used as a common ground for comparing existing models too complex for direct comparison. Indeed, by considering such an existing model as the full system (specified by \(V_1\) and \(Z_1(t)\)), one may seek a simplified system (specified by \(W_1,\; X_1(t)\) and \(Y_1\)) approximately reproducing the original model (over a sufficient range of parameters).

## Footnotes

- 1.
Computer science terminology is used for the sake of specificity. However, the reader should keep in mind that these simulations may be hypothetical: our goal is to *replace* them by a Level III model.

- 2.
While \(V\) should technically be deterministic, we may also perceive it as stochastic by considering pseudo-random number generators as “actually” random. In any case, these subtleties are of no concern for our purpose.

- 3.
Remember that what we are ultimately interested in are the Level I open questions, so \(X(t)\) and \(Z(t)\) should be compared with respect to these concerns.

- 4.
The \(r_i^j\) may be positive, negative, or zero. For a given \(\mathbf{x}(t)\), if \(\mathbf{x}(t) \pm \mathbf{r}^j\) contains any negative entries, then we should have \(q_{j}^{\pm }\!\bigl ({\mathbf{x}(t)},{Y}\bigr ) = 0\) [i.e., \(\mathbf{x}(t+{{\mathrm{d}}}t)\) has zero probability to be negative].

- 5.
Notice that, although \(q_{j}^+\!\bigl ({\mathbf{x}(t)},{Y}\bigr )\) and \(q_{j}^-\!\bigl ({\mathbf{x}(t)},{Y}\bigr )\) tend to “cancel out” in the deterministic contribution [negative sign in (4a)], the stochastic contributions of transitions happening both forward and backwards instead “accumulate” [positive sign in (4d)].

- 6.
This is thus a special case of the configuration model, defined later, for which \(n_\kappa = N\) and \(n_{\kappa '} = 0 \ \forall \kappa ' \ne \kappa \).

- 7.
Compare (5b) with (5a): if a forward transition of type \(j\) occurs at time \(t\), we get \(\mathbf{x}(t+{{\mathrm{d}}}t) = \mathbf{x}(t) + \mathbf{r}^j\), so \(x_I(t+{{\mathrm{d}}}t) = x_I(t)+1\) and \(x_{S-I}(t+{{\mathrm{d}}}t) = x_{S-I}(t) + \kappa - 2j\). This event has probability \(q_{j}^+\!\bigl ({\mathbf{x}(t)},{Y}\bigr ){{\mathrm{d}}}t\) to occur during the time interval \([t,t+{{\mathrm{d}}}t)\).

- 8.
Recall that a backward transition of type \(j\) occurring at time \(t\) has the effect \(\mathbf{x}(t+{{\mathrm{d}}}t) = \mathbf{x}(t) - \mathbf{r}^j\). This event has probability \(q_{j}^-\!\bigl ({\mathbf{x}(t)},{Y}\bigr ){{\mathrm{d}}}t\) to occur during the time interval \([t,t+{{\mathrm{d}}}t)\).

- 9.
The actual distribution should be obtained by sampling *without* replacement, but for large networks we may make the approximation that the sampling is done *with* replacement, hence resulting in a binomial distribution.

- 10.
For example, a node of high degree is much more likely to acquire the infection, and is much more dangerous once it has acquired it, than a low degree node.

## Notes

### Acknowledgments

The research team acknowledges the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds de recherche du Québec—Nature et technologies (FRQ–NT) for financial support. We are grateful to the anonymous referees who have helped improve our presentation.

### References

- Allard A, Noël PA, Dubé LJ, Pourbohloul B (2009) Heterogeneous bond percolation on multitype networks with an application to epidemic dynamics. Phys Rev E 79:036113. doi:10.1103/PhysRevE.79.036113
- Allard A, Hébert-Dufresne L, Noël PA, Marceau V, Dubé LJ (2012) Bond percolation on a class of correlated and clustered random graphs. J Phys A 45:405005. doi:10.1088/1751-8113/45/40/405005
- Auchincloss AH, Diez Roux AV (2008) A new tool for epidemiology: the usefulness of dynamic-agent models in understanding place effects on health. Am J Epidemiol 168:1–8. doi:10.1093/aje/kwn118
- Ball F, Neal P (2008) Network epidemic models with two levels of mixing. Math Biosci 212:69–87
- Bansal S, Grenfell BT, Meyers LA (2007) When individual behaviour matters: homogeneous and network models in epidemiology. J R Soc Interface 4:879–891. doi:10.1098/rsif.2007.1100
- Barrat A, Barthélemy M, Vespignani A (2008) Dynamical processes on complex networks. Cambridge University Press, New York
- Bascompte J, Stouffer DB (2009) The assembly and disassembly of ecological networks. Philos Trans R Soc Lond B 364:1781–1787. doi:10.1098/rstb.2008.0226
- Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU (2006) Complex networks: structure and dynamics. Phys Rep 424:175–308. doi:10.1016/j.physrep.2005.10.009
- Broeck W, Gioannini C, Goncalves B, Quaggiotto M, Colizza V, Vespignani A (2011) The GLEaMviz computational tool, a publicly available software to explore realistic epidemic spreading scenarios at the global scale. BMC Infect Dis 11(1):37. doi:10.1186/1471-2334-11-37
- Dangerfield CE, Ross JV, Keeling MJ (2009) Integrating stochasticity and network structure in an epidemic model. J R Soc Interface 6:761–774. doi:10.1098/rsif.2008.0410
- Danon L, Ford A, House T, Jewell CP, Keeling MJ, Roberts GO, Ross JV, Vernon MC (2011) Networks and the epidemiology of infectious disease. Interdiscip Perspect Infect Dis 2011:284909, 1–28. doi:10.1155/2011/284909
- Decreusefond L, Dhersin JS, Moyal P, Tran VC (2012) Large graph limit for an SIR process in random network with heterogeneous connectivity. Ann Appl Probab 22:541–575
- Dunne JA, Williams RJ (2009) Cascading extinctions and community collapse in model food webs. Philos Trans R Soc Lond B 364:1711–1723. doi:10.1098/rstb.2008.0219
- Durrett R (2007) Random graph dynamics. Cambridge University Press, Cambridge
- Eames KTD, Keeling MJ (2002) Modeling dynamic and network heterogeneities in the spread of sexually transmitted diseases. PNAS 99:13330–13335. doi:10.1073/pnas.202244299
- Gardiner CW (2004) Handbook of stochastic methods for physics, chemistry and the natural sciences. Springer, Berlin
- Gleeson JP (2011) High-accuracy approximation of binary-state dynamics on networks. Phys Rev Lett 107:068701. doi:10.1103/PhysRevLett.107.068701
- Hébert-Dufresne L, Noël PA, Marceau V, Allard A, Dubé LJ (2010) Propagation dynamics on networks featuring complex topologies. Phys Rev E 82:036115. doi:10.1103/PhysRevE.82.036115
- House T, Keeling MJ (2011) Insights from unifying modern approximations to infections on networks. J R Soc Interface 8(54):67–73. doi:10.1098/rsif.2010.0179
- House T, Davies G, Danon L, Keeling MJ (2009) A motif-based approach to network epidemics. Bull Math Biol 71:1693–1706. doi:10.1007/s11538-009-9420-z
- Karrer B, Newman MEJ (2010) Random graphs containing arbitrary distributions of subgraphs. Phys Rev E 82:066118. doi:10.1103/PhysRevE.82.066118
- Keeling MJ, Eames KTD (2005) Networks and epidemic models. J R Soc Interface 2(4):295–307. doi:10.1098/rsif.2005.0051
- Keeling MJ, Rand DA, Morris AJ (1997) Correlation models for childhood epidemics. Proc R Soc B 264(1385):1149–1156. doi:10.1098/rspb.1997.0159
- Marceau V, Noël PA, Hébert-Dufresne L, Allard A, Dubé LJ (2010) Adaptive networks: coevolution of disease and topology. Phys Rev E 82:036116. doi:10.1103/PhysRevE.82.036116
- Marceau V, Noël PA, Hébert-Dufresne L, Allard A, Dubé LJ (2011) Modeling the dynamical interaction between epidemics on overlay networks. Phys Rev E 84:026105. doi:10.1103/PhysRevE.84.026105
- McLane AJ, Semeniuk C, McDermid GJ, Marceau DJ (2011) The role of agent-based models in wildlife ecology and management. Ecol Model 222:1544–1556. doi:10.1016/j.ecolmodel.2011.01.020
- Miller JC (2010) A note on a paper by Erik Volz: SIR dynamics in random networks. J Math Biol 62(3):349–358. doi:10.1007/s00285-010-0337-9
- Miller JC, Slim AC, Volz EM (2011) Edge-based compartmental modeling for infectious disease spread. J R Soc Interface. doi:10.1098/rsif.2011.0403
- Newman MEJ (2010) Networks: an introduction. Oxford University Press, Oxford
- Newman MEJ, Strogatz SH, Watts DJ (2001) Random graphs with arbitrary degree distributions and their applications. Phys Rev E 64:026118. doi:10.1103/PhysRevE.64.026118
- Noël PA, Allard A, Hébert-Dufresne L, Marceau V, Dubé LJ (2012) Propagation on networks: an exact alternative perspective. Phys Rev E 85:031118. doi:10.1103/PhysRevE.85.031118
- Park J, Newman MEJ (2004) Statistical mechanics of networks. Phys Rev E 70:066117
- Rezende EL, Lavabre JE, Guimarães PR, Jordano P, Bascompte JB (2007) Non-random coextinctions in phylogenetically structured mutualistic networks. Nature 448:925–928. doi:10.1038/nature05956
- Rogers T (2011) Maximum-entropy moment-closure for stochastic systems on networks. J Stat Mech P05007
- Sharkey KJ (2011) Deterministic epidemic models on contact networks: correlations and unbiological terms. Theor Popul Biol 79(4):115–129. doi:10.1016/j.tpb.2011.01.004
- Taylor M, Simon PL, Green DM, House T, Kiss IZ (2012) From Markovian to pairwise epidemic models and the performance of moment closure approximations. J Math Biol 64(6):1021–1042. doi:10.1007/s00285-011-0443-3
- Van Kampen NG (2007) Stochastic processes in physics and chemistry, 3rd edn. North-Holland, Amsterdam
- Volz E (2008) SIR dynamics in random networks with heterogeneous connectivity. J Math Biol 56(3):293–310. doi:10.1007/s00285-007-0116-4