Abstract
Many biological systems in nature can be represented as dynamic models on networks. Examples include gene regulatory systems, neuronal networks, food webs, epidemics spreading within populations, social networks, and many others. A fundamental question when studying biological processes represented as dynamic models on networks is to what extent the network structure contributes to the observed dynamics. In other words, how does network connectivity affect a dynamic model on a network? In this chapter, we will explore a variety of network topologies and study biologically inspired dynamic models on these networks.
Notes
- 1. The term edge is frequently reserved for an undirected edge, and directed edges are often called arcs. Link is often used as the general term encompassing all types of such objects.
- 2. Directed graphs are also called digraphs, and graph can mean either a directed or an undirected graph. Often, graph is used to mean an undirected graph.
- 3. There are different names for path in the literature. The definition given above is also called a walk, while a simple path refers to a path in which all nodes are distinct. Terminology depends on the source.
- 4. Some researchers prefer to exclude the case i = j and instead use \(\bar{d_i} = \frac{1}{N-1} \sum_{j=1, j \neq i}^N d(i,j)\). The difference in the leading factors becomes negligible in large graphs. See [33, chapter 6] for further discussion of these two definitions.
- 5. This is also known as the global clustering coefficient. Some authors, most notably Watts and Strogatz [46], use a different definition known as the network average clustering coefficient, which is the average of the node (local) clustering coefficients. This alternate definition does not always give the same value as the global version.
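For readers following along in R, the igraph package computes both versions of the clustering coefficient via the `type` argument of `transitivity`. The following sketch illustrates the difference on a small random graph (the graph itself is just an illustrative example, not one from the chapter):

```r
# Compare the global and network average clustering coefficients
library(igraph)
set.seed(4)                         # fix the seed for reproducibility
g <- sample_gnp(20, 0.2)            # a small Erdos-Renyi sample graph
transitivity(g, type = "global")    # global clustering coefficient
transitivity(g, type = "average")   # average of local coefficients
```

In general the two values differ, since the average version weights every node equally while the global version weights nodes by their number of connected triples.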
References
Getting Started with R. URL https://support.rstudio.com/hc/en-us/articles/201141096-Getting-Started-with-R
Introduction to R. URL https://www.datacamp.com/courses/free-introduction-to-r
Adler, J.: Bootstrap percolation. Physica A: Statistical Mechanics and its Applications 171(3), 453–470 (1991)
Al-Anzi, B., Arpp, P., Gerges, S., Ormerod, C., Olsman, N., Zinn, K.: Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network. PLoS Computational Biology 11(5), e1004264 (2015)
Albert, R.: Scale-free networks in cell biology. Journal of Cell Science 118(21), 4947–4957 (2005)
Albert, R., Barabási, A.L.: Statistical mechanics of complex networks. Reviews of Modern Physics 74(1), 47 (2002)
Allen, L.J.: An Introduction to Stochastic Processes with Applications to Biology, 2nd edn. Chapman and Hall/CRC (2010)
Amaral, L.A.N., Scala, A., Barthelemy, M., Stanley, H.E.: Classes of small-world networks. Proceedings of the National Academy of Sciences 97(21), 11149–11152 (2000)
Bandyopadhyay, S., Mehta, M., Kuo, D., Sung, M.K., Chuang, R., Jaehnig, E.J., Bodenmiller, B., Licon, K., Copeland, W., Shales, M., et al.: Rewiring of genetic networks in response to DNA damage. Science 330(6009), 1385–1389 (2010)
Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
Barrat, A., Barthelemy, M., Vespignani, A.: Dynamical Processes on Complex Networks. Cambridge University Press (2008)
Bassett, D.S., Bullmore, E.: Small-world brain networks. The Neuroscientist 12(6), 512–523 (2006)
Baxter, G.J., Dorogovtsev, S.N., Goltsev, A.V., Mendes, J.F.: Bootstrap percolation on complex networks. Physical Review E 82(1), 011103 (2010)
Bollobás, B.: Random Graphs, 2nd edn. Cambridge Studies in Advanced Mathematics. Cambridge University Press (2001). https://doi.org/10.1017/CBO9780511814068
Brauer, F., Castillo-Chavez, C.: Mathematical Models in Population Biology and Epidemiology, vol. 40. Springer (2001)
Caswell, H.: Matrix Population Models: Construction, Analysis, and Interpretation, 2nd edn. Oxford University Press (2006)
Clauset, A., Shalizi, C.R., Newman, M.E.: Power-law distributions in empirical data. SIAM Review 51(4), 661–703 (2009)
Csardi, G., Nepusz, T.: The igraph software package for complex network research. InterJournal, Complex Systems p. 1695 (2006). URL http://igraph.org
Durrett, R.: Essentials of Stochastic Processes, vol. 1. Springer (1999)
Durrett, R.: Random Graph Dynamics, vol. 200. Cambridge University Press, Cambridge (2007)
Erdős, P., Rényi, A.: On random graphs. Publicationes Mathematicae 6, 290–297 (1959)
Erdős, P., Rényi, A.: On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5(1), 17–60 (1960)
Estrada, E.: Introduction to complex networks: Structure and dynamics. In: Evolutionary Equations with Applications in Natural Sciences, pp. 93–131. Springer (2015)
Gillespie, D.T.: Exact stochastic simulation of coupled chemical reactions. The Journal of Physical Chemistry 81(25), 2340–2361 (1977)
Gross, T., D’Lima, C.J.D., Blasius, B.: Epidemic dynamics on an adaptive network. Physical Review Letters 96(20), 208701 (2006)
Gross, T., Sayama, H.: Adaptive networks: Theory, models and applications. In: Understanding Complex Systems. Springer (2009)
Just, W., Callender, H., LaMar, M.D.: Algebraic and Discrete Mathematical Methods for Modern Biology, chap. Disease transmission dynamics on networks: Network structure vs. disease dynamics., pp. 217–235. Academic Press (2015)
Just, W., Highlander, H.C.: Vaccination strategies for small worlds. In: A. Wootton, V. Peterson, C. Lee (eds.) A Primer for Undergraduate Research: From Groups and Tiles to Frames and Vaccines, pp. 223–264. Springer, New York (2018)
Keeling, M.J., Eames, K.T.: Networks and epidemic models. Journal of the Royal Society Interface 2(4), 295–307 (2005)
Kiss, I.Z., Miller, J.C., Simon, P.L.: Mathematics of Epidemics on Networks: From Exact to Approximate Models, vol. 46. Springer (2017)
Masuda, N., Holme, P.: Temporal Network Epidemiology. Springer (2017)
Milo, R., Shen-Orr, S., Itzkovitz, S., Kashtan, N., Chklovskii, D., Alon, U.: Network motifs: simple building blocks of complex networks. Science 298(5594), 824–827 (2002)
Newman, M.: Networks: An Introduction. Oxford University Press (2010)
Patel, M.: A simplified model of mutually inhibitory sleep-active and wake-active neuronal populations employing a noise-based switching mechanism. Journal of Theoretical Biology 394, 127–136 (2016)
Patel, M., Joshi, B.: Switching mechanisms and bout times in a pair of reciprocally inhibitory neurons. Journal of Computational Neuroscience 36(2), 177–191 (2014)
Porter, M.A., Gleeson, J.P.: Dynamical systems on networks: a tutorial. arXiv preprint arXiv:1403.7663 (2014)
Porter, M.A., Gleeson, J.P.: Dynamical systems on networks: A tutorial. In: Frontiers in Applied Dynamical Systems: Reviews and Tutorials, vol. 4. Springer-Verlag, Heidelberg, Germany (2016)
R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2018). URL https://www.R-project.org/
Ross, S.: A First Course in Probability, 9th edn. Pearson (2012)
Ross, S.M.: Introduction to Probability Models, 11th edn. Academic Press (2014)
Sayama, H., Pestov, I., Schmidt, J., Bush, B.J., Wong, C., Yamanoi, J., Gross, T.: Modeling complex systems with adaptive networks. Computers & Mathematics with Applications 65(10), 1645–1664 (2013)
Schmidt, D., Blumberg, M.S., Best, J.: Random graph and stochastic process contributions to network dynamics. Discrete and Continuous Dynamical Systems, Supp 2011 2, 1279–1288 (2011)
Schmidt, D.R., Galán, R.F., Thomas, P.J.: Stochastic shielding and edge importance for Markov chains with timescale separation. PLoS Computational Biology 14(6), e1006206 (2018)
Schmidt, D.R., Thomas, P.J.: Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs. The Journal of Mathematical Neuroscience 4(1), 6 (2014)
Watts, D.J.: Small worlds: the dynamics of networks between order and randomness, vol. 9. Princeton University Press (2004)
Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393, 440–442 (1998)
Acknowledgements
The author thanks two reviewers whose comments greatly helped to clarify and focus the content of this chapter. The author also thanks Paul Hurtado for helpful discussions that influenced the content and scope of the chapter.
Appendix
Here we provide the R code used to generate all network figures in this chapter with the exception of Figs. 1, 2, 7, 12, 13, and 14, which were created from scratch in MS PowerPoint. Networks were generated using the igraph package (version 1.0.1) in R (version 3.5.3) [18, 38]. Note that the code below will generate a realization, or sample network, from the given model. These are random graph models whose generation is based on probabilistic rules (e.g., edge probabilities in the Erdős–Rényi model). Therefore, it is unlikely that you will generate the exact same sample network shown in this chapter; rather, you will obtain a different sample network with the same or similar network properties.
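If you do want to reproduce a particular realization exactly from run to run, one standard approach is to fix the random number generator seed before generating the network. A minimal sketch (the seed value 42 is arbitrary):

```r
# Fixing the RNG seed makes a particular random graph reproducible
library(igraph)
set.seed(42)               # any fixed seed works
g <- sample_gnp(10, 0.25)  # same sample network on every run
ecount(g)                  # number of edges in this realization
```

Re-running the two lines after `library(igraph)` always produces the same graph; omitting `set.seed` restores the usual run-to-run variability.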
We also give a brief overview of the basics of stochastic processes, including formal definitions of stochastic process, random walk, and Poisson process. We provide sample R code to simulate a simple random walk which is intended for use in Exercise 28 and can be adapted for Challenge Problem 3. Lastly, we provide R code to simulate a Poisson process for use in Exercises 33 and 34 and Research Project 3.
1.1 R Code for Figures
###################################################################
# Figures in Section 3
###################################################################
# First, install the igraph package and load the igraph library
library(igraph)

# FIGURE 3
# Sample connected undirected network, N is the number of nodes
N <- 10
g <- erdos.renyi.game(N, 0.25)
iglayout1 <- layout.fruchterman.reingold(g)
plot(g, layout = iglayout1, vertex.size = 20, vertex.label.dist = 0,
     vertex.color = "lightblue")

# Sample disconnected undirected network
N <- 10
g <- erdos.renyi.game(N, 0.1)
iglayout1 <- layout.fruchterman.reingold(g)
plot(g, layout = iglayout1, vertex.size = 20, vertex.label.dist = 0,
     vertex.color = "lightblue")

# FIGURE 4
# Sample directed network, weakly connected
N <- 10
g <- erdos.renyi.game(N, 0.1, directed = TRUE)
iglayout1 <- layout.fruchterman.reingold(g)
plot(g, layout = iglayout1, vertex.size = 20, vertex.label.dist = 0,
     vertex.color = "lightblue")

# FIGURE 5
# Sample undirected network and adjacency matrix
N <- 4
g <- erdos.renyi.game(N, 0.5)
iglayout1 <- layout.fruchterman.reingold(g)
plot(g, layout = iglayout1, vertex.size = 20, vertex.label.dist = 0,
     vertex.color = "lightblue")
# Retrieve the adjacency matrix of the above graph
adj <- get.adjacency(g)
adj

# FIGURE 6
# Sample directed network and adjacency matrix
N <- 4
g <- sample_pa(N, directed = TRUE)
plot(g, layout = iglayout1, vertex.size = 20, vertex.label.dist = 0,
     vertex.color = "lightblue")
# Retrieve the adjacency matrix of the above graph
adj <- get.adjacency(g)
adj

###################################################################
# Figures in Section 4
###################################################################
# FIGURE 8
# Ring network
g <- make_ring(10)
plot(g, layout = layout_with_kk, vertex.color = "lightblue")
# Complete graph
g <- make_full_graph(10)
plot(g, layout = layout_with_kk, vertex.color = "lightblue")

# FIGURE 9
# Erdos-Renyi network
N <- 30
p <- 0.2
g <- sample_gnp(N, p)
plot(g, vertex.size = 10, vertex.color = "wheat")
# Histogram of the degree distribution
hist(degree_distribution(g), 20, col = "wheat")

# FIGURE 10
# Small-world network: Watts-Strogatz model
N <- 20
# WS network with no rewiring (q = 0), one sample realization
g1 <- sample_smallworld(1, N, 3, 0)
mean_distance(g1)
transitivity(g1, type = "average")
# > mean_distance(g1)
# [1] 2.105263
# > transitivity(g1, type = "average")
# [1] 0.6
# WS network with small rewiring (q = 0.05), one sample realization
g2 <- sample_smallworld(1, N, 3, 0.05)
mean_distance(g2)
transitivity(g2, type = "average")
# > mean_distance(g2)
# [1] 1.936842
# > transitivity(g2, type = "average")
# [1] 0.4841667
# WS network with complete rewiring (q = 1), one sample realization
g3 <- sample_smallworld(1, N, 3, 1)
mean_distance(g3)
transitivity(g3, type = "average")
# > mean_distance(g3)
# [1] 1.831579
# > transitivity(g3, type = "average")
# [1] 0.2939432
# Plot all three networks together in one row
# Use the layout option to keep nodes in a circle
par(mfrow = c(1, 3))
plot(g1, vertex.size = 10, layout = layout.circle)
plot(g2, vertex.size = 10, layout = layout.circle)
plot(g3, vertex.size = 10, layout = layout.circle)

# FIGURE 11
# Scale-free network: Barabasi-Albert model
# This model is directed by default;
# use "directed = FALSE" to make it undirected
g <- sample_pa(50, directed = FALSE, out.pref = TRUE)
degree_distribution(g)
plot(g, vertex.size = 10, vertex.color = "wheat")
1.2 Basics of Stochastic Processes
Knowledge of introductory probability is assumed. In particular, we assume knowledge of the definitions of random variable and conditional probability as well as familiarity with common probability distributions (e.g., exponential and Poisson distributions). For a review of these basics, see [39].
Definition 17
A stochastic process is a collection of random variables {X(t) : t ∈ T}, where T is an index set typically thought of as time. In particular, for each t ∈ T, X(t) is a random variable, and we refer to X(t) as the state of the process at time t.
The set of possible values that the random variable X(t) can take on is called the state space, S. The time index set T can be discrete or continuous, and likewise for the state space S. However, here we focus on discrete-state processes, since we study processes evolving on the nodes of a network of finite size. Often, the set of nodes is the state space S, as in the examples below, but this does not have to be the case. We could, for instance, define a stochastic process to occur on each node of the network; see the neuronal spiking example in Sect. 5.3 and other examples in Sect. 5. We will consider both discrete-time and continuous-time processes in this chapter.
As an example of a discrete-time stochastic process on a finite state space, let T = {0, 1, 2, … } and S = {1, 2, 3}. Then the process {X(t) : t = 0, 1, … } starts out in state 1, 2, or 3 and moves around on these three states according to the following rules:
- If the process is in state 1 at time t (for t ∈ T), denoted mathematically by X(t) = 1, then the process moves to state 2 (with probability 1) in the next time step, i.e., X(t + 1) = 2. We can denote this transition by the following conditional probability:
$$\displaystyle \begin{aligned} P(X(t+1)=2 ~|~X(t)=1) = 1. \end{aligned} $$ (29)
- If X(t) = 2, then the process has two choices for where to move in the next time step: to state 3 with probability \(\frac{1}{2}\) or back to state 1 with probability \(\frac{1}{2}\). Using conditional probabilities, we have
$$\displaystyle \begin{aligned} P(X(t+1)=3 ~|~X(t)=2) = \frac{1}{2}, \end{aligned} $$ (30)
$$\displaystyle \begin{aligned} P(X(t+1)=1 ~|~X(t)=2) = \frac{1}{2}. \end{aligned} $$ (31)
- Finally, if X(t) = 3, then the process must move back to state 2 in the next time step. Again, we represent this state transition using a conditional probability:
$$\displaystyle \begin{aligned} P(X(t+1)=2 ~|~X(t)=3) = 1. \end{aligned} $$ (32)
In general, these conditional probabilities are called transition probabilities; they describe the probability of a transition from the current state i to the next state j in one discrete-time step. This general notation is given by
$$\displaystyle \begin{aligned} p_{ij} = P(X(t+1)=j ~|~X(t)=i), \end{aligned} $$
where each p_ij is the (i, j)th entry in the transition probability matrix, P. For this example,
$$\displaystyle \begin{aligned} P = \begin{pmatrix} 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \end{pmatrix}. \end{aligned} $$
Note that each row of P sums to 1. This is true in general since the sum over all possible one-step transition probabilities from a given state i to any other state j ∈ S must be 1.
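As a quick numerical check, the transition matrix for this three-state example can be entered directly in R and the row-sum property verified. A minimal sketch (the entries follow from Eqs. (29)–(32)):

```r
# Transition probability matrix for the three-state example;
# rows index the current state, columns the next state
P <- matrix(c(0,   1, 0,
              1/2, 0, 1/2,
              0,   1, 0),
            nrow = 3, byrow = TRUE)
rowSums(P)   # each row sums to 1
```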
The diagram in Fig. 13 is another way of illustrating this example process: the states are the nodes, the directed edges denote the possibility of a transition from one node to another, and the numbers above the edges denote the probability of such transitions. A sample path or realization of this process up through time t = 7 is
If we just list the sequence of states starting in state 1 and moving forward in time, this sample path looks like
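The transition probabilities above also give a direct way to simulate the process: at each step, draw the next state from the row of P corresponding to the current state. The following sketch generates one realization of the three-state chain in R (variable names are illustrative; your realization will generally differ from the one shown in the chapter):

```r
# Simulate the three-state process for 8 time steps, starting in state 1
set.seed(1)   # fix the seed for reproducibility
P <- matrix(c(0,   1, 0,
              1/2, 0, 1/2,
              0,   1, 0),
            nrow = 3, byrow = TRUE)
path <- numeric(8)
path[1] <- 1
for (t in 2:8) {
  # draw the next state using the transition probabilities
  # in the row of P for the current state
  path[t] <- sample(1:3, 1, prob = P[path[t - 1], ])
}
path   # one sample path of the process
```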
Another interesting stochastic process called a random walk is defined below. We define the discrete-time random walk on a discrete state space only (such as the integers Z); see [40] for further reading on other varieties of random walks. Additionally, we focus on the simple random walk which can only move in increments of ± 1.
Definition 18
A discrete-time simple random walk on state space S = Z is a stochastic process {Y (t) : t = 0, 1, 2, … } such that the state of the process increases or decreases by 1 at each time step, with probabilities p and 1 − p, respectively. These transition probabilities are given by
$$\displaystyle \begin{aligned} P(Y(t+1)=i+1 ~|~Y(t)=i) = p, \qquad P(Y(t+1)=i-1 ~|~Y(t)=i) = 1-p, \end{aligned} $$
for some p ∈ [0, 1].
See Fig. 14 for a graphical representation of the simple random walk process on Z. Suppose the simple random walk {Y (t) : t = 0, 1, 2, … } starts in state 0 at time 0. Then a sample path of the process is
We provide R code at the end of the Appendix to generate a simple random walk. Using this code, we generate and plot five sample paths of the symmetric simple random walk in Fig. 15 below. The symmetric case means that the parameter p = 1/2 = 1 − p.
Note that due to the random nature of a stochastic process such as this random walk, each simulation run will be different. In general, in order to draw meaningful conclusions about a given stochastic process, one needs to run multiple simulations with the same parameter settings and look at the distribution of outcomes.
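To illustrate this point, the following sketch runs 1000 independent symmetric random walks of 100 steps each and summarizes the distribution of the final state. For the symmetric walk, the final state has mean 0 and variance equal to the number of steps, so the sample statistics should be close to 0 and 100, respectively (the simulation sizes here are illustrative choices):

```r
# Distribution of the final state over many symmetric random walks
set.seed(2)    # fix the seed for reproducibility
nsims  <- 1000 # number of independent simulation runs
nsteps <- 100  # number of +/-1 steps per walk
finals <- replicate(nsims,
  sum(sample(c(-1, 1), nsteps, replace = TRUE)))  # final state of each walk
mean(finals)   # should be close to 0
var(finals)    # should be close to nsteps = 100
hist(finals, 20, main = "Final states of 1000 symmetric random walks")
```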
Lastly, we define a Poisson process, which is a continuous-time stochastic process that takes values in the discrete state space S = Z*, the non-negative integers. This process is used in the neuronal spiking process described in Sect. 5.3. Poisson processes are commonly used to model a variety of biological processes.
Definition 19
A Poisson process on state space S = Z* (the non-negative integers) is a continuous-time stochastic process {N(t) : t ≥ 0} that counts the events that have occurred up to and including time t. Specifically, {N(t) : t ≥ 0} is a Poisson process with rate λ > 0 if
- N(0) = 0.
- {N(t) : t ≥ 0} has independent increments, meaning that for any 0 ≤ t_0 < t_1 < ⋯ < t_n < ∞, the increments N(t_0), N(t_1) − N(t_0), …, N(t_n) − N(t_{n−1}) are independent random variables.
- N(s + t) − N(s) ∼ Poisson(λt) for any s, t ≥ 0, meaning that the number of events in any interval of length t follows a Poisson distribution with mean λt.
For instance, let λ = 1. Then the expected number of events that will occur by time t is λt = t; in other words, on average one event occurs per unit time. This follows because N(t) = N(t) − N(0) ∼ Poisson(λt) and E[N(t)] = λt, a property of the Poisson distribution [40].
Now let us describe a sample path for a Poisson process {N(t) : t ≥ 0} with rate λ = 1. We assume that at time t = 0 no events have occurred (since N(0) = 0), so the first event will occur after an exponentially distributed amount of time with rate (parameter) λ. Starting at the time of the first event, the amount of time we have to wait until the second event occurs again follows an exponential distribution with rate λ. It follows that the length of time between any two successive events in the Poisson process also follows an exponential distribution with rate λ. The times between events are called interarrival times, and a key property of the Poisson process is that the interarrival times are exponentially distributed. Hence, the average length of the interarrival times is 1/λ, the mean of an exponential distribution with rate λ.
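This interarrival-time property suggests a simple simulation check: draw exponential waiting times and verify that their sample mean is close to 1/λ. A minimal sketch with λ = 1 (the sample size is an arbitrary illustrative choice):

```r
# Interarrival times of a rate-lambda Poisson process are Exponential(lambda)
set.seed(3)    # fix the seed for reproducibility
lambda <- 1
gaps <- rexp(10000, rate = lambda)  # simulated interarrival times
mean(gaps)                          # close to 1/lambda = 1
event.times <- cumsum(gaps)         # event times of the Poisson process
head(event.times)                   # first few event times
```

Summing the interarrival times, as in the last two lines, is exactly how the sample-path code at the end of this Appendix constructs event times.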
A sample path of a Poisson process {N(t) : t ≥ 0} with rate λ = 1 is given below.
Note that the state of the process increases by 1 at each event time, and the time increments are given by random draws from the exponential distribution with rate λ = 1. The expected number of events by time t = 10 is 10; in this sample path, 9 events occurred by time t = 10 since N(9.65) = 9 and N(10.10) = 10. The tenth event occurred slightly after time t = 10, specifically at t = 10.10 in this case. See Fig. 16 for a plot of this sample path, and see the R code used to generate this plot below. In general, a widely used method for simulating continuous-time discrete-state stochastic processes is the Gillespie Stochastic Simulation Algorithm (SSA) [24]; see also the GillespieSSA package in R.
For more reading on introductory stochastic processes, random walks, and Poisson processes, see [19, 40].
1.3 R Code for Selected Stochastic Processes
###################################################################
# Simulate a random walk
###################################################################
# Simple random walk on the integer line with probability p of
# moving +1 step (right) and probability 1-p of moving -1 step (left)
# Parameters: n = number of time steps, p = prob. of increasing the
# state by 1, x1 = initial state
RW.sim <- function(n, p, x1) {
  sim <- numeric(n)
  if (missing(x1)) {
    x1 <- 0    # default initial condition: start at state 0
  }
  sim[1] <- x1
  for (i in 2:n) {
    newstate <- sample(c(-1, 1), 1, prob = c(1 - p, p)) + x1
    sim[i] <- newstate
    x1 <- newstate
  }
  sim
}

# Generate 5 sample paths and plot all on the same figure
p <- 0.5    # symmetric case
run <- RW.sim(100, p, 0)
plot(run, type = "l", col = "blue", ylim = c(-18, 18),
     xlab = "Time step", ylab = "State", main = "Simple Random Walk")
run <- RW.sim(100, p, 0)
points(run, type = "l", col = "red")
run <- RW.sim(100, p, 0)
points(run, type = "l", col = "darkgreen")
run <- RW.sim(100, p, 0)
points(run, type = "l", col = "orange")
run <- RW.sim(100, p, 0)
points(run, type = "l", col = "magenta")

###################################################################
# Simulate a Poisson process
###################################################################
# Install the poisson package and load the library
library(poisson)
# Simulate a homogeneous Poisson process with a given rate
# Parameters: rate = rate of the Poisson process, num.events = number
# of events to simulate, num.sims = number of times to run the
# simulation (default is 1)
PP.sim <- function(rate, num.events, num.sims = 1, t0 = 0) {
  if (num.sims == 1) {
    x <- t0 + cumsum(rexp(n = num.events, rate = rate))
    return(c(t0, x))
  } else {
    xtemp <- matrix(rexp(n = num.events * num.sims, rate = rate),
                    num.events)
    x <- t0 + apply(xtemp, 2, cumsum)
    return(rbind(rep(t0, num.sims), x))
  }
}

# Run the function PP.sim
rate <- 1
PP.sim(rate, num.events = 10)
PP.sim(rate, num.events = 10, num.sims = 5)

# Run the function and plot the time series for 1 simulation
rate <- 1
num.events <- 10
run1 <- PP.sim(rate, num.events)
plot(run1, 0:num.events, xlab = "Time", ylab = "Number of Events",
     main = "Poisson Process Sample Path", pch = 19, col = "blue")
run1
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Schmidt, D.R. (2020). Network Structure and Dynamics of Biological Systems. In: Callender Highlander, H., Capaldi, A., Diaz Eaton, C. (eds) An Introduction to Undergraduate Research in Computational and Mathematical Biology. Foundations for Undergraduate Research in Mathematics. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-33645-5_7
Publisher Name: Birkhäuser, Cham
Print ISBN: 978-3-030-33644-8
Online ISBN: 978-3-030-33645-5