1 Introduction

Over the past decades, networks have been the subject of an intensive research effort. As networks offer the right framework to model, e.g., social, physical, chemical, biological and technological phenomena, various specific aspects have been studied in depth. Arguably among the most studied objects is the Erdős-Rényi graph [6, 7]. In such a random graph \(G(n,p)\) there are n vertices, and each of the \(N=\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) edges is ‘up’ with a fixed probability \(p\in (0,1)\) or ‘down’ otherwise. By now there is a sizeable literature on this type of graph, providing detailed insight into its probabilistic properties; a key example is the result that if the ‘up-probability’ p is larger than \(\log n/n\), then the resulting graph is almost surely connected.
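For illustration purposes only, the following Python sketch (not part of the formal development; function names and parameter values are ours) samples \(G(n,p)\) and tests connectivity with a union-find pass, so that the connectivity threshold can be probed empirically.

```python
import math
import random

def erdos_renyi(n, p, rng):
    """Sample G(n, p): each of the n(n-1)/2 possible edges is 'up' w.p. p."""
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def is_connected(n, edges):
    """Connectivity test via union-find with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in range(n)}) == 1

# Empirically, for p well above log(n)/n the sampled graph is almost always connected.
rng = random.Random(7)
n = 200
hits = sum(is_connected(n, erdos_renyi(n, 3 * math.log(n) / n, rng)) for _ in range(20))
print(hits, "out of 20 samples connected")
```

Running the experiment for p below \(\log n/n\) instead shows isolated vertices appearing, in line with the threshold behavior.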

The existing literature predominantly focuses on static graphs: the random graph is drawn just once, and does not change over time. In many real-life situations, however, the network structure temporally evolves, with edges appearing and disappearing. In a few recent contributions, first results on such dynamic random graphs have been reported, but the analysis of this class of models is still in its infancy; see e.g. [8, 9, 15], and [1] for an illustration of its use in engineering.

In [15] various dynamic random graph models are discussed, among them a dynamic Erdős-Rényi graph in which all N edges evolve independently. In this model, each edge makes transitions from present to absent and vice versa in a Markovian manner: it exists for an exponential time with parameter \(\mu \) (which we refer to as the ‘up-rate’), and disappears for an exponential time with parameter \(\lambda \) (the ‘down-rate’). For this model various metrics can be analyzed in closed form. In particular the distribution of the number of edges at time t, throughout this paper denoted by Y(t), can be explicitly computed. A special case is that in which no edges exist at \(t=0\): then the distribution of Y(t) coincides with that of the number of edges in a static Erdős-Rényi graph \(G(n,p(t))\) (with an up-probability that depends on t).
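Concretely, starting from the empty graph each edge independently follows the classical two-state Markov formula, so that \(Y(t)\) is binomial with parameters N and \(p(t)=\lambda /(\lambda +\mu )\,(1-e^{-(\lambda +\mu )t})\). A minimal numerical sketch (ours; parameter values are illustrative):

```python
import math

def edge_up_probability(lam, mu, t):
    """P(edge present at time t | absent at time 0) for one edge with
    up-rate lam and down-rate mu; the classical two-state Markov formula."""
    g = lam + mu
    return (lam / g) * (1.0 - math.exp(-g * t))

def mean_edges_at(N, lam, mu, t):
    """E Y(t) when Y(0) = 0: the binomial mean N * p(t)."""
    return N * edge_up_probability(lam, mu, t)

for t in [0.1, 1.0, 10.0]:
    print(t, mean_edges_at(45, 2.0, 3.0, t))
```

As t grows, \(p(t)\) increases monotonically to the stationary up-probability \(\lambda /(\lambda +\mu )\).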

In many applications the model that we just sketched is of limited relevance, as various features that play a role in real-life networks are not covered. To remedy this, in [15] alternative random graph processes were proposed, such as the dynamic counterparts of the configuration model and the stochastic block model. It is noted that a specific property that is often not fulfilled in real networks is that of the edges evolving independently; in practice likely there will be ‘external’ factors that affect all these N processes simultaneously, rendering them dependent. An example is a dynamic random graph in which the values of the up-rate and down-rate are determined by an independent stochastic process (think of temperature in a chemical network, weather conditions in a road traffic network, economic conditions in a financial network, etc.).

Motivated by the above considerations, the focus of this paper is on models in which the edges evolve dependently; the main contribution is that we propose and analyze two such models. In the first model, studied in Sect. 2, the up-rate and the down-rate of each of the edges are determined by an external, autonomously evolving Markov process X(t), in the sense that at time t these rates (for all edges) are \(\lambda _i\) and \(\mu _i\) if \(X(t)=i\); this mechanism is usually referred to as regime switching. In the second model, which is analyzed in Sect. 3, the up-rate and the down-rate (say, \(\varLambda \) and M) are resampled every \(\varDelta >0\) time units (and these sampled values then apply to all edges).

In more detail, our findings are the following. The focus is on the probabilistic properties of the process Y(t) that records the number of edges present as a function of time. For both models mentioned above we manage to uniquely characterize its transient and stationary behavior, albeit in a somewhat implicit way: for the first model in terms of a pde for the corresponding probability generating function (pgf), for the second model in terms of a recursion for the pgf. Then we use these characterizations to point out how transient and stationary means can be computed. The next step is to consider scaling limits; under a particular scaling, the process Y(t) satisfies a functional central limit theorem. More specifically, after centering and scaling it converges to an Ornstein-Uhlenbeck (ou) process; interestingly, in [13] it is shown that for certain dynamic Erdős-Rényi graphs a particular clique-complex-related quantity (the ‘Betti number’) is described by an ou process as well. Finally we discuss for both models the corresponding sample-path large deviations, characterizing the models’ rare-event behavior. In Sect. 4, the results are illustrated by numerical examples.

2 Erdős-Rényi Graphs Under Regime Switching

In this section we consider the following model. Let \((X(t))_{t\geqslant 0}\) be an irreducible continuous-time Markov process, typically referred to as the regime process or background process, living on the state space \(\{1,\ldots ,d\}\). The transition rate matrix corresponding to \((X(t))_{t\geqslant 0}\) is denoted by \(Q=(q_{ij})_{i,j=1}^d\) and the corresponding invariant distribution by the (column) vector \({\varvec{\pi }}\). As before, we consider the situation of N possible edges. Let \(\mu _i\geqslant 0\) be the hazard rate of an existing edge becoming inactive when the regime process is in state i; likewise, \(\lambda _i\geqslant 0\) is the hazard rate corresponding with a non-existing edge becoming active. Because all edges react to the common regime process, the number of links present (denoted by \((Y(t))_{t\geqslant 0}\)) evolves according to an interesting dependence structure.

2.1 Generating Function

We start our exposition by studying the (transient and stationary) pgfs

$$ \phi _i(t,z):={\mathbb E} \left( z^{Y(t)}\,1_{\{X(t)\,=\,i\}}\right) ,\,\, \phi _i(z):={\mathbb E} \left( z^{Y}\,1_{\{X\,=\,i\}}\right) . $$

We do so by first analyzing \(p_i(m, t):={\mathbb P}(Y(t) = m, X(t) = i),\) by following classical procedures; later we also point out how \(p_i(m):={\mathbb P}(Y = m, X = i)\) can be found. Setting up the Kolmogorov equations, with \(q_i:=-q_{ii}>0\),

$$\begin{aligned} p_i(m,t+\varDelta t)= & {} \sum _{j\not =i} p_j(m,t) q_{ji}\,\varDelta t \\&+\, p_i(m+1,t) \mu _i (m+1)\varDelta t + p_i(m-1,t) \lambda _i(N-m+1)\varDelta t \\&+\, p_i(m,t)\big (1- q_i\varDelta t - \mu _i\,m\,\varDelta t-\lambda _i\,(N-m)\,\varDelta t\big )+o(\varDelta t), \end{aligned}$$

leading to the linear system of differential equations

$$\begin{aligned} p_i'(m,t)= & {} \sum _{j=1}^d p_j(m,t)q_{ji} + p_i(m+1,t) \mu _i\,(m+1) \\&+\,p_i(m-1,t) \lambda _i\,(N-m+1) -p_i(m,t)\mu _i\,m-p_i(m,t)\lambda _i\,(N-m), \end{aligned}$$

where \(p_i(-1,t)\) and \(p_i(N+1,t)\) are set to 0. Multiplying by \(z^m\) and summing over \(m=0\) up to N, we arrive at the pde

$$\begin{aligned} \frac{\partial }{{\partial t}}\phi _i(t,z)= & {} \sum _{j=1}^d \phi _j(t,z)q_{ji}+\mu _i (1-z)\frac{\partial }{{\partial z}}\phi _i(t,z)+\\&\lambda _iN(z-1) \phi _i(t,z)+\lambda _iz(1-z)\frac{\partial }{{\partial z}}\phi _i(t,z). \end{aligned}$$

In stationarity, the left-hand side of the previous display can be equated to 0, thus leading to an ode. We obtain

$$ 0=\sum _{j=1}^d \phi _j(z)q_{ji}+\mu _i (1-z)\phi _i'(z)+\lambda _iN(z-1) \phi _i(z)+\lambda _iz(1-z)\phi _i'(z). $$

2.2 Moments

Following a standard procedure, we can find explicit expressions for all (factorial) moments. To this end, we define \(e_{i,k}:= {\mathbb E}((Y)_k1_{\{X=i\}})\), with \((x)_k\) denoting \(x(x-1)\cdots (x-k+1)\). We obtain the factorial moments by differentiating with respect to z and plugging in \(z=1\): in self-evident matrix/vector notation, with \(\varLambda :=\mathrm{diag}\{{\varvec{\lambda }}\}\) and \(M:=\mathrm{diag}\{{\varvec{\mu }}\}\),

$$ {\varvec{0}}^\mathrm{T} = {\varvec{e}}_1^\mathrm{T}Q - {\varvec{e}}_1^\mathrm{T}M +{\varvec{\pi }}^\mathrm{T}\varLambda N - {\varvec{e}}_1^\mathrm{T}\varLambda . $$

This leads to \({\mathbb E}Y = {\varvec{e}}_1^\mathrm{T}{\varvec{1}}\), with \({\varvec{e}}_1^\mathrm{T}= N\cdot {\varvec{\pi }}^\mathrm{T}\varLambda (\varLambda +M-Q)^{-1};\) observe that the mean is proportional to N, as expected. This procedure provides a recursion for all factorial moments: by differentiating k times and inserting \(z=1\), we obtain, for \(k=2,3,\ldots ,N\),

$$ {\varvec{0}}^\mathrm{T} = {\varvec{e}}_k^\mathrm{T}Q - k\,{\varvec{e}}_k^\mathrm{T}M +kN\,{\varvec{e}}_{k-1}^\mathrm{T}\varLambda -k\, {\varvec{e}}_k^\mathrm{T}\varLambda - k(k-1)\,{\varvec{e}}_{k-1}^\mathrm{T}\varLambda , $$

and consequently

$$ {\varvec{e}}_k^\mathrm{T} = k\,(N-k+1)\cdot {\varvec{e}}_{k-1}^\mathrm{T} \,\varLambda (k\varLambda +kM-Q)^{-1}. $$

Observe that this recursion can be explicitly solved, as we know \({\varvec{e}}_1^\mathrm{T}\); the following result now straightforwardly follows.

Proposition 1

For \(k=1,\ldots ,N\),

$$ {\varvec{e}}_k^\mathrm{T} = k!\,(N)_k \cdot {\varvec{\pi }}^\mathrm{T} \varLambda (\varLambda +M-Q)^{-1}\varLambda (2\varLambda +2M-Q)^{-1}\cdots \varLambda (k\varLambda +kM-Q)^{-1}, $$

whereas \({\varvec{e}}_k^\mathrm{T} =0\) for \(k=N+1,N+2,\ldots \).
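These formulas are easy to check numerically. The sketch below (ours, with illustrative parameter values) evaluates \({\mathbb E}Y = N\,{\varvec{\pi }}^\mathrm{T}\varLambda (\varLambda +M-Q)^{-1}{\varvec{1}}\) for \(d=2\), and runs the recursion for the degenerate case \(d=1\), where Q vanishes, the regime plays no role, and the factorial moments must agree with those of a binomial distribution with parameters N and \(\lambda /(\lambda +\mu )\).

```python
import math

def mean_edges(N, lam, mu, q):
    """E Y = N * pi^T Lambda (Lambda + M - Q)^{-1} 1 for a two-state regime
    chain with leaving rates q = (q1, q2); the 2x2 system is solved by Cramer."""
    q1, q2 = q
    pi = [q2 / (q1 + q2), q1 / (q1 + q2)]
    A = [[lam[0] + mu[0] + q1, -q1],        # A = Lambda + M - Q
         [-q2, lam[1] + mu[1] + q2]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    w = [(A[1][1] - A[0][1]) / det, (A[0][0] - A[1][0]) / det]  # w = A^{-1} 1
    return N * sum(pi[i] * lam[i] * w[i] for i in range(2))

def factorial_moments_d1(N, lam, mu, kmax):
    """The recursion e_k = k (N-k+1) e_{k-1} lam / (k (lam+mu)), i.e. Prop. 1
    specialised to d = 1 (so Q = 0 and pi = 1), returning e_1, ..., e_kmax."""
    e, out = 1.0, []
    for k in range(1, kmax + 1):
        e = k * (N - k + 1) * e * lam / (k * (lam + mu))
        out.append(e)
    return out

def binomial_factorial_moment(N, p, k):
    """Direct E (Y)_k for Y ~ Bin(N, p), summing over the pmf."""
    return sum(math.comb(N, m) * p**m * (1 - p)**(N - m)
               * math.prod(range(m - k + 1, m + 1)) for m in range(N + 1))
```

If the rates do not depend on the regime state, `mean_edges` must reduce to \(N\lambda /(\lambda +\mu )\), which is a convenient sanity check.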

Following standard techniques, we can now evaluate all stationary probabilities as well. First, \(p_i(N)\) follows from the identity \({e}_{i,N} = {\mathbb E}((Y)_N1_{\{X=i\}})= N! \,p_i(N).\) We can recursively find the other probabilities \(p_i(m)\); applying

$$ e_{i,N-1} = {\mathbb E}((Y)_{N-1}1_{\{X=i\}})= (N-1)! \, p_i(N-1) + N!\, p_i(N), $$

we can express \(p_i(N-1)\) in terms of \(p_i(N)\) (and \({e}_{i,N-1}\) and \({e}_{i,N}\)). In general \(p_i(m)\) can be found from \(p_i(m+1),\ldots ,p_i(N)\) using

$$ e_{i,m} = \sum _{k=m}^N (k)_m p_i(k). $$
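The backward substitution just described is easily implemented; below a sketch (ours), tested on the factorial moments of a known binomial distribution.

```python
import math

def pmf_from_factorial_moments(e):
    """Given e[m] = E((Y)_m) for m = 0,...,N, recover P(Y = m) on {0,...,N}
    from e_m = sum_{k=m}^N (k)_m p(k), working downward from m = N."""
    N = len(e) - 1
    p = [0.0] * (N + 1)
    for m in range(N, -1, -1):
        tail = sum(math.prod(range(k - m + 1, k + 1)) * p[k]
                   for k in range(m + 1, N + 1))
        p[m] = (e[m] - tail) / math.factorial(m)   # since (m)_m = m!
    return p

# factorial moments of Bin(N, rho): E (Y)_m = (N)_m rho^m
N, rho = 6, 0.3
e = [math.prod(range(N - m + 1, N + 1)) * rho**m for m in range(N + 1)]
pmf = pmf_from_factorial_moments(e)
```

The recovered `pmf` should coincide with the binomial probabilities, confirming that the system is triangular and hence uniquely solvable.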

Remark 1

In addition, the transient factorial moments \({\mathbb E}((Y(t))_k\,1_{\{X(t)=i\}})\) can be (recursively) found; in every step of the recursion a system of linear differential equations (rather than a linear-algebraic equation) needs to be solved; see [12] for a similar procedure in the context of infinite-server queues under regime switching.

2.3 Diffusion Results Under Scaling

In this subsection we impose the scaling \(Q\mapsto N^\delta Q\), entailing that the regime process is sped up by a factor \(N^\delta \), with the objective to prove a functional central limit theorem for the resulting limiting process. To get a feeling for how this scaling affects the system’s behavior, we first compute the mean and variance of the stationary number of edges. To this end, we use the following lemma, which is proven in the appendix. In the sequel \(D:=({\varvec{1}}{\varvec{\pi }}^\mathrm{T}-Q)^{-1}-{\varvec{1}}{\varvec{\pi }}^\mathrm{T}\) denotes the deviation matrix, and \(x^\star := {\varvec{x}}^\mathrm{T}{\varvec{\pi }}\) for \({\varvec{x}}\in {\mathbb R}^d\). Let \({\varvec{\gamma }}:={\varvec{\lambda }}+{\varvec{\mu }}\), assumed componentwise positive, and \(\varGamma :=\mathrm{diag}\{{\varvec{\gamma }}\}=\varLambda +M\).

Lemma 1

Define \(F_{N,k}:= (k\,\varGamma -NQ)^{-1}\) for \(k\in {\mathbb N}\). Then, as \(N\rightarrow \infty \),

$$ F_{N,k} = \frac{1}{k}\frac{1}{\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}+ \frac{1}{N}E+O(N^{-2}),\,\,E:=\left( I-\frac{1}{\gamma ^\star }{\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}\varGamma \right) D\left( I- \frac{1}{\gamma ^\star } {\varvec{\gamma }}\,{\varvec{\pi }}^\mathrm{T}\right) . $$

Let us first evaluate the mean of Y under this scaling; in the steps below we use \({\varvec{\pi }}^\mathrm{T}\varLambda \, {\varvec{1}}=\lambda ^\star \) and \(D{\varvec{1}}={\varvec{0}}\). From the above lemma, we find, with \(\bar{\varrho }:=\lambda ^\star /\gamma ^\star \),

$$\begin{aligned} {\mathbb E}\,Y= & {} N {\varvec{\pi }}^\mathrm{T} \varLambda (\varLambda +M-N^\delta Q)^{-1}{\varvec{1}} = N {\varvec{\pi }}^\mathrm{T} \varLambda F_{N^\delta ,1}{\varvec{1}} \\= & {} N {\varvec{\pi }}^\mathrm{T} \varLambda \left( \frac{1}{\gamma ^\star } {\varvec{1}}{\varvec{\pi }}^\mathrm{T}{\varvec{1}}+ {N}^{-\delta }\,E{\varvec{1}}+O(N^{-2\delta })\right) =N\,\bar{\varrho }+ O(N^{1-\delta }). \end{aligned}$$

Along the same lines,

$$ ({\mathbb E}Y)^2 = N^2\bar{\varrho }\,^2 - N^{2-\delta }\frac{2}{\gamma ^\star } {\varvec{\pi }}^\mathrm{T} (\varLambda -\bar{\varrho }\,\varGamma ) D\, \bar{\varrho }\,\varGamma {\varvec{1}} +o(N^{\mathrm{max}\{1,2-\delta \}}). $$

In addition, ignoring sublinear terms,

$$\begin{aligned} {\mathbb E}Y(Y-1)= & {} 2N(N-1)\,{\varvec{\pi }}^\mathrm{T} \varLambda (\varLambda +M-N^\delta Q)^{-1}\varLambda (2\varLambda +2M-N^\delta Q)^{-1}{\varvec{1}}\\= & {} 2N(N-1)\,{\varvec{\pi }}^\mathrm{T} \varLambda \, F_{N^\delta ,1} \, \varLambda \,F_{N^\delta ,2}{\varvec{1}}\\= & {} 2N(N-1) \,{\varvec{\pi }}^\mathrm{T} \varLambda \, \left( \frac{1}{\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}+ \frac{1}{N^\delta }E\right) \varLambda \left( \frac{1}{2\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}+ \frac{1}{N^\delta }E\right) {\varvec{1}}. \end{aligned}$$

Using the following equalities

$$\begin{aligned} {\varvec{\pi }}^\mathrm{T} \varLambda \, \left( \frac{1}{\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}\right) \varLambda \left( \frac{1}{2\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}\right) {\varvec{1}}= & {} \frac{\bar{\varrho }\,^2}{2},\\ {\varvec{\pi }}^\mathrm{T} \varLambda \, E\varLambda \left( \frac{1}{2\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}\right) {\varvec{1}}= & {} \frac{1}{2\gamma ^\star } {\varvec{\pi }}^\mathrm{T} (\varLambda -\bar{\varrho }\,\varGamma ) D (\varLambda -\bar{\varrho }\,\varGamma ) {\varvec{1}},\\ {\varvec{\pi }}^\mathrm{T} \varLambda \, \left( \frac{1}{\gamma ^\star } {\varvec{1}}\,{\varvec{\pi }}^\mathrm{T}\right) \varLambda E{\varvec{1}}= & {} -\frac{1}{\gamma ^\star } {\varvec{\pi }}^\mathrm{T} (\varLambda -\bar{\varrho }\,\varGamma ) D \, \bar{\varrho }\,\varGamma {\varvec{1}}, \end{aligned}$$

we arrive at

$$ {\mathbb E}Y(Y-1) =N(N-1)\,\bar{\varrho }\,^2+N^{2-\delta }\frac{1}{\gamma ^\star } {\varvec{\pi }}^\mathrm{T} (\varLambda -\bar{\varrho }\,\varGamma ) D (\varLambda -3\,\bar{\varrho }\,\varGamma ) {\varvec{1}}+o(N^{\mathrm{max}\{1,2-\delta \}}). $$

By virtue of the identity \({\mathbb V}\mathrm{ar} \,Y= {\mathbb E}Y(Y-1) +{\mathbb E}Y-({\mathbb E}Y)^2\), we thus find

$$\begin{aligned} {\mathbb V}\mathrm{ar} \,Y = N\,\bar{\varrho }(1-\bar{\varrho }) + N^{2-\delta }\,v\,+o(N^{\mathrm{max}\{1,2-\delta \}}), \end{aligned}$$
(1)

with

$$ v:=\frac{1}{\gamma ^\star } {\varvec{\pi }}^\mathrm{T} (\varLambda -\bar{\varrho }\,\varGamma ) D (\varLambda -\bar{\varrho }\,\varGamma ) {\varvec{1}}. $$

It can be checked that this formula is symmetric, in the sense that it is invariant under swapping \({\varvec{\lambda }}\) and \({\varvec{\mu }}\), which is in line with \({\mathbb V}\mathrm{ar}\,Y={\mathbb V}\mathrm{ar}\,(N-Y)\); note that \(\varLambda -\bar{\varrho }\,\varGamma = (1-\bar{\varrho })\varLambda -\bar{\varrho }M\).
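The symmetry claim can also be verified numerically. The following sketch (ours, with illustrative parameter values) computes v for a two-state regime process, building the deviation matrix directly from its definition, and checks invariance under swapping \({\varvec{\lambda }}\) and \({\varvec{\mu }}\).

```python
def v_coefficient(lam, mu, q):
    """v = (1/gamma*) pi^T (Lambda - rho Gamma) D (Lambda - rho Gamma) 1
    for d = 2, with D = (1 pi^T - Q)^{-1} - 1 pi^T the deviation matrix."""
    q1, q2 = q
    pi = [q2 / (q1 + q2), q1 / (q1 + q2)]
    Q = [[-q1, q1], [q2, -q2]]
    # invert the 2x2 matrix 1 pi^T - Q
    A = [[pi[j] - Q[i][j] for j in range(2)] for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    D = [[Ainv[i][j] - pi[j] for j in range(2)] for i in range(2)]
    gam = [lam[i] + mu[i] for i in range(2)]
    gstar = sum(pi[i] * gam[i] for i in range(2))
    rho = sum(pi[i] * lam[i] for i in range(2)) / gstar
    c = [lam[i] - rho * gam[i] for i in range(2)]  # diagonal of Lambda - rho Gamma
    return sum(pi[i] * c[i] * D[i][j] * c[j]
               for i in range(2) for j in range(2)) / gstar
```

For a two-state chain the quadratic form can be worked out by hand as well, yielding \(v=\pi _1\pi _2(c_1-c_2)^2/(q_1+q_2)\,(\gamma ^\star )^{-1}\) with \(c_i\) the diagonal entries of \(\varLambda -\bar{\varrho }\varGamma \), which makes the non-negativity visible.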

Upon inspecting the asymptotic shape of \({\mathbb V}\mathrm{ar}\,Y\), we observe a dichotomy. For \(\delta >1\) the regime process jumps so fast that all edges essentially behave independently, experiencing an ‘effective up-rate’ \(\lambda ^\star \) and an ‘effective down-rate’ \(\mu ^\star \), so that in this regime Y is approximated by a binomial random variable with parameters N and \(\bar{\varrho }\). For \(\delta <1\) the regime process is relatively slow, and hence affects the variance (which is, as a result, superlinear in N).

We now prove a functional central limit theorem. For the moment we focus on the case \(\delta =1\); in Remark 3 we comment on what happens when \(\delta >1\) or \(\delta <1\). Let \(P_1(\cdot )\) and \(P_2(\cdot )\) be two independent unit-rate Poisson processes. With \(Z_i(s):=1_{\{X(s)=i\}}\), and \(Y(0)=0\) (remarking that any other starting point can be dealt with similarly),

$$\begin{aligned} Y(t) = P_1\left( \sum _{i=1}^d \int _0^t \lambda _i Z_i(s)(N- Y(s))\mathrm{d}s\right) -P_2\left( \sum _{i=1}^d \int _0^t \mu _iZ_i(s) Y(s)\mathrm{d}s\right) . \end{aligned}$$
(2)

The first step is to verify that Y(t)/N converges to y(t), defined as the solution of the integral equation

$$ y(t) = \lambda ^\star \int _0^t (1-y(s))\mathrm{d}s -\mu ^\star \int _0^t y(s)\mathrm{d}s, $$

i.e., \(y(t) = \varrho (t):=\bar{\varrho }\cdot (1-e^{-\gamma ^\star t}).\) Define

$$\begin{aligned} \bar{Y}(t):= \frac{Y(t) - N\varrho (t)}{\sqrt{N}}; \end{aligned}$$
(3)

our objective is to prove that \(\bar{Y}(\cdot )\) converges to a Gaussian process (and we identify this process). As we follow [2, Sect. 5], which in turn uses intermediate results of [10], we restrict ourselves to the most important steps.
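As a quick sanity check on the fluid limit, one can verify numerically that \(\varrho (t)=\bar{\varrho }\,(1-e^{-\gamma ^\star t})\) indeed solves \(y'(t)=\lambda ^\star (1-y(t))-\mu ^\star y(t)\) with \(y(0)=0\); the sketch below (ours, illustrative rates) compares a central-difference derivative with the right-hand side.

```python
import math

lstar, mustar = 2.0, 3.0
gstar = lstar + mustar
rho_bar = lstar / gstar

def rho(t):
    """Candidate fluid limit rho(t) = rho_bar * (1 - exp(-gstar * t))."""
    return rho_bar * (1.0 - math.exp(-gstar * t))

def ode_residual(t, h=1e-6):
    """rho'(t) - (lstar*(1 - rho(t)) - mustar*rho(t)); the derivative is
    approximated by a central difference, so the residual should be ~0."""
    deriv = (rho(t + h) - rho(t - h)) / (2.0 * h)
    return deriv - (lstar * (1.0 - rho(t)) - mustar * rho(t))
```

The residual is of the order of the finite-difference error uniformly in t, as expected.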

We know from (2) that, for some martingale K(t),

$$ \mathrm{d}Y(t) = {\varvec{\lambda }}^T {\varvec{Z}}(t) (N-Y(t)) \mathrm{d}t - {\varvec{\mu }}^\mathrm{T} {\varvec{Z}}(t) Y(t) \mathrm{d}t +\mathrm{d}K(t), $$

and therefore

$$ \mathrm{d}\bar{Y}(t) = \sqrt{N}\big ((1-\varrho (t)){\varvec{\lambda }}^\mathrm{T}-\varrho (t){\varvec{\mu }}^\mathrm{T}\big ){\varvec{Z}}(t) \mathrm{d}t -{\varvec{\gamma }}^\mathrm{T}{\varvec{Z}}(t) \bar{Y}(t)\mathrm{d}t +\frac{\mathrm{d}K(t)}{\sqrt{N}} -\sqrt{N}\varrho '(t)\mathrm{d}t. $$

Now define \(W(t):=e^{Z_+(t)}\bar{Y}(t),\) where \({Z}_+(t):=\int _0^t {\varvec{\gamma }}^\mathrm{T}{\varvec{Z}}(s)\mathrm{d}s,\) so that,

$$ \mathrm{d}W(t) =e^{Z_+(t)}\left( \sqrt{N}\big ((1-\varrho (t)){\varvec{\lambda }}^\mathrm{T}-\varrho (t){\varvec{\mu }}^\mathrm{T}\big ){\varvec{Z}}(t) \mathrm{d}t +\frac{\mathrm{d}K(t)}{\sqrt{N}} -\sqrt{N}\varrho '(t)\mathrm{d}t \right) . $$

Observing that \(\big ((1-\varrho (t)){\varvec{\lambda }}^\mathrm{T}-\varrho (t){\varvec{\mu }}^\mathrm{T}\big ){\varvec{\pi }}= \varrho '(t),\) and recalling that \({\varvec{\gamma }}={\varvec{\lambda }}+{\varvec{\mu }}\), the equality in the previous display simplifies to

$$ \mathrm{d}W(t) = e^{Z_+(t)}\left( \sqrt{N}\big ({\varvec{\lambda }}^\mathrm{T}-\varrho (t){\varvec{\gamma }}^\mathrm{T}\big )({\varvec{Z}}(t) -{\varvec{\pi }}) \mathrm{d}t +\frac{\mathrm{d}K(t) }{\sqrt{N}} \right) . $$

We now consider the two terms in the previous display separately. As was established in [2, 10], for the first term, as \(N\rightarrow \infty \),

$$ \int _0^\cdot \sqrt{N} e^{Z_+(s)} \big ({\varvec{\lambda }}^\mathrm{T}-\varrho (s){\varvec{\gamma }}^\mathrm{T}\big )({\varvec{Z}}(s) -{\varvec{\pi }}) \mathrm{d}s\rightarrow \int _0^\cdot e^{\gamma ^\star s}\mathrm{d} G(s), $$

where \(G(\cdot )\) satisfies

$$\begin{aligned} \langle G\rangle _t = g(t):=2\int _0^t {\varvec{\pi }}^\mathrm{T} (\varLambda -\varrho (s)\varGamma ) D (\varLambda -\varrho (s)\varGamma ) {\varvec{1}}\mathrm{d}s. \end{aligned}$$
(4)

Also as in [2, 10], the second term obeys, as \(N\rightarrow \infty \),

$$ \int _0^\cdot \frac{1}{\sqrt{N}} e^{Z_+(s)} \mathrm{d}K(s) \rightarrow \int _0^\cdot e^{\gamma ^\star s}\mathrm{d} H(s), $$

where \(H(\cdot )\) satisfies (using the relation between \(K(\cdot )\) and the Poisson processes \(P_1(\cdot )\) and \(P_2(\cdot )\))

$$\begin{aligned} \langle H\rangle _t = h(t):= \int _0^t \lambda ^\star (1-\varrho (s))\mathrm{d}s + \int _0^t \mu ^\star \varrho (s)\mathrm{d}s. \end{aligned}$$
(5)

Combining the two terms studied above, it thus follows that, as \(N\rightarrow \infty \), \(W(\cdot )\) weakly converges to \(W_\infty (\cdot )\), which is the solution to the stochastic differential equation, with \(B(\cdot )\) a standard Brownian motion,

$$\begin{aligned} \mathrm{d}W_\infty (t) =e^{\gamma ^\star t} \sqrt{g'(t)+h'(t)} \,\mathrm{d}B(t). \end{aligned}$$
(6)

Translating this back in terms of a stochastic differential equation, again mimicking the line of reasoning of [2, 10], we obtain the following result.

Theorem 1

\(\bar{Y}(\cdot )\) converges weakly to \(\bar{Y}_\infty (\cdot )\), which is the solution to the stochastic differential equation

$$\begin{aligned} \mathrm{d}\bar{Y}_\infty (t)= - \gamma ^\star \, \bar{Y}_\infty (t)\,\mathrm{d}t + \sqrt{g'(t)+h'(t)} \,\mathrm{d}B(t), \end{aligned}$$
(7)

with \(g(\cdot )\) and \(h(\cdot )\) given by (4) and (5), respectively.

Remark 2

Using the behavior of \(g'(t)\) and \(h'(t)\) for t large, we conclude that for large values of t (‘in stationarity’), this stochastic differential equation reads

$$ \mathrm{d}\bar{Y}_\infty (t)= - \gamma ^\star \, \bar{Y}_\infty (t)\,\mathrm{d}t + \sqrt{2\gamma ^\star \,\bar{\varrho }\,(1-\bar{\varrho })+2\gamma ^\star \,v}\,\mathrm{d}B(t), $$

which defines an ou process with mean 0 and variance \(\bar{\varrho }\,(1-\bar{\varrho }) + v\); note that this aligns with what we found, plugging in \(\delta =1\), in (1).

Remark 3

When \(\delta <1\), the \(\sqrt{N}\) in the definition (3) needs to be replaced by \(N^{1-\delta /2}\); it is readily checked that in the limiting stochastic differential equation (7) we then just have \(g'(t)\) below the square-root sign. On the contrary, if \(\delta >1\) then the definition (3) remains unchanged, but below the square-root sign in (7) we only have \(h'(t)\).

2.4 Large Deviations Results Under Scaling

Whereas above we discussed the diffusion behavior of the process under study, we now consider rare events. We again focus on the scaling corresponding to \(\delta =1\), following the setup of [11]. Intuitively, the rare-event behavior decomposes into the effect of the regime process, and that of the edge dynamics conditional on the regime process.

Let \({\varvec{g}}(\cdot )\) be in \(U_T\), defined as the set of non-negative d-dimensional functions whose components \(g_i(s)\) sum to 1 for all \(s\in [0,T]\). Define

$$ {\mathbb J}_T({\varvec{g}}) := \int _0^T \sup _{{\varvec{u}}\geqslant {\varvec{0}}}\left( -\sum _{i=1}^d \frac{(Q{\varvec{u}})_i}{u_i}g_i(s)\right) \mathrm{d}s. $$

In addition,

$$ \varLambda _{x,{\varvec{g}}}(\vartheta ):= \sum _{i=1}^d g_i\left( x\mu _i(e^{-\vartheta }-1)+(1-x)\lambda _i(e^{\vartheta }-1)\right) . $$

Based on the findings in [11], one anticipates a sample-path ldp (of ‘Mogulskii type’; cf. [4, Theorem 5.2]), with local rate function

$$ I_{x,{\varvec{g}}}(y):=\sup _\vartheta \left( \vartheta y - \varLambda _{x,{\varvec{g}}}(\vartheta )\right) . $$

This concretely means that, with \(Y^\circ (t):= N^{-1}Y(t)\) and \(t\in [0,T]\), and under mild regularity conditions on the set A,

$$ \lim _{N\rightarrow \infty }\frac{1}{N} \log {\mathbb P}( Y^\circ (\cdot ) \in A) =-\inf _{f\in A} {\mathbb I}_T(f), $$

with

$$ {\mathbb I}_T(f):=\inf _{{\varvec{g}}(\cdot )\in U_T}\left( \int _0^T I_{f(s),{\varvec{g}}(s)}(f'(s))\mathrm{d}s+{\mathbb J}_T({\varvec{g}})\right) . $$

A formal derivation of this ldp is beyond the scope of this paper.
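To give a concrete feel for the local rate function, the sketch below (ours, with illustrative parameters) evaluates \(I_{x,{\varvec{g}}}(y)\) by a crude grid search over \(\vartheta \). Since \(\varLambda _{x,{\varvec{g}}}\) is convex with \(\varLambda _{x,{\varvec{g}}}(0)=0\), the rate function vanishes exactly at the zero-cost drift \(y=\sum _i g_i\big ((1-x)\lambda _i-x\mu _i\big )\) and is strictly positive elsewhere.

```python
import math

def local_rate(y, x, g, lam, mu):
    """I_{x,g}(y) = sup_theta (theta*y - Lambda_{x,g}(theta)),
    approximated by maximising over an equidistant grid of theta values."""
    a = sum(gi * x * mi for gi, mi in zip(g, mu))         # aggregate down-flow
    b = sum(gi * (1 - x) * li for gi, li in zip(g, lam))  # aggregate up-flow
    def obj(th):
        return th * y - (a * (math.exp(-th) - 1.0) + b * (math.exp(th) - 1.0))
    return max(obj(-10.0 + 20.0 * k / 4000) for k in range(4001))
```

A closed-form Legendre transform is of course available here as well; the grid search is merely the quickest way to experiment with different \((x,{\varvec{g}})\).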

3 Erdős-Rényi Graphs with Resampling

An alternative dynamic Erdős-Rényi model (in discrete time) can be defined as follows; we refer to it as an Erdős-Rényi graph with resampling. Each of the N edges alternates between two states: it has the value 0 when the edge is absent and 1 when it is present. In slot m, let the transition matrix of the presence of any of the N edges be given by

$$ \left( \begin{array}{cc} P_m&{}1-P_m\\ 1-R_m&{}R_m \end{array}\right) , $$

where the sequence \((P_m,R_m)_{m\in {\mathbb N}}\) consists of i.i.d. vectors in \((0,1)^2\); we note that \(P_m\) and \(R_m\) (for a given slot m, that is) are not necessarily assumed independent. It is stressed that the samples in slot m, i.e., \(P_m\) and \(R_m\), apply to all edges simultaneously; as a consequence, the individual edges (each of them alternating between absent and present) evolve dependently, as intended.

In this section we find the counterparts for the resampling model of all results that we derived for the regime switching model of Sect. 2. To make notation compact, let (PR) denote a generic sample of \((P_m,R_m)\).

3.1 Generating Function

Let us now analyze the object \(\varphi _k(z) := {\mathbb E}\left( z^{Y_{m}}\,|\, Y_{m-1}=k\right) .\) Realize that \(Y_{m}\) is the sum of (i) the number of edges that were present at time \(m-1\) and still are at time m, and (ii) the number of edges that were absent at time \(m-1\) but appear at time m. Both counts obey a binomial distribution (with appropriately chosen parameters). More precisely,

$$ \varphi _k(z) ={\mathbb E}\left( \sum _{\ell =0}^{N-k} \left( {\begin{array}{c}N-k\\ \ell \end{array}}\right) (1-P_m)^\ell P_m^{N-k-\ell } z^\ell \cdot \sum _{\ell =0}^{k} \left( {\begin{array}{c}k\\ \ell \end{array}}\right) R_m^\ell (1-R_m)^{k-\ell } z^\ell \right) , $$

which simplifies to

$$ {\mathbb E}\left( \left( (1-P_m)z+P_m\right) ^{N-k}\cdot \left( R_mz+1-R_m\right) ^k\right) . $$
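For deterministic \((P_m,R_m)=(p,r)\), this simplification can be double-checked by brute-force enumeration of the two binomials; a small sketch (ours, illustrative parameters):

```python
import math

def pgf_onestep_enum(z, N, k, p, r):
    """E(z^{Y_m} | Y_{m-1} = k): enumerate newly appearing and surviving edges."""
    new = sum(math.comb(N - k, l) * (1 - p)**l * p**(N - k - l) * z**l
              for l in range(N - k + 1))
    kept = sum(math.comb(k, l) * r**l * (1 - r)**(k - l) * z**l
               for l in range(k + 1))
    return new * kept

def pgf_onestep_closed(z, N, k, p, r):
    """The closed form ((1-p)z + p)^{N-k} * (r z + 1 - r)^k."""
    return ((1 - p) * z + p)**(N - k) * (r * z + 1 - r)**k
```

The agreement of the two routines is simply the binomial theorem at work, applied once per factor.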

Now consider the stationary random variable Y, through its z-transform \(\varphi (z):= {\mathbb E}\,z^{Y}.\) Based on the above computation, we have found the following fixed-point equation:

$$\begin{aligned} \varphi (z) = {\mathbb E}\left( ((1-P)z +P)^N \,\varphi \left( \frac{Rz+1-R}{(1-P)z+P}\right) \right) . \end{aligned}$$
(8)
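When P and R are deterministic the edges are independent, and in stationarity Y is binomial with parameters N and \(\pi _1=(1-p)/(2-p-r)\); plugging the corresponding binomial pgf into (8) then gives a useful consistency check. A sketch (ours, illustrative parameters):

```python
N, p, r = 8, 0.3, 0.6
pi1 = (1 - p) / (2 - p - r)   # stationary up-probability of a single edge

def phi(z):
    """pgf of Bin(N, pi1), the stationary law when P, R are deterministic."""
    return (1 - pi1 + pi1 * z)**N

def fixed_point_residual(z):
    """phi(z) minus the right-hand side of the fixed-point equation (8)."""
    rhs = ((1 - p) * z + p)**N * phi((r * z + 1 - r) / ((1 - p) * z + p))
    return phi(z) - rhs
```

The residual vanishes identically in z, since \((1-p)z+p\) times the transformed argument reproduces \(1-\pi _1+\pi _1 z\) coefficientwise.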

3.2 Moments

In this subsection, we compute the mean, variance and correlation in stationarity.

Mean. Let us first compute \({\mathbb E}\,Y\), by differentiating both sides with respect to z and plugging in \(z=1\). To this end, we define

$$ \psi _1(z):= ((1-P)z +P)^N,\,\,\,\psi _2(z):= \varphi \left( \frac{Rz+1-R}{(1-P)z+P}\right) . $$

We first compute a number of quantities that we need in the sequel. It takes routine calculations to conclude that

$$\begin{aligned} \psi _1'(z)= & {} (1-P)N ((1-P)z +P)^{N-1},\\ \psi _1''(z)= & {} (1-P)^2 N(N-1) ((1-P)z +P)^{N-2},\\ \psi '_2(z)= & {} \frac{P+R-1}{((1-P)z+P)^2} \varphi '\left( \frac{Rz+1-R}{(1-P)z+P}\right) , \end{aligned}$$

and

$$\begin{aligned} \psi _2''(z)= & {} -2\frac{(P+R-1)(1-P)}{((1-P)z+P)^3}\varphi '\left( \frac{Rz+1-R}{(1-P)z+P}\right) \\&+ \,\frac{(P+R-1)^2}{((1-P)z+P)^4} \varphi ''\left( \frac{Rz+1-R}{(1-P)z+P}\right) . \end{aligned}$$

As a consequence,

$$ \psi '_1(1) = (1-P)N,\,\,\,\psi ''_1(1)=(1-P)^2N(N-1),\,\,\,\psi '_2(1)=(P+R-1)\varphi '(1), $$

and

$$ \psi _2''(1)= -2(P+R-1)(1-P)\varphi '(1)+(P+R-1)^2\varphi '' (1). $$

Regarding the first moment of Y, we obtain the equation \(\alpha :=\varphi '(1) = {\mathbb E} \,\psi _1'(1) + {\mathbb E} \,\psi _2'(1),\) or equivalently \(\alpha = N(1-{\mathbb E}\,P)+\alpha ({\mathbb E}\,P+{\mathbb E}\,R-1),\) and hence

$$\begin{aligned} \alpha =N \,\frac{1-{\mathbb E}\,P}{2-{\mathbb E}\,P-{\mathbb E}\,R}. \end{aligned}$$
(9)

Variance. We now evaluate the quantity

$$ \beta :={\mathbb E}\,Y(Y-1) =\varphi ''(1)= {\mathbb E} \,\psi _1''(1) + 2\, {\mathbb E} \,\psi _1'(1)\psi _2'(1)+ {\mathbb E} \,\psi _2''(1). $$

We thus obtain that \(\beta \) equals

$$ N(N-1)\,{\mathbb E}\left( (1-P)^2\right) + 2( N-1) \,\alpha \, {\mathbb E}\left( (P+R-1)(1-P)\right) +\beta \,{\mathbb E}\left( (P+R-1)^2\right) , $$

and therefore

$$ \beta = \frac{N(N-1)\,{\mathbb E}\left( (1-P)^2\right) + 2( N-1) \,\alpha \, {\mathbb E}\left( (P+R-1)(1-P)\right) }{1- {\mathbb E}\left( (P+R-1)^2\right) }. $$

As a consequence, \({\mathbb V}\mathrm{ar}\,Y\) equals

$$ \alpha -\alpha ^2+\frac{N(N-1)\,{\mathbb E}\left( (1-P)^2\right) + 2( N-1) \,\alpha \, {\mathbb E}\left( (P+R-1)(1-P)\right) }{1- {\mathbb E}\left( (P+R-1)^2\right) }. $$

It takes an elementary but tedious computation to verify that if P and R equal (deterministically) p and r, respectively, then this variance reduces to \(N\pi _0\pi _1\), as desired; here \(\pi _1:=(1-p)/(2-p-r)\) and \(\pi _0:=1-\pi _1\) denote the stationary probabilities of an individual edge being present and absent, respectively.
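That elementary but tedious computation is easily delegated to a few lines of code (ours, with illustrative p and r):

```python
N, p, r = 12, 0.35, 0.55
pi1 = (1 - p) / (2 - p - r)   # stationary probability that an edge is present
pi0 = 1 - pi1

alpha = N * (1 - p) / (2 - p - r)                        # E Y, cf. (9)
beta = (N * (N - 1) * (1 - p)**2
        + 2 * (N - 1) * alpha * (p + r - 1) * (1 - p)) / (1 - (p + r - 1)**2)
var_y = alpha - alpha**2 + beta                          # Var Y
```

The variance `var_y` indeed equals \(N\pi _0\pi _1\), the variance of a sum of N independent Bernoulli(\(\pi _1\)) edges.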

We also conclude that \({\mathbb V}\mathrm{ar}\,Y\) grows essentially quadratically in N. Indeed, it follows by standard computations that, with \(\bar{P}:=1-P\) and \(\bar{R}:=1-R\),

$$\begin{aligned} {\mathbb V}\mathrm{ar}\,Y = \gamma _1 N^2 +\gamma _2 N, \end{aligned}$$
(10)

where

$$ \gamma _1 = \frac{{\mathbb E} (\bar{R}^2) ({\mathbb E}\,\bar{P})^2 - 2 \,{\mathbb E} (\overline{PR}) {\mathbb E}\,\bar{P}\,{\mathbb E}\,\bar{R} + {\mathbb E} (\bar{P}^2) ({\mathbb E}\,\bar{R})^2 }{\left( 1- {\mathbb E}\left( (\bar{P}+\bar{R}-1)^2\right) \right) \left( {\mathbb E}\,\bar{P}+{\mathbb E}\,\bar{R}\right) ^2}, $$

and

$$ \gamma _2 = \frac{-\,{\mathbb E} (\bar{R}^2) \,{\mathbb E}\,\bar{P} + 2 \,{\mathbb E}\,\bar{P}\,{\mathbb E}\,\bar{R} -{\mathbb E} (\bar{P}^2) \,{\mathbb E}\,\bar{R} }{\left( 1- {\mathbb E}\left( (\bar{P}+\bar{R}-1)^2\right) \right) \left( {\mathbb E}\,\bar{P}+{\mathbb E}\,\bar{R}\right) }. $$

Notice that \(\gamma _1\) and \(\gamma _2\) are symmetric in P and R, as desired, and observe that \(\gamma _1\geqslant 0\), with equality precisely when \(\bar{P}\) and \(\bar{R}\) are proportional with probability one (in particular when P and R are deterministic). We conclude that no standard CLT applies (which would require that \({\mathbb V}\mathrm{ar}\,Y\) grows linearly in N) unless \(\gamma _1=0\).
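The decomposition (10) can be checked against the direct route via \(\alpha \) and \(\beta \); below a sketch (ours) for an illustrative two-point joint distribution of \((\bar{P},\bar{R})\).

```python
# a two-point joint law for (Pbar, Rbar) = (1-P, 1-R), each outcome w.p. 1/2
support = [((0.2, 0.3), 0.5), ((0.5, 0.1), 0.5)]

def E(f):
    """Expectation of f(Pbar, Rbar) under the two-point law."""
    return sum(w * f(pb, rb) for (pb, rb), w in support)

N = 15
a, b = E(lambda pb, rb: pb), E(lambda pb, rb: rb)
A, B = E(lambda pb, rb: pb * pb), E(lambda pb, rb: rb * rb)
C = E(lambda pb, rb: pb * rb)
denom = 1 - E(lambda pb, rb: (1 - pb - rb)**2)   # = 1 - E (P+R-1)^2

# direct route: alpha, beta, then Var Y = alpha - alpha^2 + beta
alpha = N * a / (a + b)
beta = (N * (N - 1) * A + 2 * (N - 1) * alpha * (a - A - C)) / denom
var_direct = alpha - alpha**2 + beta

# route via (10)
gamma1 = (B * a * a - 2 * C * a * b + A * b * b) / (denom * (a + b)**2)
gamma2 = (-B * a + 2 * a * b - A * b) / (denom * (a + b))
```

Here \({\mathbb E}((P+R-1)(1-P))\) translates into \(a-A-C\) in the notation of the script, and the two routes agree exactly, confirming that (10) is an identity rather than an asymptotic expansion.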

Correlation. We now focus on computing the limit of the covariance \({\mathbb C}\mathrm{ov} (Y_m,Y_{m+1})\) as \(m\rightarrow \infty .\) Observe that

$$ \lim _{m\rightarrow \infty } {\mathbb C}\mathrm{ov} (Y_m,Y_{m+1}) = \lim _{m\rightarrow \infty } \sum _{k=0}^N k \,{\mathbb E}(Y_{m+1}\,|\, Y_m = k) \,{\mathbb P}(Y_m = k) - ({\mathbb E}\,Y)^2, $$

which, in self-evident notation, reads

$$ \sum _{k=0}^N k \,{\mathbb E}({\mathbb B}\mathrm{in} (k,R))\,{\mathbb P}(Y= k) +\sum _{k=0}^N k\,{\mathbb E}({\mathbb B}\mathrm{in} (N-k,1-P) ) \,{\mathbb P}(Y= k) - ({\mathbb E}\,Y)^2. $$

This reduces to

$$ \,{\mathbb E}R\,\sum _{k=0}^N k^2 \,{\mathbb P}(Y= k) +(1-\,{\mathbb E}P)\sum _{k=0}^N k(N-k) \,{\mathbb P}(Y= k) - ({\mathbb E}\,Y)^2, $$

so that we obtain

$$ \lim _{m\rightarrow \infty } {\mathbb C}\mathrm{ov} (Y_m,Y_{m+1}) =({\mathbb E}P+{\mathbb E}R-1) {\mathbb E}(Y^2) + (1-\,{\mathbb E}P)N \,{\mathbb E}\,Y - ({\mathbb E}\,Y)^2, $$

which we can evaluate from the expressions for \({\mathbb E}\,Y\) and \({\mathbb V}\mathrm{ar}\,Y\).

3.3 Diffusion Results Under Scaling

We now consider the following scaling: for some \(\delta >0\) we put

$$\begin{aligned} P= 1-\eta /N^\delta ,\,\,\,\,R= 1-\zeta /N^\delta , \end{aligned}$$
(11)

where \(\eta \) and \(\zeta \) are non-negative random variables. The resulting model has some built-in ‘inertia’: for N large, the process has the inclination to stay in the same configuration. The mean number of edges is \(N\,\bar{\varrho },\) with

$$\bar{\varrho }:=\frac{{\mathbb E}\,\eta }{{\mathbb E}\,\eta +{\mathbb E}\,\zeta },$$

irrespective of the value of \(\delta \). When analyzing the variance, however, the value of \(\delta \) turns out to have a crucial impact. More specifically, a minor computation tells us that \({\mathbb V}\mathrm{ar}\,Y\) essentially reads

$$ N\,\bar{\varrho }\,(1-\,\bar{\varrho }\,)+N^{2-\delta } \frac{{\mathbb E} (\zeta ^2) ({\mathbb E}\,\eta )^2 - 2 \,{\mathbb E} (\eta \zeta ) {\mathbb E}\,\eta \,{\mathbb E}\,\zeta + {\mathbb E} (\eta ^2) ({\mathbb E}\,\zeta )^2 }{2({\mathbb E}\,\eta +{\mathbb E}\,\zeta )^3} . $$

Note that, due to the inertia that we incorporated, the variance is smaller than in the unscaled model, where the variance was effectively proportional to \(N^2\). Observe from the above expression that there is a dichotomy that resembles the one we came across in Sect. 2, with a transition at \(\delta = 1\). For \(\delta >1\) the standard deviation scales as \(\sqrt{N}\), whereas for \(\delta <1\) it scales as \(N^{1-\delta /2}\). An intuitive explanation is that in the regime of relatively few transitions (i.e., \(\delta >1\)) the system’s inertia is so strong that its steady state essentially behaves as an Erdős-Rényi graph in which an edge exists with probability \(\bar{\varrho }\). In the regime with relatively many transitions (i.e., \(\delta <1\)), on the contrary, the increased variability caused by the resampling dominates, and the limiting object is not of Erdős-Rényi type.

Along the same lines, an elementary computation yields that the covariance between the numbers of edges at two subsequent epochs (in stationarity) behaves as

$$ {\mathbb V}\mathrm{ar}\,Y\left( 1-\frac{{\mathbb E}\,\eta +{\mathbb E}\,\zeta }{N^\delta }\right) ; $$

dividing by \({\mathbb V}\mathrm{ar}\,Y\), the corresponding correlation coefficient essentially reads \(1-({\mathbb E}\,\eta +{\mathbb E}\,\zeta )\,N^{-\delta }\) (for N large).
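For illustration, these asymptotics can be checked by simulating the discrete-time dynamics directly: given \(Y_{m-1}\), each present edge remains present with probability \(R_m\), and each absent edge appears with probability \(1-P_m\). The following Python sketch uses hypothetical deterministic values \(\eta =2\) and \(\zeta =3\) (note that for deterministic \(\eta ,\zeta \) the \(N^{2-\delta }\) term in the variance vanishes):

```python
import random

random.seed(7)

# Hypothetical parameters: deterministic eta, zeta in the scaling (11).
N, delta = 100, 1.0
eta, zeta = 2.0, 3.0
P = 1.0 - eta / N**delta      # prob. an absent edge stays absent
R = 1.0 - zeta / N**delta     # prob. a present edge stays present
rho = eta / (eta + zeta)      # stationary fraction of present edges

def binom(n, p):
    """Binomial(n, p) sample as a sum of Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

def step(Y):
    # Survivors among the present edges plus newly appearing edges.
    return binom(Y, R) + binom(N - Y, 1.0 - P)

Y = 0
for _ in range(500):          # burn-in towards stationarity
    Y = step(Y)
path = []
for _ in range(20000):
    Y = step(Y)
    path.append(Y)

mean = sum(path) / len(path)
var = sum((y - mean) ** 2 for y in path) / len(path)
lag1 = sum((path[i] - mean) * (path[i + 1] - mean)
           for i in range(len(path) - 1)) / len(path) / var

# Theory: mean ~ N*rho = 40, var ~ N*rho*(1-rho) = 24 (the N^{2-delta}
# term vanishes for deterministic eta, zeta), lag1 ~ 1-(eta+zeta)/N = 0.95.
print(mean, var, lag1)
```

The empirical mean, variance, and lag-one autocorrelation line up with the three formulas above.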

A Related Continuous-Time Model. In the remainder of this subsection we consider an explicit continuous-time model in which the discrete-time model discussed above, and in particular the scaling (11), can be embedded. To this end, we first describe the model without scaling, and then include the scaling.

Let, at time s, \(M(s)\geqslant 0\) be the hazard rate at which an existing edge disappears; likewise, \(\varLambda (s)\geqslant 0\) is the hazard rate at which a non-existing edge appears. Here M(s) and \(\varLambda (s)\) are piecewise constant stochastic processes: for some \(\varDelta >0\),

$$ \varLambda (s) = \varLambda _i \,1_{\{(i-1)\varDelta \leqslant s< i \varDelta \}},\qquad M(s) =M_i \,1_{\{(i-1)\varDelta \leqslant s < i \varDelta \}}, $$

where \((M_i,\varLambda _i)_{i\in {\mathbb N}}\) is a sequence of i.i.d. bivariate random vectors such that both \( {\mathbb V}\mathrm{ar}\,\varLambda \) and \( {\mathbb V}\mathrm{ar}\, M\) are finite. Let Y(t) be the number of edges at time t, and Y its stationary counterpart. As it turns out, we can reuse quite a few results from the previous subsections, using the identification \(Y(m\varDelta ) = Y_m.\) In particular, it is seen that \(\varphi (z) :={\mathbb E}\, z^Y\) satisfies (8), with

$$ P := \frac{M}{\varLambda +M}+\frac{\varLambda }{\varLambda +M}e^{-(\varLambda +M)\varDelta } ,\,\,R:=\frac{\varLambda }{\varLambda +M}+\frac{M}{\varLambda +M}e^{-(\varLambda +M)\varDelta }. $$
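Two elementary consequences of these definitions can serve as a sanity check: \(P+R = 1+e^{-(\varLambda +M)\varDelta }\) for every \(\varDelta \), and, as \(\varDelta \downarrow 0\), \(P=1-\varLambda \varDelta +O(\varDelta ^2)\) and \(R=1-M\varDelta +O(\varDelta ^2)\), which is precisely the form (11) once \(\varDelta =N^{-\delta }\). A brief numerical verification (with arbitrarily chosen constants for \(\varLambda \) and M):

```python
import math

def P_R(lam, mu, Delta):
    """P (absent stays absent) and R (present stays present) over a
    window of length Delta, as in the display above."""
    g = lam + mu
    P = mu / g + (lam / g) * math.exp(-g * Delta)
    R = lam / g + (mu / g) * math.exp(-g * Delta)
    return P, R

lam, mu = 2.0, 3.0                     # arbitrary constants for Lambda, M
for Delta in (1.0, 0.1, 0.001):
    P, R = P_R(lam, mu, Delta)
    # Identity P + R = 1 + exp(-(Lambda+M) Delta) holds for every Delta.
    assert abs(P + R - (1.0 + math.exp(-(lam + mu) * Delta))) < 1e-12
    # The deviations from the linearizations 1 - lam*Delta and
    # 1 - mu*Delta are O(Delta^2).
    print(Delta, P - (1.0 - lam * Delta), R - (1.0 - mu * Delta))
```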

The mean \({\mathbb E}\,Y\) then follows from (9); similarly, the variance can be computed by means of (10).

Now we describe how to scale this model. The idea is to scale \(\varDelta \mapsto 1/N^{\delta }\), and to consider the regime in which we let N grow large, i.e., the transition rates are frequently resampled (while simultaneously the number of potential edges N grows). It is immediate that P and R fulfill (11) with \(\eta =\varLambda \) and \(\zeta =M.\) We obtain that \({\mathbb E}\,Y/N\) tends to \(\bar{\varrho }={\mathbb E}\,\varLambda /{\mathbb E}\,\varGamma \), where \(\varGamma :=\varLambda +M\). In addition, \( {\mathbb V}\mathrm{ar}\, Y\) satisfies the expansion \(N\,\bar{\varrho }\,(1-\,\bar{\varrho }\,)+N^{2-\delta } v+o(N^{\max \{1,2-\delta \}})\), where

$$\begin{aligned} v:= & {} \frac{1}{2\,{\mathbb E}\,\varGamma } \left( \,\bar{\varrho }\,^2\, {\mathbb V}\mathrm{ar}\, M - 2\,\bar{\varrho }\,(1-\,\bar{\varrho }\,)\, {\mathbb C}\mathrm{ov}\,(\varLambda ,M) + (1-\,\bar{\varrho }\,)^2\, {\mathbb V}\mathrm{ar}\, \varLambda \right) \\ {}= & {} \frac{1}{2\,{\mathbb E}\,\varGamma } {\mathbb V}\mathrm{ar}\left( \varLambda -\,\bar{\varrho }\,\varGamma \right) . \end{aligned}$$
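The second equality follows by noting \(\varLambda -\bar{\varrho }\,\varGamma =(1-\bar{\varrho })\varLambda -\bar{\varrho }M\). It can also be verified numerically; the sketch below uses a hypothetical correlated pair \((\varLambda ,M)\) built from independent uniforms, and evaluates both expressions on the same sample moments:

```python
import random

random.seed(42)

# Hypothetical correlated pair: Lambda = U1 + U2, M = U2 + U3 with
# independent uniforms, so Cov(Lambda, M) = Var(U2) = 1/12 > 0.
n = 200_000
samples = []
for _ in range(n):
    u1, u2, u3 = random.random(), random.random(), random.random()
    samples.append((u1 + u2, u2 + u3))

def mean(xs):
    return sum(xs) / len(xs)

L = [l for l, _ in samples]
M = [m for _, m in samples]
EL, EM = mean(L), mean(M)
varL = mean([(l - EL) ** 2 for l in L])
varM = mean([(m - EM) ** 2 for m in M])
cov = mean([(l - EL) * (m - EM) for l, m in samples])
EG = EL + EM
rho = EL / EG

# First form: quadratic combination of the (co)variances.
v1 = (rho**2 * varM - 2 * rho * (1 - rho) * cov
      + (1 - rho)**2 * varL) / (2 * EG)
# Second form: variance of the linear combination Lambda - rho*Gamma.
D = [l - rho * (l + m) for l, m in samples]
ED = mean(D)
v2 = mean([(d - ED) ** 2 for d in D]) / (2 * EG)

print(v1, v2)    # the two forms agree up to floating-point error
```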

The proof of a functional central limit theorem is very similar to the one for the regime switching model in Sect. 2; we therefore restrict ourselves to the key steps. With \(P_1(\cdot )\) and \(P_2(\cdot )\) as before,

$$ Y(t) = P_1\left( \int _0^t \varLambda (s) (N-Y(s))\mathrm{d}s\right) - P_2\left( \int _0^t M(s) Y(s) \mathrm{d}s\right) , $$

so that, for some martingale K(t),

$$ \mathrm{d}Y(t) =\varLambda (t) (N-Y(t))\mathrm{d}t-M(t) Y(t)\mathrm{d}t +\mathrm{d} K(t). $$

Then \(\bar{Y}(t)\) is defined as in (3), with \(\varrho (t):=\bar{\varrho }\cdot (1-\exp (-t\,{\mathbb E}\,\varGamma )).\) We define, with \(\varGamma (s) =\varLambda (s)+M(s)\),

$$ W(t):=e^{\varGamma _+(t)}\bar{Y}(t),\,\,\,\,\text{ with }\,\,\, \varGamma _+(t):=\int _0^t \varGamma (s)\mathrm{d}s. $$

After a few steps, this leads to the stochastic differential equation,

$$ \mathrm{d}W(t) = e^{\varGamma _+(t)} \left( \sqrt{N}\left( (\varLambda (t)-{\mathbb E}\,\varLambda )- \varrho (t)(\varGamma (t)-{\mathbb E}\,\varGamma )\right) \mathrm{d}t+\frac{\mathrm{d}K(t)}{\sqrt{N}}\right) . $$

Consider the two terms in the previous display. For the first term, as \(N\rightarrow \infty \),

$$ \int _0^\cdot \sqrt{N} e^{ \varGamma _+(s)} \big ((\varLambda (s)-{\mathbb E}\,\varLambda )- \varrho (s)(\varGamma (s)-{\mathbb E}\,\varGamma )\big ) \mathrm{d}s\rightarrow \int _0^\cdot e^{s\,{\mathbb E}\,\varGamma }\mathrm{d} G(s), $$

where \(G(\cdot )\) satisfies

$$\begin{aligned} \langle G\rangle _t = g(t):=\int _0^t {\mathbb V}\mathrm{ar}\, (\varLambda -\varrho (s)\varGamma ) \mathrm{d}s; \end{aligned}$$
(12)

to see this note that, almost surely, uniformly on compacts, as \(N\rightarrow \infty \),

$$ e^{ \varGamma _+(s)}= \exp \left( \frac{1}{N}\sum _{i=1}^{sN} (\varLambda _i+M_i)\right) \rightarrow \exp \left( s\,{\mathbb E}\,\varGamma \right) , $$

and use this in combination with the (classical) functional central limit theorem for the random walk with i.i.d. increments [14, Theorem 4.3.5]. For the second term, as \(N\rightarrow \infty \), due to the definition of the martingale \(K(\cdot )\),

$$ \int _0^\cdot \frac{1}{\sqrt{N}}e^{ \varGamma _+(s)} \mathrm{d}K(s) \rightarrow \int _0^\cdot e^{s\,{\mathbb E}\,\varGamma }\mathrm{d} H(s), $$

where \(H(\cdot )\) is such that

$$\begin{aligned} \langle H\rangle _t = h(t):={\mathbb E}\,\varLambda \int _0^t (1-\varrho (s))\mathrm{d}s +{\mathbb E}\,M \int _0^t \varrho (s)\mathrm{d}s. \end{aligned}$$
(13)
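Since \(\varrho (s)=\bar{\varrho }\,(1-e^{-s\,{\mathbb E}\,\varGamma })\), the integrals in (13) admit a closed form: \(\int _0^t \varrho (s)\mathrm{d}s = \bar{\varrho }\,(t-(1-e^{-t\,{\mathbb E}\,\varGamma })/{\mathbb E}\,\varGamma )\), so that \(h(t)={\mathbb E}\,\varLambda \cdot t-({\mathbb E}\,\varLambda -{\mathbb E}\,M)\int _0^t \varrho (s)\mathrm{d}s\). A quick check of this closed form against numerical quadrature (parameter values chosen arbitrarily):

```python
import math

# Hypothetical values of E[Lambda] and E[M].
EL, EM = 2.5, 1.5
EG = EL + EM
rho_bar = EL / EG

def rho(s):
    return rho_bar * (1.0 - math.exp(-s * EG))

def h_quad(t, n=20000):
    """Evaluate the integrand of (13) by the trapezoidal rule."""
    ds = t / n
    total = 0.0
    for i in range(n):
        s0, s1 = i * ds, (i + 1) * ds
        f0 = EL * (1.0 - rho(s0)) + EM * rho(s0)
        f1 = EL * (1.0 - rho(s1)) + EM * rho(s1)
        total += 0.5 * (f0 + f1) * ds
    return total

def h_closed(t):
    """Closed form obtained by integrating rho(s) explicitly."""
    int_rho = rho_bar * (t - (1.0 - math.exp(-t * EG)) / EG)
    return EL * t - (EL - EM) * int_rho

for t in (0.5, 1.0, 3.0):
    print(t, h_closed(t), h_quad(t))
```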

Combining the two terms studied above, it thus follows that, as \(N\rightarrow \infty \), \(W(\cdot )\) weakly converges to \(W_\infty (\cdot )\), which is the solution to the stochastic differential equation (6), but now with the \(g(\cdot )\) and \(h(\cdot )\) given by (12) and (13), respectively. We obtain the following result.

Theorem 2

\(\bar{Y}(\cdot )\) converges weakly to \(\bar{Y}_\infty (\cdot )\), which is the solution to the stochastic differential equation (7), with \(g(\cdot )\) and \(h(\cdot )\) given by (12) and (13), respectively.

Remark 4

For large t (‘in stationarity’), this stochastic differential equation essentially behaves as

$$ \mathrm{d}\bar{Y}_\infty (t)= - {\mathbb E}\,\varGamma \cdot \bar{Y}_\infty (t)\,\mathrm{d}t + \sqrt{2\,{\mathbb E}\,\varGamma \cdot \bar{\varrho }(1-\bar{\varrho })+2\,{\mathbb E}\,\varGamma \cdot v}\,\mathrm{d}B(t), $$

corresponding to an OU process with mean 0 and variance \(\bar{\varrho }\,(1-\bar{\varrho }) + v\). Note that this is in line with the expansion \(N\,\bar{\varrho }\,(1-\,\bar{\varrho }\,)+N^{2-\delta } v+o(N^{\max \{1,2-\delta \}})\) found above, upon plugging in \(\delta =1\). Regarding the cases \(\delta <1\) and \(\delta >1\), a reasoning similar to that in Remark 3 applies.

3.4 Large Deviations Results Under Scaling

The above computations focused on the mean, variance, and correlation under the proposed scaling. We now consider rare events. Another straightforward calculation yields, for the cumulant generating function (assuming Nx to be an integer),

$$\begin{aligned}&{\log {\mathbb E}\,\exp \left( \vartheta (Y_{m} - Y_{m-1})\,|\,Y_{m-1} = Nx\right) }\\&=\log {\mathbb E}\left( \left( e^{-\vartheta }(1-R_m)+R_m\right) ^{Nx} \left( e^{\vartheta }(1-P_m)+P_m\right) ^{N(1-x)}\right) , \end{aligned}$$

which, for \(\delta = 1\), converges to

$$\begin{aligned} \varLambda _x(\vartheta ):= & {} \log {\mathbb E}\exp \left( {x\zeta (e^{-\vartheta } -1) +(1-x)\eta (e^\vartheta -1)} \right) \\= & {} \log M\left( x (e^{-\vartheta } -1),(1-x)(e^\vartheta -1) \right) , \end{aligned}$$

where \(M(\cdot ,\cdot )\) is the joint moment generating function of the random variables \(\zeta \) and \(\eta \) (assuming that it exists). One thus finds a sample-path LDP where the local rate function is given by

$$ I_x(y):=\sup _\vartheta \left( \vartheta y -\varLambda _x(\vartheta ) \right) . $$

More precisely, with \(Y^\circ (t):= N^{-1}Y_{\lfloor Nt\rfloor }\) and \(t\in [0,T]\), and under mild regularity conditions on the set A,

$$ \lim _{N\rightarrow \infty }\frac{1}{N} \log {\mathbb P}( Y^\circ (\cdot ) \in A) =-\inf _{f\in A} {\mathbb I}_T(f),\,\,\,\text{ with }\,\,\,{\mathbb I}_T(f):=\int _0^T I_{f(s)}(f'(s))\mathrm{d}s. $$
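For deterministic \(\eta \) and \(\zeta \) the expectation in the definition of \(\varLambda _x\) drops out, so that \(\varLambda _x(\vartheta )=x\zeta (e^{-\vartheta }-1)+(1-x)\eta (e^{\vartheta }-1)\), and the Legendre transform can be computed numerically. In particular \(I_x(y)\) vanishes exactly at the local drift \(y=\varLambda _x'(0)=(1-x)\eta -x\zeta \) and is strictly positive elsewhere. A sketch (with illustrative values of \(\eta \), \(\zeta \), and x):

```python
import math

# Illustrative deterministic parameters and current level x.
eta, zeta, x = 2.0, 3.0, 0.5

def cumulant(theta):
    # Lambda_x(theta) for deterministic eta and zeta.
    return (x * zeta * (math.exp(-theta) - 1.0)
            + (1.0 - x) * eta * (math.exp(theta) - 1.0))

def rate(y, lo=-10.0, hi=10.0, iters=100):
    """I_x(y) = sup_theta (theta*y - Lambda_x(theta)), computed by
    ternary search; the objective is concave in theta."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if m1 * y - cumulant(m1) < m2 * y - cumulant(m2):
            lo = m1
        else:
            hi = m2
    t = 0.5 * (lo + hi)
    return t * y - cumulant(t)

drift = (1.0 - x) * eta - x * zeta     # Lambda_x'(0): zero-cost speed
print(rate(drift), rate(drift + 1.0), rate(drift - 1.0))
# rate(drift) is ~0; the rate function is positive away from the drift.
```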
Fig. 1. Left panel: histogram of \(\bar{Y}\) for situation (A). Right panel: histogram of \(\bar{Y}\) for situation (B). In both cases we took \(N=45\).

4 Numerical Illustration

In this section we include a number of illustrative examples that assess the applicability of the diffusion limits. We consider two situations; in both cases we take \(\delta =1\). (A) In the first situation we consider the regime switching model of Sect. 2. The background process has two states, with \(q_{12}=2\) and \(q_{21}=3\); in addition \(\lambda _1=0.3,\) \(\lambda _2=0.5\), \(\mu _1=1\), and \(\mu _2=0.1\). Using the formulae derived in Sect. 2, we find \({\mathbb E}\,Y=0.762\,N\) and \({\mathbb V}\mathrm{ar}\,Y= 0.182\,N\). (B) The second situation corresponds to the resampling model of Sect. 3. More specifically, M has a uniform distribution on [0, 3] and \(\varLambda \) a uniform distribution on [0, 5], independently of each other. It is readily checked that \({\mathbb E}\,Y= 0.625\,N\) and \({\mathbb V}\mathrm{ar}\, Y= 0.308\,N\).
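The numbers reported for situation (B) can be reproduced from the formulas of Sect. 3 (using that \(\varLambda \) and M are independent, so that the covariance term in v vanishes):

```python
# Situation (B): M ~ U[0,3] and Lambda ~ U[0,5], independent.
EL, EM = 5.0 / 2, 3.0 / 2              # E U[0,a] = a/2
varL, varM = 5.0**2 / 12, 3.0**2 / 12  # Var U[0,a] = a^2/12
EG = EL + EM
rho = EL / EG                          # = 0.625, so E Y = 0.625 N

# v from Sect. 3.3 (the covariance term drops by independence):
v = ((1.0 - rho) ** 2 * varL + rho**2 * varM) / (2.0 * EG)
var_per_N = rho * (1.0 - rho) + v      # coefficient of N in Var Y, delta = 1
print(rho, var_per_N)                  # approx 0.625 and 0.3076, i.e. 0.308 N
```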

In Fig. 1 histograms are presented for the random variable

$$ \bar{Y}:= \frac{Y- {\mathbb E}\,Y}{\sqrt{{\mathbb V}\mathrm{ar}\,Y}}. $$

The number of experiments the estimates are based upon equals the number of this LNCS volume. Each simulation experiment starts with an empty system, which is then run sufficiently long for the process to reach equilibrium. The red curves in Fig. 1 correspond to the density of the standard Normal distribution. The figures confirm the convergence to the Normal distribution.

In Fig. 2 typical sample paths are depicted, illustrating the OU-like mean-reverting behavior. The red curves correspond to the mean of Y(t).

Fig. 2. Left panel: sample path of \(Y(\cdot )\) for situation (A). Right panel: sample path of \(Y(\cdot )\) for situation (B). In both cases we took \(N=45\).

5 Discussion and Concluding Remarks

In this paper we have discussed distributional properties of the number of edges in a dynamic Erdős-Rényi graph. We have considered two variants: one with the underlying mechanism being based on regime switching, and the other in which the transition probabilities are resampled at equidistant points in time. For both models we have succeeded in obtaining fairly explicit results for various transient and stationary quantities. Under a specific scaling a functional central limit theorem was established.

There is an interesting relation between the models considered in this paper and two-node closed queueing networks. In such closed networks a fixed number of jobs, say N, move between an active state (‘in service’) and an inactive state (‘waiting’). Such models (but without regime switching or resampling) have been intensively studied in the literature in the context of so-called Engset models [5]; see e.g. [3] and references therein.

Topics for future research may relate to graph metrics other than the total number of edges. In the introduction we mentioned that [13] considers the behavior of the Betti number, but one could also think of e.g. the evolution of the number of wedges or triangles in the random graph. In addition, one may wonder under what conditions the dynamic random graph in which the edges (independently) alternate between present and absent is almost surely connected; one would expect this to be the case if the alternating process is ‘sufficiently fast’ and the stationary up-probability is larger than \(\log n/n\).