1 Introduction

When considering a population process on a graph, the underlying network is typically assumed to be static: the network structure (i.e., the set of links that connect the nodes) is assumed to be constant over time. In many real-life situations, however, links may be temporarily inactive, entailing that the underlying structure should instead be considered as dynamic. At a conceptual level, such a system can be seen as a network of queues, where the links’ availability fluctuates in time. The main objective of this paper is to study the performance of such a queueing network, with links alternating between being ‘up’ and ‘down’. Leading examples in which our model can be used include communication networks, road traffic networks, various physics-motivated networks, and chemical reaction networks.

At a somewhat more detailed level, the network can be described as follows. The network is a graph with nodes and links, along which clients travel. At any node, external arrivals occur according to a Poisson process with a node-specific rate. Service times at the nodes are exponentially distributed (with a node-specific parameter); when a customer has been served at a node, he selects the next node through some routing mechanism (leaving the network also being an option). Suppose the client resides at node i and wants to be routed to node j, and assume that each link’s up- and down-times are exponentially distributed. Then two options arise. If the link from i to j is up, the client jumps from node i to node j. If, on the contrary, the link from i to j is down, then either the client is lost (which happens with a link-specific probability), or he waits an exponentially distributed amount of time at node i and tries again.

The queueing mechanism studied in this paper is infinite-server, making our analysis particularly useful for situations in which there is no (or hardly any) interference between the clients at each individual queue, in the sense that they can be served essentially in parallel. It is noted that in this paper we use queueing-theoretic terminology, but infinite-server queues are frequently used in other domains as well. As a model in which particles move on a dynamically evolving graph, it can be seen as an object relevant to statistical physics (cf., for instance, the model considered in [11]), but there are applications in chemical reaction networks [4], (cell) biology [27], and population dynamics [22] as well. In operations research, the infinite-server model we have defined can be used to study e.g. the numbers of clients simultaneously using (somewhat larger) segments in a road traffic network, or numbers of clients simultaneously visiting connected websites.

For this class of models, we are interested in various performance measures. The most important one is the joint distribution of the (time-dependent and stationary) queue lengths at all nodes, together with the number of lost clients (i.e., clients who leave the network because of link failures).

\(\circ \) :

In the first part of the paper we derive a series of exact results. (i) First, we obtain a system of coupled partial differential equations for the probability generating function pertaining to the joint queue length distribution. (ii) Second, this system of differential equations can be used to recursively determine all (time-dependent and stationary) moments. (iii) Third, we assess the impact of the network’s down-times on the service quality perceived by its users.

\(\circ \) :

Then we consider scaling limits: by scaling the external arrival rates and the up- and down-times, we establish a diffusion limit. This result entails that the joint queue-length process converges weakly to a mean-reverting Gaussian process (viz. a multivariate Ornstein-Uhlenbeck process). An important feature of the chosen scaling is that the speed at which the external arrival rates are scaled may differ from the speed at which the up- and down-times are scaled. This creates the flexibility to cover networks in which the alternation between up- and down-times is relatively slow (think of road networks) or relatively fast (think of the channel conditions in a wireless network); time-scale separation ideas (as often relied upon in chemical reaction networks) can thus be modeled as well.

The model in this paper can be seen as an instance of a stochastic process (viz. a queueing process) on a dynamically evolving graph. The literature on such models is still in its infancy. Whereas static random graphs form a classical topic in probability theory, dating back to the pioneering work of Erdős and Rényi [12] and Gilbert [15], only recently has the behavior of randomly evolving graphs received substantial attention; see e.g. [16, 17, 23, 30] for a few examples. Examples of papers on random processes on (dynamic) random graphs are [3, 6, 7]. The systematic study of queueing processes on such randomly evolving graphs has hardly been undertaken, a notable exception being the recent study [13]. The model considered in [13] complements the one studied in the present work. Most notably, the framework of [13] in particular facilitates modelling the effect of nodes going down every now and then, whereas the present paper focuses on links going down. The immediate consequence of this difference in modelling is that in the framework of [13] diffusion limits do not apply, due to the instantaneous downward jumps of the network population vector at epochs at which a node fails.

There is also a relation with the classical work [25], where the Poisson-arrival-location model (PALM) is introduced. In this model customers arrive according to an inhomogeneous Poisson process and move independently through the network according to some random location process (with a fixed routing matrix). A consequence of the way the model is constructed is that, for instance, the number of customers at each node follows a Poisson distribution. The major difference from our model is that in our setup the topology of the network is determined by a modulating process (meaning that the routing matrix is random); consequently, the positions of different clients (during their paths through the network) are in our model no longer independent, thereby also destroying the ‘Poisson properties’. Observe that it is this dependence structure that considerably complicates the analysis. It also explains why we pursue scaling limits for obtaining insight into the network population distribution (which is obviously not needed in the setup of [25], as closed-form expressions are available there).

Our analysis will be based on casting our model as a network of infinite-server queues under Markov modulated arrival and service rates. Explicit results on (single-node) Markov modulated infinite-server queues (primarily in terms of differential equations for the probability generating function, and the corresponding moments) can be found in e.g. [5, 10, 14, 19, 26]. Diffusion limits for such single-node systems have been derived in e.g. [2, 9]; we also refer to [18] for a recent contribution with such diffusion results for a broad class of networks of Markov modulated infinite-server queues. For general background on queueing networks, we refer to [20, 21, 28].

This paper is organized as follows. In Sect. 2 we describe our model. Section 3 presents our analysis, in terms of exact results for the probability generating function and moments; we restrict ourselves to the case in which clients who wish to jump while the corresponding link is down are lost with probability 1. Section 4 concerns the weak convergence to a Gaussian process, for the same model. In Sect. 5 we consider a number of extensions, including the one in which blocked customers are not necessarily lost but retry. In Sect. 6 we discuss a number of special cases for which the calculations can be done explicitly. Concluding remarks are found in Sect. 7.

2 Model Description

In this section we first provide a detailed model description, and then introduce the quantities of interest.

The network that we consider consists of n nodes that are connected through \(\bar{n}:=\binom{n}{2}\) links. Let \(\lambda _i\) be the rate of the Poissonian arrival process at node i. The time spent at node i is exponentially distributed with parameter \(\mu _i\) (where we discuss in Sect. 5 how our setup extends to the case of phase-type service times). After having been served at node i, a customer wishes to jump to node j (with \(j\not =i\)) with probability \(p_{ij}\), where \(p_{i0}\) is the probability of leaving the network. We obviously assume that \(\sum _{j\not =i}p_{ij}=1\), and we write \(\mu _{ij}=\mu _i p_{ij}\). Note that this setup does not require the network to be a complete graph: if a node pair (i, j) is not connected, we simply set the corresponding \(\mu _{ij}\) equal to 0. Observe that the dynamics described above entail that the number of clients at each node evolves as an infinite-server queue: the clients are served in parallel, and hence do not interact. We assume that the routing mechanism gives rise to an irreducible structure, i.e., the \(\mu _{ij}\) are such that a client residing at any given node visits, with positive probability, any other node before leaving the network. In addition, for at least one node i the rate \(\mu _{i0}\) is strictly positive, thus guaranteeing that the network is stable. The arrival processes and the service/routing processes are assumed independent.

We now describe how the links alternate between being ‘up’ and ‘down’. To this end, we let the underlying graph dynamics be determined by a K-dimensional background process \(({\varvec{X}}(t))_{t\geqslant 0}\), assumed to be independent of the arrival processes and the service/routing processes, defined as follows. The \(\bar{n}\) links are partitioned into K mutually disjoint sets \(A_1, \ldots , A_K\), which we refer to as blocks. All links in a given block, say \(A_k\), alternate between ‘up’ and ‘down’ simultaneously; \(X_k(t)=1\) means that at time t the links in block k are ‘up’, and \(X_k(t)=0\) that they are ‘down’. We define

$$\begin{aligned} Q^{(k)}=\left( \begin{array}{rr}-q^{(k)}_0&{}q^{(k)}_0\\ q^{(k)}_1&{}-q^{(k)}_1\end{array}\right) ; \end{aligned}$$

the down-time (up-time, respectively) of block k is exponentially distributed with parameter \(q^{(k)}_0\) (\(q^{(k)}_1\)). The ‘graph process’ is given through

$$\begin{aligned} ({\varvec{X}}(t) )_{t\geqslant 0}= (X_1(t),\ldots , X_K(t))_{t\geqslant 0}, \end{aligned}$$

which attains values in \(\{0,1\}^K.\) The two extreme scenarios are, on the one hand, a single block consisting of all \(\bar{n}\) links and, on the other hand, \(\bar{n}\) independently evolving blocks that consist of one link each. The transition rate matrix of \({\varvec{X}}(\cdot )\) is of dimension \(\bar{K}\times \bar{K}\) with \(\bar{K}:=2^K\), and is given by

$$\begin{aligned} {\varvec{Q}}:=\bigoplus _{k=1}^KQ^{(k)}=\sum _{k=1}^K I_{2^{k-1}}\otimes Q^{(k)}\otimes I_{2^{K-k}}, \end{aligned}$$

where \(I_n\) denotes the n-dimensional identity matrix and \(B_1 \otimes B_2\) denotes the Kronecker product of the two matrices \(B_1\) and \(B_2\). We let \(q_{k\ell }\) be the \((k,\ell )\)-th entry of \({\varvec{Q}}\).
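For concreteness, the Kronecker-sum construction of \({\varvec{Q}}\) can be sketched numerically as follows; this is an illustrative Python snippet, with `block_generator` and `graph_generator` our own helper names rather than notation from the paper.

```python
import numpy as np

def block_generator(q0, q1):
    # 2x2 generator Q^{(k)} of a single block; state 0 = 'down', state 1 = 'up':
    # the down-time is exp(q0)-distributed, the up-time exp(q1)-distributed
    return np.array([[-q0, q0],
                     [q1, -q1]], dtype=float)

def graph_generator(rates):
    # Kronecker sum of the per-block generators:
    #   Q = sum_k I_{2^{k-1}} (x) Q^{(k)} (x) I_{2^{K-k}},
    # acting on the barK = 2^K joint background states
    K = len(rates)
    Q = np.zeros((2 ** K, 2 ** K))
    for k, (q0, q1) in enumerate(rates, start=1):
        Q += np.kron(np.kron(np.eye(2 ** (k - 1)), block_generator(q0, q1)),
                     np.eye(2 ** (K - k)))
    return Q

# example with K = 2 independently alternating blocks, so barK = 4 states
Q = graph_generator([(1.0, 2.0), (0.5, 3.0)])
```

Since each summand is a generator padded with identities, every row of the resulting \({\varvec{Q}}\) sums to zero, as it should for a transition rate matrix.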

We now explain what happens to a client who wants to jump from i to j when the link is down. As long as the link is down, at any attempt the client is lost with probability \(f_{ij}\in [0,1]\), and remains at the node with probability \(1-f_{ij}\). While at the node, the mechanism defined above remains in place: after an exponentially distributed amount of time with mean \(\mu _{ij}^{-1}\) (with \(j=0,\ldots ,n\)) the client wishes to jump to node j. To keep the notation compact, in Sects. 3 and 4 we assume that \(f_{ij}=1\) for all \(i,j=1,\ldots ,n\) (i.e., clients who wish to jump from i to j while the link between i and j is down are lost); in Sect. 5 we point out how to adapt the results to situations with \(f_{ij}\in [0,1).\)

In this paper a key role is played by the n-dimensional queue length process

$$\begin{aligned} ({\varvec{M}}(t))_{t\geqslant 0} = (M_1(t),\ldots , M_n(t))_{t\geqslant 0}, \end{aligned}$$

where \(M_i(t)\in {\mathbb {N}}_0\) represents the number of clients in the queue at node i at time t. Our objective is to characterize the distribution of \({\varvec{M}}(t)\); as we will see below, this is possible, albeit in implicit terms, viz. in terms of a partial differential equation for the corresponding joint probability generating function. Observe that by itself \(({\varvec{M}}(t))_{t\geqslant 0}\) is not a Markov process, but the joint process \(({\varvec{M}}(t),{\varvec{X}}(t))_{t\geqslant 0}\) is.

As we want to keep track of \({\varvec{M}}(t)\) as well as the number of lost clients, we work with the probability generating function

$$\begin{aligned} {\varphi }_k(w,{\varvec{z}},t) = {\mathbb {E}}\, \big [w^{L(t)}z_1^{M_1(t)}\cdots z_n^{M_n(t)} 1_{\{{\varvec{X}}(t) = k\}}\big ], \end{aligned}$$

with L(t) defined as the number of clients lost due to a link being ‘down’ during the interval [0, t], and k an element of \(\{0,1\}^K\).

Remark 1

In the model described all links are bidirectional: if the link between i and j is ‘down’, then clients can jump neither from i to j nor from j to i. The unidirectional variant of our model works in precisely the same way; then there are \(n(n-1)\) (instead of \(\frac{1}{2}n(n-1)\)) possible links. \(\square \)

3 Prelimit Results

In this section we first set up a system of coupled partial differential equations for \({\varvec{\varphi }}(w,{\varvec{z}},t)\) (i.e., the \(2^K\)-dimensional vector with elements \({ \varphi }_{k}(w,{\varvec{z}},t)\)). We then point out how these can be used to determine moments. The final subsection presents ways to quantify the effect of the graph dynamics on the performance perceived by the network’s users.

3.1 Partial Differential Equations

The main idea is to express \({\varvec{\varphi }}(w,{\varvec{z}},t+\Delta t)\), for \(\Delta t\) small, in terms of \({\varvec{\varphi }}(w,{\varvec{z}},t)\). We follow precisely the same procedure as in e.g. [24]: we first set up the Kolmogorov equations for the state \((L(t),{\varvec{M}}(t)) = (m_0,\ldots ,m_n)\) at time \(t\geqslant 0\), then multiply with \(w^{m_0}z_1^{m_1}\cdots z_n^{m_n}\), and sum over \(m_0,\ldots ,m_n\). We recognize probability generating functions and their derivatives. More specifically, with \({\mathbb {I}}(i,j,k)\) being 1 if the link (ij) is ‘up’ when \({\varvec{X}}(\cdot )\) is in state k and 0 otherwise, we thus obtain,

$$\begin{aligned} {\varphi }_{k}(w,{\varvec{z}},t+\Delta t)=\,&{\varphi }_{k}(w,{\varvec{z}},t)+\sum _{i=1}^n{ \varphi }_{k}(w,{\varvec{z}},t)(z_i-1)\cdot \lambda _i \,\Delta t\,\\&+\sum _{i=1}^n\sum _{ {j=1,}j\not =i}^n\frac{\partial { \varphi }_{k}(w,{\varvec{z}},t)}{\partial z_i}\left( {z_j}-{z_i}\right) \cdot {\mathbb {I}}(i,j,k)\cdot \mu _{ij} \,\Delta t \,\\&+\sum _{i=1}^n\sum _{{j=1,}j\not =i}^n\frac{\partial { \varphi }_{k}(w,{\varvec{z}},t)}{\partial z_i}\left( w-{z_i}\right) \cdot \big (1-{\mathbb {I}}(i,j,k)\big )\cdot \mu _{ij} \,\Delta t \,\\&+\sum _{i=1}^n\frac{\partial { \varphi }_{k}(w,{\varvec{z}},t)}{\partial z_i}\left( 1-{z_i}\right) \cdot \mu _{i0} \,\Delta t\,\\&+\sum _{\ell \not =k}{ \varphi }_{\ell }(w,{\varvec{z}},t)\cdot q_{\ell k}\,\Delta t-\sum _{\ell \not =k}{ \varphi }_{k}(w,{\varvec{z}},t)\cdot q_{k\ell }\,\Delta t+o(\Delta t). \end{aligned}$$

The next step is to subtract \({\varphi }_{k}(w,{\varvec{z}},t)\) from both sides, divide by \(\Delta t\), and send \(\Delta t\downarrow 0\). With \({\mathbb {I}}_{\bar{K}}(i,j):=\mathrm{diag}\{{\mathbb {I}}(i,j,1),\ldots ,{\mathbb {I}}(i,j,\bar{K})\}\) and \({\mathbb {J}}_{\bar{K}}(i,j):=I_{\bar{K}}-{\mathbb {I}}_{\bar{K}}(i,j)\), the resulting system of coupled partial differential equations reads, in matrix-vector form, as follows; here \({\varvec{\varphi }}({\varvec{z}})\) denotes the probability generating function of the stationary counterpart \({\varvec{M}}\) of \(({\varvec{M}}(t))_{t\geqslant 0}\).

Proposition 1

The joint probability generating function \({\varvec{\varphi }}(w,{\varvec{z}},t)\) satisfies

$$\begin{aligned} \frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial t} =&\,\sum _{i=1}^n{\varvec{\varphi }} (w,{\varvec{z}},t)\,\lambda _i(z_i-1) \,+\sum _{i=1}^n\sum _{{j=1,}j\not =i}^n\frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial z_i}{\mathbb {I}}_{\bar{K}}(i,j)\,\mu _{ij}\left( {z_j}-{z_i}\right) \,\\&+\sum _{i=1}^n\sum _{{j=1,}j\not =i}^n\frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial z_i}{\mathbb {J}}_{\bar{K}}(i,j)\,\mu _{ij}\left( w-{z_i}\right) \,\\&+\sum _{i=1}^n \frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial z_i}\,\mu _{i0} (1-z_i)+{\varvec{\varphi }}(w,{\varvec{z}},t)\, {\varvec{Q}}. \end{aligned}$$

The probability generating function of the stationary counterpart \({\varvec{\varphi }}({\varvec{z}})\) satisfies

$$\begin{aligned} \varvec{0} =&\,\sum _{i=1}^n{\varvec{\varphi }} ({\varvec{z}})\,\lambda _i(z_i-1) \,+\sum _{i=1}^n\sum _{j=1,j\not =i}^n\frac{\partial {\varvec{\varphi }}({\varvec{z}})}{\partial z_i}{\mathbb {I}}_{\bar{K}}(i,j)\,\mu _{ij}\left( {z_j}-{z_i}\right) \,\\&+\sum _{i=1}^n\sum _{j=1,j\not =i}^n\frac{\partial {\varvec{\varphi }}({\varvec{z}})}{\partial z_i}{\mathbb {J}}_{\bar{K}}(i,j)\,\mu _{ij}\left( 1-{z_i}\right) \,\\&+\sum _{i=1}^n \frac{\partial {\varvec{\varphi }}({\varvec{z}})}{\partial z_i}\,\mu _{i0} (1-z_i)+{\varvec{\varphi }}({\varvec{z}})\, {\varvec{Q}}. \end{aligned}$$

3.2 First Moment

In this section we exploit Proposition 1 to determine the first moments; we first point out how this procedure works for the stationary queue length \({\varvec{M}}\), but later indicate how the corresponding transient moments can be found. We let \({\varvec{X}}\) denote the stationary version of the background process.

Define, for \(i=1,\ldots ,n\),

$$\begin{aligned} {\varvec{v}}_i:= ({\mathbb {E}} M_i 1_{\{{\varvec{X}}=1\}},\ldots ,{\mathbb {E}} M_i 1_{\{{\varvec{X}}=\bar{K}\}})= \lim _{{\varvec{z}}\uparrow {\varvec{1}}} \frac{\partial {\varvec{\varphi }}({\varvec{z}})}{\partial z_i}. \end{aligned}$$

Let \({\varvec{\pi }}\) be the invariant probability measure of \({\varvec{Q}}\), i.e., the \(\bar{K}\)-dimensional row-vector such that \({\varvec{\pi }}{\varvec{Q}}={\varvec{0}}\) and whose entries sum to 1. By differentiating the differential equation featuring in Proposition 1 with respect to \(z_i\) and letting \({\varvec{z}}\uparrow {\varvec{1}}\), we obtain, for \(i=1,\ldots ,n\),

$$\begin{aligned} {\varvec{0}} = {\varvec{\pi }}\lambda _i -\sum _{j=1,j\not =i}^n{\varvec{v}}_i \,\mu _{ij}+ \sum _{j=1,j\not =i}^n{\varvec{v}}_j \,\mu _{ji}{\mathbb {I}}_{\bar{K}}(j,i)-{\varvec{v}}_i\,\mu _{i0}+{\varvec{v}}_i \,{\varvec{Q}}. \end{aligned}$$

We now explain how to set up a computational procedure with which the \({\varvec{v}}_i\) can be found. The n systems of \(\bar{K}\) linear equations can be cast into a single system of \(n\bar{K}\) linear equations (in equally many unknowns). Let \({\varvec{v}}\equiv ({\varvec{v}}_1,\ldots ,{\varvec{v}}_n)\). Also, let \({\varvec{\lambda }}\) be the row-vector \((\lambda _1,\ldots ,\lambda _n)\), and

$$\begin{aligned} \nu _i:=\sum _{j=1,j\not =i}^n \mu _{ij}. \end{aligned}$$

In addition, we define the matrices

$$\begin{aligned} {\mathscr {M}}_+:=\left( \begin{array}{cccc} \nu _1 I_{\bar{K}}&{}&{}&{}\\ &{}\nu _2 I_{\bar{K}}&{}&{}\\ &{}&{}\ddots &{}\\ &{}&{}&{}\nu _n I_{\bar{K}}\end{array}\right) ,\,\, {\mathscr {M}}_-:=\left( \begin{array}{cccc} 0 &{}\mu _{12}{\mathbb {I}}_{\bar{K}}(1,2)&{}\ldots &{}\mu _{1n}{\mathbb {I}}_{\bar{K}}(1,n)\\ \mu _{21}{\mathbb {I}}_{\bar{K}}(2,1)&{}0&{}&{}\mu _{2n}{\mathbb {I}}_{\bar{K}}(2,n)\\ \vdots &{}&{}\ddots &{}\\ \mu _{n1}{\mathbb {I}}_{\bar{K}}(n,1)&{}\mu _{n2}{\mathbb {I}}_{\bar{K}}(n,2)&{}&{}0\end{array}\right) ,\end{aligned}$$

and

$$\begin{aligned}{\mathscr {M}}_0:=\left( \begin{array}{cccc} \mu _{10} I_{\bar{K}}&{}&{}&{}\\ &{}\mu _{20} I_{\bar{K}}&{}&{}\\ &{}&{}\ddots &{}\\ &{}&{}&{}\mu _{n0} I_{\bar{K}}\end{array}\right) ,\,\, {\mathscr {Q}}:=\left( \begin{array}{cccc} {\varvec{Q}}&{}&{}&{}\\ &{}{\varvec{Q}}&{}&{}\\ &{}&{}\ddots &{}\\ &{}&{}&{}{\varvec{Q}}\end{array}\right) . \end{aligned}$$

We thus arrive at the linear system

$$\begin{aligned} {\varvec{\lambda }} \otimes {\varvec{\pi }} ={\varvec{v}}( {\mathscr {M}}_+-{\mathscr {M}}_-+{\mathscr {M}}_0-{\mathscr {Q}}), \end{aligned}$$

so that \({\varvec{v}}= ({\varvec{\lambda }} \otimes {\varvec{\pi }}) {\mathscr {N}}^{-1},\) with \({\mathscr {N}}:= {\mathscr {M}}_+-{\mathscr {M}}_-+{\mathscr {M}}_0-{\mathscr {Q}}.\)
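To illustrate, the following Python sketch (with made-up rates; the variable names are ours) assembles \({\mathscr {N}}\) for the smallest non-trivial instance, namely n = 2 nodes joined by a single link that forms one block (K = 1, so \(\bar K = 2\) background states, ordered down/up), and solves \({\varvec{v}}\,{\mathscr {N}} = {\varvec{\lambda }}\otimes {\varvec{\pi }}\).

```python
import numpy as np

q0, q1 = 1.0, 2.0
Q = np.array([[-q0, q0], [q1, -q1]])     # generator of the single block
pi = np.array([q1, q0]) / (q0 + q1)      # stationary law: (P(down), P(up))

lam = np.array([3.0, 1.0])               # external arrival rates lambda_i
mu12, mu21 = 0.7, 0.4                    # routing rates mu_{12}, mu_{21}
mu10, mu20 = 0.5, 0.6                    # departure rates mu_{i0}
I_link = np.diag([0.0, 1.0])             # I_barK(1,2): link-up indicator
Z = np.zeros((2, 2))

M_plus = np.block([[mu12 * np.eye(2), Z], [Z, mu21 * np.eye(2)]])
M_minus = np.block([[Z, mu12 * I_link], [mu21 * I_link, Z]])
M_zero = np.block([[mu10 * np.eye(2), Z], [Z, mu20 * np.eye(2)]])
Q_scr = np.block([[Q, Z], [Z, Q]])
N = M_plus - M_minus + M_zero - Q_scr

# v N = lambda (x) pi is a row-vector system: solve N^T v^T = (lambda (x) pi)^T
v = np.linalg.solve(N.T, np.kron(lam, pi))
EM = v.reshape(2, 2).sum(axis=1)         # E M_i, summed over background states
```

A useful sanity check, obtained by post-multiplying the linear system by the all-ones vector, is the flow balance \(\bar\lambda = \sum_i \mu_{i0}\,{\varvec{v}}_i{\varvec{1}} + \sum_{i\ne j}\mu_{ij}\,{\varvec{v}}_i\,{\mathbb {J}}_{\bar K}(i,j){\varvec{1}}\): external arrivals equal service departures plus losses.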

The transient first moment follows immediately by solving the corresponding system of linear differential equations. Let \(({\varvec{M}}(0), {\varvec{X}}(0))=({\varvec{m}},k_0)\), and let \({\varvec{e}}_k\) be the k-th unit vector. Then, in self-evident notation, and with \({\varvec{\pi }}(t)={\varvec{e}}_{k_0}^\mathrm{T} \,\mathrm{e}^{{\varvec{Q}}t}\),

$$\begin{aligned} {\varvec{v}}'(t) = {\varvec{\lambda }} \otimes {\varvec{\pi }}(t) - {\varvec{v}}(t) {\mathscr {N}}, \end{aligned}$$

which is solved by

$$\begin{aligned} {\varvec{v}}(t) ={\varvec{m}} \,\mathrm{e}^{-{\mathscr {N}}t} +\int _0^t ({\varvec{\lambda }} \otimes {\varvec{\pi }}(s)) \, \mathrm{e}^{-{\mathscr {N}}(t-s)}\mathrm{d}s. \end{aligned}$$
(1)

This approach can be extended in a straightforward fashion to also include the mean of the number of clients lost.

Remark 2

The relation (1) can be regarded as the expected-value version of the distributional identity

$$\begin{aligned} {\varvec{M}}(t) {\mathop {=}\limits ^\mathrm{d}}{\varvec{m}} \,\mathrm{e}^{-{\mathscr {N}}t} +\int _0^t ({\varvec{\lambda }} \otimes {\varvec{X}}(s)) \, \mathrm{e}^{-{\mathscr {N}}(t-s)}\mathrm{d}s, \end{aligned}$$

cf. the relation used for our model’s single-queue counterpart in [8]. The above distributional equality has an insightful interpretation: the first term on the right-hand side corresponds to the contribution to \({\varvec{M}}(t)\) of clients that were already present at time 0, whereas the second term represents the contribution of arrivals in [0, t] which are then appropriately ‘thinned’.

As pointed out in [8], this representation in principle also provides a method to evaluate the covariance matrix of \({\varvec{M}}(t)\), using the law of total variance; in Sect. 3.3 we will rely on an alternative approach, though. \(\square \)

Remark 3

In the fully symmetric situation, the formulas simplify considerably. Let \(\lambda \) be the arrival rate at each of the n nodes. The service rate is \(\sigma :=\nu +\mu _0\), where the client leaves the network with probability \(\mu _0/\sigma \) and wants to move to another node (which is then picked uniformly at random) with probability \(\nu /\sigma .\) Let all blocks alternate independently between being ‘up’ and ‘down’, with rates \(q_0\) and \(q_1\), respectively (i.e., the down-time is exponentially distributed with mean \(q_0^{-1}\), and the up-time exponentially distributed with mean \(q_1^{-1}\)). Assume that the queues start empty at time 0, while all links are in stationarity (i.e., each of them is ‘up’ with probability \(\pi := q_0/(q_0+q_1)\)). Then, for each of the nodes, the mean number of clients present satisfies

$$\begin{aligned} v'(t) = \lambda + (n-1)\,v(t) \frac{\nu \pi }{n-1} - v(t)\sigma = \lambda -(\nu (1-\pi )+\mu _0)v(t). \end{aligned}$$

We thus find that

$$\begin{aligned} v(t) =\frac{\lambda }{\nu (1-\pi )+\mu _0} \left( 1-\mathrm{e}^{-(\nu (1-\pi )+\mu _0)t}\right) , \end{aligned}$$

which converges to \(v:={\lambda }/({\nu (1-\pi )+\mu _0})\) as \(t\rightarrow \infty \). \(\square \)
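The closed form of Remark 3 is easily checked numerically. The sketch below (with made-up parameter values; `v_closed` and `v_euler` are our own names) compares the explicit solution with a crude forward-Euler integration of the scalar ODE \(v'(t)=\lambda-(\nu(1-\pi)+\mu_0)v(t)\), \(v(0)=0\).

```python
import math

# hypothetical symmetric-network parameters
lam, nu, mu0 = 2.0, 1.5, 0.5
q0, q1 = 1.0, 3.0
pi = q0 / (q0 + q1)              # stationary probability that a link is up
rate = nu * (1 - pi) + mu0       # effective relaxation rate

def v_closed(t):
    # explicit solution of v'(t) = lam - rate * v(t), v(0) = 0
    return lam / rate * (1.0 - math.exp(-rate * t))

def v_euler(t, steps=200000):
    # independent check: forward-Euler integration of the same ODE
    h, v = t / steps, 0.0
    for _ in range(steps):
        v += h * (lam - rate * v)
    return v
```

As \(t\to\infty\), both converge to the stationary value \(\lambda/(\nu(1-\pi)+\mu_0)\).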

3.3 Higher Moments

We now point out how (mixed) higher moments can be evaluated. We work here, for obvious reasons, with the factorial moments, from which the regular moments can be recovered in an evident manner. We recall the standard notation \((m)_r:=m!/(m-r)!\) for \(m,r\in {\mathbb {N}}\) with \(m\geqslant r\) (the falling factorial, often written using the Pochhammer symbol). The objective here is to compute, for \({\varvec{r}}\equiv (r_1,\ldots ,r_n)\) with \(r_i\in {\mathbb {N}}_0\),

$$\begin{aligned} \psi _k({\varvec{r}},t):={\mathbb {E}}\left( \left( \prod _{i=1}^{n}(M_i(t))_{r_i} \right) 1_{\{{\varvec{X}}(t)=k\}}\right) = \lim _{w\uparrow 1,{\varvec{z}}\uparrow {\varvec{1}}}\frac{\partial ^{r_1+\cdots +r_n}\varphi _k(w,{\varvec{z}},t)}{\partial z_1^{r_1}\cdots \partial z_n^{r_n}}. \end{aligned}$$
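As a small aid in translating between factorial and raw moments (our own helper, not notation from the paper): since \(m^2=(m)_2+(m)_1\) and \(m^3=(m)_3+3(m)_2+(m)_1\), one has e.g. \({\mathbb {E}}[M^2]={\mathbb {E}}[(M)_2]+{\mathbb {E}}[(M)_1]\). In Python the falling factorial is available directly:

```python
from math import perm

def falling(m, r):
    # falling factorial (m)_r = m!/(m - r)! (and 0 when r > m);
    # math.perm(m, r) computes exactly this quantity
    return perm(m, r)

# raw powers are integer combinations of falling factorials, e.g.
#   m^2 = (m)_2 + (m)_1   and   m^3 = (m)_3 + 3 (m)_2 + (m)_1,
# with coefficients given by Stirling numbers of the second kind
```

Taking expectations of these identities recovers the raw moments from the reduced moments computed below.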

We will show that the \(\psi _k({\varvec{r}},t)\) can be recursively evaluated. To this end, we first introduce the following differential operator: for \(f: {\mathbb {R}}\times {\mathbb {R}}^n\times {\mathbb {R}}\mapsto {\mathbb {R}}\),

$$\begin{aligned} {\mathbb {D}}_{\varvec{r}}[f(w,{\varvec{z}},t)]:= \frac{\partial ^{r_1+\cdots +r_n}f(w,{\varvec{z}},t)}{\partial z_1^{r_1}\cdots \partial z_n^{r_n}}. \end{aligned}$$

Now the idea is to impose the operator \({\mathbb {D}}_{\varvec{r}}[\cdot ]\) on both sides of the partial differential equation given in Proposition 1. In addition considering the limit \(w\uparrow 1,{\varvec{z}}\uparrow {\varvec{1}}\), we thus obtain, with \({\varvec{e}}_i\) the i-th n-dimensional unit vector, and \(\mu _{ijk}^+:= {\mathbb {I}}(i,j,k)\,\mu _{ij}\) and \(\mu _{ijk}^-:= (1-{\mathbb {I}}(i,j,k))\,\mu _{ij}\),

$$\begin{aligned} \lim _{w\uparrow 1,{\varvec{z}}\uparrow {\varvec{1}}}{\mathbb {D}}_{\varvec{r}}[ \frac{\partial {\varphi }_k(w,{\varvec{z}},t)}{\partial t} ]&=\lim _{w\uparrow 1,{\varvec{z}}\uparrow {\varvec{1}}}\Bigg (\sum _{i=1}^nr_i{\mathbb {D}}_{{\varvec{r}}-{\varvec{e}}_i}[{\varphi }_k(w,{\varvec{z}},t)]\lambda _i \,1_{\{r_i\not = 0\}}\\&+\sum _{i=1}^n\sum _{j=1,j\not =i}^n r_j {\mathbb {D}}_{{\varvec{r}}-{\varvec{e}}_j+{\varvec{e}}_i} [{\varphi }_k(w,{\varvec{z}},t)]\mu _{ijk}^+ \,1_{\{r_j\not = 0\}}\\&-\sum _{i=1}^n\sum _{j=1,j\not =i}^n r_i {\mathbb {D}}_{{\varvec{r}}} [{\varphi }_k(w,{\varvec{z}},t)]\mu _{ijk}^+ \\&-\sum _{i=1}^n\sum _{j=1,j\not =i}^n r_i {\mathbb {D}}_{{\varvec{r}}} [{\varphi }_k(w,{\varvec{z}},t)]\mu _{ijk}^-\\&-\sum _{i=1}^n r_i {\mathbb {D}}_{\varvec{r}}[ {\varphi }_k(w,{\varvec{z}},t)] \mu _{i0}+\sum _{\ell =1}^{\bar{K}}{\mathbb {D}}_{\varvec{r}}[ {\varphi }_\ell (w,{\varvec{z}},t)] q_{\ell k} \Bigg ); \end{aligned}$$

these computations rely on the evident relation

$$\begin{aligned} \lim _{z\uparrow 1} \frac{\partial ^r}{\partial z^r}f(z)(z-1) = \lim _{z\uparrow 1}rf^{(r-1)}(z)= rf^{(r-1)}(1). \end{aligned}$$

Observe that in the above differential equation the indicator functions \(1_{\{r_i\not = 0\}}\) (first term on the right-hand side) and \(1_{\{r_j\not = 0\}}\) (second term on the right-hand side) can be left out: if the indicator function is 0, the corresponding term equals 0 anyway.

Consequently, the transient mixed reduced moments can alternatively be expressed in compact notation as follows. Here it is used that \(\mu _{ijk}^++\mu _{ijk}^-=\mu _{ij}\).

Proposition 2

For \({\varvec{r}}\in {\mathbb {N}}_0^n\), \(k\in \{1,\ldots ,\bar{K}\}\) and \(t\geqslant 0\),

$$\begin{aligned} \frac{\partial \psi _k({\varvec{r}},t)}{\partial t}=\,&\sum _{i=1}^nr_i\psi _k({\varvec{r}}-{\varvec{e}}_i,t)\lambda _i+ \sum _{i=1}^n\sum _{j=1,j\not =i}^n r_j \psi _k({\varvec{r}}-{\varvec{e}}_j+{\varvec{e}}_i,t)\mu _{ijk}^+\,\nonumber \\&-\sum _{i=1}^n\sum _{j=1,j\not =i}^n r_i \psi _k({{\varvec{r}}},t)\,\mu _{ij}- \sum _{i=1}^n r_i \psi _k({{\varvec{r}}},t)\,\mu _{i0}+ \sum _{\ell =1}^{\bar{K}} \psi _\ell ({\varvec{r}},t) q_{\ell k}. \end{aligned}$$
(2)

To obtain the stationary mixed reduced moments, one has to set the left-hand side equal to 0.

The reduced moments can be determined recursively, by solving a non-homogeneous system of linear differential equations. To verify this claim, define \(\xi ({\varvec{r}})\) as the sum of the entries of \({\varvec{r}}\), i.e., \(r_1+\cdots +r_n.\) Let \({\mathscr {S}}_r\) be the set of all vectors \({\varvec{r}}\) such that \(\xi ({\varvec{r}})=r.\) Observe that

$$\begin{aligned} \xi ({\varvec{r}}) = \xi ({\varvec{r}}-{\varvec{e}}_j+{\varvec{e}}_i) = \xi ({\varvec{r}}-{\varvec{e}}_i)+1. \end{aligned}$$

The idea now is to use (2) to evaluate \(( \psi _1({\varvec{r}},t),\ldots , \psi _{\bar{K}}({\varvec{r}},t))\) recursively: for \({\varvec{r}}\) such that \(\xi ({\varvec{r}})=r\) this vector is computed using the corresponding expressions for \({\varvec{r}}\) such that \(\xi ({\varvec{r}})=r-1\). In more detail, this procedure works as follows.

\(\circ \) :

For \(r=0\), we have \(\psi _k({\varvec{0}},t)={\mathbb {P}}({\varvec{X}}(t)=k)= (\mathrm{e}^{{\varvec{Q}}t})_{k_0,k}.\)

\(\circ \) :

For \(r=1\), we find the \(\psi _k({\varvec{e}}_i,t)\) (for \(i=1,\ldots ,n\)) by appealing to Eq. (2), using the \(\psi _k({\varvec{0}},t)\) that we found for \(r=0\). This amounts to solving \(\bar{K} n\) coupled linear differential equations (where it can be checked that this system is equivalent to the one set up in Sect. 3.2).

\(\circ \) :

We now consider \(r=2\). It is readily checked that \(\#\{{\mathscr {S}}_2\} = n+\frac{1}{2}n(n-1)=\frac{1}{2}n(n+1)\), and that there are equally many equations of the type (2). As a consequence, using the \(\psi _k({\varvec{e}}_i,t)\) that we found for \(r=1\), Eq. (2) reveals that we have to solve a non-homogeneous system of \(\bar{K}\cdot \frac{1}{2}n(n+1)\) linear differential equations.

\(\circ \) :

One can proceed in a similar way with \(r\geqslant 3\); e.g. for \(\xi ({\varvec{r}})=3\) we have that

$$\begin{aligned} \#\{{\mathscr {S}}_3 \}= n + 2\cdot \textstyle \frac{1}{2} n(n-1) +\textstyle \frac{1}{6}n(n-1)(n-2)= \textstyle \frac{1}{6}n(n+1)(n+2). \end{aligned}$$

The cases with \(\xi ({\varvec{r}})\geqslant 3\) are solved very similarly to the case \(\xi ({\varvec{r}})=2\). Using (2) it can be inductively verified that in the r-th step we have a system of \(K_r:=\bar{K}\cdot \#\{{\mathscr {S}}_r \} \) non-homogeneous linear differential equations, where

$$\begin{aligned} \#\{{\mathscr {S}}_r \} =\binom{n+r-1}{r}= \frac{1}{r!}\cdot n(n+1)\cdots (n+r-1). \end{aligned}$$

For the stationary reduced means a similar recursive procedure can be set up; in the r-th step a \(K_r\)-dimensional system of linear equations needs to be solved.
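The cardinality \(\#\{{\mathscr {S}}_r\}\), which determines the size of the linear system in each step of the recursion, can be verified by brute-force enumeration (an illustrative Python sketch; `brute_count` is our own name):

```python
from itertools import product
from math import comb

def brute_count(n, r):
    # count all vectors (r_1,...,r_n) of nonnegative integers summing to r,
    # i.e., the cardinality of the set S_r; should equal C(n + r - 1, r)
    return sum(1 for vec in product(range(r + 1), repeat=n) if sum(vec) == r)
```

For instance, `brute_count(3, 2)` returns 6, in agreement with \(\frac{1}{2}n(n+1)\) for n = 3.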

The above procedure can be extended in an evident way to include also the number of customers lost, focusing on the object

$$\begin{aligned} \bar{\psi }_k({\varvec{r}},t):={\mathbb {E}}\left( \left( (L(t))_{r_0}\prod _{i=1}^{n}(M_i(t))_{r_i}\right) 1_{\{{\varvec{X}}(t)=k\}}\right) = \lim _{w\uparrow 1,{\varvec{z}}\uparrow {\varvec{1}}}\frac{\partial ^{r_0+\cdots +r_n}\varphi _k(w,{\varvec{z}},t)}{\partial w^{r_0}\partial z_1^{r_1}\cdots \partial z_n^{r_n}}, \end{aligned}$$

with \({\varvec{r}}\equiv (r_0,\ldots ,r_n).\)

3.4 User-Perceived Performance

In this subsection we study the impact of the links’ outages on the performance as perceived by the users. As it turns out, this can be done relying on classical arguments. We start by analyzing the fraction of clients that are lost (i.e., clients who leave the network because of a link being down, and not because of a service completion), denoted by \(\omega \). To this end, let \(\omega _{ik}\) be the probability that a client who enters the network at node i while the background process is in state k is eventually lost. Write \(q_k:=-q_{kk}\) and \(\sigma _{ik}:=\nu _i+\mu _{i0}+q_k\), and observe that \({\mu _{ijk}^+}/{\sigma _{ik}}\) is the probability of the client jumping from node i to node j when the background process is in k; \({q_{k\ell }} /{\sigma _{ik}}\) and \({\mu _{ijk}^-}/{\sigma _{ik}}\) can be interpreted analogously. Then, by conditioning on the first jump,

$$\begin{aligned} \omega _{ik} = \sum _{j\not =i} \left( \frac{\mu _{ijk}^+}{\sigma _{ik}}\right) \omega _{jk} +\sum _{\ell \not =k} \left( \frac{q_{k\ell }}{\sigma _{ik}}\right) \omega _{i\ell }+ \sum _{j\not =i} \left( \frac{\mu _{ijk}^-}{\sigma _{ik}}\right) , \end{aligned}$$

or, in a more convenient form,

$$\begin{aligned} -\sum _{j\not =i} {\mu _{ijk}^-} = -\sigma _{ik}\omega _{ik} + \sum _{j\not =i} {\mu _{ijk}^+} \omega _{jk} +\sum _{\ell \not =k} q_{k\ell } \omega _{i\ell }. \end{aligned}$$

These equations constitute an \(n\bar{K}\)-dimensional diagonally dominant system of linear equations (actually even strictly diagonally dominant as there is at least one i such that \(\mu _{i0}>0\)), which is known to yield a unique solution. Let, as before, \(\pi _\ell \) denote the stationary probability that the background process \({\varvec{X}}(\cdot )\) is in state \(\ell \) (i.e., \({\varvec{\pi }}\) solves \({\varvec{\pi }}{\varvec{Q}} = {\varvec{0}}\)). Then, with \(\bar{\lambda } := \sum _{i=1}^n \lambda _i\), the loss probability equals

$$\begin{aligned} \omega = \sum _{k=1}^{\bar{K}} \pi _k\left( \frac{1}{\bar{\lambda }} {\sum _{i=1}^n \lambda _i \omega _{ik}}\right) . \end{aligned}$$

As an aside we mention that the loss probability \(\omega \) can alternatively be evaluated, using the methodology of the previous subsections, as

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{{\mathbb {E}}\,L(t)}{\bar{\lambda }t}. \end{aligned}$$
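To illustrate, the following pure-Python sketch (exact rational arithmetic; all parameter values are illustrative) solves the first-jump system for the two-node tandem of Sect. 6.1, where \(\omega _{2k}=0\) since a client at node 2 can always complete service:

```python
from fractions import Fraction as F

# First-jump computation of the loss probability for a two-node tandem with
# one unreliable link (blocked customers lost); illustrative parameters.
mu1 = F(3)                              # service rate at node 1
q0, q1 = F(1), F(2)                     # link repair (down->up) and failure (up->down) rates
pi_u, pi_d = q0/(q0+q1), q1/(q0+q1)     # stationary probabilities of link up/down

# From node 2 a client can always leave, so omega_{2k} = 0.  With
# sigma_{1u} = mu1+q1 and sigma_{1d} = mu1+q0, the equations reduce to
#   (mu1+q1) w_u = q1*w_d               (a jump to node 2 contributes zero)
#   (mu1+q0) w_d = mu1 + q0*w_u         (completion while down means loss)
w_d = mu1 / (mu1 + q0 - q0*q1/(mu1+q1))
w_u = q1*w_d / (mu1+q1)

omega = pi_u*w_u + pi_d*w_d             # all external arrivals enter at node 1
print(w_u, w_d, omega)                  # -> 1/3 5/6 2/3
```

With these rates two thirds of the clients are lost, reflecting that the link is down a fraction \(2/3\) of the time.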

Along the same lines, we can determine the mean time (to be denoted by \(\tau \)) that a job remains in the network, jointly with the client eventually being lost (i.e., leaving the network because of a link failure). Define \(\tau _{ik}\) to be this quantity for a client who enters the network at node i while the background process is in state k. Conditioning on the first jump again, and using that the holding time at node i (with mean \(1/\sigma _{ik}\)) is independent of whether the client is eventually lost,

$$\begin{aligned} \tau _{ik} =\frac{\omega _{ik}}{\sigma _{ik}}+\sum _{j\not =i} \left( \frac{\mu _{ijk}^+}{\sigma _{ik}}\right) \tau _{jk} +\sum _{\ell \not =k} \left( \frac{q_{k\ell }}{\sigma _{ik}}\right) \tau _{i\ell }. \end{aligned}$$

Again this system of linear equations is diagonally dominant. As above,

$$\begin{aligned} \tau = \sum _{k=1}^{\bar{K}} \pi _k\left( \frac{1}{\bar{\lambda }} {\sum _{i=1}^n \lambda _i \tau _{ik}}\right) . \end{aligned}$$
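For the two-node tandem of Sect. 6.1 (illustrative parameters) the linear system for \(\tau _{1k}\) is two-dimensional and can be solved by hand. The sketch below does so with exact rationals, weighting the holding time at node 1 by the corresponding loss probability, and cross-checks the outcome via expected numbers of visits to the two transient states:

```python
from fractions import Fraction as F

# tau_{1k} = E[time in network; client eventually lost] for the tandem
# (illustrative parameters).  Clients at node 2 are never lost, so tau_{2k}=0.
mu1 = F(3)
q0, q1 = F(1), F(2)
s_u, s_d = mu1 + q1, mu1 + q0          # sigma_{1u}, sigma_{1d}

# loss probabilities from the first-jump analysis
w_d = mu1 / (mu1 + q0 - q0*q1/s_u)
w_u = q1*w_d / s_u

# tau_{1u} = w_u/s_u + (q1/s_u) tau_{1d},  tau_{1d} = w_d/s_d + (q0/s_d) tau_{1u}
t_u = (w_u/s_u + (q1/s_u)*(w_d/s_d)) / (1 - (q1/s_u)*(q0/s_d))
t_d = w_d/s_d + (q0/s_d)*t_u

# independent cross-check: starting from (node 1, up),
# tau_{1u} = sum over states s of E[#visits to s] * (1/sigma_s) * omega_s
ret = (q1/s_u)*(q0/s_d)                # return probability (1,up) -> (1,up)
N_u = 1/(1 - ret)                      # expected visits to (1,up)
N_d = N_u*(q1/s_u)                     # expected visits to (1,down)
assert t_u == N_u*(1/s_u)*w_u + N_d*(1/s_d)*w_d
print(t_u, t_d)                        # -> 1/6 1/4
```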

4 Scaling Limit: Functional Central Limit Theorem

In this section we study the system under a particular scaling, under which there is convergence to a Gaussian process (viz. a multivariate Ornstein-Uhlenbeck process). We consider the following scaling: \({\varvec{\lambda }}\mapsto N{\varvec{\lambda }}\) and \(q_i^{(k)}\mapsto N^\alpha q_i^{(k)}\) for some \(\alpha >0\) (for \(k=1,\ldots ,K\) and \(i=0,1\)). The model is thus parametrized by N; to stress the dependence on N, we throughout write \(\mathbf{M}^{(N)}(t)\) and \({\varvec{X}}^{(N)}(t)\). We focus on a functional central limit theorem for a centered and appropriately scaled version of \(\mathbf{M}^{(N)}(t)\). For the moment we concentrate on the (more involved) case \(\alpha =1\); in Remark 5 we point out how this provides us with the limit result for \(\alpha \in (0,\infty )\setminus \{1\}\) as well.

A full proof is (far) beyond the scope of this paper. For a rigorous derivation of the functional central limit theorem, based on martingale arguments in combination with the continuous mapping theorem, for a class of network models that is substantially broader than the one studied in this paper, we refer to [18]. Below we by and large follow the structure that we used in [9] for a single Markov modulated infinite-server queue (with a crucial step stemming from [18]), and therefore we restrict ourselves to highlighting the main steps.

\(\circ \) Deviation matrix. We now introduce some notions that we need across this subsection. For ease, we define them here for the unscaled system, but for the scaled system they can be adapted in a straightforward manner.

Define by \(p_{k\ell }(t):={\mathbb {P}}({\varvec{X}}(t)=\ell \,|\,{\varvec{X}}(0)=k) = (\mathrm{e}^{{\varvec{Q}}t})_{k\ell }\) the transition probabilities of \({\varvec{X}}(\cdot ).\) The equilibrium probability that block m is ‘up’ is given by \(\pi ^{(m)} = q_0^{(m)}/(q_0^{(m)}+q_1^{(m)})\), for \(m=1,\ldots ,K\). An important role in the analysis is played by the deviation matrix D (of dimension \(\bar{K}\times \bar{K}\)), whose \((k,\ell )\)-th entry is given by

$$\begin{aligned} D_{k\ell }=\int _0^\infty (p_{k\ell }(t) -\pi _\ell )\mathrm{d}t = \int _0^\infty ((\mathrm{e}^{{\varvec{Q}}t})_{k\ell }-\pi _\ell )\mathrm{d}t, \end{aligned}$$

with \({\varvec{\pi }}\), as before, the solution to \({\varvec{\pi }}{\varvec{Q}} = {\varvec{0}}\) with entries summing to 1.

Let \(U_k\) be the set of blocks that are ‘up’ when \({\varvec{X}}(t)=k\), and \(D_k:=\{1,\ldots ,K\}\setminus U_k.\) Define, with \(q^{(m)}:= q_0^{(m)}+q_1^{(m)}\),

$$\begin{aligned} p_{00}^{(m)}(t) :=1-\pi ^{(m)} +\pi ^{(m)} \mathrm{e}^{-q^{(m)} t},\,\,\, p_{11}^{(m)}(t) := \pi ^{(m)} +(1-\pi ^{(m)} )\mathrm{e}^{-q^{(m)} t} ; \end{aligned}$$

in addition \(p_{01}^{(m)}(t):=1- p_{00}^{(m)}(t)\) and \(p_{10}^{(m)}(t):=1-p_{11}^{(m)}(t)\). Then, as is readily verified,

$$\begin{aligned} p_{k\ell }(t) =&\left( \prod _{m\in U_k\cap U_\ell } p_{11}^{(m)}(t) \right) \left( \prod _{m\in U_k\cap D_\ell } p_{10}^{(m)}(t)\right) \nonumber \\&\quad \times \left( \prod _{m\in D_k\cap U_\ell } p_{01}^{(m)}(t)\right) \left( \prod _{m\in D_k\cap D_\ell } p_{00}^{(m)}(t)\right) , \end{aligned}$$
(3)

and

$$\begin{aligned} \pi _\ell =\left( \prod _{m\in U_\ell }\pi ^{(m)} \right) \left( \prod _{m\in D_\ell } (1-\pi ^{(m)} )\right) . \end{aligned}$$
(4)

From the explicit expressions for the \(p_{ij}^{(m)}(t)\), we conclude that \(p^{(m)}_{j0}(t)-(1-\pi ^{(m)})\) and \(p^{(m)}_{j1}(t)- \pi ^{(m)}\) can be written as a linear combination of terms of the type

$$\begin{aligned} \exp \left( -t\sum _{m\in S} q^{(m)}\right) , \end{aligned}$$

where S is a non-empty set. Subtracting (4) from (3), it thus follows that, for non-empty sets \(S_{m'}(k,\ell )\) that depend on \(m',\) k, and \(\ell \),

$$\begin{aligned} p_{k\ell }(t)-\pi _\ell = \sum _{m'} \alpha _{m'}(k,\ell ) \exp \left( -t\sum _{m\in S_{m'}(k,\ell )} q^{(m)}\right) , \end{aligned}$$

for coefficients \(\alpha _{m'}(k,\ell )\) that are straightforward to evaluate but that do not allow an explicit expression. As a consequence,

$$\begin{aligned} D_{k\ell } = \sum _{m'} \Bigg (\frac{\alpha _{m'}(k,\ell )}{\displaystyle \sum _{m\in S_{m'}(k,\ell )} q^{(m)}}\Bigg ). \end{aligned}$$

As an example, we work out the D matrix for the case of one block (i.e., \(K=1\), or, equivalently, \(\bar{K}=2\)). Put \(q:=q^{(1)}\), \(q_i:=q_i^{(1)}\) (for \(i=0,1\)), and \(\pi :=\pi _1\). Then

$$\begin{aligned} D=\int _0^\infty \left( \begin{array}{cc}\pi \mathrm{e}^{-q t}&{}-\pi \mathrm{e}^{-q t}\\ -(1-\pi )\mathrm{e}^{-q t}&{}(1-\pi )\mathrm{e}^{-q t}\end{array}\right) \mathrm{d}t=\frac{1}{q^2} \left( \begin{array}{cc}q_0&{}-q_0\\ -q_1&{}q_1\end{array}\right) . \end{aligned}$$
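For an ergodic finite-state chain the deviation matrix can alternatively be obtained from the fundamental-matrix identity \(D = ({\varvec{\Pi }}-{\varvec{Q}})^{-1}-{\varvec{\Pi }}\), with \({\varvec{\Pi }}\) the matrix whose rows all equal \({\varvec{\pi }}\). A sketch for the one-block example, with illustrative rates:

```python
from fractions import Fraction as F

q0, q1 = F(1), F(2)                     # illustrative down->up and up->down rates
q = q0 + q1
pi = (q1/q, q0/q)                       # stationary probs of (down, up)

# 2x2 helpers over exact rationals
def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def mat_inv(A):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[ A[1][1]/det, -A[0][1]/det],
            [-A[1][0]/det,  A[0][0]/det]]

Q  = [[-q0, q0], [q1, -q1]]
Pi = [list(pi), list(pi)]
D  = mat_sub(mat_inv(mat_sub(Pi, Q)), Pi)   # D = (Pi - Q)^{-1} - Pi

# must agree with the closed form D = q^{-2} [[q0, -q0], [-q1, q1]]
assert D == [[q0/q**2, -q0/q**2], [-q1/q**2, q1/q**2]]
print(D)
```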

\(\circ \) Time-changed Poisson process representation. In this section we repeatedly use the following representation. We throughout use the definition \(\mu _{ijk}:= \mu _{ij} {\mathbb {I}}(i,j,k)\) for \(j=1,\ldots ,n\) (\(j\not =i\)), and

$$\begin{aligned} \mu _{i0k} := \mu _{i0} + \sum _{j=1,j\not =i}^{n} \mu _{ij} (1- {\mathbb {I}}(i,j,k)). \end{aligned}$$

With all \(P_{ij}(\cdot )\) (for \(i,j=0,\ldots , n\)) independent unit rate Poisson processes, it is directly verified that with the above definition of the rates \(\mu _{ijk}\) the numbers of customers in the respective queues satisfy

$$\begin{aligned} M^{(N)}_i(t)&= P_{0i}(N\lambda _i t) \,+\, \sum _{j=1,j\not =i}^n P_{ji}\left( \int _0^t M^{(N)}_j(s)\sum _{k=1}^{\bar{K}}\mu _{jik} Z_k^{(N)}(s)\mathrm{d}s\right) \,\nonumber \\&\,\quad -\sum _{j=0,j\not =i}^n P_{ij}\left( \int _0^t M^{(N)}_i(s)\sum _{k=1}^{\bar{K}}\mu _{ijk} Z_k^{(N)}(s)\mathrm{d}s\right) , \end{aligned}$$
(5)

with \(Z_k^{(N)}(t) := 1_{\{{\varvec{X}}^{(N)}(t)=k\}}\). Poisson processes with a random time change of this type, and their use in establishing scaling limits, have been described in great detail in e.g. [1].
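Representation (5) suggests a direct event-by-event simulation. The following Gillespie-style sketch, which is equal in law to the time-changed Poisson construction, simulates a modulated two-node tandem with blocked customers lost; all parameter values are illustrative.

```python
import random

# Gillespie-style simulation of a modulated two-node tandem (illustrative
# parameters); equal in law to the time-changed Poisson representation.
random.seed(42)
lam, mu1, mu2 = 5.0, 1.0, 0.8
q0, q1 = 1.0, 2.0                      # down->up, up->down rates of the link
T = 50.0

t, m1, m2, up = 0.0, 0, 0, True
while t < T:
    rates = [lam,                          # external arrival at node 1
             mu1*m1 if up else 0.0,        # node-1 completion, jump to node 2
             mu1*m1 if not up else 0.0,    # node-1 completion while down: lost
             mu2*m2,                       # node-2 completion, leaves network
             q1 if up else q0]             # link flips its state
    tot = sum(rates)
    t += random.expovariate(tot)           # exponential holding time
    u = random.uniform(0.0, tot)           # pick the transition proportionally
    if u < rates[0]:
        m1 += 1
    elif u < rates[0] + rates[1]:
        m1 -= 1; m2 += 1
    elif u < rates[0] + rates[1] + rates[2]:
        m1 -= 1                            # blocked customer lost (f_{12} = 1)
    elif u < rates[0] + rates[1] + rates[2] + rates[3]:
        m2 -= 1
    else:
        up = not up
    assert m1 >= 0 and m2 >= 0
print(m1, m2, up)
```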

\(\circ \) SDE for centered and normalized system. The idea is to first set up an SDE for \({\varvec{M}}^{(N)}(t)\), which is then translated into an SDE for its centered and normalized version \(\tilde{\varvec{M}}^{(N)}(t)\).

For \(i=1,\ldots ,n\), by (5),

$$\begin{aligned} \mathrm{d}M^{(N)}_i(t) =&\, N\lambda _i \mathrm{d}t +\sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}} M^{(N)}_j(t) \mu _{jik} Z_k^{(N)}(t)\,\mathrm{d}t \,\\&-M_i^{(N)}(t) \sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}}\mu _{ijk} Z_k^{(N)}(t) \,\mathrm{d}t+\mathrm{d}\kappa _i^{(N)}(t), \end{aligned}$$

for some n-dimensional martingale \({\varvec{\kappa }}^{(N)}(\cdot ).\) In the functional central limit theorem, fluctuations around an average are considered; this n-dimensional average vector \({\varvec{\varrho }}(t)\) solves the following system of linear differential equations:

$$\begin{aligned} \varrho _i'(t)=\lambda _i + \sum _{j=1,j\not =i}^n \varrho _j(t)\sum _{k=1}^{\bar{K}} \mu _{jik} \pi _k -\varrho _i(t) \sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}} \mu _{ijk} \pi _k , \end{aligned}$$

for \(i=1,\ldots ,n\). As shown in [18, Sect. 3], this \({\varvec{\varrho }}(\cdot )\) can be considered as the fluid limit [29] corresponding to \({\varvec{M}}^{(N)}(\cdot )\).

Keeping in mind that we aim at deriving results for the central-limit regime, we consider a centered and normalized process whose i-th component is defined by

$$\begin{aligned} \tilde{M}_i^{(N)}(t) :={N}^{-1/2}\cdot \big (M_i^{(N)}(t) - N\varrho _i(t)\big ). \end{aligned}$$

It is direct that

$$\begin{aligned} \mathrm{d}\tilde{M}^{(N)}_i(t) =&\, {N}^{1/2}\lambda _i \mathrm{d}t +{N}^{-1/2}\sum _{j=1,j\not =i}^n M^{(N)}_j(t)\sum _{k=1}^{\bar{K}} \mu _{jik} Z^{(N)}_k(t)\, \mathrm{d}t \,\\&-{N}^{-1/2}M_i^{(N)}(t) \sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}}\mu _{ijk} Z_k^{(N)} (t)\, \mathrm{d}t-N^{1/2}\varrho '_i(t)\mathrm{d}t+{N}^{-1/2} \mathrm{d}\kappa _i^{(N)}(t). \end{aligned}$$

The idea now is to plug in the differential equation that is obeyed by \({\varvec{\varrho }}(t)\); the resulting stochastic differential equation resembles the one featuring in [9]: omitting a few elementary steps, using the compact notation \(\bar{Z}_k^{(N)}(t):= Z^{(N)}_k(t)-\pi _k\),

$$\begin{aligned} \mathrm{d}\tilde{M}^{(N)}_i(t) =&\,N^{-1/2}\sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}} \mu _{jik}\big (M_j^{(N)}(t)Z^{(N)}_k(t)- N\varrho _j(t) \pi _k\big ) \mathrm{d}t\,\\&-N^{-1/2}\sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}} \mu _{ijk}\big (M_i^{(N)}(t)Z^{(N)}_k(t)- N\varrho _i(t) \pi _k\big ) \mathrm{d}t+{N}^{-1/2}\mathrm{d}\kappa _i^{(N)}(t)\\ =&\,\sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}} \tilde{M}^{(N)}_j(t)\mu _{jik} Z^{(N)}_k(t)\,\mathrm{d}t - \sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}} \tilde{M}_i^{(N)}(t)\mu _{ijk} Z^{(N)}_k(t)\, \mathrm{d}t\,\\&+\sqrt{N}\left( \sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}}\varrho _j(t) \mu _{jik} \bar{Z}_k^{(N)}(t)\,\mathrm{d}t-\sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}}\varrho _i(t) \mu _{ijk} \bar{Z}_k^{(N)}(t)\,\mathrm{d}t\right) \\&+{N}^{-1/2}\mathrm{d}\kappa _i^{(N)}(t), \end{aligned}$$

or in integral form

$$\begin{aligned} \tilde{M}^{(N)}_i(t) =&\, \int _0^t\sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}} \tilde{M}^{(N)}_j(s)\mu _{jik} Z^{(N)}_k(s)\,\mathrm{d}s \\&-\int _0^t \sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}} \tilde{M}_i^{(N)}(s)\mu _{ijk} Z^{(N)}_k(s)\, \mathrm{d}s\,\\&+\sqrt{N}\int _0^t\left( \sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}}\varrho _j(s) \mu _{jik} \bar{Z}_k^{(N)}(s) -\sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}}\varrho _i(s) \mu _{ijk} \bar{Z}_k^{(N)}(s)\right) \mathrm{d}s\,\\&+{N}^{-1/2}\kappa _i^{(N)}(t). \end{aligned}$$

\(\circ \) A simplification. The next step is to verify that \(\tilde{M}^{(N)}_i(t) -\check{M}^{(N)}_i(t)\) converges to the zero process as \(N\rightarrow \infty \) (cf. [18, Lemma 4.4]); here \(\check{M}^{(N)}_i(t)\) is defined as \(\tilde{M}^{(N)}_i(t)\) in the previous display, but with the \(Z_k^{(N)}(s)\) in the first two terms on the right-hand side replaced by \(\pi _k.\) To this end, observe that [18, Thm. 5.2] entails that

$$\begin{aligned} \int _0^t \tilde{M}_i^{(N)}(s) \big (Z^{(N)}_k(s)-\pi _k \big )\mathrm{d}s = \int _0^t \left( \frac{M_i^{(N)}(s)-N\varrho _i(s)}{N}\right) \sqrt{N}\big (Z^{(N)}_k(s)-\pi _k \big )\mathrm{d}s \rightarrow 0, \end{aligned}$$

using the law-of-large-numbers property that \(M_i^{(N)}(t)/N\) converges in probability to \(\varrho _i(t)\).

We have thus arrived at, with \(\bar{\mu }_{ij}:=\sum _{k=1}^{\bar{K}}\mu _{ijk}\pi _k\), the following system of SDE’s:

$$\begin{aligned} \mathrm{d}\check{M}^{(N)}_i(t) =&\, \sum _{j=1,j\not =i}^n \check{M}^{(N)}_j(t)\bar{\mu }_{ji}\,\mathrm{d}t - \sum _{j=0,j\not =i}^n \check{M}_i^{(N)}(t)\bar{\mu }_{ij} \, \mathrm{d}t\,\\&+\sqrt{N}\left( \sum _{j=1,j\not =i}^n \sum _{k=1}^{\bar{K}}\varrho _j(t) \mu _{jik} \bar{Z}_k^{(N)}(t)\,\mathrm{d}t-\sum _{j=0,j\not =i}^n \sum _{k=1}^{\bar{K}}\varrho _i(t) \mu _{ijk} \bar{Z}_k^{(N)}(t)\,\mathrm{d}t\right) \\&+{N}^{-1/2}\mathrm{d}\kappa _i^{(N)}(t). \end{aligned}$$

\(\circ \) Functional central limit theorem. The following steps echo those in [9, Sect. 4]. They rely on the transformation

$$\begin{aligned} {\varvec{Y}}^{(N)}(t) = \exp \big (-{\mathscr {M}}t \big ) \check{\varvec{M}}^{(N)}(t), \end{aligned}$$

with for \(i\not =j\) the (ij)-th entry of the \((n\times n)\)-dimensional matrix \({\mathscr {M}}\) being given by \(\bar{\mu }_{ji}\), whereas the (ii)-th entry is \(-\sum _{j=0,j\not =i}^n\bar{\mu }_{ij}\). It thus follows that

$$\begin{aligned} \mathrm{d}{\varvec{Y}}^{(N)}(t) = \exp \big (-{\mathscr {M}}t \big ) \left( \sqrt{N} {\mathscr {M}}^\circ (t) \bar{\varvec{Z}}^{(N)}(t)\mathrm{d}t+N^{-1/2} \mathrm{d}{\varvec{\kappa }}^{(N)}(t)\right) , \end{aligned}$$

where the (ik)-th entry of the \((n\times \bar{K})\)-dimensional matrix \({\mathscr {M}}^\circ (t)\) is given by

$$\begin{aligned} \left( {\mathscr {M}}^\circ (t)\right) _{ik}:=\sum _{j=1,j\not =i}^n \varrho _j(t) \mu _{jik} -\sum _{j=0,j\not =i}^n \varrho _i(t) \mu _{ijk}. \end{aligned}$$

In the next step we analyze the terms \(\sqrt{N} {\mathscr {M}}^\circ (t) \bar{\varvec{Z}}^{(N)}(t)\) and \(N^{-1/2} \mathrm{d}{\varvec{\kappa }}^{(N)}(t)\) separately. As in [9], with \({\varvec{G}}^{(N)}(t):=\sqrt{N} {\mathscr {M}}^\circ (t)\, \bar{\varvec{Z}}^{(N)}(t)\), it follows that \({\varvec{G}}^{(N)}(\cdot ) \rightarrow {\varvec{G}}(\cdot )\) as \(N\rightarrow \infty \), where \({\varvec{G}}(\cdot )\) satisfies

$$\begin{aligned} \langle {\varvec{G}}\rangle _t ={\varvec{V}}(t):=\int _0^t {\mathscr {M}}^\circ (s)\,\Sigma \,({\mathscr {M}}^\circ (s))^\mathrm{T}\mathrm{d}s, \end{aligned}$$

with \(\Sigma :=\mathrm{diag}\{{\varvec{\pi }}\}D +D^\mathrm{T}\mathrm{diag}\{{\varvec{\pi }}\}.\) This entails that \({\varvec{G}}^{(N)}(\cdot )\) converges to \(\int _0^{\,\cdot } {\mathscr {M}}^\circ (s)\, \mathrm{d}\bar{\varvec{B}}(s)\), with \(\bar{\varvec{B}}(\cdot )\) a \(\bar{K}\)-dimensional zero-mean Brownian motion with covariance matrix \(\Sigma \).

Using precisely the same argument as in [9], for independent standard Brownian motions \(B_{ij}(\cdot )\), with \(i,j=0,\ldots ,n\),

$$\begin{aligned} \frac{1}{\sqrt{N} }{\kappa }^{(N)}_i(\cdot ) \rightarrow \sqrt{\lambda _i} B_{0i} (\cdot )+\sum _{j=1,j\not =i}^n\sqrt{\varrho _j(\cdot ) \bar{\mu }_{ji}} B_{ji}(\cdot )- \sum _{j=0,j\not =i}^n\sqrt{\varrho _i(\cdot )\bar{\mu }_{ij}}B_{ij}(\cdot ), \end{aligned}$$

cf. (5); the processes \(B_{ij}(\cdot )\) are independent of \(\bar{\varvec{B}}(\cdot )\).

Now recall the relation between \({\varvec{Y}}^{(N)}(t)\) and \(\check{\varvec{M}} ^{(N)}(t)\), and the fact that \(\tilde{\varvec{M}}^{(N)}(\cdot )-\check{\varvec{M}}^{(N)}(\cdot )\) converges to the zero process as \(N\rightarrow \infty \). Based on the weak convergence results established above, we thus obtain the following functional central limit theorem. It states that the process under study converges to a (multivariate) process of Ornstein-Uhlenbeck type.

Proposition 3

As \(N\rightarrow \infty \), \(\tilde{\varvec{M}}^{(N)}(\cdot )\) weakly converges to \(\tilde{\varvec{M}}(\cdot )\), satisfying the following system of coupled stochastic differential equations: for \(i=1,\ldots ,n\),

$$\begin{aligned} \mathrm{d}\tilde{M}_i(t) =&\,\sum _{j=1,j\not =i}^n \tilde{M}_j(t)\bar{\mu }_{ji}\,\mathrm{d}t - \sum _{j=0,j\not =i}^n \tilde{M}_i(t)\bar{\mu }_{ij} \, \mathrm{d}t\,\\&+\sqrt{\lambda _i}\, \mathrm{d}B_{0i} (t)+\sum _{j=1,j\not =i}^n\sqrt{\varrho _j(t) \bar{\mu }_{ji}} \mathrm{d}B_{ji}(t) \\&-\sum _{j=0,j\not =i}^n\sqrt{\varrho _i(t)\bar{\mu }_{ij}} \mathrm{d}B_{ij}(t)+({\mathscr {M}}^\circ (t)\mathrm{d} \bar{\varvec{B}}(t))_i. \end{aligned}$$

Remark 4

The distribution of \(\tilde{\varvec{M}}(t)\) (for a given value of \(t\geqslant 0\), that is) can be explicitly identified using known results for multivariate Ornstein-Uhlenbeck processes. If \(\tilde{\varvec{M}}(0)\) is constant, then \(\tilde{\varvec{M}}(t)\) has an n-dimensional normal distribution with mean \({\varvec{0}}\) and covariance matrix

$$\begin{aligned} {\mathbb {C}}\mathrm{ov}(\tilde{\varvec{M}}(t),\tilde{\varvec{M}}(t))=\int _0^t \mathrm{e}^{{\mathscr {M}}(t-s)}{\mathscr {M}}^\circ (s)\,\Sigma \, ({\mathscr {M}}^\circ (s))^\mathrm{T}\big (\mathrm{e}^{{\mathscr {M}} (t-s)}\big )^\mathrm{T}\mathrm{d}s + \int _0^t\bar{\Sigma }(s)\mathrm{d}s, \end{aligned}$$

where \(\bar{\Sigma }_{ij}(s)=0\) for \(i\not =j\) and

$$\begin{aligned} \bar{\Sigma }_{ii}(s)=\lambda _i +\sum _{j=1,j\not =i}^n\varrho _j(s)\bar{\mu }_{ji}- \sum _{j=0,j\not =i}^n\varrho _i(s)\bar{\mu }_{ij}. \end{aligned}$$

Observe that, as \(\bar{\Sigma }_{ii}(s)=\varrho _i'(s)\) by definition, \(\int _0^t \bar{\Sigma }_{ii}(s)\mathrm{d}s =\varrho _i(t).\) With standard theory on multivariate Ornstein-Uhlenbeck processes also the \({\mathbb {C}}\mathrm{ov}(\tilde{\varvec{M}}(t),\tilde{\varvec{M}}(t+u))\), i.e., the covariance matrix pertaining to the system’s time-dependent behavior, can be determined. \(\square \)

Remark 5

As we mentioned, the above result corresponds to the case \(\alpha = 1\). Precisely following the line of reasoning of [9, 18], for arbitrary \(\alpha >0\), we have to define \(\tilde{M}_i^{(N)}(t)\) through

$$\begin{aligned} \tilde{M}_i^{(N)}(t) :={N}^{-\beta }\cdot \big (M_i^{(N)}(t) - N\varrho _i(t)\big ), \end{aligned}$$

with \(\beta := \max \{\tfrac{1}{2}, 1-\tfrac{\alpha }{2}\}.\) Then the recipe is to go through precisely the same steps as in the proof for \(\alpha =1\), but it will turn out that for \(\alpha >1\) a specific part of the resulting SDE cancels, whereas for \(\alpha <1\) another part cancels.

More specifically, it can be argued that for \(\alpha >1\) the limiting system of differential equations reduces, for \(i=1,\ldots ,n\), to

$$\begin{aligned} \mathrm{d}\tilde{M}_i(t) =&\,\sum _{j=1,j\not =i}^n \tilde{M}_j(t)\bar{\mu }_{ji}\,\mathrm{d}t - \sum _{j=0,j\not =i}^n \tilde{M}_i(t)\bar{\mu }_{ij} \, \mathrm{d}t\,\\&+\sqrt{\lambda _i} \mathrm{d}B_{0i} (t)+\sum _{j=1,j\not =i}^n\sqrt{\varrho _j(t) \bar{\mu }_{ji}} \mathrm{d}B_{ji}(t)-\sum _{j=0,j\not =i}^n\sqrt{\varrho _i(t)\bar{\mu }_{ij}} \mathrm{d}B_{ij}(t). \end{aligned}$$

Observe that this entails that for \(\alpha >1\) the limiting system depends on the service rates \(\mu _{ijk}\) only through their time averaged counterparts \(\bar{\mu }_{ij}\); this reflects the relatively fast alternating link state process. The system essentially behaves as a network of non-modulated infinite-server queues; in particular, Remark 4 indicates that the centered and normalized versions of the individual queues, i.e., the processes \(\tilde{M}_i^{(N)}(\cdot )\), become independent as \(N\rightarrow \infty \).

For \(\alpha \in (0,1)\) the limiting system of differential equations becomes, for \(i=1,\ldots ,n\),

$$\begin{aligned} \mathrm{d}\tilde{M}_i(t) =&\,\sum _{j=1,j\not =i}^n \tilde{M}_j(t)\bar{\mu }_{ji}\,\mathrm{d}t - \sum _{j=0,j\not =i}^n \tilde{M}_i(t) \bar{\mu }_{ij} \, \mathrm{d}t\,+({\mathscr {M}}^\circ (t)\mathrm{d} \bar{\varvec{B}}(t))_i. \end{aligned}$$

In this case, the link state process is relatively slow, such that the scaling limit contains detailed information on the transition rates. In this regime, the individual queues are not asymptotically independent. \(\square \)

Remark 6

At the expense of some additional notation and administration, the loss process \(L^{(N)}(t)\) can be added to vector \({\varvec{M}}^{(N)}(t)\), in that a functional central limit theorem for the centered and normalized version of \((L^{(N)}(t),{\varvec{M}}^{(N)}(t))\) can be established using the same techniques. \(\square \)

5 Extensions, Ramifications

In this section we discuss two extensions. In the first subsection we describe how to adapt the model to incorporate phase-type distributed up- and down-times and phase-type service times. In the second subsection we point out how to adapt the model so as to cover the situation in which blocked customers (i.e., customers wishing to jump from i to j when the link between i and j is down) potentially retry.

5.1 Phase-Type Distributions

In case the up- and down-times are of phase type, this is easily incorporated in the transition rate matrix \({\varvec{Q}}.\) The background process now keeps track of each of the links being up or down, and in addition records the phase of the current up- or down-time. If the up-time (down-time, respectively) of the link between i and j is of phase type with \(\delta ^{(\mathrm{u})}_{ij}\) (\(\delta ^{(\mathrm{d})}_{ij},\) respectively) phases, then the number of states of \({\varvec{X}}(\cdot )\) is

$$\begin{aligned} \prod _{i\not =j} (\delta _{ij}^{(\mathrm{u})}+\delta _{ij}^{(\mathrm{d})}). \end{aligned}$$

Likewise, the service times can be made phase-type, by keeping track, for every node and every phase of the service-time distribution, of the number of clients at that node whose service is currently in that phase (each such combination constituting an infinite-server queue).

5.2 Model in Which Blocked Customers Retry

The previous two sections considered the case in which all \(f_{ij}\) are equal to 1. It is not hard to generalize the results to the situation in which \(f_{ij}\in [0,1)\) is allowed as well, but this comes at the price of a substantial amount of additional notation. For this reason, we restrict ourselves in this subsection to pointing out how the results have to be adapted to accommodate \(f_{ij}\in [0,1)\).

It is readily verified that now the joint probability generating function \({\varvec{\varphi }}(w,{\varvec{z}},t)\) satisfies

$$\begin{aligned} \frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial t} =&\,\sum _{i=1}^n{\varvec{\varphi }} (w,{\varvec{z}},t)\,\lambda _i(z_i-1) \,+\sum _{i=1}^n\sum _{j\not =i}^n\frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial z_i}{\mathbb {I}}_{\bar{K}}(i,j)\,\mu _{ij}\left( {z_j}-{z_i}\right) \,\\&+\sum _{i=1}^n\sum _{j\not =i}^n\frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial z_i}{\mathbb {J}}_{\bar{K}}(i,j)\,\mu _{ij}\,f_{ij}\,\left( w-{z_i}\right) \,\\&+\sum _{i=1}^n \frac{\partial {\varvec{\varphi }}(w,{\varvec{z}},t)}{\partial z_i}\,\mu _{i0} (1-z_i)+{\varvec{\varphi }}(w,{\varvec{z}},t)\, {\varvec{Q}}. \end{aligned}$$

The probability generating function of the stationary counterpart \({\varvec{M}}\) follows as before, i.e., by plugging in \(w=1\) and equating the right-hand side to \({\varvec{0}}.\)

The time-dependent moments can be evaluated from the next result; again, the stationary counterpart follows by equating the right-hand side to \({\varvec{0}}\).

Proposition 4

For \({\varvec{r}}\in {\mathbb {N}}_0^n\), \(k\in \{1,\ldots ,\bar{K}\}\) and \(t\geqslant 0\),

$$\begin{aligned} \frac{\partial \psi _k({\varvec{r}},t)}{\partial t}=\,&\sum _{i=1}^nr_i\psi _k({\varvec{r}}-{\varvec{e}}_i,t)\lambda _i+ \sum _{i=1}^n\sum _{j=1,j\not =i}^n r_j \psi _k({\varvec{r}}-{\varvec{e}}_j+{\varvec{e}}_i,t)\mu _{ijk}^+\,\nonumber \\ \,&-\sum _{i=1}^n\sum _{j=1,j\not =i}^n r_i \psi _k({{\varvec{r}}},t)\,(\mu _{ijk}^++f_{ij}\mu _{ijk}^-) \nonumber \\ \,&- \sum _{i=1}^n r_i \psi _k({{\varvec{r}}},t)\,\mu _{i0}+ \sum _{\ell =1}^{\bar{K}} \psi _\ell ({\varvec{r}},t) q_{\ell k}. \end{aligned}$$
(6)

An interesting special case is \(f_{ij}=0\) for all i, j, i.e., the case in which no clients are lost. If in addition full symmetry is assumed, cf. Remark 3, the mean admits an explicit expression. It is directly seen that for each of the nodes the mean number of clients present at time t, denoted as before by v(t), satisfies

$$\begin{aligned} v'(t) = \lambda + (n-1)\,v(t) \frac{\nu \pi }{n-1} - v(t)\left( (n-1)\, \frac{\nu \pi }{n-1}+\mu _0\right) = \lambda -\mu _0v(t). \end{aligned}$$
(7)

It thus follows that

$$\begin{aligned} v(t) =\frac{\lambda }{\mu _0} \left( 1-\mathrm{e}^{-\mu _0t}\right) , \end{aligned}$$

which converges to \(v:= {\lambda }/{\mu _0}\) as \(t\rightarrow \infty \). Observe that v(t) is in this case not affected by \(\pi \), due to the fact that parts of the in-flux and out-flux cancel, as observed in (7).

Regarding the functional central limit theorem, the result for \(f_{ij}=1\) carries over to that for \(f_{ij}\in [0,1)\), but with

$$\begin{aligned}\mu _{i0k} := \mu _{i0} + \sum _{j=1,j\not =i}^{n} \mu _{ij} f_{ij}(1- {\mathbb {I}}(i,j,k)).\end{aligned}$$

Recall that in this model a customer who wishes to jump from node i to node j while the link is unavailable is lost with probability \(f_{ij}\), and retries with probability \(1-f_{ij}\) (and hence stays at node i).

6 Examples

In this section we work out a couple of relevant examples, starting with a tandem network in which the link between the nodes is subject to failure. In the first subsection we consider the case that all blocked customers are lost (i.e., \(f_{12}=1\)), whereas in the second subsection all blocked customers retry (i.e., \(f_{ij}=0\)). Section 6.3 presents the FCLT for the two-node tandem. In Sect. 6.4 we consider the FCLT for the case of a symmetric fully connected n-node network in which blocked customers are lost (where it is noted that the case in which they retry is dealt with fully analogously); Sect. 6.5 deals with its ring-shaped counterpart.

6.1 Two-Node Tandem, with Blocked Customers Being Lost

We consider a two-node tandem, where traffic arrives at the first node, is sent to the second node after having been served at the first node, and leaves the network after having been served at the second node. Jobs arrive at the first node according to a Poisson process with rate \(\lambda \) and have exponentially distributed service times with mean \(\mu _i^{-1}\) at node i (\(i=1,2\)). The link between node 1 and 2 is up (down, respectively) during an exponentially distributed time with mean \(q_1^{-1}\) (\(q_0^{-1}\), respectively). In this subsection we consider the case that \(f_{12}=1\): clients who wish to jump from node 1 to node 2 when the link is down, are lost. In this case,

$$\begin{aligned} {\varvec{Q}} = \left( \begin{array}{rr}-q_0&{}q_0\\ q_1&{}-q_1\end{array}\right) . \end{aligned}$$

Define by \(v_{ij}(t)\) the reduced mean number of clients at node i (\(i=1,2\)) jointly with the background process being in state j (\(j=0,1\)). Using our expression for the transient first moment, with \(p_0(t)\) (\(p_1(t)\), respectively) the probability that the link is down (up, respectively),

$$\begin{aligned} v_{10}'(t)&= \lambda p_0(t) + q_1v_{11}(t) - q_0v_{10}(t) -\mu _1 v_{10}(t),\\ v_{11}'(t)&= \lambda p_1(t) + q_0v_{10}(t) - q_1v_{11}(t) - \mu _1v_{11}(t),\\ v_{20}'(t)&= q_1v_{21}(t) - q_0v_{20}(t) - \mu _2v_{20}(t),\\ v_{21}'(t)&= \mu _1v_{11}(t)+q_0 v_{20}(t) - q_1v_{21}(t) - \mu _2v_{21}(t). \end{aligned}$$

This system can be solved in closed form, realizing that the first two differential equations can be solved in isolation first (leading to explicit expressions for \(v_{10}(t)\) and \(v_{11}(t)\)), and then the last two differential equations (using the found expression for \(v_{11}(t)\)). As these are standard computations involving systems of non-homogeneous linear differential equations, we do not include the expressions here.

The stationary expectations can be found along the same lines. Let \(\Delta _a\) be a (two-dimensional) diagonal matrix, whose diagonal elements are all equal to \(a\in {\mathbb {R}}.\) Then the steady-state means are, with \(\pi _0=1-\pi _1 = q_1/(q_0+q_1)\) and \({\varvec{\pi }}=(\pi _0,\,\pi _1)\),

$$\begin{aligned} (v_{10}, v_{11}) = \lambda \, {\varvec{\pi }} (\Delta _{\mu _1}-{\varvec{Q}})^{-1},\,\,\, (v_{20}, v_{21}) = (0,\,\mu _1v_{11}) (\Delta _{\mu _2}-{\varvec{Q}})^{-1}. \end{aligned}$$

The number of clients lost per unit of time is \(\mu _1 v_{10}.\)
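As a sketch of how these matrix expressions can be evaluated, the following pure-Python fragment (exact rational arithmetic; parameter values are illustrative) computes the four steady-state reduced means and verifies that they make the stationary versions of the four differential equations vanish:

```python
from fractions import Fraction as F

# Steady-state reduced means of the two-node tandem (f_12 = 1);
# illustrative parameters.
lam, mu1, mu2 = F(1), F(3), F(2)
q0, q1 = F(1), F(2)
pi0, pi1 = q1/(q0+q1), q0/(q0+q1)       # link down / up probabilities

def solve_row(b, A):
    # solve x A = b for a row vector x, with A a 2x2 matrix
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    inv = [[A[1][1]/det, -A[0][1]/det], [-A[1][0]/det, A[0][0]/det]]
    return [b[0]*inv[0][0] + b[1]*inv[1][0], b[0]*inv[0][1] + b[1]*inv[1][1]]

Q  = [[-q0, q0], [q1, -q1]]
A1 = [[mu1 - Q[0][0], -Q[0][1]], [-Q[1][0], mu1 - Q[1][1]]]  # Delta_{mu1} - Q
A2 = [[mu2 - Q[0][0], -Q[0][1]], [-Q[1][0], mu2 - Q[1][1]]]  # Delta_{mu2} - Q
v10, v11 = solve_row([lam*pi0, lam*pi1], A1)
v20, v21 = solve_row([F(0), mu1*v11], A2)

# the solution must make all four stationary (drift) equations vanish
assert lam*pi0 + q1*v11 - q0*v10 - mu1*v10 == 0
assert lam*pi1 + q0*v10 - q1*v11 - mu1*v11 == 0
assert q1*v21 - q0*v20 - mu2*v20 == 0
assert mu1*v11 + q0*v20 - q1*v21 - mu2*v21 == 0
print(v10, v11, v20, v21, mu1*v10)      # mu1*v10 = loss rate per unit time
```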

6.2 Two-Node Tandem, with Blocked Customers Retrying

The model in which \(f_{12}=0\) has more intricate interactions, as the link between the nodes being down has impact on the number of clients at node 1. It means that in the set of differential equations that we set up for \(f_{12}=1\), the first one has to be replaced by \(v_{10}'(t) = \lambda p_0(t) + q_1v_{11}(t) - q_0v_{10}(t).\) Again the time-dependent means can be found in closed form by solving two 2-dimensional systems of linear differential equations, and the stationary means by solving two pairs of linear equations. As it turns out, however, we can also explicitly find the distribution of the stationary number of clients residing at node 1, as follows; the resulting formulae reveal the effect of the link failures on the performance experienced at this node.

Let \(\varphi _0(z)\) be the probability generating function of the stationary number of customers at node 1 jointly with the event that the link is down, and \(\varphi _1(z)\) its counterpart jointly with the link being up. The following (differential) equations apply:

$$\begin{aligned} \lambda (z-1)\varphi _0(z) -q_0 \varphi _0(z)+ q_1\varphi _1(z)&=0,\\ \lambda (z-1)\varphi _1(z) -\mu _1(z-1)\varphi '_1(z) -q_1 \varphi _1(z)+ q_0\varphi _0(z)&=0. \end{aligned}$$

Inserting \(\varphi _0(z)= q_1 \varphi _1(z)/(q_0+\lambda (1-z))\) into the second equation, we obtain

$$\begin{aligned} \lambda (z-1)\varphi _1(z) -\mu _1(z-1)\varphi '_1(z) -q_1 \varphi _1(z)+q_0\frac{q_1\varphi _1(z)}{(q_0+\lambda (1-z))}=0, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \frac{\varphi '_1(z)}{\varphi _1(z)} = \frac{\lambda }{\mu _1}\left( 1+\frac{q_1}{q_0+\lambda (1-z)}\right) . \end{aligned}$$

Up to an additive constant, we thus obtain that

$$\begin{aligned} \log \varphi _1(z) = \frac{\lambda }{\mu _1} z- \frac{q_1}{\mu _1}\log (q_0+\lambda (1-z)), \end{aligned}$$

and using that \(\varphi _1(1)=\pi _1\),

$$\begin{aligned} \varphi _1(z) = \pi _1 \exp \left( \frac{\lambda }{\mu _1}(z-1)\right) \left( \frac{q_0}{q_0+\lambda (1-z)}\right) ^{q_1/\mu _1}. \end{aligned}$$

Using the relation between \(\varphi _0(z)\) and \(\varphi _1(z)\), we find that the transform of the stationary number at the first node equals

$$\begin{aligned} \varphi _0(z)+\varphi _1(z)= & {} \exp \left( \frac{\lambda }{\mu _1}(z-1)\right) \left( \pi _0\left( \frac{q_0}{q_0+\lambda (1-z)}\right) ^{q_1/\mu _1+1} \right. \\&\left. +\pi _1 \left( \frac{q_0}{q_0+\lambda (1-z)}\right) ^{q_1/\mu _1} \right) . \end{aligned}$$

This expression has the following nice interpretation. Let A be Poisson with mean \(\lambda /\mu _1\), and let B with probability \(\pi _0\) be a negative binomial random variable with parameters \(r:=q_1/\mu _1+1\) and \(p:= q_0/(q_0+\lambda )\) and with probability \(\pi _1\) a negative binomial random variable with parameters \(r-1\) and p. Then the stationary number of customers at the first node is distributed as the sum of two independent random variables A and B (both non-negative and integer-valued). Note that if the link were never down (i.e., \(\pi _1=1\) and \(q_1=\infty \)), the number of customers at node 1 would just be Poisson with mean \(\lambda /\mu _1\) (as in the ordinary M/M/\(\infty \) queue); the additional random variable B (a mixture of two negative binomial random variables) thus represents the effect of the link failures.
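The distributional interpretation can be checked numerically: the sketch below (illustrative parameters) convolves the Poisson pmf of A with the mixed negative binomial pmf of B and compares the resulting generating function, evaluated at a test point, with the closed-form transform.

```python
import math

# Check of the Poisson-plus-mixed-negative-binomial interpretation of the
# stationary number at node 1 (f_12 = 0); illustrative parameters.
lam, mu1, q0, q1 = 2.0, 1.0, 1.0, 2.0
pi0, pi1 = q1/(q0+q1), q0/(q0+q1)
p = q0/(q0+lam)
r = q1/mu1 + 1.0

def poisson_pmf(mean, K):
    pmf = [math.exp(-mean)]
    for k in range(K):
        pmf.append(pmf[-1]*mean/(k+1))
    return pmf

def negbin_pmf(r, p, K):
    # P(B=k) = Gamma(k+r)/(Gamma(r) k!) p^r (1-p)^k via the standard recursion
    pmf = [p**r]
    for k in range(K):
        pmf.append(pmf[-1]*(k+r)/(k+1)*(1.0-p))
    return pmf

K = 120
pa = poisson_pmf(lam/mu1, K)
pb = [pi0*x + pi1*y
      for x, y in zip(negbin_pmf(r, p, K), negbin_pmf(r-1.0, p, K))]
conv = [sum(pa[j]*pb[k-j] for j in range(k+1)) for k in range(K+1)]

z = 0.5
lhs = sum(pk * z**k for k, pk in enumerate(conv))        # E[z^(A+B)]
rhs = math.exp(lam/mu1*(z-1.0)) * (pi0*(q0/(q0+lam*(1.0-z)))**r
                                   + pi1*(q0/(q0+lam*(1.0-z)))**(r-1.0))
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```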

A numerical illustration is shown in Fig. 1, where we have estimated the stationary random variable \({\varvec{M}}\) by simulation. We have fixed the parameters \(\lambda = 20, \mu _1 =3, \mu _2 = 2, q_0=1\) and \(f=0\), and have varied the parameter \(q_1\), choosing \(q_1=0.01\), 0.5, 1, and 3. This experiment visualizes the impact of the link failures on the random variable \({\varvec{M}}\); the left graph corresponds to the upstream queue, the right graph to the downstream queue. The simulated numbers in the left panel align with the distribution identified above.

Fig. 1 Stationary probability density function

6.3 Functional Central Limit Theorem for Two-Node Tandem

In this example we derive the FCLT for the two-node tandem. Clients wishing to jump from node 1 to node 2 while the link is down are lost with probability \(f:=f_{12}\), and stay at node 1 with probability \(1-f.\) First we determine the fluid limit (which is to be used as the ‘centering function’ in our FCLT). We use the same notation as in the above examples, and in addition we define \(\kappa :=\mu _1(\pi +(1-\pi )f)\) with \(\pi :=\pi _1=q_0/(q_0+q_1)\). Then

$$\begin{aligned} \varrho '_1(t)&=\lambda -\kappa \varrho _1(t), \,\,\, \varrho _2'(t) = \mu _1\pi \varrho _1(t) -\mu _2\varrho _2(t). \end{aligned}$$

Assuming the system starts empty, we thus obtain

$$\begin{aligned} \varrho _1(t)=\frac{\lambda }{\kappa }(1-\mathrm{e}^{-\kappa t}),\,\,\varrho _2(t) = \frac{\mu _1\pi \lambda }{\kappa }\left( \frac{1-\mathrm{e}^{-\mu _2 t}}{\mu _2} -\frac{\mathrm{e}^{-\kappa t}-\mathrm{e}^{-\mu _2 t}}{\mu _2-\kappa }\right) . \end{aligned}$$
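The closed-form fluid limit can be checked by integrating the ODEs numerically; the sketch below (hypothetical parameter values, chosen so that \(\kappa \ne \mu _2\), and assuming scipy) compares a high-accuracy numerical solution with the formulas above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values (with f = 0, so kappa = mu1 * pi != mu2).
lam, mu1, mu2, q0, q1, f = 20.0, 3.0, 2.0, 1.0, 1.0, 0.0
pi = q0 / (q0 + q1)
kappa = mu1 * (pi + (1.0 - pi) * f)

def ode(t, rho):
    # the fluid-limit ODEs displayed above
    r1, r2 = rho
    return [lam - kappa * r1, mu1 * pi * r1 - mu2 * r2]

def closed_form(t):
    r1 = lam / kappa * (1.0 - np.exp(-kappa * t))
    r2 = mu1 * pi * lam / kappa * (
        (1.0 - np.exp(-mu2 * t)) / mu2
        - (np.exp(-kappa * t) - np.exp(-mu2 * t)) / (mu2 - kappa))
    return np.array([r1, r2])

# start empty, integrate to t = 5
sol = solve_ivp(ode, (0.0, 5.0), [0.0, 0.0], rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[:, -1], closed_form(5.0), atol=1e-6)
```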

It can be checked that

$$\begin{aligned} {\mathscr {M}}= \left( \begin{array}{cc} -\kappa &{}0\\ \mu _1\pi &{}-\mu _2\end{array}\right) ,\,\,\, {\mathscr {M}}^\circ (s) =\left( \begin{array}{cc} -\varrho _1(s)\mu _1 f&{}-\varrho _1(s)\mu _1\\ -\varrho _2(s)\mu _2&{}\varrho _1(s)\mu _1-\varrho _2(s)\mu _2\end{array}\right) . \end{aligned}$$

In addition, with \(q:=q_0+q_1\),

$$\begin{aligned} \Sigma := \frac{1}{q^2}\left( \begin{array}{cc} 2(1-\pi )q_0 &{} -(1-\pi )q_0 - \pi q_1\\ -(1-\pi )q_0 - \pi q_1 &{} 2\pi q_1\end{array}\right) . \end{aligned}$$

With

$$\begin{aligned} \mathrm{e}^{{\mathscr {M}}t} = \left( \begin{array}{cc} \mathrm{e}^{-\kappa t}&{}0\\ \mu _1\pi (\mathrm{e}^{-\mu _2 t}-\mathrm{e}^{-\kappa t})/(\kappa -\mu _2)&{}\mathrm{e}^{-\mu _2 t}\end{array}\right) , \end{aligned}$$

the matrix \({\mathbb {C}}\mathrm{ov}(\tilde{\varvec{M}}(t),\tilde{\varvec{M}}(t))\) can be evaluated using the expressions presented in Remark 4.
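The closed-form matrix exponential displayed above can be verified against a general-purpose routine; the sketch below uses scipy.linalg.expm with hypothetical parameter values.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical parameter values; the closed form requires kappa != mu2.
mu1, mu2, q0, q1, f = 3.0, 2.0, 1.0, 1.0, 0.0
pi = q0 / (q0 + q1)                     # = 1/2
kappa = mu1 * (pi + (1.0 - pi) * f)     # = 1.5

M = np.array([[-kappa, 0.0],
              [mu1 * pi, -mu2]])

def expM_closed(t):
    # the closed-form exponential of the lower-triangular matrix M
    return np.array([
        [np.exp(-kappa * t), 0.0],
        [mu1 * pi * (np.exp(-mu2 * t) - np.exp(-kappa * t)) / (kappa - mu2),
         np.exp(-mu2 * t)],
    ])

t = 1.3
assert np.allclose(expm(M * t), expM_closed(t))
```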

We now present a number of figures that illustrate the applicability of the diffusion limit as an approximation to the original population process. First we consider the scaling parameter \(N=100\) and the parameters \(\lambda =25\), \(\mu _1=10\), \(\mu _2=20,\) and \(f=0\). In the two simulations we varied the transition rates: they are \(q_0=0.2\), \(q_1=0.6\) in the left sample path, and \(q_0=30\), \(q_1=20\) in the right sample path. We pick \(\alpha =1\), so that the arrival rate \(\lambda ^{(N)} \) is \(N\lambda \), whereas the transition rates are set to \(q_0^{(N)} = N q_0\) and \(q_1^{(N)}=Nq_1\). The red curves in Fig. 2 correspond to the functions \(N\varrho _1(\cdot )\) and \(N\varrho _2(\cdot )\), where \(\varrho _1(\cdot )\) and \(\varrho _2(\cdot )\) are the two ‘centering functions’ computed above. The blue and green curves are the corresponding sample paths.

Fig. 2 Sample paths and centering functions

In Fig. 3 histograms are presented for the centered and scaled population process in each queue. The parameters \(\lambda \), \(\mu _1\), \(\mu _2\) and f are chosen as above; the transition rates of the background process are \(q_0=30\) and \(q_1=20\). We consider \(t=2\) and \(N=60\). The graphs show that the Gaussian limiting distribution provides an accurate fit; the dotted curves correspond to the zero-mean Gaussian distribution with the variance in line with the diffusion limit.

Fig. 3 Histograms for centered and scaled stationary population

6.4 Functional Central Limit Theorem for Symmetric Fully Connected One-Block Network

In this subsection we consider the functional central limit theorem for a network with just a single block (i.e., all links alternate between being ‘up’ and ‘down’ simultaneously), and all parameters chosen symmetrically; blocked customers are assumed lost (but the case in which they retry works analogously).

More concretely, the situation considered is the following. The arrival rate at each node is \(N\lambda \). The down-time of all links is exponentially distributed with mean \((Nq_0)^{-1}\), whereas the up-time is exponentially distributed with mean \((Nq_1)^{-1}\). As in Remark 3, the service rate is \(\sigma :=\nu +\mu _0\); after service completion a client leaves the network with probability \(\mu _0/\sigma \) and wants to move to another node (picked uniformly at random) with probability \(\nu /\sigma .\) Define \(\pi :=q_0/(q_0+q_1).\)

First we find the ‘centering function’ \(\varrho (\cdot )\):

$$\begin{aligned} \varrho '(t) = \lambda + (n-1)\varrho (t) \frac{\nu \pi }{n-1} - \varrho (t)\sigma , \end{aligned}$$

solved by, assuming the queues start empty and defining \(\kappa :=\nu (1-\pi )+\mu _0\),

$$\begin{aligned} \varrho (t) =\frac{\lambda }{\kappa }\left( 1-\mathrm{e}^{-\kappa t}\right) . \end{aligned}$$

Recalling that \(\bar{\mu }_{ij}:=\sum _{k=1}^{\bar{K}}\mu _{ijk}\pi _k\), we find that, for \(i\not =j\) and \(i,j\in \{1,\ldots ,n\}\), \({\mathscr {M}}(i,j)=\bar{\mu }_{ji} = \nu \pi /(n-1)\). In addition, \({\mathscr {M}}(i,i)=\bar{\mu }_{ii}=-\sigma \). As a consequence, with \(E_n\) an \(n\times n\) all-ones matrix,

$$\begin{aligned} {\mathscr {M}}=\omega _1 E_n+ \omega _2I_n,\,\,\,\, \omega _1:= \frac{\nu \pi }{n-1},\,\,\,\omega _2:=-\left( \frac{\nu \pi }{n-1}+\sigma \right) . \end{aligned}$$

It is readily checked that

$$\begin{aligned} \mathrm{e}^{{\mathscr {M}}t} = \left( E_n\left( \frac{\mathrm{e}^{\omega _1nt}-1}{n}\right) +I_n\right) \,\mathrm{e}^{\omega _2t}. \end{aligned}$$

The matrix \({\mathscr {M}}^\circ (s)\) is an \((n\times 2)\)-dimensional matrix whose entries in the first column (which correspond to the links being down) are all \(m_0(s):= -\varrho (s)\sigma \), and whose entries in the second column (which correspond to the links being up) are all \(m_1(s):= -\varrho (s)\mu _0\). The matrix \(\Sigma \) is as in Sect. 6.3.

Using the expressions from Remark 4, we obtain, with \({\varvec{1}}_n\) the n-dimensional all-ones vector and \({\varvec{m}}(s):=(m_0(s),m_1(s))^\mathrm{T}\),

$$\begin{aligned} {\mathbb {C}}\mathrm{ov}(\tilde{\varvec{M}}(t),\tilde{\varvec{M}}(t))=\int _0^t \mathrm{e}^{{\mathscr {M}}(t-s)}{\varvec{1}}_n ({\varvec{m}}(s))^\mathrm{T}\,\Sigma \,{\varvec{m}}(s)\,{\varvec{1}}_n^\mathrm{T}\,\big (\mathrm{e}^{{\mathscr {M}}(t-s)}\big )^\mathrm{T}\mathrm{d}s + \mathrm{diag}\{{\varvec{\varrho }}(t)\}. \end{aligned}$$
(8)

The next step is to explicitly evaluate the integral. To this end, we first observe that \(\mathrm{e}^{{\mathscr {M}}t}\,{\varvec{1}}_n =\mathrm{e}^{(\omega _1 n+\omega _2)t} \,{\varvec{1}}_n=\mathrm{e}^{-\kappa t}\,{\varvec{1}}_n.\)
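Both the closed form of \(\mathrm{e}^{{\mathscr {M}}t}\) and the identity \(\mathrm{e}^{{\mathscr {M}}t}{\varvec{1}}_n=\mathrm{e}^{-\kappa t}{\varvec{1}}_n\) are easy to confirm numerically; the sketch below (hypothetical parameter values, scipy assumed) does so for a small instance.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical symmetric parameter values.
n = 4
nu, mu0, q0, q1 = 2.0, 1.0, 1.0, 0.5
sigma = nu + mu0
pi = q0 / (q0 + q1)
kappa = nu * (1.0 - pi) + mu0

w1 = nu * pi / (n - 1)
w2 = -(nu * pi / (n - 1) + sigma)
E, I = np.ones((n, n)), np.eye(n)
M = w1 * E + w2 * I

t = 0.8
# closed form: (E (e^{w1 n t} - 1)/n + I) e^{w2 t}, using E^2 = n E
closed = (E * (np.exp(w1 * n * t) - 1.0) / n + I) * np.exp(w2 * t)
assert np.allclose(expm(M * t), closed)
# row-sum identity: exp(Mt) 1 = exp(-kappa t) 1, since w1 n + w2 = -kappa
assert np.allclose(expm(M * t) @ np.ones(n), np.exp(-kappa * t) * np.ones(n))
```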

We now evaluate \(({\varvec{m}}(s))^\mathrm{T}\,\Sigma \,{\varvec{m}}(s)\). With \({\varvec{x}}\in {\mathbb {R}}^{\bar{K}}\), \(\alpha \in {\mathbb {R}}\), and \({\varvec{1}}\equiv {\varvec{1}}_{\bar{K}}\),

$$\begin{aligned} ({\varvec{x}}- \alpha {\varvec{1}})^\mathrm{T}\Sigma ({\varvec{x}}- \alpha {\varvec{1}})&= ({\varvec{x}}- \alpha {\varvec{1}})^\mathrm{T}(\mathrm{diag}\{\varvec{\pi }\}D + D^\mathrm{T} \mathrm{diag}\{\varvec{\pi }\}) ({\varvec{x}}- \alpha {\varvec{1}})\\&= ({\varvec{x}}- \alpha {\varvec{1}})^\mathrm{T}\mathrm{diag}\{\varvec{\pi }\}D {\varvec{x}} + {\varvec{x}}^\mathrm{T} D^\mathrm{T} \mathrm{diag}\{\varvec{\pi }\} ({\varvec{x}}- \alpha {\varvec{1}}) \end{aligned}$$

due to \(D{\varvec{1}}={\varvec{0}}.\) In addition, \({\varvec{\pi }}^\mathrm{T}D={\varvec{1}}^\mathrm{T} \mathrm{diag}\{\varvec{\pi }\} D={\varvec{0}}\), so that

$$\begin{aligned} ({\varvec{x}}- \alpha {\varvec{1}})^\mathrm{T}\Sigma ({\varvec{x}}- \alpha {\varvec{1}})= {\varvec{x}}^\mathrm{T} \mathrm{diag}\{\varvec{\pi }\}D {\varvec{x}} + {\varvec{x}}^\mathrm{T} D^\mathrm{T} \mathrm{diag}\{\varvec{\pi }\} {\varvec{x}}= {\varvec{x}}^\mathrm{T}\Sigma {\varvec{x}}. \end{aligned}$$

Therefore, when evaluating \(({\varvec{m}}(s))^\mathrm{T}\,\Sigma \,{\varvec{m}}(s)\), we may replace \({\varvec{m}}(s)\) by \({\varvec{m}}(s) +\varrho (s)\mu _0\,{\varvec{1}}\), and consequently

$$\begin{aligned} ({\varvec{m}}(s))^\mathrm{T}\,\Sigma \,{\varvec{m}}(s)&= q^{-2}{(-\varrho (s)\nu ,\,0)}\left( \begin{array}{cc} 2(1-\pi )q_0 &{} -(1-\pi )q_0 - \pi q_1\\ -(1-\pi )q_0 - \pi q_1 &{} 2\pi q_1\end{array}\right) {\left( \begin{array}{c} -\varrho (s)\nu \\ 0\end{array}\right) }\\&= 2 q^{-2} \,(\varrho (s))^2\,\nu ^2(1-\pi )q_0 =2q_0q_1\,(\varrho (s))^2\,\nu ^2/q^3. \end{aligned}$$
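Both the shift-invariance of the quadratic form and the resulting scalar value can be confirmed numerically; the sketch below uses hypothetical parameter values, with \(\varrho (s)\) replaced by an arbitrary positive stand-in.

```python
import numpy as np

# Hypothetical parameter values; rho_s stands in for rho(s).
nu, mu0, q0, q1 = 2.0, 1.0, 1.0, 0.5
sigma = nu + mu0
q = q0 + q1
pi = q0 / q
rho_s = 1.7

# the matrix Sigma of Sect. 6.3 (rows sum to zero)
Sigma = (1.0 / q**2) * np.array([
    [2.0 * (1.0 - pi) * q0, -(1.0 - pi) * q0 - pi * q1],
    [-(1.0 - pi) * q0 - pi * q1, 2.0 * pi * q1],
])

m = np.array([-rho_s * sigma, -rho_s * mu0])
shifted = m + rho_s * mu0 * np.ones(2)     # equals (-rho_s * nu, 0)

# shifting by a multiple of the all-ones vector leaves the form unchanged
assert np.isclose(m @ Sigma @ m, shifted @ Sigma @ shifted)
# the scalar value computed above: 2 q0 q1 rho(s)^2 nu^2 / q^3
assert np.isclose(m @ Sigma @ m, 2.0 * q0 * q1 * rho_s**2 * nu**2 / q**3)
```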

Noting that \({\varvec{1}}_n\,{\varvec{1}}_n^\mathrm{T}=E_n\), we conclude that (8) can be written as \(\xi _n(t) E_n+\mathrm{diag}\{{\varvec{\varrho }}(t)\}\), where

$$\begin{aligned} \xi _n(t):=&\,\,2 q_0q_1\frac{\lambda ^2\nu ^2}{\kappa ^2q^3} \, \int _0^t (1-2\mathrm{e}^{-\kappa s}+\mathrm{e}^{-2\kappa s}) \mathrm{e}^{-2\kappa (t-s)}\mathrm{d}s\\ =&\,\,2 q_0q_1 \frac{\lambda ^2\nu ^2}{\kappa ^2q^3} \left( \frac{1-\mathrm{e}^{-2\kappa t}}{2\kappa }-2\mathrm{e}^{-\kappa t}\frac{1-\mathrm{e}^{-\kappa t}}{\kappa }+t\,\mathrm{e}^{-2\kappa t}\right) . \end{aligned}$$

With \(\varrho :=\lambda /\kappa \), we also obtain

$$\begin{aligned} \lim _{t\rightarrow \infty } {\mathbb {C}}\mathrm{ov}(\tilde{\varvec{M}}(t),\tilde{\varvec{M}}(t))= q_0q_1 \frac{\lambda ^2\nu ^2}{\kappa ^3q^3} \,E_n+ \mathrm{diag}\{{\varvec{\varrho }}\}. \end{aligned}$$
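The closed form of \(\xi _n(t)\) and its \(t\rightarrow \infty \) limit can be checked against numerical quadrature; below is a sketch with hypothetical parameter values (scipy assumed).

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameter values, in the notation of this subsection.
nu, mu0, q0, q1, lam = 2.0, 1.0, 1.0, 0.5, 5.0
q = q0 + q1
pi = q0 / q
kappa = nu * (1.0 - pi) + mu0
pref = 2.0 * q0 * q1 * lam**2 * nu**2 / (kappa**2 * q**3)

def xi_closed(t):
    # the evaluated integral displayed above
    return pref * ((1.0 - np.exp(-2.0 * kappa * t)) / (2.0 * kappa)
                   - 2.0 * np.exp(-kappa * t) * (1.0 - np.exp(-kappa * t)) / kappa
                   + t * np.exp(-2.0 * kappa * t))

def xi_numeric(t):
    # direct quadrature of the defining integral
    integrand = lambda s: (pref * (1.0 - np.exp(-kappa * s))**2
                           * np.exp(-2.0 * kappa * (t - s)))
    return quad(integrand, 0.0, t)[0]

assert abs(xi_closed(2.0) - xi_numeric(2.0)) < 1e-6
# the stationary value q0 q1 lam^2 nu^2 / (kappa^3 q^3)
limit = q0 * q1 * lam**2 * nu**2 / (kappa**3 * q**3)
assert abs(xi_closed(60.0) - limit) < 1e-10
```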

6.5 Functional Central Limit Theorem for Symmetric Ring-Shaped One-Block Network

The setting considered is the same as in the previous subsection, with the only exception that a job served at queue m moves to queue \(m+1\) (where \(n+1\) is to be understood as 1). More specifically, the service rate is \(\sigma :=\nu +\mu _0\); after service completion a client leaves the network with probability \(\mu _0/\sigma \) and wants to move to the next node with probability \(\nu /\sigma .\) We concentrate on the case that \(f_{i,i+1} = f_{n,1}=1\) (i.e., during outages jobs that wish to jump to the next queue are lost), but we remark that the case of retry can be handled analogously. The shape of the centering function \(\varrho (\cdot )\) is as in the previous subsection, with the same \(\kappa =\nu (1-\pi )+\mu _0.\)

It is verified that \(\bar{\mu }_{i,i+1}=\nu \pi \) for \(i=1,\ldots ,n-1\), \(\bar{\mu }_{n,1}=\nu \pi \), and \(\bar{\mu }_{ii}=-\sigma \). As a consequence, with \(F_n\) denoting an \(n\times n\) matrix with ones on the subdiagonal and at entry (1, n),

$$\begin{aligned} {\mathscr {M}}=\omega _1 F_n +\omega _2 I_n,\,\,\,\omega _1:=\nu \pi ,\,\,\,\omega _2:=-\sigma . \end{aligned}$$

The matrix \({\mathscr {M}}^\circ (s)\) is, analogously to what we found in Sect. 6.4, an \((n\times 2)\)-dimensional matrix whose entries in the first column are all \(m_0(s):= -\varrho (s)\sigma \), and whose entries in the second column are all \(m_1(s):= -\varrho (s)\mu _0\). The matrix \(\Sigma \) is as defined in Sect. 6.3.

Observe that \(F_n{\varvec{1}}_n={\varvec{1}}_n\) (as \(F_n\) is a permutation matrix), and therefore \(F_n^{\,k}{\varvec{1}}_n={\varvec{1}}_n\), so that

$$\begin{aligned} \mathrm{e}^{\omega _1 F_n t} {\varvec{1}}_n = \sum _{k=0}^\infty \frac{F_n^{\,k}{\varvec{1}}_n}{k!}(\omega _1 t)^k = \mathrm{e}^{\omega _1 t}{\varvec{1}}_n. \end{aligned}$$

This allows us to conclude that \(\mathrm{e}^{{\mathscr {M}}t} {\varvec{1}}_n=\mathrm{e}^{(\omega _1+\omega _2)t}\,{\varvec{1}}_n=\mathrm{e}^{-\kappa t}\,{\varvec{1}}_n\), matching what we found in Sect. 6.4. The remaining computations are as in the previous subsection.
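The permutation-matrix identity used above is easily confirmed numerically for a small instance; in the sketch below, the value of \(\omega _1\) is an arbitrary stand-in.

```python
import numpy as np
from scipy.linalg import expm

# Small hypothetical instance; w1 stands in for nu * pi.
n, w1 = 5, 0.7
F = np.zeros((n, n))
F[0, n - 1] = 1.0                                # entry (1, n)
F[np.arange(1, n), np.arange(0, n - 1)] = 1.0    # subdiagonal

ones = np.ones(n)
assert np.allclose(F @ ones, ones)               # F_n is a permutation matrix
t = 1.2
# exp(w1 F t) 1 = exp(w1 t) 1, since F^k 1 = 1 for all k
assert np.allclose(expm(w1 * F * t) @ ones, np.exp(w1 * t) * ones)
```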

7 Concluding Remarks

In this paper we have considered networks of infinite-server queues with faulty links. Clients that wish to jump from one queue to another while the required link is down are with a given probability lost (and otherwise stay at the origin node to retry after an exponentially distributed amount of time). For this model we derived prelimit results (in terms of differential equations uniquely characterizing the probability generating function, as well as a recursion by which all moments can be determined) as well as a functional central limit theorem (after appropriately scaling the arrival rates and the links’ failure and repair rates).

This work is among the first papers on queueing processes on dynamically evolving random graphs. Several alternative models can be considered; we mention a few here. (i) In our work all queues were of infinite-server type. In many applications, one would rather be interested in single-server or many-server queueing disciplines. (ii) Our probabilistic analysis covers means and diffusion limits, but extreme behavior (‘far away from the mean’) is not included. Such a rare-event analysis sheds light on the probability that the queueing process attains values in remote sets. (iii) In dynamically evolving networks, measures are typically taken when links fail; think of rerouting mechanisms. This makes the systematic study of the efficacy of such rerouting protocols a relevant topic for further study.