1 Introduction

In the queueing literature, one has traditionally studied queues with Poisson input. The Poisson assumption typically facilitates explicit analysis, but it does not always align well with actual data; see, for example, [11] and references therein. More specifically, statistical studies show that in many practical situations, Poisson processes underestimate the variability of the queue’s input stream. This observation has motivated research on queues fed by arrival processes that better capture the burstiness observed in practice.

The extent of this burstiness can be measured by the dispersion index, i.e., the ratio of the variance of the number of arrivals in a given interval to the corresponding expected value. In arrival streams that display burstiness, the dispersion index is larger than unity (as opposed to Poisson processes, for which it equals unity), a phenomenon usually referred to as overdispersion. It is desirable that the arrival process of the queueing model takes the observed overdispersion into account. One way to achieve this is to make use of Cox processes, i.e., Poisson processes conditional on a stochastic time-dependent intensity. It is an immediate consequence of the law of total variance that Cox processes have a dispersion index larger than unity. This class of processes is therefore a good candidate for modeling overdispersed input processes.

In this paper we contribute to the development of queueing models fed by input streams that exhibit overdispersion. We analyze infinite-server queues driven by a particular Cox process, in which the rate is a (stochastic) shot-noise process.

The shot-noise process we use is one in which there are only upward jumps (or shots) that arrive according to a homogeneous Poisson process. Furthermore, we employ an exponential ‘response’ or ‘decay’ function, which encodes how quickly the process will decline after a jump. In this case, the shot-noise process is a Markov process; see [16, p. 393]. There are several variations on shot-noise processes; see, for example, [10] for a comprehensive overview.

It is not a novel idea to use a shot-noise process as stochastic intensity. For instance, in insurance mathematics, the authors of [5] use a shot-noise-driven Cox process to model the claim count. They assume that disasters happen according to a Poisson process, and each disaster can induce a cluster of arriving claims. The disaster corresponds to a shot upwards in the claim intensity. As time passes, the claim intensity process decreases, as more and more claims are settled.

Another example of shot-noise arrival processes is found in the famous paper [13], where they are used to model the occurrence of earthquakes. The arrival process considered in [13] differs from the one used in this paper in one crucial way: it makes use of Hawkes processes [9], which do have a shot-noise structure, but have the special feature of being self-exciting. More specifically, in Hawkes processes an arrival induces a shot in the arrival rate, whereas in our shot-noise-driven Cox model the shots are merely exogenous. The Hawkes process is less tractable than the shot-noise-driven Cox process. A very recent effort to analyze \(\cdot /G/\infty \) queues driven by a Hawkes process was made in [8], where a functional central limit theorem is derived for the number of jobs in the system. For that model, obtaining explicit results (in a non-asymptotic setting), as we are able to do in the shot-noise-driven Cox variant, is still an open problem.

In order to successfully implement a theoretical model, it is crucial to have methods to estimate its parameters from data. The shot-noise-driven Cox process is attractive since it has this property. Statistical methods that filter the unobservable intensity process, based on Markov Chain Monte Carlo (MCMC) techniques, have been developed; see [3] and references therein. By filtering, they refer to the estimation of the intensity process in a given time interval, given a realized arrival process. Subsequently, given this ‘filtered path’ of the intensity process, the parameters of the shot-noise process can be estimated by a Monte Carlo version of the expectation maximization (EM) method. Furthermore, the shot-noise-driven Cox process can also be easily simulated; see, for example, the thinning procedure described in [12].

In this paper we study networks of infinite-server queues with shot-noise-driven Cox input. We assume that the service times at a given node are i.i.d. samples from a general distribution. The output of a queue is routed to a next queue, or leaves the network. Infinite-server queues have the inherent advantage that jobs do not interfere with one another, which considerably simplifies the analysis. Furthermore, infinite-server systems are frequently used to produce approximations for corresponding finite-server systems. In the network setting, we can model queueing systems that are driven by correlated shot-noise arrival processes. With regards to applications, such a system could, for example, represent the call centers of a fire department and police department in the same town.

The contributions and organization of this paper are as follows. In this paper we derive exact and asymptotic results. The main result of the exact analysis is Theorem 4.6, where we find the joint Laplace transform of the numbers of jobs in the queues of a feedforward network, jointly with the shot-noise-driven arrival rates. We build up toward this result as follows. In Sect. 2 we introduce notation and we state the important Lemma 2.1 that we repeatedly rely on. Then we derive exact results for the single infinite-server queue with a shot-noise arrival rate in Sect. 3.1. Subsequently, in Sect. 3.2, we show that after an appropriate scaling the number of jobs in the system satisfies a functional central limit theorem (Theorem 3.4); the limiting process is an Ornstein–Uhlenbeck (OU) process driven by a superposition of a Brownian motion and an integrated OU process. We then extend the theory to a network setting in Sect. 4. Before we consider full-blown networks, we first consider a tandem system consisting of an arbitrary number of infinite-server queues in Sect. 4.1. Then it is argued in Sect. 4.2 that a feedforward network can be seen as a number of tandem queues in parallel. We analyze two different ways in which dependency can enter the system through the arrival process. Firstly, in Model (M1), parallel service facilities are driven by a multidimensional shot-noise process in which the shots are simultaneous (which includes the possibility that all shot-noise processes are equal). Secondly, in Model (M2), we assume that there is one shot-noise arrival intensity that generates simultaneous arrivals in all tandems. In Sect. 5 we finish with some concluding remarks.

2 Notation and preliminaries

Let \((\Omega , {\mathcal {F}}, \{{\mathcal {F}}_t\}_{t\ge 0},{{\mathrm{{\mathbb {P}}}}})\) be a filtered probability space, where the filtration \(\{{\mathcal {F}}_t\}_{t\ge 0}\) is such that \(\Lambda (\cdot )\) is adapted to it. A shot-noise process is a process that exhibits random jumps at Poisson epochs, with a deterministic ‘response’ or ‘decay’ function that governs the behavior of the process between jumps. See [16, Section 8.7] for a brief account of shot-noise processes. The shot noise that we use in this paper has the following representation:

$$\begin{aligned} \Lambda (t) = \Lambda (0)e^{-rt} + \sum _{i=1}^{P_B(t)} B_i e^{-r(t-t_i)}, \end{aligned}$$
(1)

where the \(B_i\ge 0\) are i.i.d. shots from a general distribution, the decay function is exponential with rate \(r>0\), \(P_B\) is a homogeneous Poisson process with rate \(\nu \), and the epochs of the shots that arrived before time t are labeled \(t_1,t_2,\ldots ,t_{P_B(t)}\).

As explained in the introduction, the shot-noise process serves as a stochastic arrival rate to a queueing system. It is straightforward to simulate a shot-noise process; for an illustration of a sample path, consider Fig. 1. Using the thinning method for nonhomogeneous Poisson processes [12], and using the sample path of Fig. 1 as the arrival rate, one can generate a corresponding sample path for the arrival process, as is displayed in Fig. 2. Typically, most arrivals occur shortly after peaks in the shot-noise process in Fig. 1, as expected.
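The simulation procedure just described can be sketched in a few lines of Python. The parameter values and the exponential shot-size distribution below are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

nu, r, t_end = 1.0, 0.5, 50.0   # shot rate, decay rate, time horizon (illustrative)

# Shot epochs of the Poisson process P_B on [0, t_end] and i.i.d. shot sizes B_i;
# given the number of shots, the epochs are uniform on the interval.
num_shots = rng.poisson(nu * t_end)
shot_times = np.sort(rng.uniform(0.0, t_end, num_shots))
shot_sizes = rng.exponential(1.0, num_shots)          # B_i ~ Exp(1), an assumption

def intensity(t):
    """Shot-noise intensity (1) at time t, with Lambda(0) = 0."""
    active = shot_times <= t
    return float(np.sum(shot_sizes[active] * np.exp(-r * (t - shot_times[active]))))

# The path decreases between shots and jumps only at shot epochs, so its
# supremum over [0, t_end] is attained immediately after one of the shots.
lam_bar = max((intensity(s) for s in shot_times), default=0.0)

# Thinning: simulate a rate-lam_bar Poisson process and accept a candidate
# epoch t with probability intensity(t) / lam_bar.
arrivals = []
t = 0.0
while lam_bar > 0.0:
    t += rng.exponential(1.0 / lam_bar)
    if t > t_end:
        break
    if rng.uniform() <= intensity(t) / lam_bar:
        arrivals.append(t)
```

Because the path of (1) only jumps upward at shot epochs, the maximum over the shot epochs is an exact dominating rate for the thinning step, so no numerical bound is needed.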

Fig. 1 Sample path of a shot-noise process

Fig. 2 A realization of the arrival process corresponding to the sample path of the arrival rate process in Fig. 1

We write \(\Lambda \) (i.e., without argument) for a random variable with distribution equal to that of \(\lim _{t\rightarrow \infty }\Lambda (t)\). We now present well-known transient and stationary moments of the shot-noise process, see Appendix 2 and, for example [16]: with B distributed as \(B_1\),

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\Lambda (t) = \Lambda (0)e^{-rt} + \frac{\nu {{\mathrm{{\mathbb {E}}}}}B}{r}(1-e^{-rt})&, \quad {{\mathrm{{\mathbb {E}}}}}\Lambda =\frac{\nu {{\mathrm{{\mathbb {E}}}}}B}{r},\nonumber \\ {{\mathrm{{\mathbb {V}}ar}}}\Lambda (t)= \frac{\nu {{\mathrm{{\mathbb {E}}}}}B^2}{2r}(1-e^{-2rt})&, \quad {{\mathrm{{\mathbb {V}}ar}}}\Lambda = \frac{\nu {{\mathrm{{\mathbb {E}}}}}B^2}{2r}, \nonumber \\ {{\mathrm{{\mathbb {C}}ov}}}(\Lambda (t),\Lambda (t+\delta ))= e^{-r\delta } {{\mathrm{{\mathbb {V}}ar}}}\Lambda (t)&. \end{aligned}$$
(2)

We remark that, for convenience, we throughout assume \(\Lambda (0)=0\). The results can be readily extended to the case in which \(\Lambda (0)\) is a nonnegative random variable, at the cost of a somewhat more cumbersome notation.
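As a quick numerical sanity check of the transient formulas in (2), one can sample \(\Lambda (t)\) directly: the number of shots in [0, t] is Poisson, and given their number the shot epochs are uniform on the interval. The sketch below uses illustrative parameters and exponential shot sizes (an assumption, so that \({{\mathrm{{\mathbb {E}}}}}B=1\) and \({{\mathrm{{\mathbb {E}}}}}B^2=2\)).

```python
import numpy as np

rng = np.random.default_rng(7)
nu, r, t = 2.0, 0.8, 3.0          # illustrative parameters
EB, EB2 = 1.0, 2.0                # B ~ Exp(1): E B = 1, E B^2 = 2 (assumed shot law)

n = 200_000
# Sample Lambda(t) path by path: Poisson(nu*t) shots, uniform epochs on [0, t].
counts = rng.poisson(nu * t, size=n)
total = counts.sum()
epochs = rng.uniform(0.0, t, total)
sizes = rng.exponential(1.0, total)
vals = sizes * np.exp(-r * (t - epochs))      # contribution B_i e^{-r(t - t_i)}

# Sum the contributions belonging to the same sample path.
idx = np.repeat(np.arange(n), counts)
lam_t = np.bincount(idx, weights=vals, minlength=n)

mean_th = nu * EB / r * (1 - np.exp(-r * t))          # transient mean from (2)
var_th = nu * EB2 / (2 * r) * (1 - np.exp(-2 * r * t))  # transient variance from (2)
```

The empirical mean and variance of `lam_t` should agree with `mean_th` and `var_th` up to Monte Carlo error.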

In the one-dimensional case, we denote \( \beta (s) = {{\mathrm{{\mathbb {E}}}}}e^{-s B} \), and in the multidimensional case, where \(s=(s_1,s_2,\ldots ,s_d)\) denotes a vector for some integer \(d\ge 2\), we write

$$\begin{aligned} \beta (s) = {{\mathrm{{\mathbb {E}}}}}e^{-\sum _i s_i B_i}. \end{aligned}$$

The following lemma will be important for the derivation of the joint transform of \(\Lambda (t)\) and the number of jobs in system, in both single- and multi-node cases.

Lemma 2.1

Let \(\Lambda (\cdot )\) be a shot-noise process. Let \(f:{{\mathrm{{\mathbb {R}}}}}\times {{\mathrm{{\mathbb {R}}}}}^{d}\rightarrow {{\mathrm{{\mathbb {R}}}}}\) be a function which is piecewise continuous in its first argument, with at most a countable number of discontinuities. Then it holds that

Proof

See Appendix 1. \(\square \)

3 A single infinite-server queue

In this section we study the \(M_\mathrm{S}/G/\infty \) queue. This is a single infinite-server queue, of which the arrival process is a Cox process driven by the shot-noise process \(\Lambda (\cdot )\), as defined in Sect. 2. First we derive exact results in Sect. 3.1, where we find the joint transform of the number of jobs in the system and the shot-noise rate, and derive expressions for the expected value and variance. Subsequently, in Sect. 3.2, we derive a functional central limit theorem for this model.

3.1 Exact analysis

We let \(J_i\) be the service requirement of the i-th job, where \(J_1,J_2,\ldots \) are assumed to be i.i.d.; in the sequel J denotes a random variable that is equal in distribution to \(J_1\). Our first objective is to find the distribution of the number of jobs in the system at time t, in the sequel denoted by N(t). This can be found in several ways; because of the appealing underlying intuition, we here provide an argument in which we approximate the arrival rate on intervals of length \(\Delta \) by a constant, and then let \(\Delta \downarrow 0\).

This procedure works as follows. We let \(\Lambda (t)=\Lambda (\omega ,t)\) be an arbitrary sample path of the driving shot-noise process. Given \(\Lambda (t)\), the number of jobs that arrived in the interval \([k\Delta ,(k+1)\Delta )\) and are still in the system at time t has a Poisson distribution with parameter \({{\mathrm{{\mathbb {P}}}}}(J>t-(k\Delta +\Delta U_k))\cdot \Delta \Lambda (k\Delta ) + o(\Delta )\), where \(U_1, U_2,\ldots \) are i.i.d. standard uniform random variables. Summing over k yields that the number of jobs in the system at time t has a Poisson distribution with parameter

$$\begin{aligned} \sum _{k=0}^{t/\Delta -1} {{\mathrm{{\mathbb {P}}}}}(J>t-(k\Delta +\Delta U_k)) \Delta \Lambda (k\Delta ) + o(\Delta ), \end{aligned}$$

which converges, as \(\Delta \downarrow 0\), to

$$\begin{aligned} \int _0^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\Lambda (u)\,\mathrm{d}u. \end{aligned}$$
(3)

The argument above is not new: a similar observation was mentioned in, for example [6], for deterministic rate functions. Since \(\Lambda (\cdot )\) is actually a stochastic process, we conclude that the number of jobs has a mixed Poisson distribution, with the expression in Eq. (3) as random parameter. As a consequence, we find by conditioning on \({\mathcal {F}}_t\),

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( z^{N(t)} e^{-s\Lambda (t)}\right) = {{\mathrm{{\mathbb {E}}}}}\left( e^{-s\Lambda (t)} \exp \left( -(1-z)\int _0^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\Lambda (u)\,\mathrm{d}u\right) \right) . \end{aligned}$$
(4)

We have found the following result.

Theorem 3.1

Let \(\Lambda (\cdot )\) be a shot-noise process. Then

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( z^{N(t)}e^{-s\Lambda (t)}\right) = \exp \left( -\nu \int _0^t \left( 1-\beta \left( se^{-r(t-u)}+(1-z)\int _u^t {{\mathrm{{\mathbb {P}}}}}(J>t-v)\,e^{-r(v-u)}\,\mathrm{d}v\right) \right) \mathrm{d}u\right) . \end{aligned}$$
(5)

Proof

The result follows directly from Lemma 2.1 and Eq. (4). \(\square \)

In Theorem 3.1 we found that N(t) has a Poisson distribution with the random parameter given in Eq. (3). This leads to the following expression for the expected value:

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}N(t) = \int _0^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\,{{\mathrm{{\mathbb {E}}}}}\Lambda (u)\,\mathrm{d}u. \end{aligned}$$
(6)

In addition, by the law of total variance we find

$$\begin{aligned} {{\mathrm{{\mathbb {V}}ar}}}N(t) = {{\mathrm{{\mathbb {E}}}}}N(t) + {{\mathrm{{\mathbb {V}}ar}}}\left( \int _0^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\Lambda (u)\,\mathrm{d}u\right) . \end{aligned}$$
(7)

We can evaluate the latter variance term further, using an approximation argument that resembles the one used above. Using a Riemann sum approximation, we find

$$\begin{aligned} {{\mathrm{{\mathbb {V}}ar}}}\left( \int _0^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\Lambda (u)\,\mathrm{d}u\right) = \int _0^t \int _0^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\,{{\mathrm{{\mathbb {P}}}}}(J>t-v)\,{{\mathrm{{\mathbb {C}}ov}}}(\Lambda (u),\Lambda (v))\,\mathrm{d}u\,\mathrm{d}v. \end{aligned}$$

Assuming that \(u\ge v\), we know that \({{\mathrm{{\mathbb {C}}ov}}}(\Lambda (u),\Lambda (v)) = e^{-r(u-v)} {{\mathrm{{\mathbb {V}}ar}}}\Lambda (v)\) (cf. Lemma 4.7). We thus have that (7) equals

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}N(t) + 2\int _0^t \int _v^t {{\mathrm{{\mathbb {P}}}}}(J>t-u)\,{{\mathrm{{\mathbb {P}}}}}(J>t-v)\,e^{-r(u-v)}\,{{\mathrm{{\mathbb {V}}ar}}}\Lambda (v)\,\mathrm{d}u\,\mathrm{d}v. \end{aligned}$$

We can make this more explicit using the corresponding formulas in (2).

Example 3.2

(Exponential case) Consider the case in which J is exponentially distributed with mean \(1/\mu \) and \(\Lambda (0)=0\). Then we can calculate the mean and variance explicitly:

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}N(t) = \frac{{{\mathrm{{\mathbb {E}}}}}\Lambda }{\mu } h_{r,\mu }(t), \end{aligned}$$

where the function \(h_{r,\mu }(\cdot )\) is defined by

$$\begin{aligned} t\mapsto \left\{ \begin{array}{ll} {\displaystyle \frac{\mu (1-e^{-rt})-r(1-e^{-\mu t})}{\mu - r} } &{}\quad \text {if } \mu \ne r,\\ 1-e^{-rt}-rte^{-rt} &{}\quad \text {if } \mu = r. \end{array} \right. \end{aligned}$$

For the variance, we thus find for \(\mu \ne r\)

$$\begin{aligned}&{{\mathrm{{\mathbb {V}}ar}}}N(t) \\&\quad = \frac{\nu {{\mathrm{{\mathbb {E}}}}}B^2}{2r}\frac{r^2(1-e^{-2\mu t})+\mu ^2(1-e^{-2rt}) + \mu r(4e^{-t(\mu +r)}-e^{-2\mu t} - e^{-2rt} -2)}{\mu (\mu -r)^2 (\mu +r)}\\&\qquad +{{\mathrm{{\mathbb {E}}}}}N(t), \end{aligned}$$

and for \(\mu =r\)

$$\begin{aligned} {{\mathrm{{\mathbb {V}}ar}}}N(t) = \frac{\nu {{\mathrm{{\mathbb {E}}}}}B^2}{4 r^3}\left( 1-e^{-2rt}-2rt(1+rt)e^{-2rt}\right) + {{\mathrm{{\mathbb {E}}}}}N(t). \end{aligned}$$
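The closed forms of this example can be verified by Monte Carlo. A convenient route is the mixed-Poisson representation of this subsection: for exponential J, the random Poisson parameter admits a closed per-shot form, so no thinning is needed. The sketch below uses illustrative parameters with \(\mu \ne r\) and exponential shot sizes (an assumption).

```python
import numpy as np

rng = np.random.default_rng(3)
nu, r, mu, t = 1.0, 0.5, 1.0, 5.0   # illustrative parameters, mu != r
EB, EB2 = 1.0, 2.0                  # B ~ Exp(1) assumed: E B = 1, E B^2 = 2

R = 200_000
counts = rng.poisson(nu * t, size=R)          # shots per path on [0, t]
total = counts.sum()
epochs = rng.uniform(0.0, t, total)           # shot epochs, uniform given the count
sizes = rng.exponential(1.0, total)           # shot sizes B_i

# For exponential J, int_{t_i}^t e^{-mu(t-u)} e^{-r(u-t_i)} du has the closed
# form (e^{-r(t-t_i)} - e^{-mu(t-t_i)})/(mu - r), so the random Poisson
# parameter is a sum of per-shot contributions.
w = sizes * (np.exp(-r * (t - epochs)) - np.exp(-mu * (t - epochs))) / (mu - r)
idx = np.repeat(np.arange(R), counts)
param = np.bincount(idx, weights=w, minlength=R)
N = rng.poisson(param)                        # mixed-Poisson number of jobs at t

def h(r_, mu_, t_):
    # the function h_{r,mu} of this example
    if mu_ != r_:
        return (mu_ * (1 - np.exp(-r_ * t_)) - r_ * (1 - np.exp(-mu_ * t_))) / (mu_ - r_)
    return 1 - np.exp(-r_ * t_) - r_ * t_ * np.exp(-r_ * t_)

mean_th = (nu * EB / r) / mu * h(r, mu, t)
extra = (nu * EB2 / (2 * r)) * (
    r**2 * (1 - np.exp(-2 * mu * t)) + mu**2 * (1 - np.exp(-2 * r * t))
    + mu * r * (4 * np.exp(-t * (mu + r)) - np.exp(-2 * mu * t) - np.exp(-2 * r * t) - 2)
) / (mu * (mu - r) ** 2 * (mu + r))
var_th = extra + mean_th
```

The empirical mean and variance of `N` should match `mean_th` and `var_th` up to Monte Carlo error.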

3.2 Asymptotic analysis

This subsection focuses on deriving a functional central limit theorem (FCLT) for the model under study, after having appropriately scaled the shot rate of the shot-noise process. In the following, we assume that the service requirements are exponentially distributed with rate \(\mu \), and we point out how it can be generalized to a general distribution in Remark 3.6 below. We follow the standard approach to derive the FCLT for infinite-server queueing systems; we mimic the argumentation used in, for example [1, 14]. As the proof has a relatively large number of standard elements, we restrict ourselves to the most important steps.

We apply a linear scaling to the shot rate of the shot-noise process, i.e., \(\nu \mapsto n\nu \). It is readily checked that under this scaling, the steady-state level of the shot-noise process and the steady-state number of jobs in the queue blow up by a factor n. It is our objective to prove that, after appropriate centering and normalization, the process recording the number of jobs in the system converges to a Gaussian process.

In the n-th scaled model, the number of jobs in the system at time t, denoted by \(N_n(t)\), has the following (obvious) representation: with \(A_n(t)\) denoting the number of arrivals in [0, t], and \(D_n(t)\) the number of departures,

$$\begin{aligned} N_n(t) = N_n(0) + A_n(t) - D_n(t). \end{aligned}$$
(8)

Here, \(A_n(t)\) corresponds to a Cox process with a shot-noise-driven rate, and therefore we have, with \(\Lambda _n(s)\) the shot-noise in the scaled model at time s and \(S_A(\cdot )\) a unit-rate Poisson process,

$$\begin{aligned} A_n(t) = S_A\left( \int _0^t \Lambda _n(u)\mathrm{d}u\right) ;\ \end{aligned}$$

in line with our previous assumptions, we put \(\Lambda _n(0)=0.\) For our infinite-server model the departures \(D_n(t)\) can be written as, with \(S_D(\cdot )\) a unit-rate Poisson process (independent of \(S_A(\cdot )\)),

$$\begin{aligned} D_n(t) = S_D\left( \mu \int _0^t N_n(u)\,\mathrm{d}u\right) . \end{aligned}$$

We start by identifying the average behavior of the process \(N_n(t)\). Following the reasoning of [1], assuming that \(N_n(0)/n \Rightarrow \rho (0)\) (where ‘\(\Rightarrow \)’ denotes weak convergence), \(N_n(t)/n\) converges almost surely to the solution of

$$\begin{aligned} \rho (t) = \rho (0) + \int _0^t {{\mathrm{{\mathbb {E}}}}}\Lambda (u)\,\mathrm{d}u - \mu \int _0^t \rho (u)\,\mathrm{d}u. \end{aligned}$$
(9)

This equation is solved by \(\rho (t)= {{\mathrm{{\mathbb {E}}}}}N(t)\), with \({{\mathrm{{\mathbb {E}}}}}N(t)\) provided in Example 3.2.

Now we move to the derivation of the FCLT. Following the approach used in [1], we proceed by studying an FCLT for the input rate process. To this end, we first define

$$\begin{aligned} {{\hat{K}}}_n(t) := \sqrt{n}\left( \frac{1}{n}\int _0^t \Lambda _n(u)\,\mathrm{d}u - \int _0^t {{\mathrm{{\mathbb {E}}}}}\Lambda (u)\,\mathrm{d}u\right) . \end{aligned}$$

The following lemma states that \({{\hat{K}}}_n(\cdot )\) converges to an integrated Ornstein–Uhlenbeck (OU) process, corresponding to an OU process \({\hat{\Lambda }}(\cdot )\) with a speed of mean reversion equal to r, long-run equilibrium level 0, and variance \(\sigma _\Lambda ^2:=\nu {{\mathrm{{\mathbb {E}}}}}B^2/(2r)\).

Lemma 3.3

Assume that for the shot sizes, distributed as B, it holds that \({{\mathrm{{\mathbb {E}}}}}B,{{\mathrm{{\mathbb {E}}}}}B^2<\infty \). Then \({\hat{K}}_n(\cdot )\Rightarrow {\hat{K}}(\cdot )\) as \(n\rightarrow \infty \), where

$$\begin{aligned} {{\hat{K}}}(t) = \int _0^t {\hat{\Lambda }}(u)\,\mathrm{d}u, \end{aligned}$$
(10)

in which \({\hat{\Lambda }}\) satisfies, with \(W_1(\cdot )\) a standard Brownian motion,

$$\begin{aligned} \mathrm{d}{\hat{\Lambda }}(t) = -r{\hat{\Lambda }}(t)\,\mathrm{d}t + \sqrt{\nu {{\mathrm{{\mathbb {E}}}}}B^2}\,\mathrm{d}W_1(t), \qquad {\hat{\Lambda }}(0)=0. \end{aligned}$$
(11)

Proof

This proof is standard; for instance, from [2, Prop. 3], by putting the \(\lambda _d\) in that paper to zero, it follows that \({\hat{\Lambda }}_n(\cdot ) \Rightarrow {{\hat{\Lambda }}}(\cdot )\), where \({\hat{\Lambda }}_n(t):=\sqrt{n}\left( \Lambda _n(t)/n - {{\mathrm{{\mathbb {E}}}}}\Lambda (t)\right) \). This implies \({\hat{K}}_n(\cdot )\Rightarrow {\hat{K}}(\cdot )\), using (10) together with the continuous mapping theorem. \(\square \)
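The OU limit described in Lemma 3.3 can be checked numerically: an Euler–Maruyama discretization of an OU process with speed of mean reversion r, long-run level 0, and infinitesimal variance \(\nu {{\mathrm{{\mathbb {E}}}}}B^2\) should, after a burn-in of many relaxation times, exhibit the stationary variance \(\sigma _\Lambda ^2=\nu {{\mathrm{{\mathbb {E}}}}}B^2/(2r)\). A sketch with illustrative parameters (exponential shot sizes assumed, so \({{\mathrm{{\mathbb {E}}}}}B^2=2\)):

```python
import numpy as np

rng = np.random.default_rng(11)
nu, r = 2.0, 0.5
EB2 = 2.0                        # B ~ Exp(1) assumed, so E B^2 = 2
sigma2 = nu * EB2 / (2 * r)      # stationary variance nu E B^2 / (2r)

# Euler-Maruyama for dX = -r X dt + sqrt(nu E B^2) dW, started in 0
# (matching Lambda(0) = 0), run for 40 time units = 20 relaxation times.
dt, n_steps, n_paths = 0.02, 2000, 20_000
x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -r * x * dt + np.sqrt(nu * EB2 * dt) * rng.standard_normal(n_paths)
```

After the burn-in, the cross-sectional mean should be near 0 and the cross-sectional variance near `sigma2`, up to Monte Carlo and discretization error.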

Interestingly, the above result entails that the arrival rate process displays mean-reverting behavior. This also holds for the job count process in standard infinite-server queues. In other words, the job count process in the queueing system we are studying can be considered as the composition of two mean-reverting processes. We make this more precise in the following.

From now on we consider the following centered and normalized version of the number of jobs in the system:

$$\begin{aligned} {{\hat{N}}}_n(t) := \sqrt{n} \left( \frac{1}{n} N_n(t) - \rho (t)\right) . \end{aligned}$$

We assume that \({{\hat{N}}}_n(0)\Rightarrow {{\hat{N}}}(0)\) as \(n\rightarrow \infty \). To prove the FCLT, we rewrite \({{\hat{N}}}_n(t)\) in a convenient form. Mimicking the steps performed in [1] or [14], with \({{\bar{S}}}_A(t):=S_A(t)-t\), \({{\bar{S}}}_D(t):=S_D(t)-t\),

$$\begin{aligned} R_n(t):= {{\bar{S}}}_A\left( \int _0^t \Lambda _n(u)\mathrm{d}u\right) - {{\bar{S}}}_D\left( \mu \int _0^t N_n(u)\mathrm{d}u\right) , \end{aligned}$$

and using the relation (9), we eventually obtain

$$\begin{aligned} {{\hat{N}}}_n(t)={{\hat{N}}}_n(0) +\frac{R_n(t)}{\sqrt{n}} +{{\hat{K}}}_n(t)-\mu \int _0^t {{\hat{N}}}_n(u)\mathrm{d}u. \end{aligned}$$

Our next goal is to apply the martingale FCLT to the martingales \(R_n(t)/\sqrt{n}\); see, for instance, [7] and [17] for background on the martingale FCLT. The quadratic variation equals

$$\begin{aligned} \left[ \frac{R_n}{\sqrt{n}}\right] _t = \frac{1}{n}\left( S_A\left( \int _0^t \Lambda _n(u)\mathrm{d}u\right) + S_D\left( \mu \int _0^t N_n(u)\mathrm{d}u\right) \right) , \end{aligned}$$

which converges to \(\int _0^t {{\mathrm{{\mathbb {E}}}}}\Lambda (u)\mathrm{d}u+\mu \int _0^t\rho (u)\mathrm{d}u.\) Appealing to the martingale FCLT, the following FCLT is obtained.

Theorem 3.4

The centered and normalized version of the number of jobs in the queue satisfies an FCLT: \({{\hat{N}}}_n(\cdot )\Rightarrow {{\hat{N}}}(\cdot )\) as \(n\rightarrow \infty \), where \({{\hat{N}}}(t)\) solves the stochastic integral equation

$$\begin{aligned} {{\hat{N}}}(t) ={{\hat{N}}}(0) +\int _0^t \sqrt{{{\mathrm{{\mathbb {E}}}}}\Lambda (u)+\mu \rho (u)} \,\mathrm{d}W_2(u) + {{\hat{K}}}(t) -\mu \int _0^t {{\hat{N}}}(u)\mathrm{d}u, \end{aligned}$$

with \(W_2(\cdot )\) a standard Brownian motion that is independent of the Brownian motion \(W_1(\cdot )\) we introduced in the definition of \({{\hat{K}}}(\cdot )\).

Remark 3.5

In passing, we have proven that the arrival process as such obeys an FCLT. With

we find that \({\hat{A}}_n(t)\Rightarrow {{\hat{A}}}(t)\) as \(n\rightarrow \infty \), where

$$\begin{aligned} {{\hat{A}}}(t)&:= \int _0^t \sqrt{{{\mathrm{{\mathbb {E}}}}}\Lambda (u)+\mu \rho (u)} \,\mathrm{d}W_2(u) + {{\hat{K}}}(t)\\&=\int _0^t \sqrt{2\mu \rho (u)+\rho '(u)} \,\mathrm{d}W_2(u) + {{\hat{K}}}(t); \end{aligned}$$

the last equality follows from the fact that \(\rho (\cdot )\) satisfies (9).

Remark 3.6

The FCLT can be extended to non-exponential service requirements, by making use of [15, Thm. 3.2]. Their approach relies on two assumptions:

  • The arrival process should satisfy an FCLT;

  • The service times are i.i.d. nonnegative random variables with a general c.d.f., independent of the arrival process.

As noted in Remark 3.5, the first assumption is satisfied for the model in this paper. The second assumption holds as well. In the non-exponential case, the results are less clean; in general, the limiting process can be expressed in terms of a Kiefer process, cf. for example [4].

4 Networks

Now that the reader is familiar with the one-dimensional setting, we extend this to networks. In this section, we focus on feedforward networks in which each node corresponds to an infinite-server queue. Feedforward networks are defined as follows.

Definition 4.1

(feedforward network) Let \(G=(V,E)\) be a directed graph with nodes V and edges E. The nodes represent infinite-server queues, and the directed edges indicate how jobs move through the system. We suppose that there are no cycles in G, i.e., there is no sequence of nodes, starting and ending at the same node, in which each pair of consecutive nodes is connected by an edge traversed consistently with its orientation.

We focus on feedforward networks to keep the notation manageable. In Theorem 4.6, we derive the transform of the numbers of jobs in all nodes, jointly with the shot-noise process(es), for feedforward networks. Nonetheless, we provide Remark 4.5 to show that the analysis is in fact possible if there is a loop, albeit at the expense of more involved calculations.

Since all nodes represent infinite-server queues, one can see that a node with multiple input streams is equivalent to multiple infinite-server queues that operate independently of each other, but work at the same speed and induce the same service-requirement distribution for arriving jobs. Consider Fig. 3 for an illustration. The reason why this holds is that different job streams move independently through the system, without creating waiting times for one another. Therefore, merging streams does not increase the complexity of our network. The same holds for ‘splits’ in job streams, by which we mean that, after finishing service at a server, a job moves to server i with probability \(q_i\) (with \(\sum _i q_i=1\)). One can then simply sample the entire path that a job will take through the system at the instant of its arrival at its first server.

Fig. 3 Since the jobs are not interfering with each other, the network on the left is equivalent to the graph on the right. Node 3’ is a copy of node 3: it works at the same speed and induces the same service requirements

Once one recognizes the above, all feedforward networks reduce to parallel tandem systems in which the first node of each tandem is fed by external input. The procedure to decompose a network into parallel tandems consists of finding all paths between nodes at which jobs either enter or leave the system; each of these paths is subsequently considered as a tandem queue, and these tandem queues are then set in parallel.

To build up to the main result, we first study tandem systems in Sect. 4.1. Subsequently, we put the tandem systems in parallel in Sect. 4.2 and finally we present the main theorem and some implications in Sect. 4.3.

4.1 Tandem systems

As announced, we proceed by studying tandem systems. In Sect. 4.2 below, we study d parallel tandem systems. In this subsection, we consider the i-th of these tandem systems, where \(i=1,\ldots ,d\).

Suppose that tandem i has \(S_i\) service facilities and the input process at the first node is Poisson, with a shot-noise arrival rate \(\Lambda _i(\cdot )\). We assume that jobs enter node i1. When they finish service, they enter node i2, etc., until they enter node \(iS_i\) after which they leave the system. We use ij as a subscript referring to node j in tandem system i and we refer to the node as node ij. Hence \(N_{ij}(t)\) and \( J_{ij}\) denote the number of jobs in node ij at time t, and a copy of a service requirement, respectively, where \(j=1,\ldots ,S_i\).

Fix some time \(t>0\). Again we derive results by splitting time into intervals of length \(\Delta \). Denote by \(M_{ij}(k,\Delta )\) the number of jobs present in node ij at time t that have entered node i1 between time \(k\Delta \) and \((k+1)\Delta \); as we keep t fixed we suppress it in our notation. Because jobs are not interfering with each other in the infinite-server realm, we can decompose the transform of interest:

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) = \lim _{\Delta \downarrow 0}\prod _{k=0}^{t/\Delta -1} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta )}\right) . \end{aligned}$$
(12)

Supposing that the arrival rate is a deterministic function of time \(\lambda _i(\cdot )\), by conditioning on the number of arrivals in the k-th interval,

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta )} \right)&= \sum _{m=0}^\infty e^{-\lambda _i(k\Delta ) \Delta } \frac{(\lambda _i(k\Delta )\Delta )^m}{m!} \left( f_i(k\Delta ,z) \right) ^m\\&= \exp \Big (\Delta \lambda _i(k\Delta )(f_i(k\Delta ,z)-1) \Big ), \end{aligned}$$

where

$$\begin{aligned} f_i(u,z) := p_i(u) + \sum _{j=1}^{S_i} z_{ij} p_{ij}(u), \end{aligned}$$
(13)

in which \(p_i(u)\) (\(p_{ij}(u)\), respectively) denotes the probability that the job that entered tandem i at time u has already left the tandem (is in node j, respectively) at time t. Note that

$$\begin{aligned} p_i(u)&= {{\mathrm{{\mathbb {P}}}}}\left( \sum _{\ell =1}^{S_i} J_{i\ell }< t-u\right) ,\nonumber \\ p_{ij}(u)&= {{\mathrm{{\mathbb {P}}}}}\left( \sum _{\ell =1}^{j-1} J_{i\ell } < t-u, \sum _{\ell =1}^{j} J_{i\ell }>t-u\right) . \end{aligned}$$
(14)

Recognizing a Riemann sum and letting \(\Delta \downarrow 0\), we conclude that Eq. (12) takes the following form:

$$\begin{aligned} \exp \left( \int _0^t \lambda _i(u)\left( f_i(u,z)-1\right) \mathrm{d}u\right) . \end{aligned}$$

In case of a stochastic rate process \(\Lambda _i(\cdot )\), we obtain

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) = {{\mathrm{{\mathbb {E}}}}}\exp \left( -\int _0^t \Lambda _i(u)\left( 1-f_i(u,z)\right) \mathrm{d}u\right) . \end{aligned}$$

Therefore, it holds that

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s\Lambda _i(t)}\right)= & {} {{\mathrm{{\mathbb {E}}}}}\left( {{\mathrm{{\mathbb {E}}}}}\left( \left. \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)}e^{-s\Lambda _i(t)}\,\right| \,\Lambda _i(\cdot )\right) \right) \\ {}= & {} {{\mathrm{{\mathbb {E}}}}}\left( e^{-s\Lambda _i(t)} {{\mathrm{{\mathbb {E}}}}}\left( \left. \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\,\right| \,\Lambda _i(\cdot )\right) \right) , \end{aligned}$$

and we consequently find

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s\Lambda _i(t)}\right) = \exp \left( -\nu \int _0^t \left( 1-\beta \left( se^{-r(t-u)}+\int _u^t \left( 1-f_i(v,z)\right) e^{-r(v-u)}\,\mathrm{d}v\right) \right) \mathrm{d}u\right) . \end{aligned}$$
(15)
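The occupancy probabilities (14) underlying this derivation can be checked numerically. Under the additional illustrative assumption that all phases in the tandem are exponential with a common rate \(\mu \), the partial sums are Erlang, so \(p_{ij}(u)\) reduces to a Poisson probability (that of exactly \(j-1\) service completions in time \(t-u\)); the Monte Carlo estimates below should match this, and together with \(p_i(u)\) sum to one.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
mu, S, tau = 1.3, 4, 2.0       # common service rate, tandem length, tau = t - u

R = 100_000
J = rng.exponential(1.0 / mu, size=(R, S))   # phases J_{i1},...,J_{iS}, assumed Exp(mu)
csum = J.cumsum(axis=1)
prev = np.hstack([np.zeros((R, 1)), csum[:, :-1]])

# Empirical version of (14): the job is in node j at elapsed time tau iff the
# first j-1 services are finished and the j-th is not; it has left the tandem
# if all S services are finished.
in_node = [float(((prev[:, j] < tau) & (csum[:, j] >= tau)).mean()) for j in range(S)]
left = float((csum[:, -1] < tau).mean())

# With equal exponential phases, p_{ij} is the Poisson pmf of j-1 completions.
pmf = [math.exp(-mu * tau) * (mu * tau) ** k / math.factorial(k) for k in range(S)]
```

Here `in_node[j-1]` estimates \(p_{ij}(u)\) and `left` estimates \(p_i(u)\); the events partition the sample space, so the estimates sum to one exactly.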

4.2 Parallel (tandem) systems

Now that the tandem case has been analyzed, the next step is to put the tandem systems as described in Sect. 4.1 in parallel. We assume that there are d parallel tandems. There are different ways in which dependence between the parallel systems can be created. Two relevant models are listed below, and illustrated in Fig. 4.

(M1) :

Let \(\Lambda \equiv \Lambda (\cdot )\) be a d-dimensional shot-noise process \((\Lambda _1,\ldots ,\Lambda _d)\) in which the shots in all \(\Lambda _i\) occur simultaneously (the shot distributions and decay rates may differ). The process \(\Lambda _i\), for \(i=1,\ldots ,d\), corresponds to the arrival rate of tandem system i. Each tandem system has its own arrival process; given the shot-noise arrival rates, these Cox processes are conditionally independent.

(M2) :

Let \(\Lambda \equiv \Lambda (\cdot )\) be the shot-noise rate of a Cox process. The corresponding Poisson process generates simultaneous arrivals in all tandems.

Fig. 4 Model (M1) is illustrated on the left, and Model (M2) is illustrated on the right. The rectangles represent tandem systems, which consist of an arbitrary number of nodes in series

Remark 4.2

The model in which there is essentially one shot-noise process that generates arrivals for all queues independently is a special case of Model (M1). This can be seen by setting all components of \(\Lambda =(\Lambda _1,\ldots ,\Lambda _d)\) equal, by letting the shots and decay rate be identical.

In Model (M1), correlation between the shot-noise arrival rates induces correlation between the numbers of jobs in the different queues. In Model (M2), correlation clearly appears because all tandem systems have the same input process. Of course, the tandem systems will not behave identically because the jobs may have different service requirements. In short, correlation across different tandems in Model (M1) is due to linked arrival rates, and correlation in Model (M2) is due to simultaneous arrival epochs. We feel that both versions are relevant, depending on the application, and hence we analyze both.

Analysis of (M1)—Suppose that the dependence is of the type of Model (M1). This means that the shots in each component of \(\Lambda \) occur simultaneously. Recall the definition of \(f_i\) as stated in Eq. (13). It holds that

(16)

where the last equality holds due to (15).

Analysis of (M2)—Now suppose that the dependency in this model is of type (M2), i.e., there is one shot-noise process that generates simultaneous arrivals in the parallel tandem systems.

First we assume a deterministic arrival rate function \(\lambda (\cdot )\). Let \(M_{ij}(k,\Delta )\) be the number of jobs present in tandem system i at node j at time t that have arrived in the system between \(k\Delta \) and \((k+1)\Delta \). Note that

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{i=1}^d \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) = \lim _{\Delta \downarrow 0} \prod _{k=0}^{t/\Delta -1} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{i=1}^d\prod _{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta )}\right) . \end{aligned}$$

To further evaluate the right-hand side of the previous display, we observe that we can write

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{i=1}^d \prod _{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta )}\right) = \sum _{m=0}^\infty e^{-\lambda (k\Delta )\Delta }\frac{(\lambda (k\Delta )\Delta )^m}{m!}(f(k\Delta ,z))^m =e^{\Delta \lambda (k\Delta )(f(k\Delta ,z)-1)}, \end{aligned}$$

where

$$\begin{aligned} f(u,z) := \sum _{\ell _1=1}^{S_1+1}\cdots \sum _{\ell _d=1}^{S_d+1} p_{\ell _1,\ldots , \ell _d} \prod _{i=1}^d z_{i\ell _i}; \end{aligned}$$
(17)

in this definition \(p_{\ell _1,\ldots , \ell _d}\equiv p_{\ell _1,\ldots , \ell _d}(u)\) equals the probability that a job that arrived at time u is, for each i, in node \(\ell _i\) of tandem i at time t [cf. Eq. (14)]. The value \(\ell _i=S_i+1\) means that the job has left tandem system i; we define \(z_{i,S_i+1} := 1\).

In a similar fashion as before, we conclude that

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( \prod _{i=1}^d \prod _{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) = {{\mathrm{{\mathbb {E}}}}}\,\exp \left( \int _0^t \Lambda (u)\left( f(u,z)-1\right) \mathrm {d}u\right) , \end{aligned}$$
(18)

with f defined in Eq. (17).

Example 4.3

(Two-node parallel system) In the case of a parallel system of two infinite-server queues, f(u, z) simplifies to

$$\begin{aligned} f(u,z_{11},z_{21}) = \sum _{\ell _1=1}^2 \sum _{\ell _2=1}^2 z_{1\ell _1}z_{2\ell _2} p_{\ell _1,\ell _2} = z_{11}z_{21} p_{11} + z_{21}p_{21} + z_{11}p_{12} + p_{22}. \end{aligned}$$
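As a quick check of the bookkeeping convention \(z_{i,S_i+1}=1\), the following sketch (with arbitrary hypothetical values for the probabilities \(p_{\ell _1,\ell _2}\) and the arguments \(z_{11}, z_{21}\)) confirms that the double sum and its expanded form in Example 4.3 agree:

```python
# hypothetical probabilities p_{l1,l2} (summing to 1) and transform arguments
p = {(1, 1): 0.2, (2, 1): 0.3, (1, 2): 0.4, (2, 2): 0.1}
z11, z21 = 0.7, 0.5
# z[(i, l)] is z_{i,l}; node index 2 means "left tandem i", so z_{i,2} = 1
z = {(1, 1): z11, (2, 1): z21, (1, 2): 1.0, (2, 2): 1.0}

# the double sum over l1, l2 in {1, 2}
double_sum = sum(z[(1, l1)] * z[(2, l2)] * p[(l1, l2)]
                 for l1 in (1, 2) for l2 in (1, 2))
# the expanded form
expanded = z11 * z21 * p[(1, 1)] + z21 * p[(2, 1)] + z11 * p[(1, 2)] + p[(2, 2)]

assert abs(double_sum - expanded) < 1e-12
```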

Remark 4.4

(Routing) Consider a feedforward network with routing. As argued at the beginning of this section, the network can be decomposed into a system of parallel tandems. If there is splitting at some point, one decomposes the network into a parallel system in which each tandem i receives the job with probability \(q_i\), such that \(\sum q_i=1\). This can be incorporated simply by adjusting the probabilities contained in \(f_i\) in Eq. (16), which are given in Eq. (14), so that they include the event that the job joined the tandem under consideration. For instance, the expression for \(p_i(u)\) in the left equation in (14) would become

$$\begin{aligned} {{\mathrm{{\mathbb {P}}}}}\bigg (Q=i, \sum _{\ell =1}^{S_i} J_{i\ell }<t-u\bigg ), \end{aligned}$$

where Q is a random variable with a generalized Bernoulli (also called ‘categorical’) distribution, where

$$\begin{aligned} {{\mathrm{{\mathbb {P}}}}}(\text {job is assigned to tandem } i) = {{\mathrm{{\mathbb {P}}}}}(Q=i)=q_i,\quad \text {for } i=1,\ldots , d, \end{aligned}$$

with \(\sum q_i=1\); the right equation in (14) is adjusted similarly. Apart from this, the analysis for the case of splits is the same.
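A draw of the categorical random variable Q can be sketched as follows (the routing probabilities \(q_i\) are hypothetical):

```python
import random

random.seed(0)
q = [0.5, 0.3, 0.2]  # hypothetical routing probabilities q_i, summing to 1

def sample_Q():
    # generalized Bernoulli ('categorical') draw: tandem i with probability q_i
    return random.choices(range(1, len(q) + 1), weights=q)[0]

# empirical check of the routing frequencies
n = 100_000
counts = [0] * len(q)
for _ in range(n):
    counts[sample_Q() - 1] += 1
freqs = [c / n for c in counts]

assert all(abs(f_i - q_i) < 0.01 for f_i, q_i in zip(freqs, q))
```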

Remark 4.5

(Networks with loops) So far we have only considered feedforward networks. Networks with loops can be analyzed as well, but the notation becomes quite cumbersome. To illustrate the method by which networks with loops and routing can be analyzed, we consider a specific example. Suppose that arrivals enter node one, after which they move to node two. After having been served at node two, they return to node one with probability \(\eta \), or leave the system with probability \(1-\eta \). In this case, with similar techniques as before, we can find

$$\begin{aligned} {{\mathrm{{\mathbb {E}}}}}\left( z_1^{N_1(t)}\, z_2^{N_2(t)}\right) = {{\mathrm{{\mathbb {E}}}}}\,\exp \left( \int _0^t \Lambda (u)\left( f(u,z_1,z_2)-1\right) \mathrm {d}u\right) , \end{aligned}$$

with

$$\begin{aligned} f(u,z_1,z_2) = {{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)\text { left system}) + \sum _{i=1}^2 z_i\, {{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)\text { is in node } i), \end{aligned}$$

in which \(\text {job}(u)\) is the job that arrived at time u and we are examining the system at time t. Now, if we denote service times in the j-th node by \(J^{(j)}\), then, at a specific time t,

$$\begin{aligned} {{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)~\text {left system}) = \sum _{k=0}^\infty {{\mathrm{{\mathbb {P}}}}}\left( \sum _{i=1}^{k+1} (J_i^{(1)}+J_i^{(2)}) \le t-u \right) \eta ^k(1-\eta ). \end{aligned}$$

Analogously, \({{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)\text { is in node } 1)\) equals, by conditioning on the job having taken k loops,

$$\begin{aligned} \sum _{k=0}^\infty \eta ^{k} {{\mathrm{{\mathbb {P}}}}}\left( J_{k+1}^{(1)} + \sum _{i=1}^{k} \left( J_i^{(1)} + J_i^{(2)}\right) > t-u , \sum _{i=1}^{k} \left( J_i^{(1)} + J_i^{(2)}\right) \le t-u\right) ; \end{aligned}$$

likewise, \({{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)\text { is in node } 2)\) equals

$$\begin{aligned} \sum _{k=0}^\infty \eta ^{k} {{\mathrm{{\mathbb {P}}}}}\left( \sum _{i=1}^{k+1}\left( J_i^{(1)} + J_i^{(2)}\right) > t-u , J^{(1)}_{k+1} + \sum _{i=1}^{k}\left( J_i^{(1)} + J_i^{(2)}\right) \le t-u\right) . \end{aligned}$$

For example, in the case where all \(J_i^{(j)}\) are independent and exponentially distributed with mean \(1/\mu \), we can calculate those probabilities explicitly. Indeed, if we denote by Y a Poisson process with rate \(\mu \), then, for example

$$\begin{aligned}&{{\mathrm{{\mathbb {P}}}}}\left( \sum _{i=1}^{k+1} \left( J_i^{(1)} + J_i^{(2)}\right) > t-u , J^{(1)}_{k+1} + \sum _{i=1}^{k} \left( J_i^{(1)} + J_i^{(2)} \right) \le t-u\right) \\&\quad = {{\mathrm{{\mathbb {P}}}}}(Y(t-u)=2k+1)\\&\quad = e^{-\mu (t-u)} \frac{(\mu (t-u))^{2k+1}}{(2k+1)!} \end{aligned}$$

and thus

$$\begin{aligned} {{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)\text { is in node } 2) = \sum _{m=0}^\infty \eta ^m e^{-\mu (t-u)} \frac{(\mu (t- u))^{2m+1}}{(2m+1)!}. \end{aligned}$$

A similar calculation can be done for the probability that the job is in node one. Recalling that a sum of independent exponentials has a Gamma distribution, we can write

$$\begin{aligned} {{\mathrm{{\mathbb {P}}}}}(\,\text {job}(u)\text { left system}) = \sum _{m=0}^\infty \eta ^m (1-\eta )\, F_{\Gamma (2m+2, \mu )}(t-u), \end{aligned}$$

where \(F_{\Gamma (2m+2, \mu )}\) denotes the distribution function of a \(\Gamma \)-distributed random variable with rate \(\mu \) and shape parameter \(2m+2\).
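As a consistency check on this example, the three probabilities must sum to one for any \(\tau = t-u\). The sketch below evaluates the (truncated) series, using the representation \(F_{\Gamma (n,\mu )}(\tau )={{\mathrm{{\mathbb {P}}}}}(Y(\tau )\ge n)\); the parameter values are hypothetical:

```python
import math

def pois_pmf(j, a):
    # P(Y = j) for Y ~ Poisson(a), computed in log space to avoid overflow
    return math.exp(-a + j * math.log(a) - math.lgamma(j + 1))

def node_probs(eta, mu, tau, K=100):
    """Truncated series for P(job(u) in node 1), P(in node 2) and P(left),
    where tau = t - u and eta is the feedback probability."""
    a = mu * tau
    in1 = sum(eta**k * pois_pmf(2 * k, a) for k in range(K))
    in2 = sum(eta**k * pois_pmf(2 * k + 1, a) for k in range(K))
    # F_Gamma(2k+2, mu)(tau) = P(Y(tau) >= 2k + 2)
    left = sum(eta**k * (1 - eta) *
               (1 - sum(pois_pmf(j, a) for j in range(2 * k + 2)))
               for k in range(K))
    return in1, in2, left

in1, in2, left = node_probs(eta=0.3, mu=1.5, tau=2.0)
assert abs(in1 + in2 + left - 1.0) < 1e-9
```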

4.3 Main result

In this subsection we summarize our findings in the following main result. Recall Definition 4.1 of a feedforward network. At the beginning of Sect. 4 we argued that a feedforward network can be decomposed into parallel tandems, and in Sect. 4.2 we studied exactly those systems, leading to the following result.

Theorem 4.6

Suppose we have a feedforward network of infinite-server queues, where the input process is a Poisson process with shot-noise arrival rate. Then the network can be decomposed into parallel tandem systems. In Model (M1), it holds that

with \(f_i(\cdot ,\cdot )\) as defined in Eq. (13) and where g(s, v) is a vector-valued function in which component i is given by

Furthermore, in Model (M2),

with \(f(\cdot ,\cdot )\) as defined in Eq. (17).

Proof

These follow from Eqs. (16) and (18) upon applying Lemma 2.1. \(\square \)

Next we calculate covariances between nodes, first in the tandem system and then in the parallel system.

Covariance in Tandem System—Consider a tandem system consisting of two nodes, for which we analyze the covariance between the numbers of jobs in the two nodes. Dropping the index of the tandem system, denote by \(N_1(\cdot )\) and \(N_2(\cdot )\) the numbers of jobs in nodes 1 and 2, respectively. Using Eq. (16), differentiation yields

and

so that

cf. Eq. (2) for the last equality.

Covariance parallel (M1)—Consider a parallel system consisting of two nodes only. To study the covariance in the parallel (M1) case, we need a result on the covariance of the corresponding shot-noise processes.

Lemma 4.7

Let \(\Lambda _1(\cdot ),\Lambda _2(\cdot )\) be shot-noise processes whose jumps occur simultaneously, driven by a Poisson arrival process with rate \(\nu \), and let the decay be exponential with rates \(r_1\) and \(r_2\), respectively. Then it holds that, for \(\delta >0\),

$$\begin{aligned} {{\mathrm{{\mathbb {C}}ov}}}(\Lambda _1(t),\Lambda _2(t+\delta ))&= e^{-r_2\delta } {{\mathrm{{\mathbb {C}}ov}}}(\Lambda _1(t),\Lambda _2(t))\nonumber \\&=e^{-r_2\delta } \frac{\nu {{\mathrm{{\mathbb {E}}}}}B_{11} B_{12}}{r_1+r_2}(1-e^{-(r_1+r_2)t}), \end{aligned}$$
(19)

which, in the case \(\Lambda _1=\Lambda _2\), reduces to

$$\begin{aligned} {{\mathrm{{\mathbb {C}}ov}}}(\Lambda _i(t),\Lambda _i(t+\delta )) = e^{-r_i\delta } {{\mathrm{{\mathbb {V}}ar}}}\Lambda _i(t),\quad \text {for } i=1,2, \end{aligned}$$

corresponding to [16, p. 394] and Eq. (2).

Proof

See Appendix 2. \(\square \)
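A Monte Carlo sanity check of the \(\delta =0\) case of Eq. (19), under the simplifying assumption of deterministic unit jumps (\(B_{11}=B_{12}=1\)); all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
nu, r1, r2, t = 2.0, 1.0, 2.0, 3.0  # hypothetical shot rate, decay rates, horizon
n_rep = 100_000

l1 = np.empty(n_rep)
l2 = np.empty(n_rep)
for i in range(n_rep):
    # shot epochs on [0, t]: Poisson(nu*t) many, uniformly placed
    s = rng.uniform(0.0, t, size=rng.poisson(nu * t))
    # shot-noise values at time t: unit jumps with exponential decay
    l1[i] = np.sum(np.exp(-r1 * (t - s)))
    l2[i] = np.sum(np.exp(-r2 * (t - s)))

cov_mc = np.cov(l1, l2)[0, 1]
cov_exact = nu / (r1 + r2) * (1.0 - np.exp(-(r1 + r2) * t))  # E B11 B12 = 1

assert abs(cov_mc - cov_exact) < 0.05
```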

By making use of Eq. (16), we find

This implies

where we made use of the fact that, for \(u\ge v\),

$$\begin{aligned} {{\mathrm{{\mathbb {C}}ov}}}(\Lambda _1(u),\Lambda _2(v)) = \frac{\nu {{\mathrm{{\mathbb {E}}}}}B_{11}B_{12}}{r_1+r_2} \left( 1-e^{-(r_1+r_2)v}\right) e^{-r_2(u-v)}, \end{aligned}$$

cf. Lemma 4.7.

Covariance parallel (M2)—Extracting the mixed moment from the transform in Eq. (18), we derive directly that

This implies

The following proposition compares the correlations present in Model (M1) and (M2). In the proposition, we refer to the number of jobs in queue j in Model (Mi) at time t as \(N_j^{(i)}(t)\), for \(i=1,2\). We find the anticipated result that the correlation in Model (M2) is stronger than in Model (M1).

Proposition 4.8

Let \(\Lambda (\cdot )\) be the shot-noise process that generates simultaneous arrivals in both queues, and let \(\Lambda _1(\cdot ), \Lambda _2(\cdot )\) be shot-noise processes with simultaneous jumps that generate arrivals in the two queues independently. Suppose that \( \Lambda _1(t) \mathop {=}\limits ^{\mathrm{d}} \Lambda _2(t) \mathop {=}\limits ^{\mathrm{d}} \Lambda (t)\), for \(t\ge 0\). Then, for any \(t\ge 0\),

$$\begin{aligned} {{\mathrm{{\mathbb {C}}orr}}}\left( N_1^{(1)}(t),N_2^{(1)}(t)\right) \le {{\mathrm{{\mathbb {C}}orr}}}\left( N_1^{(2)}(t),N_2^{(2)}(t)\right) . \end{aligned}$$

Proof

Because of the assumption \( \Lambda _1(t) \mathop {=}\limits ^{\mathrm{d}} \Lambda _2(t) \mathop {=}\limits ^{\mathrm{d}} \Lambda (t)\), we have that, for all combinations \(i,j\in \{1,2\}\), the \(N_i^{(j)}(t)\) are equal in distribution. Therefore, it is sufficient to show that

$$\begin{aligned} {{\mathrm{{\mathbb {C}}ov}}}\left( N_1^{(1)}(t), N_2^{(1)}(t)\right) \le {{\mathrm{{\mathbb {C}}ov}}}\left( N_1^{(2)}(t), N_2^{(2)}(t)\right) . \end{aligned}$$

The expressions for the covariances derived earlier in this section imply that

which is nonnegative, as desired. \(\square \)
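The ordering in Proposition 4.8 can also be illustrated by simulation. The sketch below takes the extreme case \(\Lambda _1=\Lambda _2=\Lambda \) pathwise, with exponential unit-mean jumps and exponential services; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
nu, r, mu, t = 1.0, 1.0, 1.0, 5.0  # hypothetical shot rate, decay, service rate, horizon
n_rep = 20_000

def cox_arrivals(T, B):
    """Arrival epochs on [0, t] of a Poisson process whose rate is the
    shot-noise path sum_k B_k exp(-r(u - T_k)), u >= T_k: superpose the
    independent inhomogeneous Poisson processes generated per shot."""
    out = []
    for Tk, Bk in zip(T, B):
        mass = Bk * (1.0 - np.exp(-r * (t - Tk))) / r  # integrated rate on [Tk, t]
        m = rng.poisson(mass)
        if m:
            V = rng.uniform(0.0, 1.0, m)
            # inverse-cdf sample from the density proportional to exp(-r(u - Tk))
            out.append(Tk - np.log(1.0 - V * (1.0 - np.exp(-r * (t - Tk)))) / r)
    return np.concatenate(out) if out else np.empty(0)

def jobs_present(arr):
    # a job arriving at u is still in service at t with probability exp(-mu(t-u))
    return int(np.sum(rng.uniform(0.0, 1.0, arr.size) < np.exp(-mu * (t - arr))))

m1 = np.empty((n_rep, 2))
m2 = np.empty((n_rep, 2))
for i in range(n_rep):
    T = rng.uniform(0.0, t, size=rng.poisson(nu * t))  # shot epochs
    B = rng.exponential(1.0, size=T.size)              # jump sizes
    # (M1): common shot-noise path, two independent arrival streams
    m1[i] = [jobs_present(cox_arrivals(T, B)), jobs_present(cox_arrivals(T, B))]
    # (M2): one shared arrival stream feeding both queues, independent services
    arr = cox_arrivals(T, B)
    m2[i] = [jobs_present(arr), jobs_present(arr)]

corr_m1 = np.corrcoef(m1[:, 0], m1[:, 1])[0, 1]
corr_m2 = np.corrcoef(m2[:, 0], m2[:, 1])[0, 1]

assert corr_m1 <= corr_m2
```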

5 Concluding remarks

We have considered networks of infinite-server queues with shot-noise-driven Coxian input processes. For the single queue, we found explicit expressions for the Laplace transform of the joint distribution of the number of jobs and the driving shot-noise arrival rate, as well as a functional central limit theorem of the number of jobs in the system under a particular scaling. The results were then extended to a network context: we derived an expression for the joint transform of the numbers of jobs in the individual queues, jointly with the values of the driving shot-noise processes.

We established the functional central limit theorem for the single queue only; it is anticipated that a similar setup carries over to the network context, albeit at the expense of considerably more involved notation. Our future research will include the study of the departure process of a single queue; the output stream should remain Coxian, but of a different type than the input process.