1 Introduction

An \(M/GI/\infty \) service system (one without queueing) is a fundamental model in applied probability; see, for example, Serfozo [11], Sect. 3.12, and its references. We consider this system in which customers arrive at times \(T_i\) that form a homogeneous Poisson process with a constant arrival rate (or intensity) \(\lambda \), where

$$\begin{aligned} \ldots T_{-1} < T_0 \le 0 < T_1 \ldots \end{aligned}$$

The service times \(S_i\) are i.i.d. (independent and identically distributed) with distribution function \(G(s)\) and finite mean \(m\). The number of customers receiving service at time \(t\in R\) is denoted by \(Q(t)\). We first assume the system is in a stationary state, so that \(\{Q(t)\}\) is a stationary process.

It is well known that \(Q(0)\) has a Poisson distribution with parameter \( \rho = \lambda m \). Simple proofs can be found for instance in Kingman [8], p. 43, in the context of Bartlett’s theorem and in Serfozo [11], Sect. 3.12. This result is also contained in Theorem 1 below. While the process \(\{Q(t)\}\) provides a rough description of the service system, our focus is on a “finer” description in terms of the service-time ages, residuals, and lengths of the customers in the system at a fixed time.
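As an aside, this Poisson property is easy to check numerically. The following minimal simulation sketch (with illustrative, assumed parameters: \(\lambda =2\) and Erlang-2 service times with mean \(m=1.5\)) generates arrivals on a window \((-T,0]\) long enough that customers arriving before \(-T\) have a negligible chance of still being in service at time \(0\); the sample mean and variance of \(Q(0)\) should both be close to \(\rho =3\).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, theta = 2.0, 0.75          # arrival rate and Erlang-2 scale (assumed); m = 2*theta = 1.5
m, T, reps = 2 * theta, 60.0, 4000

def q_at_zero():
    n = rng.poisson(lam * T)                 # number of arrivals in (-T, 0]
    t_i = -rng.uniform(0.0, T, size=n)       # arrival times T_i
    s_i = rng.gamma(2.0, theta, size=n)      # i.i.d. service times S_i
    return np.sum(s_i > -t_i)                # customers still in service at time 0

q = np.array([q_at_zero() for _ in range(reps)])
print("mean of Q(0):", q.mean(), "  target rho =", lam * m)
print("var  of Q(0):", q.var(), "  (Poisson: variance equals the mean)")
```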

One approach for studying these times is to use the method of “supplementary variables” for the \(M/GI/\infty \) system, as described below. In particular, Takacs in Chap. 3 of [13] derived the distribution function of the residual service times by evaluating the joint Markovian probabilities of the residual times and \(Q(t)\) at a finite time for a nonstationary system, as in Theorem 2, and then letting the time parameter tend to infinity. We recall his proof in Sect. 3.

Instead of using this approach, we use in Sect. 2 the space-time Poisson process representation of the \(M/GI/\infty \) system and two key properties of Poisson processes as in Daley and Vere-Jones [4], Kingman [8], Rolski and Ryll-Nardzewski [10] and Serfozo [11].

2 Results

With a slight abuse of notation, \(\mathbf A, \mathbf R\), and \(\mathbf L=\mathbf A+\mathbf R\) in the following will refer to typical service-time ages, residuals, and lengths, respectively, and they will appear as subscripts on Poisson processes whose points represent these times. Also, \({\mathcal B}(\cdot )\) denotes the Borel sets in the space \((\cdot )\) (for example \({\mathcal B}(R_+^3)\)).

Theorem 1

(a) The service-time ages, residuals, and lengths of the \(Q(0)\) customers in the stationary \(M/GI/\infty \) system at time \(0\) are the locations in \(R_+^3\) of the points of an inhomogeneous Poisson process \(N\) on \(R_+^3\) with mean measure \(\mu \), where

$$\begin{aligned} \mu (A\times B\times C)= \lambda \int _{A}\mathrm{d}u\int _{(B+u ) \cap C}G(\mathrm{d}s), \quad \quad A\times B\times C \in {\mathcal B}(R_+^3). \end{aligned}$$
(2.1)

In particular, \(Q(0)=N(R_+^3)\) has a Poisson distribution with mean \(\rho =\lambda m\). (b) The service-time ages, residuals, and lengths of the \(Q(0)\) customers in the system at time \(0\) are the locations of the points of three separate, inhomogeneous (dependent) Poisson processes \(N_{\mathbf A}\), \(N_{\mathbf R}\), and \(N_{\mathbf L}\) on \(R_+\), respectively, with

$$\begin{aligned} E[N_{\mathbf A}(0,v]]&= E[N_{\mathbf R}(0,v]]\ =\ \lambda \int _0^v[1-G(s)]\mathrm{d}s,\end{aligned}$$
(2.2)
$$\begin{aligned} E[N_{\mathbf L}(0,v]]&= \lambda \int _0^vsG(\mathrm{d}s), \quad \quad \quad v\ge 0. \end{aligned}$$
(2.3)

(c) Conditioned on the event \(Q(0)=n\), the three-dimensional vectors of service-time age, residual, and length of the \(n\) customers are i.i.d. with joint distribution function \(F_{{\mathbf A},{\mathbf R},{\mathbf L}}(\cdot )=\mu (\cdot )/\rho \). The marginal distribution functions of \(F_{\mathbf A,{\mathbf R},{\mathbf L}}\) for the service-time ages, residuals, and lengths are respectively

$$\begin{aligned} F_{\mathbf A}(v)&= F_{\mathbf R}(v)\ =\ m^{-1}\int _0^v[1-G(s)]\mathrm{d}s, \\ F_{\mathbf L}(v)&= m^{-1} \int _0^vsG(\mathrm{d}s), \quad \quad v\ge 0. \end{aligned}$$
(2.4)
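Before turning to the proof, statement (b) and the marginals (2.4) can be illustrated by a small Monte Carlo sketch. It assumes Erlang-2 service times with scale \(\theta \) (so \(m=2\theta \), and \(F_{\mathbf A}=F_{\mathbf R}\) and \(F_{\mathbf L}\) have simple closed forms) and a large arrival rate so that a single realization contains many customers; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, theta = 5000.0, 0.75            # assumed: large arrival rate, Erlang-2 scale (m = 2*theta)
T = 60.0                             # look-back window (-T, 0]; the tail beyond -T is negligible

n = rng.poisson(lam * T)
t_i = -rng.uniform(0.0, T, size=n)   # arrival times T_i in (-T, 0]
s_i = rng.gamma(2.0, theta, size=n)  # service times S_i
in_sys = s_i > -t_i                  # customers present at time 0
age, res, length = -t_i[in_sys], (s_i + t_i)[in_sys], s_i[in_sys]

# Closed forms of (2.4) for Erlang-2 service times (F_L is the Erlang-3 d.f.):
F_A = lambda v: 1 - (1 + v / (2 * theta)) * np.exp(-v / theta)
F_L = lambda v: 1 - (1 + v / theta + (v / theta) ** 2 / 2) * np.exp(-v / theta)

for v in (0.5, 1.0, 2.0):
    print(v, np.mean(age <= v), np.mean(res <= v), F_A(v),   # empirical vs F_A = F_R
             np.mean(length <= v), F_L(v))                   # empirical vs F_L
```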

Proof

Clearly, the customer arriving at time \(T_i<0\) is present in the system at time \(0\) if \(S_i> -T_i\), and that customer’s service-time age, residual, and length are \(-T_i\), \(S_i+T_i\), and \(S_i\), respectively.

With this observation in mind, we will characterize these service-time ages, residuals, and lengths as a function of a point process defining the \(M/GI/\infty \) system. Specifically, we represent the system by a point process \(M\) on the upper half plane \(R\times R_+\) with points at the locations \((T_i,S_i)\) (the arrival and service times of the customers). Clearly \(M\) is an inhomogeneous space-time Poisson process with mean measure \(\Lambda \), where

$$\begin{aligned} \Lambda (A\times B)=E[M(A\times B)]=\lambda \int _A\mathrm{d}t\int _BG(\mathrm{d}s), \quad \quad A\times B\in {\mathcal B}(R\times R_+). \end{aligned}$$

Let \(M_0\) denote \(M\) restricted to the subspace

$$\begin{aligned} {\mathcal S}_0=\{(x,y)\in R\times R_+:\ x<0,\ -x>y\}. \end{aligned}$$

Clearly, \(M_0\) is a Poisson process whose mean measure is the restriction of \(\Lambda \) to \({\mathcal S}_0\), since \(M_0\) is simply \(M\) on that subspace.

Now, consider the mapping of the points of \(M_0\) into \(R_+^3\) such that a point of \(M_0\) at \((x,y)\in {\mathcal S}_0\) is mapped to \((-x,y+x,y)\). Then the resulting point process \(N\) on \(R_+^3\) consists of points at locations \((-T_i, S_i+T_i, S_i)\), where \(T_i<0\), and \(S_i> -T_i\). These point-location vectors are the service-time ages, residuals, and lengths, respectively, of the \(Q(0)\) customers present in the system at time \(0\).

The process \(N\) is clearly a deterministic mapping of the Poisson process \(M_0\), and so it is also a Poisson process by the Poisson mapping theorem (for example, p. 17 in [8] or Sect. 3.8 in [11] and their references). In this case, using the change of variable \(x\rightarrow -u\),

$$\begin{aligned} \mu (A \times B \times C)&= E[N(A \times B \times C)] \\&= \int _{\{(x,y)\in {\mathcal S}_0: -x\in A, \ y+x\in B,\ y\in C\}}\Lambda (\mathrm{d}x,\mathrm{d}y) \\&= \lambda \int _{A}\mathrm{d}u\int _{(B+u ) \cap C}G(\mathrm{d}s), \quad \quad A\times B\times C \in {\mathcal B}(R_+^3). \end{aligned}$$

This establishes the first assertion in statement (a).

The rest of (a) follows since \(N\) is a Poisson process with mean measure \(\mu \), and so \(Q(0)=N(R_+^3)\) is a Poisson random variable with mean

$$\begin{aligned} \mu (R_+^3)=\lambda \int _{R_+}\mathrm{d}u\int _{(u,\infty )}G(\mathrm{d}s)=\lambda \int _{R_+}[1-G(u)]\mathrm{d}u=\rho . \end{aligned}$$
(2.5)

To prove statement (b), consider the Poisson process \(N\) in statement (a) whose point locations in \(R_+^3\) are the service-time ages, residuals, and lengths of the \(Q(0)\) customers in the system at time \(0\). Then these service-time ages, residuals and lengths are the locations of the points of three point processes on \(R_+\) defined respectively by

$$\begin{aligned} N_{\mathbf A}(\cdot )=N(\cdot \times R_+^2), \ \ N_{\mathbf R}(\cdot )=N(R_+\times \cdot \times R_+), \ \ N_{\mathbf L}(\cdot )=N(R_+^2\times \cdot ). \end{aligned}$$

Since these processes are the Poisson process \(N\) defined on the subspaces indicated in their definitions, they are also Poisson processes, and their means are the functions of the mean measure \(\mu \) of \(N\) shown in (2.2), (2.3). For instance,

$$\begin{aligned} E[N_{\mathbf R}(0,v]]&= \mu (R_+\times (0,v]\times R_+)\ =\ \lambda \int _{R_+}[G(x+v)-G(x)]\mathrm{d}x\\&= \lambda \int _{R_+}[G(x+v)-1]\mathrm{d}x+\lambda \int _{R_+}[1-G(x)]\mathrm{d}x\\&= \lambda \int _0^v[1-G(s)]\mathrm{d}s. \end{aligned}$$
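(As an aside, this calculation is easy to confirm numerically; the sketch below assumes an Erlang-2 \(G\), crude Riemann sums, and illustrative values of \(\lambda \) and \(v\).)

```python
import numpy as np

theta, lam, v, dx = 0.75, 2.0, 1.3, 1e-4                     # assumed parameters and step size
G = lambda s: 1.0 - (1.0 + s / theta) * np.exp(-s / theta)   # Erlang-2 d.f.

x = np.arange(0.0, 60.0, dx)                    # [0, 60) effectively covers the tail of G
lhs = lam * np.sum(G(x + v) - G(x)) * dx        # lambda * int_0^inf [G(x+v) - G(x)] dx
s = np.arange(0.0, v, dx)
rhs = lam * np.sum(1.0 - G(s)) * dx             # lambda * int_0^v [1 - G(s)] ds
print(lhs, rhs)                                 # the two agree up to discretization error
```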

To prove (c), first recall from (a) that the point locations of the Poisson process \(N\) with mean measure \(\mu \) are the service-time ages, residuals, and lengths of the \(Q(0)\) customers in the system at time \(0\), and \(N(R_+^3)=Q(0)\), whose mean from (2.5) is \(\mu (R_+^3)= \rho \). We will now use the order statistic property of Poisson processes (called a representation of a Poisson process as a Bernoulli process in [8], or a sample process in [4] and Sects. 3.3 and 3.7 in [11]). This property says that if \(N\) is a Poisson process on a space \(S\) with a finite mean measure \(\mu \), then, conditioned on the event \(N(S)=n\), the locations of the \(n\) points are i.i.d. with distribution \(F(\cdot )=\mu (\cdot )/\mu (S)\).

Applying this property to the Poisson process \(N\) in (a) proves (c). In this case, the conditioning is on the event \(N(R_+^3)=Q(0)=n\) and \(\mu (R_+^3)= \rho \) from (2.5). The marginal distributions of \(F_{\mathbf A,\mathbf R,\mathbf L}\) follow by elementary integrations, or by (2.2) and (2.3) (for example \(F_{\mathbf L}(v)= E[N_{\mathbf L}(0,v]]/ \rho \)).

Note. In Theorem 1, the distribution functions \(F_{\mathbf A}\), \(F_{\mathbf R}\), \(F_{\mathbf L}\) of the service-time ages, residuals, and lengths of customers present at time \(0\) are the same as the well-known limiting distribution functions of an inter-renewal-time’s age, residual, and length, respectively, in a renewal process. The latter limiting distributions follow by the key renewal theorem (for example see Sects. 2.8 and 2.15 in [11] and its references). Also, the joint limiting distribution for these inter-renewal-time characteristics is equal to the joint distribution \(F_{\mathbf A,\mathbf R,\mathbf L}\) above. In this context, \(\mathbf A\) and \(\mathbf R\) have the joint behavior

$$\begin{aligned} P(\mathbf A>u,\ \mathbf R>v)=m^{-1}\int _{u+v}^{\infty }[1-G(s)]\mathrm{d}s, \quad \quad u,v \in R_+. \end{aligned}$$

The following is an analog of Theorem 1 for a non-stationary system that characterizes the service-time ages, residuals, and lengths of customers in the system at time \(t\). In this case, these service-time characteristics converge as \(t\rightarrow \infty \) to those characteristics of the stationary system in Theorem 1.

Theorem 2

(a\('\)) If the \(M/GI/\infty \) system above is non-stationary with \(Q(0)=0\), then for any fixed time \(t>0\), the service-time ages, residuals, and lengths of the \(Q(t)\) customers in the system at time \(t\) are the locations of points in an inhomogeneous Poisson process \(N_{t}\) on \(R_+^3\) whose mean measure \(\mu _t\) is defined like \(\mu \) in Theorem 1 by (2.1), with the set \(A\) replaced by \(A\cap [0,t]\). In particular, \(Q(t)=N_t(R_+^3)\) has a Poisson distribution with mean \( \lambda m_t\), where \(m_t=\int _0^t[1-G(s)]\mathrm{d}s\).

(b\('\)) The service-time ages, residuals, and lengths of the \(Q(t)\) customers in the system at time \(t\) are the locations of the points of three separate, inhomogeneous (dependent) Poisson processes \(N_{\mathbf A_t}\), \(N_{\mathbf R_t}\) and \(N_{\mathbf L_t}\) on \(R_+\), respectively, with

$$\begin{aligned} E[N_{\mathbf A_t}(0,v]]&= \lambda \int _0^{\min \{t, v\}} [1-G(s)]\mathrm{d}s \\ E[N_{\mathbf R_t}(0,v]]&= \lambda \int _0^t [G(v+s)-G(s)]\mathrm{d}s \\ E[N_{\mathbf L_t}(0,v]]&= \lambda \int _0^{\min \{t, v\}}sG(\mathrm{d}s), \quad \quad v\ge 0. \end{aligned}$$

(c\('\)) Conditioned on the event \(Q(t)=n\), the three-dimensional service-time age, residual, and length vectors of the \(n\) customers are i.i.d. with joint distribution function \(F_{{\mathbf A_t},{\mathbf R_t},{\mathbf L_t}}(\cdot )=\mu _t(\cdot )/\lambda m_t\). The marginal cumulative distribution functions of \(F_{\mathbf A_t,{\mathbf R_t},{\mathbf L_t}}\) for the service-time ages, residuals, and lengths are respectively

$$\begin{aligned} F_{\mathbf A_t}(v)&= m_t^{-1}\int _0^{\min \{t, v\}}[1-G(s)]\mathrm{d}s, \\ F_{\mathbf R_t}(v)&= m_t^{-1} \int _0^t [G(v+s)-G(s)]\mathrm{d}s \\ F_{\mathbf L_t}(v)&= m_t^{-1} \int _0^{\min \{t, v\}}sG(\mathrm{d}s), \quad \quad v\ge 0. \end{aligned}$$

(d\('\)) As \(t\rightarrow \infty \) the Poisson process \(N_{t}\) converges in distribution to the Poisson process \(N\) in Theorem 1. In addition, the separate Poisson processes \(N_{\mathbf A_t}\), \(N_{\mathbf R_t}\) and \(N_{\mathbf L_t}\) converge in distribution to the respective Poisson processes \(N_{\mathbf A}\), \(N_{\mathbf R}\) and \(N_{\mathbf L}\); and the distribution functions \(F_{\mathbf A_t,{\mathbf R_t},{\mathbf L_t}}, F_{\mathbf A_t}, F_{\mathbf R_t}, F_{\mathbf L_t}\) converge respectively to \(F_{\mathbf A,\mathbf R,\mathbf L}, F_{\mathbf A}, F_{\mathbf R}, F_{\mathbf L}\) in Theorem 1.
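Before the proof, the mean formulas in (b\('\)) can again be checked by simulation. The sketch below is illustrative only: it assumes Erlang-2 service times and the values \(\lambda =2\), \(t=2\), \(v=1\), and compares the simulated mean counts with Riemann-sum evaluations of the three integrals.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, theta, t, v, reps = 2.0, 0.75, 2.0, 1.0, 20000          # assumed parameters
G = lambda s: 1.0 - (1.0 + s / theta) * np.exp(-s / theta)   # Erlang-2 d.f.

counts = np.zeros(3)                                   # counts for ages, residuals, lengths
for _ in range(reps):
    n = rng.poisson(lam * t)
    T_i = rng.uniform(0.0, t, size=n)                  # arrivals in [0, t]; system empty at 0
    S_i = rng.gamma(2.0, theta, size=n)
    present = S_i > t - T_i
    age, res, length = (t - T_i)[present], (S_i - t + T_i)[present], S_i[present]
    counts += [np.sum(age <= v), np.sum(res <= v), np.sum(length <= v)]

dx = 1e-4
sA = np.arange(0.0, min(t, v), dx)
EA = lam * np.sum(1.0 - G(sA)) * dx                    # E[N_{A_t}(0, v]]
sR = np.arange(0.0, t, dx)
ER = lam * np.sum(G(v + sR) - G(sR)) * dx              # E[N_{R_t}(0, v]]
g = lambda u: (u / theta ** 2) * np.exp(-u / theta)    # Erlang-2 density
EL = lam * np.sum(sA * g(sA)) * dx                     # E[N_{L_t}(0, v]]
print(counts / reps)                                   # simulated mean counts
print(EA, ER, EL)                                      # formulas in (b')
```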

Proof

Proceeding analogously to the proof of Theorem 1, let \(M_{t}\) denote the Poisson process that is the restriction of the Poisson process \(M\) to the subspace

$$\begin{aligned} {\mathcal S}_t=\{(x,y)\in R\times R_+:\ x\in [0,t],\ y>t-x\}. \end{aligned}$$

Consider the mapping of the points of \(M_t\) into \(R_+^3\) such that a point of \(M_t\) at \((x,y)\in {\mathcal S}_t\) is mapped to \((t-x,\,y+x-t,\,y)\). Under this mapping, the resulting point process \(N_t\) on \(R_+^3\) consists of points at locations \((t-T_i,\, S_i+T_i-t,\, S_i)\), where \(T_i\in [0,t]\) and \(S_i> t-T_i\). Then \(N_t\), being a deterministic mapping of \(M_t\), is a Poisson process on \(R_+^3\) whose mean measure \(\mu _t\) is clearly as described in (a\('\)), and whose point locations are the service-time ages, residuals, and lengths, respectively, of the \(Q(t)\) customers present in the system at time \(t\). One consequence is that \(Q(t)=N_t(R_+^3)\) has a Poisson distribution with mean

$$\begin{aligned} E[N_t(R_+^3)]=\mu _t(R_+^3)=\lambda \int _0^t[1-G(s)]\mathrm{d}s. \end{aligned}$$

This completes the proof of (a\('\)). The proofs of (b\('\)) and (c\('\)) follow from (a\('\)), just as (b) and (c) follow from (a) in Theorem 1.

To prove (d\('\)), note that the measure \(\mu _t\) converges to \(\mu \) in Theorem 1 as \(t\rightarrow \infty \) (since \(A\cap [0,t]\rightarrow A\)). Consequently, the Poisson process \(N_t\) converges in distribution to \(N\) as \(t\rightarrow \infty \), because the mean measure of a Poisson process completely determines its distribution. This convergence of \(N_t\) yields the other convergence statements in (d\('\)).
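A concrete illustration of this convergence (assuming Erlang-2 service times, \(v=1\), and Riemann-sum quadrature): the values of \(F_{\mathbf R_t}(v)\) from (c\('\)) approach \(F_{\mathbf R}(v)=F_{\mathbf A}(v)\) of (2.4) as \(t\) grows.

```python
import numpy as np

theta, v, dx = 0.75, 1.0, 1e-4                                 # assumed parameters
G = lambda s: 1.0 - (1.0 + s / theta) * np.exp(-s / theta)     # Erlang-2 d.f., mean m = 2*theta
m = 2 * theta

def F_Rt(v, t):
    s = np.arange(0.0, t, dx)
    m_t = np.sum(1.0 - G(s)) * dx
    return np.sum(G(v + s) - G(s)) * dx / m_t                  # (c') for the residuals

for t in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(t, F_Rt(v, t))
s = np.arange(0.0, v, dx)
print("limit F_R(v) =", np.sum(1.0 - G(s)) * dx / m)           # the limit, from (2.4)
```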

Theorems 1 and 2 readily extend to systems with an inhomogeneous Poisson arrival process, or time-dependent service times, and with \(Q(0)>0\) in Theorem 2.

3 Comments on Takacs’ proof and supplementary variable analysis

It is well known that the stochastic process \(\{Q(t)\}\) is generally not Markovian, although it is Markovian in the case of exponential service times (the \(M/M/\infty \) system). However, \(\{Q(t)\}\), with the service-time ages or residual times appended as supplementary variables, can be studied by the theory of Markov processes.

Letting \(A_k(t)\) and \(R_k(t)\) denote the age and residual service time of the \(k\)-th customer in the system at time \(t\), each of the following is a Markov process:

$$\begin{aligned} X_{\mathbf A} (t)&= \{Q(t), A_k(t): k=1,\ldots ,Q(t)\}, \\ X_{\mathbf R} (t)&= \{Q(t), R_k(t): k=1,\ldots ,Q(t)\} \end{aligned}$$

These processes are not vector-valued processes in the usual sense, since the index \(k\) on \(A_k(t)\) and \(R_k(t)\) is a function of time denoting the \(k\)-th customer in the system (under an arbitrary ordering). However, a little thought shows that the processes \(X_{\mathbf A} (t)\) and \(X_{\mathbf R} (t)\) are actually another way of denoting the respective Poisson processes \(N_{\mathbf A_t}\) and \(N_{\mathbf R_t}\) in Theorem 2. Consequently, the analysis of \(X_{\mathbf A} (t)\) and \(X_{\mathbf R} (t)\) would amount to the analysis of the Poisson processes in Theorems 1 and 2.

Note that \(\{N_{\mathbf A_t}: t\ge 0\}\) in Theorem 2 is a Markov process which converges in distribution as \(t\rightarrow \infty \), and so it has a stationary version \(\{\overline{N}_{\mathbf A_t}: t\in R\}\), where each \(\overline{N}_{\mathbf A_t}\) is equal in distribution to the Poisson process \(N_{\mathbf A}\) in Theorem 1. In particular, \(\{\overline{N}_{\mathbf A_t}(R_+): t\in R\}\) is the stationary process \(\{Q(t): t\in R\}\) in Theorem 1.

The general method of supplementary variables consists of constructing a Markov process from a non-Markovian process by appending supplementary variables to the latter. This method has proved useful for analyzing queueing and reliability models using Markov process theory. Cohen [2, pp. 661–662] contains a nice historical review of works applying supplementary variables. We thank Andreas Brandt for alerting us to this valuable reference.

Supplementary variables were introduced by Kosten in 1942, as he later describes in [9], and by Cox [3] and Kendall [7]. A remarkable paper is that of Sevastyanov [12], who applied the method to \(M/GI/s\) loss systems to prove that Erlang’s formula holds for arbitrary service-time distributions, a famous example of insensitivity. Other uses of supplementary variables are in Gnedenko and Kovalenko [5], Cohen [1], Gupur [6] and their references.

Since the Markov analysis by supplementary variables involves a difficult analysis of an infinite system of differential equations in infinitely many variables, Takacs in Chap. 3 of [13] uses direct reasoning with conditional distributions to evaluate the joint distribution of the time-dependent residual times, similar to the residual-time description in Theorem 2. Here we recapitulate the main idea of his proof.

Suppose that \(Q(0)=0\). Consider the customers arriving in \([0,t]\); there are \(l\) of them with probability \(\frac{(\lambda t)^l}{l!}{e^{-\lambda t}}\). A customer, say “\(i\),” from this group arrives at \(\mathrm {d}s\) in \([0,t]\) with probability \(\mathrm {d}s\,\mathbf 1(s\in [0,t])/t\). The probability that this customer is present at time \(t\) with its residual service time at most \(x_i\) is \(G(x_i+t-s)-G(t-s)\). Then, integrating out with respect to \(\mathrm {d}s\,\mathbf 1(s\in [0,t])/t\), one obtains

$$\begin{aligned} \frac{1}{t}\int _0^t[G(x_i+t-s)-G(t-s)]\,\mathrm {d}s. \end{aligned}$$

Similarly, the probability that a customer is not present at \(t\) is \(\int _0^tG(s)\,\mathrm {d}s/t\).

Hence the probability that \(Q(t)=k\) and \(R_1(t)\le x_1,\ldots ,R_k(t)\le x_k\) is

$$\begin{aligned}&\sum _{l\ge k}\frac{(\lambda t)^l}{l!}e^{-\lambda t} \binom{l}{k}\left( \frac{1}{t}\int _0^tG(s)\,\mathrm {d}s\right) ^{l-k} \prod _{i=1}^k\left( \frac{1}{t}\int _0^t[G(x_i+t-s)-G(t-s)]\, \mathrm {d}s\right) \\&\quad = e^{-\lambda \int _0^t[1-G(s)]\,\mathrm {d}s}\frac{\lambda ^k}{k!}\prod _{i=1}^k \left( \int _0^t[G(x_i+s)-G(s)]\,\mathrm {d}s\right) . \end{aligned}$$

Setting \(x_i=\infty \) \((i=1,\ldots ,k)\) one obtains the probability that \(Q(t)=k\). Furthermore, conditioned on the event that \(Q(t)=k\), the joint distribution of \(R_1(t),\ldots ,R_k(t)\) (as in Theorem 2(c\('\))) is

$$\begin{aligned} \prod _{i=1}^k\frac{\int _0^t[G(x_i+s)-G(s)]\, \mathrm {d}s}{\int _0^t[1-G(s)]\,\mathrm {d}s}. \end{aligned}$$

Letting \(t\rightarrow \infty \) then yields the steady-state distributions.
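The displayed identity for \(P(Q(t)=k,\ R_1(t)\le x_1,\ldots ,R_k(t)\le x_k)\) can also be verified numerically. The sketch below is illustrative only: it assumes Erlang-2 service times, \(k=2\), and truncates the sum over \(l\) at \(60\) terms, which is ample for \(\lambda t=6\).

```python
import numpy as np
from math import comb, exp, factorial

lam, theta, t, k = 2.0, 0.75, 3.0, 2                  # assumed parameters
x = [0.8, 1.5]                                        # thresholds x_1, x_2
G = lambda s: 1.0 - (1.0 + s / theta) * np.exp(-s / theta)   # Erlang-2 d.f.

dx = 1e-4
s = np.arange(0.0, t, dx)
p = np.sum(G(s)) * dx / t                             # P(a uniform arrival has departed by t)
q = [np.sum(G(xi + s) - G(s)) * dx / t for xi in x]   # P(present at t with residual <= x_i)
m_t = np.sum(1.0 - G(s)) * dx                         # int_0^t [1 - G(s)] ds

lhs = sum((lam * t) ** l / factorial(l) * exp(-lam * t) * comb(l, k) * p ** (l - k)
          for l in range(k, 61)) * np.prod(q)
rhs = exp(-lam * m_t) * lam ** k / factorial(k) * np.prod([t * qi for qi in q])
print(lhs, rhs)                                       # the two sides agree
```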

4 An ecological application

Our interest in the fine-state distributions in Theorem 1 comes from the following ecological problem.

Consider a natural stand of trees that is part of a larger forest and is not influenced by human activity. There is random recruitment of trees over time, and the trees have random lifetimes. Other detailed spatial aspects are ignored in the model.

The random recruitment times occur according to a Poisson process with intensity \(\lambda \), while the lifetimes of the trees are assumed to be i.i.d. with distribution function \(G\). Thus the stand can be described by the \(M/GI/\infty \) system.

The assumption that recruitment is independent of the population size in the stand may be justified by the facts that (i) the stand is part of a larger forest (it is not isolated), and (ii) when gaps exist between trees, new trees have a better chance of establishing themselves. This model is mentioned in Kingman [8, p. 49] and in Rolski and Ryll-Nardzewski [10]. The latter paper provides methods for calculating several quantities involving the oldest member (senior) of the population. In particular, it studies the senior’s age process and the point process of seniors’ deaths obtained by dependent thinning of a Poisson process.

An interesting ecological problem is to determine the lifetime distribution function \(G\) of the trees. Direct measurement of lifetimes is usually impossible, since trees can have lifetimes, and even current ages, of hundreds of years.

Knowledge of the age distribution in the \(M/GI/\infty \) system leads to an easy way to obtain the lifetime distribution function. Let \(a(t)\) denote the probability density function of the age of a typical tree in the stand. By Eq. (2.4), \(a(t)=m^{-1}[1-G(t)]\); hence \(m=1/a(0)\) (assuming \(G(0)=0\)) and \(G(t)=1-a(t)/a(0)\), so \(G\) is obtained once \(a\) is estimated statistically.

In the case where \(G\) is an exponential distribution function, it is well known that \(a(t)\) is the probability density function of an exponential distribution with the same parameter.
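A minimal sketch of this estimation idea, under assumed (illustrative) conditions: exponential lifetimes with mean \(m\), a stand simulated as in Sect. 2 with a large recruitment rate, and a crude histogram estimate of \(a\). The lifetime distribution is recovered through \(G(v)=1-a(v)/a(0)\) from Eq. (2.4) and compared with the true exponential distribution function; with a smoother density estimate, the same relation applies to an arbitrary lifetime distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, m, T = 500.0, 80.0, 2000.0              # assumed recruitment rate, mean lifetime, window

n = rng.poisson(lam * T)
recruit = -rng.uniform(0.0, T, size=n)       # recruitment times in (-T, 0]
life = rng.exponential(m, size=n)            # i.i.d. lifetimes
ages = -recruit[life > -recruit]             # ages of the trees alive at time 0

dens, edges = np.histogram(ages, bins=60, range=(0.0, 6 * m), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])         # bin midpoints
G_hat = 1.0 - dens / dens[0]                 # G(v) = 1 - a(v)/a(0), with a estimated by dens

for v in (m, 2 * m, 3 * m):
    i = np.searchsorted(mid, v)
    print(v, G_hat[i], 1.0 - np.exp(-v / m)) # estimate vs the true exponential d.f.
```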