1 Introduction

Priority models with multiple servers constitute an important class of queueing systems, having applications in areas as diverse as manufacturing, wireless communication and the service industry. Studies of these models date back to at least the 1950s (see, for example, Cobham [7], Davis [8], and Jaiswal [15, 16]), yet many properties of these systems still do not appear to be well understood. Recent work addressing priority models includes Sleptchenko et al. [27] and Wang et al. [29]. We refer the reader to [29] for more specific examples of applications of priority queueing models.

Our contribution to this stream of the literature is an analysis of the time-dependent behavior of a Markovian multi-server queue with two customer classes, class-dependent service rates and preemptive priority between classes. To the best of our knowledge, the joint stationary distribution of the M / M / 1 2-class preemptive priority system was first studied by Miller [24], who uses matrix-geometric methods to analyze the joint distribution of the number of high- and low-priority customers in the system. More specifically, in [24] this queueing system is modeled as a quasi-birth-and-death (QBD) process having infinitely many levels, with each level containing infinitely many phases. Miller then shows how to recursively compute the elements of the rate matrix of this QBD process: Once enough elements of this rate matrix have been found, the joint stationary distribution can be approximated by appropriately truncating this matrix.

This single-server model features in many works that have recently appeared in the literature. In Sleptchenko et al. [27], an exact, recursive procedure is given for computing the joint stationary distribution of an M / M / 1 preemptive priority queue that serves an arbitrary finite number of customer classes. The M / M / 1 2-class priority model is briefly discussed in Katehakis et al. [20], where the authors explain how the successive lumping technique can be used to study M / M / 1 2-class priority models when both customer classes experience the same service rate. Interesting asymptotic properties of the stationary distribution of the M / M / 1 2-class preemptive priority model can be found in the work of Li and Zhao [23].

Multi-server preemptive priority systems with two customer classes have also received some attention in the literature. One of the earlier references allowing for different service requirements between customer classes is Gail et al. [13]; see also the references therein. In [13], the authors derive the generating function of the joint stationary probabilities by expressing it in terms of the stationary probabilities associated with states where there is no queue. A combination of a generating function approach and the matrix-geometric approach is used in Sleptchenko et al. [26] to compute the joint stationary distribution of an M / M / c 2-class preemptive priority queue. The M / PH / c queue with an arbitrary number of preemptive priority classes is studied in Harchol-Balter et al. [14] using a recursive dimensionality reduction technique that leads to an accurate approximation of the mean sojourn time per customer class. Furthermore, in Wang et al. [29] the authors present a procedure for finding, for an M / M / c 2-class priority model, the generating function of the distribution of the number of low-priority customers present in the system in stationarity.

Our work deviates from all of the above approaches, in that we construct a procedure for computing the Laplace transforms of the transition functions of the M / M / c 2-class preemptive priority model. Our method first adapts the clearing analysis on phases (CAP) method featured in Doroudi et al. [10], showing how it can be modified to study Laplace transforms of transition functions. The specific dynamics of our priority model allow us to take the analysis a few steps further, by showing that each Laplace transform can be expressed explicitly in terms of transforms corresponding to states contained within a strip of states that is infinite in only one direction. Finally, we show how to compute these remaining transforms recursively, by making use of a slight modification of Ramaswami’s formula [25]. While the focus of our work is on Laplace transforms of transition functions, analogous results can be derived for the stationary distribution of the M / M / c 2-class preemptive priority model as well. We are not aware of any studies that obtain explicit expressions for the Laplace transforms of the transition functions, or even the stationary distribution, as we do here; these results appear to be the most explicit expressions known to date.

The Laplace transforms we derive can easily be inverted numerically to retrieve the transition functions with the help of the algorithms of Abate and Whitt [3, 4] or den Iseger [9]. These transition functions can be used to study, as a function of time, key performance measures such as the mean number of customers of each priority class in the system, the mean total number of customers in the system, or the probability that an arriving customer has to wait in the queue. These time-dependent performance measures can, for example, be used to analyze and dimension priority systems when one is interested in the behavior of such systems over a finite time horizon. Using the equilibrium distribution as an approximation of the time-dependent behavior to dimension the system can result in either over- or underdimensioning, since it neglects transient effects such as the initial number of customers in the system. Prior to our work, no method was available for analyzing the time-dependent behavior of these multi-server priority queues exactly; one had to resort to simulation. Having explicit expressions for the Laplace transforms of the transition functions greatly simplifies the computation of some performance measures. For instance, these transforms yield explicit expressions for the Laplace transforms of the distribution of the number of low-priority customers in the system at time t.
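
To make the inversion step concrete, the Euler summation algorithm of Abate and Whitt [3] admits a short implementation. The sketch below is ours, not the paper's Matlab code; the function name and the parameter defaults \(A = 18.4\), \(n = 15\), \(m = 11\) are illustrative choices that give roughly \(10^{-8}\) discretization error for well-behaved transforms.

```python
import math
from math import comb

def euler_inversion(F, t, A=18.4, n=15, m=11):
    # Invert the Laplace transform F at time t > 0 (Abate-Whitt Euler summation).
    # A controls the discretization error (roughly exp(-A)); Euler summation
    # averages the partial sums n..n+m of the alternating series with
    # binomial weights.
    def a(j):
        s = complex(A, 2.0 * math.pi * j) / (2.0 * t)
        re = F(s).real
        return re / 2.0 if j == 0 else re

    partial_sums = []
    acc = 0.0
    for j in range(n + m + 1):
        acc += (-1.0) ** j * a(j)
        partial_sums.append(acc)
    euler_avg = sum(comb(m, k) * partial_sums[n + k] for k in range(m + 1)) / 2.0 ** m
    return math.exp(A / 2.0) / t * euler_avg
```

For example, applying it to the test transform \(F(s) = 1/(s+1)\) recovers \(e^{-t}\) to high accuracy.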

We now present some numerical examples of the time-dependent performance measures, using the notation introduced in Sect. 2. In Fig. 1, we plot the mean number of low-priority customers in the system as a function of time, together with the equilibrium values. Similarly, in Fig. 2 we plot the time-dependent and equilibrium delay probabilities for each priority class. The Laplace transforms used to obtain Figs. 1 and 2 can be computed numerically using the approach discussed in Sect. 5.5; here we used an error tolerance of \(\epsilon = 10^{-8}\). Once these transforms have been found, numerical inversion can be done via the Euler summation algorithm of [3], where we again used an error tolerance of \(10^{-8}\). From Fig. 1 we can also informally gauge the mixing time in each scenario; the mixing time increases substantially with the load. As expected, Fig. 2 shows that the delay probability of a high-priority customer is much lower than that of a low-priority customer. Furthermore, as time passes, the delay probability of the high-priority customer tends to the delay probability in an M / M / c queue with only high-priority customers, which we verified numerically. Finally, in Table 1 we show the computation times of the algorithm, which was implemented in Matlab and run on a 64-bit desktop with an Intel Core i7-3770 processor. The computation time scales reasonably well with the number of servers, so the algorithm can be used to evaluate instances of practical size.

Fig. 1

Mean number of low-priority customers in the system as a function of time for increasing load of the high-priority customers. Equilibrium values are shown at \(t = \infty \). Parameter settings are \(c = 10\), \(\rho _1 = 1/3\), and \(\rho _2\) varies

Fig. 2

Probability that an arriving customer has to wait in the queue as a function of time for the two priority classes. Equilibrium values are shown at \(t = \infty \). Parameter settings are \(c = 10\), \(\rho _1 = 1/3\), and \(\rho _2 = 1/2\)

Table 1 Computation time in seconds required to calculate, for all states in \(\mathbb {S}_k\), the stationary probabilities or the Laplace transforms for a specific \(\alpha = 1/2 + 1/2 \mathrm {i}\)

This paper is organized as follows. Section 2 describes both the M / M / c 2-class preemptive priority queueing system and the two-dimensional Markov process used to model the dynamics of this system. In the same section, we introduce relevant notation and terminology and outline our approach. In Sects. 3–5, we describe this approach for calculating the Laplace transforms of the transition functions. We discuss the simplifications in the single-server case in Sect. 6. In Sect. 7, we summarize our contributions and comment on the derivation of the stationary distribution. The appendices provide supporting results on combinatorial identities and single-server queues used in deriving the expressions for the Laplace transforms.

2 Model description and outline of approach

We consider a queueing system consisting of c servers, where each server processes work at unit rate. This system serves customers from two different customer classes, referred to here as class-1 and class-2 customers. The class index indicates the priority rank, meaning that among the servers, class-2 customers have preemptive priority over class-1 customers in service. Recall that the term ‘preemptive priority’ means that whenever a class-2 customer arrives while all servers are busy and at least one server is serving a class-1 customer, one of these servers immediately drops its class-1 customer and begins serving the new class-2 arrival. The dropped class-1 customer waits in the system until a server is again available to continue its service, i.e., the priority rule is preemptive resume. Therefore, if there are currently i class-1 customers and j class-2 customers in the system, the number of class-2 customers in service is \(\min (c,j)\), while the number of class-1 customers in service is \(\max (\min (i,c - j),0)\).

Class-n customers arrive in a Poisson manner with rate \(\lambda _n\), \(n = 1,2\), and the Poisson arrival processes of the two populations are assumed to be independent. Each class-n arrival brings an exponentially distributed amount of work with rate \(\mu _n\), independently of everything else. We denote the total arrival rate by \(\lambda :=\lambda _1 + \lambda _2\), the load induced by class-n customers as \(\rho _n :=\lambda _n/(c \mu _n)\), and the load induced by both customer classes as \(\rho :=\rho _1 + \rho _2\).

The dynamics of this queueing system can be described with a continuous-time Markov chain (CTMC). For each \(t \ge 0\), let \(X_n(t)\) represent the number of class-n customers in the system at time t, and define \(X(t) :=(X_1(t),X_2(t))\). Then, \(X :=\{ X(t) \}_{t \ge 0}\) is a CTMC on the state space \(\mathbb {S}= \mathbb {N}_0^2\). Given any two distinct elements \(x,y \in \mathbb {S}\), the element q(xy) of the transition rate matrix \(\mathbf {Q}\) associated with X denotes the transition rate from state x to state y. The row sums of \(\mathbf {Q}\) are 0, meaning for each \(x \in \mathbb {S}\), \(q(x,x) = - \sum _{y \ne x} q(x,y) =:- q(x)\), where q(x) is the rate of the exponentially distributed sojourn time in state x. For our queueing system, the nonzero transition rates of \(\mathbf {Q}\) are given by

$$\begin{aligned} q((i,j),(i + 1,j))&= \lambda _1, \quad i,j \ge 0, \\ q((i,j),(i,j + 1))&= \lambda _2, \quad i,j \ge 0, \\ q((i,j),(i - 1,j))&= \max (\min (i,c - j),0) \mu _1, \quad i \ge 1, ~ j \ge 0, \\ q((i,j),(i,j - 1))&= \min (c,j) \mu _2, \quad i \ge 0, ~ j \ge 1. \end{aligned}$$
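
As an illustration, these four rules translate directly into code. The sketch below is ours; the function and argument names are hypothetical, and states are encoded as pairs \((i,j)\).

```python
def transition_rate(x, y, c, lam1, lam2, mu1, mu2):
    # Nonzero transition rates q(x, y) of the Markov process X, where
    # x = (i, j) records i class-1 and j class-2 customers in the system.
    i, j = x
    if y == (i + 1, j):              # class-1 arrival
        return lam1
    if y == (i, j + 1):              # class-2 arrival
        return lam2
    if y == (i - 1, j) and i >= 1:   # class-1 service completion:
        # only max(min(i, c - j), 0) class-1 customers are in service
        return max(min(i, c - j), 0) * mu1
    if y == (i, j - 1) and j >= 1:   # class-2 service completion:
        # min(c, j) class-2 customers are in service
        return min(c, j) * mu2
    return 0.0
```

With \(c = 4\), for instance, a class-1 departure from state (2, 5) has rate 0, since all servers are occupied by class-2 customers.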

Figure 3 displays the transition rate diagram.

Fig. 3

Transition rate diagram of the Markov process X

We further associate with the Markov process X the collection of transition functions \(\{ p_{x,y}(\cdot ) \}_{x,y \in \mathbb {S}}\), where for each \(x, y \in \mathbb {S}\) (with possibly \(x = y\)) the function \(p_{x,y} : [0,\infty ) \rightarrow [0,1]\) is defined as

$$\begin{aligned} p_{x,y}(t) :=\mathbb {P}(X(t) = y \mid X(0) = x), \quad t \ge 0. \end{aligned}$$
(2.1)

Each transition function \(p_{x,y}(\cdot )\) has a Laplace transform \(\pi _{x,y}(\cdot )\) that is well-defined on the subset of complex numbers \(\mathbb {C}_+ :=\{ \alpha \in \mathbb {C}: \mathrm {Re}(\alpha ) > 0 \}\) as

$$\begin{aligned} \pi _{x,y}(\alpha ) :=\int _{0}^{\infty } \mathrm {e}^{-\alpha t} p_{x,y}(t) \, d t, \quad \alpha \in \mathbb {C}_+. \end{aligned}$$
(2.2)

We restrict our interest to transition functions of X when \(X(0) = {(0,0)}\) with probability one (w.p.1), and so we drop the first subscript on both transition functions and Laplace transforms, i.e., \(p_{x}(t) :=p_{{(0,0)},x}(t)\) for each \(t \ge 0\) and \(\pi _{x}(\alpha ) :=\pi _{{(0,0)},x}(\alpha )\) for each \(\alpha \in \mathbb {C}_{+}\). Our goal is to derive efficient numerical methods for calculating each Laplace transform \(\pi _{x}(\alpha ), ~ x \in \mathbb {S}\). We often refer to the Laplace transform \(\pi _x(\alpha )\) associated with the state x as the Laplace transform for state x.

2.1 Notation and terminology

It helps to decompose the state space \(\mathbb {S}\) into a countable number of levels, where, for each integer \(i \ge 0\), the i-th level is the set \(\{(i,0), (i,1), \ldots \}\). We further decompose the i-th level into an upper level and a lower level: Upper level i is defined as \(U_{i} :=\{ (i,c),(i,c + 1),\ldots \}\), while lower level i is \(L_{i} :=\{ (i,0),(i,1),\ldots ,(i,c - 1) \}\). The union of lower levels \(L_{0}, L_{1}, \ldots , L_{i}\) is denoted by \(L_{\le i} :=\bigcup _{k = 0}^{i} L_{k}\). The set of all states in phase j is denoted by \(P_{j} :=\{ (0,j),(1,j),\ldots \}\).

We sometimes refer to upper level \(U_{0}\) as the vertical boundary. The union of upper levels

$$\begin{aligned} U_{1} \cup U_{2} \cup \cdots \end{aligned}$$
(2.3)

is called the interior of the state space. Finally, the union

$$\begin{aligned} L_{0} \cup L_{1} \cup \cdots \end{aligned}$$
(2.4)

is called the horizontal boundary or the horizontal strip of boundary states. Figure 4 depicts these sets.

Fig. 4

Terminology of the various sets of states

The indicator function \({\mathbbm {1}}\{A\}\) equals 1 if A is true and 0 otherwise. Given an arbitrary CTMC Z, we let \(\mathbb {E}_{z}[f(Z)]\) represent the expectation of a functional of Z, conditional on \(Z(0) = z\), and \(\mathbb {P}_{z}(\cdot )\) denotes the conditional probability associated with \(\mathbb {E}_{z}[\cdot ]\). In our analysis, it should be clear from the context what is being conditioned on when we write \(\mathbb {P}_{z}(\cdot )\) or \(\mathbb {E}_{z}[\cdot ]\).

We will also need to make use of hitting-time random variables. We define for each set \(A \subset \mathbb {S}\),

$$\begin{aligned} \tau _A :=\inf \{ t > 0 : \lim _{s \uparrow t} X(s) \ne X(t) \in A \} \end{aligned}$$
(2.5)

as the first time X makes a transition into the set A (so note \(X(0) \in A\) does not imply \(\tau _A = 0\)) and \(\tau _x\) should be understood to mean \(\tau _{\{ x \}}\).

2.2 Notation for M / M / 1 queues

Most of the formulas we derive contain quantities associated with an ordinary M / M / 1 queue. Given an M / M / 1 queueing system with arrival rate \(\lambda \) and service rate \(\mu \), let \(Q_{\lambda ,\mu }(t)\) denote the total number of customers in the system at time t. Under the measure \(\mathbb {P}_{n}(\cdot )\), which, in this case, represents conditioning on \(Q_{\lambda ,\mu }(0) = n\), let \(B_{\lambda ,\mu }\) denote the busy period duration induced by these customers. Under \(\mathbb {P}_{1}(\cdot )\), the Laplace–Stieltjes transform of \(B_{\lambda ,\mu }\) is given by

$$\begin{aligned} \phi _{\lambda ,\mu }(\alpha ) :=\mathbb {E}_{1}[\mathrm {e}^{-\alpha B_{\lambda ,\mu }} ] = \frac{\lambda + \mu + \alpha - \sqrt{(\lambda + \mu + \alpha )^2 - 4\lambda \mu }}{2\lambda }. \end{aligned}$$
(2.6)

Recall that under \(\mathbb {P}_{n}(\cdot )\), \(B_{\lambda ,\mu }\) is equal in distribution to the sum of n i.i.d. copies of \(B_{\lambda ,\mu }\) under the measure \(\mathbb {P}_{1}(\cdot )\); see, for example, [28, p. 32]. Thus, for each integer \(n \ge 1\) we have

$$\begin{aligned} \mathbb {E}_{ n }[\mathrm {e}^{-\alpha B_{\lambda ,\mu }}] = \phi _{\lambda ,\mu }(\alpha )^n. \end{aligned}$$
(2.7)
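
Formula (2.6) is straightforward to evaluate numerically, also for the complex arguments needed later. Below is a minimal sketch (our own helper functions, not part of the paper); it uses the principal branch of the complex square root, which yields the root of modulus at most one for the parameter ranges tested here. As sanity checks, \(\phi _{\lambda ,\mu }(\alpha )\) is a root of \(\lambda z^2 - (\lambda + \mu + \alpha ) z + \mu = 0\), and \(\phi _{\lambda ,\mu }(0) = 1\) when \(\lambda < \mu \), since the busy period is then finite w.p.1.

```python
import cmath

def busy_period_lst(lam, mu, alpha):
    # Laplace-Stieltjes transform phi_{lam,mu}(alpha) of the M/M/1 busy
    # period started by one customer, Eq. (2.6); alpha may be complex
    # with Re(alpha) >= 0.
    s = lam + mu + alpha
    return (s - cmath.sqrt(s * s - 4.0 * lam * mu)) / (2.0 * lam)

def busy_period_lst_n(lam, mu, alpha, n):
    # Busy period started by n customers, Eq. (2.7): the sum of n i.i.d.
    # single-customer busy periods gives phi(alpha)**n.
    return busy_period_lst(lam, mu, alpha) ** n
```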

We will also need to make use of the following quantities in Sects. 5 and 6. Suppose \(\{ \varLambda _\theta (t) \}_{t \ge 0}\) is a homogeneous Poisson process with rate \(\theta \) that is independent of \(\{ Q_{\lambda ,\mu }(t) \}_{t \ge 0}\). For each integer \(i \ge 0\), define

$$\begin{aligned} w^{(\lambda ,\mu ,\theta )}_{i}(\alpha ) :=\mathbb {E}_{1}[\mathrm {e}^{-\alpha B_{\lambda ,\mu }} {\mathbbm {1}}\{ \varLambda _\theta (B_{\lambda ,\mu }) = i\} ]. \end{aligned}$$
(2.8)

Lemma 7 of Appendix C shows that the \(w^{(\lambda ,\mu ,\theta )}_{i}(\alpha )\) terms satisfy a recursion, and, in Lemma 8 of Appendix C, we give explicit expressions for these terms by solving the recursion.

The following quantities associated with M / M / 1 queues appear in many places throughout the analysis. To increase readability, we adopt the notation used in [10] and define the quantities

$$\begin{aligned} \phi _2 :=\phi _{\lambda _2,c\mu _2}(\lambda _1 + \alpha ), \quad r_2 :=\rho _2 \phi _{\lambda _2,c\mu _2}(\lambda _1 + \alpha ), \end{aligned}$$
(2.9)

and

$$\begin{aligned} \varOmega _2 :=\frac{\rho _2 \phi _{\lambda _2,c\mu _2}(\lambda _1 + \alpha )}{\lambda _2(1 - \rho _2 \phi _{\lambda _2,c\mu _2}(\lambda _1 + \alpha )^2)}. \end{aligned}$$
(2.10)

Further results for M / M / 1 queues are presented in Appendix C.

2.3 Outline of our approach

Our approach for computing the Laplace transforms of the transition functions of X when \(X(0) = {(0,0)}\) w.p.1 is divided into three parts:

  1. 1.

    In Sect. 3, for each integer \(i \ge 0\), we use a slight modification of the CAP method [10] to write the Laplace transform for each state in \(U_{i}\), i.e., \(\pi _{(i,c - 1 + j)}(\alpha )\), \(j \ge 1\), in terms of the Laplace transforms \(\pi _{(k,c - 1)}(\alpha )\), \(0 \le k \le i\), as well as additional coefficients \(\{ v_{k,l} \}_{i \ge k \ge l \ge 0}\) that satisfy a recursion.

  2. 2.

    In Sect. 4, we obtain an explicit expression for the coefficients \(\{ v_{k,l} \}_{k \ge l \ge 0}\). This in turn shows that for each \(i \ge 0\), the Laplace transform for each state in \(U_{i}\), i.e., \(\pi _{(i,c - 1 + j)}(\alpha ), ~ j \ge 1\), can be explicitly expressed in terms of \(\pi _{(k,c - 1)}(\alpha ), ~ 0 \le k \le i\).

  3. 3.

    In Sect. 5, we derive a recursion with which we can determine the Laplace transforms for the states in the horizontal boundary. Specifically, we derive a modification of Ramaswami’s formula [25] to recursively compute the remaining Laplace transforms \(\pi _{(i,j)}(\alpha ), ~ i \ge 0, ~ 0 \le j \le c - 1\). The techniques we use to derive this recursion are the same as those recently used in [17] to study block-structured Markov processes. Only the Ramaswami-like recursion is needed to compute all Laplace transforms: Once the values for the Laplace transforms of the states in the horizontal boundary are known, all other transforms can be stated explicitly without using additional recursions.

3 A slight modification of the CAP method

The following theorem is used in multiple ways throughout our analysis. It appears in [17, Theorem 2.1] and can be derived by taking the Laplace transform of both sides of the equation at the top of page 124 of [21]. Equation (3.2) is the Laplace transform version of [10, Theorem 1].

Theorem 1

Suppose A and B are disjoint subsets of \(\mathbb {S}\) with \(x \in A\). Then for each \(y \in B\),

$$\begin{aligned} \pi _{x,y}(\alpha ) = \sum _{z \in A} \pi _{x,z}(\alpha ) (q(z) + \alpha ) \mathbb {E}_{ z }\!\Bigl [ \int _0^{\tau _A} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = y\} \, d t\Bigr ], \end{aligned}$$
(3.1)

or, equivalently,

$$\begin{aligned} \pi _{x,y}(\alpha ) = \sum _{z \in A} \pi _{x,z}(\alpha ) \sum _{z' \in A^c} q(z,z') \mathbb {E}_{ z' }\!\Bigl [ \int _0^{\tau _A} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = y\} \, d t\Bigr ]. \end{aligned}$$
(3.2)

3.1 Laplace transforms for states along the vertical boundary

In this subsection, we employ Theorem 1 to express each Laplace transform \(\pi _{(0,c - 1 + j)}(\alpha ), ~ j \ge 1\), in terms of \(\pi _{(0,c - 1)}(\alpha )\).

Using Theorem 1 with \(A = U_{0}^c\) we obtain, for \(j \ge 1\),

$$\begin{aligned}&\pi _{(0,c - 1 + j)}(\alpha )\nonumber \\&= \sum _{z \in U_{0}^c} \pi _z(\alpha ) \sum _{z' \in U_{0}} q(z,z') \mathbb {E}_{ z' }\!\Bigl [ \int _0^{\tau _{U_{0}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (0,c - 1 + j)\} \, d t\Bigr ] \nonumber \\&= \pi _{(0,c - 1)}(\alpha ) \lambda _2 \mathbb {E}_{ (0,c) }\!\Bigl [ \int _0^{\tau _{U_{0}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (0,c - 1 + j)\} \, d t\Bigr ]. \end{aligned}$$
(3.3)

From the transition rate diagram in Fig. 3, we find that the expectation in (3.3) can be interpreted as an expectation associated with an M / M / 1 queue having arrival rate \(\lambda _2\) and service rate \(c \mu _2\). Indeed, \(\tau _{U_{0}^c}\) is equal in distribution to the minimum of the busy period—initialized by one customer—of this M / M / 1 queue and an exponential random variable with rate \(\lambda _1\) that is independent of the queue. Alternatively, \(\tau _{U_{0}^c}\) can be thought of as being equal in distribution to the busy period duration of an M / M / 1 clearing model, with arrival rate \(\lambda _2\), service rate \(c\mu _2\), and clearings that occur in a Poisson manner with rate \(\lambda _1\). Applying Lemma 6 of Appendix C shows that

$$\begin{aligned} \lambda _2 \mathbb {E}_{ (0,c) }\!\Bigl [ \int _0^{\tau _{U_{0}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (0,c - 1 + j)\} \, d t\Bigr ] = r_2^j. \end{aligned}$$
(3.4)

Substituting (3.4) into (3.3) then yields

$$\begin{aligned} \pi _{(0,c - 1 + j)}(\alpha ) = \pi _{(0,c - 1)}(\alpha ) r_2^j, \quad j \ge 1. \end{aligned}$$
(3.5)

3.2 Laplace transforms for states within the interior

We next develop a recursion for the Laplace transforms for the states within the interior. First, we express the transforms in upper level \(U_{i}\) in terms of the transforms in upper level \(U_{i - 1}\) and in state \((i,c - 1)\). Second, we use this result to express the transforms in upper level \(U_{i}\) in terms of the transforms for the states \((0,c - 1),(1,c - 1),\ldots ,(i,c - 1)\) and some additional coefficients.

Employing again Theorem 1, now with \(A = U_{i}^c\), yields, for \(i,j \ge 1\),

$$\begin{aligned}&\pi _{(i,c - 1 + j)}(\alpha ) \nonumber \\&= \sum _{z \in U_{i}^c} \pi _z(\alpha ) \sum _{z' \in U_{i}} q(z,z') \mathbb {E}_{ z' }\!\Bigl [ \int _0^{\tau _{U_{i}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (i,c - 1 + j)\} \, d t\Bigr ]\nonumber \\&= \sum _{k = 1}^\infty \pi _{(i - 1,c - 1 + k)}(\alpha ) \lambda _1 \mathbb {E}_{ (i,c - 1 + k) }\!\Bigl [ \int _0^{\tau _{U_{i}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (i,c - 1 + j)\} \, d t\Bigr ]\nonumber \\&\quad + \pi _{(i,c - 1)}(\alpha ) \lambda _2 \mathbb {E}_{ (i,c) }\!\Bigl [ \int _0^{\tau _{U_{i}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (i,c - 1 + j)\} \, d t\Bigr ]. \end{aligned}$$
(3.6)

The expectation

$$\begin{aligned} \mathbb {E}_{ (i,c - 1 + k) }\!\Bigl [ \int _0^{\tau _{U_{i}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (i,c - 1 + j)\} \, d t \Bigr ], \quad j,k \ge 1, \end{aligned}$$
(3.7)

has the same interpretation as the expectation in (3.3), except now the M / M / 1 queue starts with k customers at time 0. Using Lemma 6 of Appendix C, we obtain

$$\begin{aligned} \lambda _1 \mathbb {E}_{ (i,c - 1 + k) }\!\Bigl [ \int _0^{\tau _{U_{i}^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (i,c - 1 + j)\} \, d t \Bigr ] = \varUpsilon (j,k), \quad j,k \ge 1, \end{aligned}$$
(3.8)

where, for \(j,k \ge 1\),

$$\begin{aligned} \varUpsilon (j,k) :={\left\{ \begin{array}{ll} \lambda _1 \varOmega _2 r_2^{j - k} ( 1 - (r_2 \phi _2)^k ), &{} 1 \le k \le j - 1, \\ \lambda _1 \varOmega _2 \phi _2^{k - j} (1 - (r_2 \phi _2)^j), &{} k \ge j. \end{array}\right. } \end{aligned}$$
(3.9)

Substituting (3.8) into (3.6) and simplifying yields a recursion. Specifically, for \(i \ge 0\) and \(j \ge 1\),

$$\begin{aligned} \pi _{(i + 1,c - 1 + j)}(\alpha ) = r_2^j \pi _{(i + 1,c - 1)}(\alpha ) + \sum _{k = 1}^\infty \varUpsilon (j,k) \pi _{(i,c - 1 + k)}(\alpha ), \end{aligned}$$
(3.10)

with initial conditions \(\pi _{(0,c - 1 + j)}(\alpha ) = \pi _{(0,c - 1)}(\alpha ) r_2^j, ~ j \ge 1\).

The recursion (3.10) can be solved, i.e., \(\pi _{(i,c - 1 + j)}(\alpha )\) can be expressed in terms of \(\pi _{(0,c - 1)}(\alpha ),\pi _{(1,c - 1)}(\alpha ),\ldots ,\pi _{(i,c - 1)}(\alpha )\).

Theorem 2

(Interior) For \(i \ge 0, ~ j \ge 1\),

$$\begin{aligned} \pi _{(i,c - 1 + j)}(\alpha ) = \sum _{k = 0}^i v_{i,k} (1 - V_2)^k \left( {\begin{array}{c}j - 1 + k\\ k\end{array}}\right) r_2^j, \end{aligned}$$
(3.11)

where the quantities \(\{ v_{i,j} \}_{i \ge j \ge 0}\) satisfy the following recursive scheme: for \(i \ge 0\),

$$\begin{aligned} v_{i + 1,0}&= \pi _{(i + 1,c - 1)}(\alpha ), \end{aligned}$$
(3.12a)
$$\begin{aligned} v_{i + 1,j}&= V_1 \Bigl ( v_{i,j - 1} + V_2 \sum _{k = j}^i v_{i,k} \Bigr ), \quad 1 \le j \le i + 1, \end{aligned}$$
(3.12b)

with initial condition \(v_{0,0} = \pi _{(0,c - 1)}(\alpha )\). Here \(V_1 = \frac{\lambda _1 \varOmega _2}{1 - r_2 \phi _2}\), and \(V_2 = r_2 \phi _2\).

Throughout we follow the convention that all empty sums, such as \(\sum _{k = 1}^{0} (\cdot )\), represent the number zero.

Proof

Clearly, when \(i = 0\), (3.11) agrees with (3.5). Proceeding by induction, assume (3.11) holds among upper levels \(U_{0},U_{1},\ldots ,U_{i}\) for some \(i \ge 0\). Substituting (3.11) into (3.10) yields

$$\begin{aligned} \pi _{(i + 1,c - 1 + j)}(\alpha ) = r_2^j \pi _{(i + 1,c - 1)}(\alpha ) + \sum _{k = 1}^\infty \varUpsilon (j,k) \sum _{l = 0}^i v_{i,l} (1 - V_2)^l \left( {\begin{array}{c}k - 1 + l\\ l\end{array}}\right) r_2^k. \end{aligned}$$
(3.13)

Next, interchange the order of the two summations and apply Lemma 1 of Appendix A to get

(3.14)

To further simplify the right-hand side of (3.14), increase the summation index of the first summation by one by setting \(k = l + 1\), multiply its summands by \(\frac{1 - r_2 \phi _2}{1 - r_2 \phi _2}\), and change the order of the double summation. This yields

(3.15)

which shows \(\pi _{(i+1,c - 1 + j)}(\alpha )\) satisfies (3.11), completing the induction step. \(\square \)
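
Theorem 2 lends itself to a direct numerical check against recursion (3.10): with arbitrary stand-in values for the boundary transforms \(\pi _{(k,c - 1)}(\alpha )\), the coefficients produced by the recursive scheme (3.12), fed into formula (3.11), should reproduce the right-hand side of (3.10) once the infinite sum over k is truncated. The sketch below is ours; the parameter values and the boundary-transform values are arbitrary test inputs, not outputs of the full algorithm.

```python
import math
from math import comb

def phi(lam, mu, alpha):
    # Busy-period LST of an M/M/1 queue, Eq. (2.6)
    return (lam + mu + alpha
            - math.sqrt((lam + mu + alpha) ** 2 - 4 * lam * mu)) / (2 * lam)

# Illustrative parameters (not those of the paper's numerical examples)
lam1, lam2, mu2, c, alpha = 0.3, 0.5, 1.0, 2, 0.4
rho2 = lam2 / (c * mu2)
phi2 = phi(lam2, c * mu2, lam1 + alpha)     # Eq. (2.9)
r2 = rho2 * phi2                             # Eq. (2.9)
Omega2 = r2 / (lam2 * (1 - r2 * phi2))       # Eq. (2.10), using rho2*phi2**2 = r2*phi2
V1 = lam1 * Omega2 / (1 - r2 * phi2)         # Theorem 2
V2 = r2 * phi2                               # Theorem 2

# Arbitrary stand-ins for the boundary transforms pi_{(k,c-1)}(alpha)
pi_b = [1.0, 0.7, 0.4, 0.9]

# Coefficients v_{i,j} via the recursive scheme (3.12)
v = [[0.0] * (i + 1) for i in range(len(pi_b))]
v[0][0] = pi_b[0]
for i in range(len(pi_b) - 1):
    v[i + 1][0] = pi_b[i + 1]
    for j in range(1, i + 2):
        v[i + 1][j] = V1 * (v[i][j - 1] + V2 * sum(v[i][j:]))

def pi_int(i, j):
    # Interior transform pi_{(i,c-1+j)}(alpha), Eq. (3.11)
    return sum(v[i][k] * (1 - V2) ** k * comb(j - 1 + k, k)
               for k in range(i + 1)) * r2 ** j

def upsilon(j, k):
    # Eq. (3.9)
    if 1 <= k <= j - 1:
        return lam1 * Omega2 * r2 ** (j - k) * (1 - (r2 * phi2) ** k)
    return lam1 * Omega2 * phi2 ** (k - j) * (1 - (r2 * phi2) ** j)
```

Truncating the k-sum in (3.10) at, say, \(K = 200\) reproduces (3.11) to within \(10^{-10}\) here (the summands decay like \((r_2 \phi _2)^k\)), and the case \(i = 0\) collapses to the geometric form (3.5).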

4 Deriving an explicit expression for \({v}_{{i,j}}\)

Theorem 2 suggests that the Laplace transforms for the states within the interior can be computed recursively:

  1. 1.

    Initialization step Determine \(\pi _{(0,c - 1)}(\alpha )\), which yields each Laplace transform \(\pi _{(0,c - 1 + j)}(\alpha ), ~ j \ge 1\).

  2. 2.

    Recursive step on i Given \(\pi _{(k, c - 1)}(\alpha )\) for \(0 \le k \le i\) and the coefficients \(\{ v_{k,l} \}_{i \ge k \ge l \ge 0}\)

    1. a.

      Compute \(\pi _{(i + 1,c - 1)}(\alpha )\).

    2. b.

      Compute \(\{ v_{i + 1,l} \}_{i + 1 \ge l \ge 0}\).

    3. c.

      Once steps 2a. and 2b. are completed, all Laplace transform values \(\pi _{(i + 1,c - 1 + j)}(\alpha ), ~ j \ge 1\), are known.

Our next result, Theorem 3, shows that for each \(i \ge 0\), the \(\{ v_{i,l} \}_{i \ge l \ge 0}\) terms can be expressed explicitly in terms of \(\pi _{(k,c - 1)}(\alpha ), ~ 0 \le k \le i\). If our goal is only to compute \(\pi _{(i,c - 1 + j)}(\alpha )\) for some large i, then Theorem 3 allows us to avoid computing all intermediate \(\{ v_{k,l} \}_{i - 1 \ge k \ge l \ge 0}\) terms, i.e., an additional \(\mathrm {O}(i^2)\) terms. Moreover, an explicit expression for the \(v_{i,j}\) coefficients may prove useful in future studies of the M / M / c 2-class priority queue.

Note that the expressions we derive for the \(v_{i,j}\) coefficients contain binomial coefficients, so one should take care to avoid roundoff errors when evaluating them.

Theorem 3

(Coefficients) The coefficients \(\{ v_{i,j} \}_{i \ge j \ge 0}\) from Theorem 2 are as follows: for \(i \ge j \ge 0\),

$$\begin{aligned} v_{i,j}&= V_1^j \pi _{(i - j,c - 1)}(\alpha ) \nonumber \\&\quad + \sum _{k = j + 1}^i V_1^k \pi _{(i - k,c - 1)}(\alpha ) \sum _{l = 1}^{k - j} \frac{j}{k - j} \left( {\begin{array}{c}k - j\\ l\end{array}}\right) \left( {\begin{array}{c}k - 1\\ l - 1\end{array}}\right) V_2^l. \end{aligned}$$
(4.1)

Proof

From (3.12) we find, for each \(i \ge 0\), that \(v_{i,0} = \pi _{(i,c - 1)}(\alpha )\) and \(v_{i,i} = V_1^i \pi _{(0,c - 1)}(\alpha )\); these expressions agree with (4.1).

Next, assume for some integer \(i \ge 0\) that \(v_{i,j}\) satisfies (4.1) for \(0 \le j \le i\). Our aim is to show \(v_{i + 1,j}\) also satisfies (4.1) for \(0 \le j \le i + 1\): We do this by substituting (4.1) into (3.12b) and simplifying. There are three cases to consider: (i) \(j = 1\); (ii) \(2 \le j \le i - 1\); and (iii) \(j = i\). We focus on case (ii), with cases (i) and (iii) following similarly.

We first examine the \(V_1 v_{i,j - 1}\) term in (3.12b) by substituting (4.1). Here,

$$\begin{aligned} V_1 v_{i,j - 1}&= V_1^j \pi _{(i + 1 - j,c - 1)}(\alpha ) \nonumber \\&\quad + \sum _{k = j}^i V_1^{k + 1} \pi _{(i - k,c - 1)}(\alpha ) \sum _{l = 1}^{k + 1 - j} \frac{j - 1}{k + 1 - j} \left( {\begin{array}{c}k + 1 - j\\ l\end{array}}\right) \left( {\begin{array}{c}k - 1\\ l - 1\end{array}}\right) V_2^l \nonumber \\&= V_1^j \pi _{(i + 1 - j,c - 1)}(\alpha ) \nonumber \\&\quad + \sum _{k = j + 1}^{i + 1} V_1^k \pi _{(i + 1 - k,c - 1)}(\alpha ) \sum _{l = 1}^{k - j} \frac{j - 1}{k - j} \left( {\begin{array}{c}k - j\\ l\end{array}}\right) \left( {\begin{array}{c}k - 2\\ l - 1\end{array}}\right) V_2^l. \end{aligned}$$
(4.2)

Next, write (4.2) in a form where the binomial coefficients match the ones in (4.1):

(4.3)

The remaining terms on the right-hand side of (3.12b) can be further simplified by substituting (4.1). Doing so reveals that

$$\begin{aligned} V_1 V_2 \sum _{k = j}^i v_{i,k}&= \sum _{k = j}^i V_1^{k + 1} \pi _{(i - k,c - 1)}(\alpha ) V_2 \nonumber \\&\quad + \sum _{k = j}^{i - 1} \sum _{l = k + 1}^i V_1^{l + 1} \pi _{(i - l,c - 1)}(\alpha ) \sum _{m = 1}^{l - k} \frac{k}{l - k} \left( {\begin{array}{c}l - k\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^{m + 1}. \end{aligned}$$
(4.4)

Swapping the order of the triple summation in (4.4) gives

(4.5)

The inner-most summation over k of (4.5) can be evaluated using Lemma 3 of Appendix A:

$$\begin{aligned} \sum _{k = j}^{l - m} \frac{k}{l - k} \left( {\begin{array}{c}l - k\\ m\end{array}}\right) = \frac{l - m + jm}{m(l + 1 - j)} \left( {\begin{array}{c}l + 1 - j\\ m + 1\end{array}}\right) . \end{aligned}$$
(4.6)

Next, substitute (4.6) back into (4.5) and focus on the inner-most double summation of (4.5). This gives

$$\begin{aligned}&\sum _{m = 1}^{l - j} \sum _{k = j}^{l - m} \frac{k}{l - k} \left( {\begin{array}{c}l - k\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^{m + 1} \nonumber \\&\quad = \sum _{m = 1}^{l - j} \frac{l - m + jm}{m(l + 1 - j)} \left( {\begin{array}{c}l + 1 - j\\ m + 1\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^{m + 1} \nonumber \\&\quad = \sum _{m = 1}^{l - j} \frac{l - m + jm}{(l + 1 - j)l} \left( {\begin{array}{c}l + 1 - j\\ m + 1\end{array}}\right) \left( {\begin{array}{c}l\\ m\end{array}}\right) V_2^{m + 1} \nonumber \\&\quad = \sum _{m = 2}^{l + 1 - j} \frac{l + 1 + jm - j - m}{(l + 1 - j)l} \left( {\begin{array}{c}l + 1 - j\\ m\end{array}}\right) \left( {\begin{array}{c}l\\ m - 1\end{array}}\right) V_2^m. \end{aligned}$$
(4.7)

Substituting (4.7) into (4.5) and changing the two outer summation indices shows

$$\begin{aligned} V_1 V_2 \sum _{k = j}^i v_{i,k}&= \sum _{l = j + 1}^{i + 1} V_1^l \pi _{(i + 1 - l,c - 1)}(\alpha ) V_2 \nonumber \\&\quad + \sum _{l = j + 2}^{i + 1} V_1^l \pi _{(i + 1 - l,c - 1)}(\alpha ) \sum _{m = 2}^{l - j} \frac{l + jm - j - m}{(l - j)(l - 1)} \left( {\begin{array}{c}l - j\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^m. \end{aligned}$$
(4.8)

Furthermore, since

$$\begin{aligned} V_2 = \frac{l + j \cdot 1 - j - 1}{(l - j)(l - 1)} \left( {\begin{array}{c}l - j\\ 1\end{array}}\right) \left( {\begin{array}{c}l - 1\\ 1 - 1\end{array}}\right) V_2^{1}, \end{aligned}$$
(4.9)

we can merge the single summation with the double summation in (4.8). In other words,

$$\begin{aligned} V_1 V_2 \sum _{k = j}^i v_{i,k} = \sum _{l = j + 1}^{i + 1} V_1^l \pi _{(i + 1 - l,c - 1)}(\alpha ) \sum _{m = 1}^{l - j} \frac{l + jm - j - m}{(l - j)(l - 1)} \left( {\begin{array}{c}l - j\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^m. \end{aligned}$$
(4.10)

Finally, summing (4.3) and (4.10) produces (4.1), as

$$\begin{aligned}&V_1 v_{i,j - 1} + V_1 V_2 \sum _{k = j}^i v_{i,k} \nonumber \\&= V_1^j \pi _{(i + 1 - j,c - 1)}(\alpha ) \nonumber \\&\quad + \sum _{k = j + 1}^{i + 1} V_1^k \pi _{(i + 1 - k,c - 1)}(\alpha ) \sum _{l = 1}^{k - j} \frac{(j - 1)(k - l)}{(k - j)(k - 1)} \left( {\begin{array}{c}k - j\\ l\end{array}}\right) \left( {\begin{array}{c}k - 1\\ l - 1\end{array}}\right) V_2^l \nonumber \\&\quad + \sum _{k = j + 1}^{i + 1} V_1^k \pi _{(i + 1 - k,c - 1)}(\alpha ) \sum _{l = 1}^{k - j} \frac{k + jl - j - l}{(k - j)(k - 1)} \left( {\begin{array}{c}k - j\\ l\end{array}}\right) \left( {\begin{array}{c}k - 1\\ l - 1\end{array}}\right) V_2^l. \end{aligned}$$
(4.11)

Summing the two coefficients in front of the binomial coefficient terms gives \(\frac{(j - 1)(k - l) + (k + jl - j - l)}{(k - j)(k - 1)} = \frac{j(k - 1)}{(k - j)(k - 1)} = \frac{j}{k - j}\), which proves case (ii). Cases (i) and (iii) follow similarly. \(\square \)

We now have an explicit expression for the coefficients. Substitute the expressions for \(\{ v_{i,j} \}_{i \ge j \ge 0}\) into (3.11) to obtain, for \(j \ge 1\),

$$\begin{aligned}&\pi _{(i,c - 1 + j)}(\alpha ) = \sum _{k = 0}^i \pi _{(i - k,c - 1)}(\alpha ) \bigl ( V_1 (1 - V_2) \bigr )^k \left( {\begin{array}{c}j - 1 + k\\ k\end{array}}\right) r_2^j \nonumber \\&\quad + \sum _{k = 0}^{i - 1} \sum _{l = k + 1}^i \pi _{(i - l,c - 1)}(\alpha ) V_1^l \nonumber \\&\quad \cdot \sum _{m = 1}^{l - k} \frac{k}{l - k} \left( {\begin{array}{c}l - k\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^m (1 - V_2)^k \left( {\begin{array}{c}j - 1 + k\\ k\end{array}}\right) r_2^j. \end{aligned}$$
(4.12)

Swapping the order of the double summation and grouping coefficients in front of each Laplace transform reveals the dependence of \(\pi _{(i,c - 1 + j)}(\alpha )\) on the Laplace transforms \(\pi _{(0,c - 1)}(\alpha ),\pi _{(1,c - 1)}(\alpha ),\ldots ,\pi _{(i,c - 1)}(\alpha )\):

$$\begin{aligned}&\pi _{(i,c - 1 + j)}(\alpha ) = r_2^j \pi _{(i,c - 1)}(\alpha ) + \sum _{l = 1}^i V_1^l \pi _{(i - l,c - 1)}(\alpha ) \Bigl [ ( 1 - V_2)^l \left( {\begin{array}{c}j - 1 + l\\ l\end{array}}\right) r_2^j \nonumber \\&\quad + \sum _{k = 0}^{l - 1} (1 - V_2)^k \left( {\begin{array}{c}j - 1 + k\\ k\end{array}}\right) r_2^j \sum _{m = 1}^{l - k} \frac{k}{l - k} \left( {\begin{array}{c}l - k\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^m \Bigr ]. \end{aligned}$$
(4.13)

From this expression, we see that, for each fixed \(i \ge 0\), \(\pi _{(i, c - 1 + j)}(\alpha )\) behaves as \(j \rightarrow \infty \) in a manner analogous to that found in Theorem 3.1 of [23], which addresses, for \(c = 1\), the asymptotic behavior of the stationary distribution as the number of high-priority customers approaches infinity while the number of low-priority customers remains fixed.

The explicit expression (4.13) can be used to obtain the Laplace transforms of the transition functions of the number of class-1 customers in the system. That is,

$$\begin{aligned}&\int _0^\infty \mathrm {e}^{-\alpha t} \mathbb {P}( X_1(t) = i \mid X(0) = (0,0) ) \, d t \nonumber \\&= \int _0^\infty \mathrm {e}^{-\alpha t} \sum _{j = 0}^\infty \mathbb {P}( X(t) = (i,j) \mid X(0) = (0,0) ) \, d t \nonumber \\&= \sum _{j = 0}^\infty \pi _{(i,j)}(\alpha ) = \sum _{j = 0}^{c - 1} \pi _{(i,j)}(\alpha ) + \sum _{j = 1}^\infty \pi _{(i,c - 1 + j)}(\alpha ), \end{aligned}$$
(4.14)

where we can simplify the final infinite sum as

$$\begin{aligned}&\sum _{j = 1}^\infty \pi _{(i,c - 1 + j)}(\alpha ) = \frac{r_2}{1 - r_2} \pi _{(i, c - 1)}(\alpha ) + \sum _{l = 1}^i V_1^l \pi _{(i - l,c - 1)}(\alpha ) \Bigl [ \frac{(1 - V_2)^l r_2}{(1 - r_2)^{l + 1}} \nonumber \\&\quad + \sum _{k = 0}^{l - 1} \frac{(1 - V_2)^k r_2}{(1 - r_2)^{k + 1}} \sum _{m = 1}^{l - k} \frac{k}{l - k} \left( {\begin{array}{c}l - k\\ m\end{array}}\right) \left( {\begin{array}{c}l - 1\\ m - 1\end{array}}\right) V_2^m \Bigr ], \end{aligned}$$
(4.15)

via the identity

$$\begin{aligned} \sum _{j = 1}^\infty \left( {\begin{array}{c}j - 1 + k\\ k\end{array}}\right) r_2^j = \frac{r_2}{(1 - r_2)^{k + 1}}. \end{aligned}$$
(4.16)
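Identity (4.16) is the negative binomial series. Assuming \(0< r_2 < 1\) (the test value below is arbitrary), a quick partial-sum check reads:

```python
from math import comb

def partial_sum(k, r2, terms=2000):
    # partial sum of sum_{j>=1} C(j-1+k, k) * r2**j, cf. (4.16)
    return sum(comb(j - 1 + k, k) * r2 ** j for j in range(1, terms + 1))

def closed_form(k, r2):
    return r2 / (1 - r2) ** (k + 1)

for k in range(6):
    assert abs(partial_sum(k, 0.3) - closed_form(k, 0.3)) < 1e-10
```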

5 Laplace transforms for states in the horizontal boundary

In the previous section, we showed how to express each Laplace transform for the states on the vertical boundary and within the interior explicitly in terms of transforms for the states in the horizontal boundary. So, it remains to determine the transforms for the states in the horizontal boundary. In this section, we show that the latter Laplace transforms satisfy a variant of Ramaswami’s formula, which will allow us to numerically compute these transforms recursively.

The approach we use to derive the above-mentioned variant of Ramaswami’s formula makes, like the CAP method, repeated use of Theorem 1. It is highly analogous to the approach used in [17] to study block-structured Markov processes, yet slightly modified, since we are interested in recursively computing Laplace transforms associated only with states within the horizontal boundary. This idea of restricting ourselves to a subset of the state space seems similar in spirit to the censoring approach featured in the work of Li and Zhao [22], but it is not currently obvious to the authors whether their approach is applicable to our setting.

We first introduce some relevant notation. Define the \(1 \times c\) row vectors \(\varvec{\pi }_i(\alpha )\) as

$$\begin{aligned} \varvec{\pi }_i(\alpha ) :=\begin{bmatrix} \pi _{(i,0)}(\alpha )&\pi _{(i,1)}(\alpha )&\cdots&\pi _{(i,c - 1)}(\alpha ) \end{bmatrix}, \quad i \ge 0. \end{aligned}$$
(5.1)

To properly state the Ramaswami-like formula satisfied by these row vectors, we need to define additional matrices. First, we define the \(c \times c\) transition rate submatrices corresponding to lower levels \(L_{i}, ~ i \ge c\), as \(\mathbf {A}_1 :=\lambda _1 \mathbf {I}\), \(\mathbf {A}_{-1} :={\text {diag}}(c\mu _1,(c - 1)\mu _1,\ldots ,\mu _1)\) and

$$\begin{aligned} \mathbf {A}_0 :=\begin{bmatrix} -(\lambda + c\mu _1) & \lambda _2 & & \\ \mu _2 & -(\lambda + (c - 1)\mu _1 + \mu _2) & \lambda _2 & \\ & \ddots & \ddots & \ddots \\ & & (c - 1)\mu _2 & -(\lambda + \mu _1 + (c - 1)\mu _2) \end{bmatrix}, \end{aligned}$$
(5.2)

where \(\mathbf {I}\) is the \(c \times c\) identity matrix and \({\text {diag}}(\mathbf {x})\) is a square matrix with the vector \(\mathbf {x}\) along its main diagonal. We further define the \(c \times c\) level-dependent transition rate submatrices associated with \(L_{i}, ~ 1 \le i \le c - 1\), as \(\mathbf {A}_{-1}^{(i)} :={\text {diag}}(\mathbf {x}^{(i)})\) with \((\mathbf {x}^{(i)})_j :=\min (i,c - j) \mu _1, ~ 0 \le j \le c - 1\), \(\mathbf {A}_0^{(i)} :=\mathbf {A}_0 + \mathbf {A}_{-1} - \mathbf {A}_{-1}^{(i)}\), and for \(L_{0}\) we have \(\mathbf {A}^{(0)}_0 :=\mathbf {A}_0 + \mathbf {A}_{-1}\).

Next, we define the collection of \(c \times c\) matrices \(\{ \mathbf {W}_m(\alpha ) \}_{m \ge 0}\). Each element of \(\mathbf {W}_{m}(\alpha )\) is equal to 0 except for element \(\bigl ( \mathbf {W}_m(\alpha ) \bigr )_{c - 1,c - 1}\), which is defined as \(\bigl ( \mathbf {W}_m(\alpha ) \bigr )_{c - 1,c - 1} :=\lambda _2 w^{(\lambda _2,c\mu _2,\lambda _1)}_{m}(\alpha )\).

We also need the collection of \(c \times c\) matrices \(\{ \mathbf {G}_{i,j}(\alpha ) \}_{i > j \ge 0}\), where the \((k,l)\)-th element of \(\mathbf {G}_{i,j}(\alpha )\) is defined as

$$\begin{aligned} \bigl ( \mathbf {G}_{i,j}(\alpha ) \bigr )_{k,l} :=\mathbb {E}_{ (i,k) }[ \mathrm {e}^{-\alpha \tau _{L_{j}}} {\mathbbm {1}}\{ X(\tau _{L_{j}}) = (j,l) \} ], \quad 0 \le k,l \le c - 1. \end{aligned}$$
(5.3)

Finally, we will need the collection of \(c \times c\) matrices \(\{ \mathbf {N}_i(\alpha ) \}_{i \ge 1}\), whose elements are defined as follows:

$$\begin{aligned} \bigl ( \mathbf {N}_i(\alpha ) \bigr )_{k,l} :=\mathbb {E}_{ (i,k) }\!\Bigl [ \int _0^{\tau _{L_{i - 1}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (i,l)\} \, d t \Bigr ], \quad 0 \le k,l \le c - 1. \end{aligned}$$
(5.4)

Note that \(\mathbf {N}_i(\alpha ) = \mathbf {N}_c(\alpha )\) for \(i \ge c\), and we therefore denote \(\mathbf {N}(\alpha ) :=\mathbf {N}_c(\alpha )\).

5.1 A Ramaswami-like recursion

The following theorem shows that the vectors of transforms \(\{\varvec{\pi }_i(\alpha )\}_{i \ge 0}\) satisfy a recursion analogous to Ramaswami’s formula [25].

Theorem 4

(Horizontal boundary) For each integer \(i \ge 0\), we have

$$\begin{aligned} \varvec{\pi }_{i + 1}(\alpha )&= \varvec{\pi }_i(\alpha ) \mathbf {A}_1 \mathbf {N}_{i + 1}(\alpha ) \nonumber \\&\quad + \sum _{k = 0}^i \varvec{\pi }_k(\alpha ) \sum _{l = i + 1}^\infty \mathbf {W}_{l - k}(\alpha ) \mathbf {G}_{l,i + 1}(\alpha ) \mathbf {N}_{i + 1}(\alpha ), \end{aligned}$$
(5.5)

where we use the convention \(\mathbf {G}_{i + 1,i + 1}(\alpha ) = \mathbf {I}\).

Proof

This result can be proven by making use of the approach found in [17]. Using Theorem 1 with \(A = L_{\le i}\), we see that for \(i \ge 0\) and \(0 \le j \le c - 1\),

$$\begin{aligned} \pi _{(i + 1,j)}(\alpha ) = \sum _{z \in L_{\le i}} \pi _z(\alpha ) \sum _{z' \in L_{\le i}^c} q(z,z') \mathbb {E}_{ z' }\!\Bigl [ \int _0^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ]. \end{aligned}$$
(5.6)

Due to the structure of the transition rates, many terms in the summation of (5.6) are zero. In particular, (5.6) can be stated more explicitly as

$$\begin{aligned} \pi _{(i + 1,j)}(\alpha )&= \sum _{k = 0}^{i} \pi _{(k,c - 1)}(\alpha ) \lambda _2 \mathbb {E}_{ (k,c) }\!\Bigl [ \int _0^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ] \nonumber \\&\quad + \sum _{m = 0}^{c - 1} \pi _{(i,m)}(\alpha ) \lambda _1 \mathbb {E}_{ (i + 1,m) }\!\Bigl [ \int _0^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ]. \end{aligned}$$
(5.7)

We now simplify each expectation appearing within the first sum on the right-hand side of (5.7). Conditioning on the level at which the process first returns to phase \(c - 1\) yields

$$\begin{aligned}&\mathbb {E}_{ (k,c) }\!\Bigl [ \int _0^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ] \nonumber \\&= \mathbb {E}_{ (k,c) }\!\Bigl [ \int _{\tau _{P_{c - 1}}}^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ] \nonumber \\&= \sum _{l = i + 1}^\infty \mathbb {E}_{ (k,c) }\!\Bigl [ {\mathbbm {1}}\{X(\tau _{P_{c - 1}}) = (l,c - 1)\} \mathrm {e}^{-\alpha \tau _{P_{c - 1}}} \nonumber \\&\qquad \qquad \qquad \cdot \int _{\tau _{P_{c - 1}}}^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha (t - \tau _{P_{c - 1}})} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ]. \end{aligned}$$
(5.8)

Applying the strong Markov property to each expectation appearing in (5.8) shows that

$$\begin{aligned}&\mathbb {E}_{ (k,c) }\!\Bigl [ \int _0^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ] \nonumber \\&= \sum _{l = i + 1}^\infty \mathbb {E}_{ (k,c) }\bigl [ {\mathbbm {1}}\{X(\tau _{P_{c - 1}}) = (l,c - 1)\} \mathrm {e}^{-\alpha \tau _{P_{c - 1}}} \bigr ] \nonumber \\&\qquad \qquad \cdot \mathbb {E}_{ (l,c - 1) }\!\Bigl [ \int _0^{\tau _{L_{\le i}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i + 1,j)\} \, d t \Bigr ] \nonumber \\&= \sum _{l = i + 1}^\infty w^{(\lambda _2,c\mu _2,\lambda _1)}_{l - k}(\alpha ) \bigl ( \mathbf {G}_{l,i + 1}(\alpha ) \mathbf {N}_{i + 1}(\alpha ) \bigr )_{c - 1,j}, \end{aligned}$$
(5.9)

where the last equality follows from the definitions of \(\mathbf {G}_{l,i + 1}(\alpha )\) and \(\mathbf {N}_{i + 1}(\alpha )\), and Lemma 8 of Appendix C. The expectations appearing within the second sum of (5.7) can easily be simplified by recognizing that they are elements of \(\mathbf {N}_{i + 1}(\alpha )\). Hence, we ultimately obtain

$$\begin{aligned} \pi _{(i + 1,j)}(\alpha )&= \sum _{k = 0}^{i} \pi _{(k,c - 1)}(\alpha ) \sum _{l = i + 1 }^\infty \lambda _2 w^{(\lambda _2,c\mu _2,\lambda _1)}_{l - k}(\alpha ) \bigl ( \mathbf {G}_{l,i + 1}(\alpha ) \mathbf {N}_{i + 1}(\alpha ) \bigr )_{c - 1,j} \nonumber \\&\quad + \sum _{m = 0}^{c - 1} \pi _{(i,m)}(\alpha ) \bigl ( \mathbf {A}_1 \bigr )_{m,m} \bigl ( \mathbf {N}_{i + 1}(\alpha ) \bigr )_{m,j}, \end{aligned}$$
(5.10)

which, in matrix form, is (5.5). \(\square \)

It remains to derive computable representations of \(\{ \mathbf {G}_{i,j}(\alpha ) \}_{i > j \ge 0}\), as well as the matrices \(\{ \mathbf {N}_{i}(\alpha ) \}_{1 \le i \le c}\).

5.2 Computing the \(\mathbf {G}_{i,j}(\alpha )\) matrices

The next proposition shows that each \(\mathbf {G}_{i,j}(\alpha )\) matrix can be expressed entirely in terms of the subset \(\{\mathbf {G}_{i + 1,i}(\alpha )\}_{0 \le i \le c-1}\).

Proposition 1

For each pair of integers ij satisfying \(i > j \ge 0\), we have

$$\begin{aligned} \mathbf {G}_{i,j}(\alpha ) = \mathbf {G}_{i,i - 1}(\alpha ) \mathbf {G}_{i - 1,i - 2}(\alpha ) \cdots \mathbf {G}_{j + 1,j}(\alpha ). \end{aligned}$$
(5.11)

Furthermore, for each integer \(k \ge 0\) we also have

$$\begin{aligned} \mathbf {G}_{c + k,c - 1 + k}(\alpha ) = \mathbf {G}_{c,c - 1}(\alpha ). \end{aligned}$$
(5.12)

Proof

Equation (5.11) can be derived by applying the strong Markov property in an iterative manner, while (5.12) follows from the homogeneous structure of X along all lower levels \(L_{i}, ~ i \ge c\). \(\square \)
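Proposition 1 reduces the computation of every \(\mathbf {G}_{i,j}(\alpha )\) to a product of one-step matrices. A minimal numerical sketch of (5.11)–(5.12) follows; the one-step matrices used in the illustration are hypothetical placeholder values (in practice they come from Propositions 3 and 4 below).

```python
import numpy as np

def G_matrix(i, j, one_step, c):
    """Assemble G_{i,j}(alpha) via (5.11)-(5.12): one_step[k] holds
    G_{k+1,k}(alpha) for 0 <= k <= c-1, and by homogeneity (5.12)
    G_{k+1,k}(alpha) = one_step[c-1] = G(alpha) for every k >= c-1."""
    assert i > j >= 0
    P = np.eye(one_step[0].shape[0])
    for k in range(i - 1, j - 1, -1):  # factors G_{i,i-1}, ..., G_{j+1,j}
        P = P @ one_step[min(k, c - 1)]
    return P

# hypothetical one-step matrices for c = 2 (illustration only)
G10 = np.array([[0.2, 0.1], [0.3, 0.4]])   # G_{1,0}(alpha)
Gcc = np.array([[0.5, 0.0], [0.2, 0.3]])   # G(alpha) = G_{2,1}(alpha)
G30 = G_matrix(3, 0, [G10, Gcc], 2)        # = G(alpha)^2 G_{1,0}(alpha)
```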

In light of Proposition 1, our goal now is to determine the subset of matrices \(\{ \mathbf {G}_{i + 1,i}(\alpha ) \}_{0 \le i \le c - 1}\). We first focus on showing that \(\mathbf {G}(\alpha ) :=\mathbf {G}_{c,c - 1}(\alpha )\) is the solution to a fixed-point equation.

Proposition 2

The matrix \(\mathbf {G}(\alpha )\) satisfies

$$\begin{aligned} \mathbf {G}(\alpha ) = \bigl ( \alpha \mathbf {I} - \mathbf {A}_0 - \mathbf {W}_0(\alpha ) \bigr )^{-1} \Bigl ( \mathbf {A}_{-1} + \mathbf {A}_1 \mathbf {G}(\alpha )^2 + \sum _{l = 1}^\infty \mathbf {W}_l(\alpha ) \mathbf {G}(\alpha )^{l + 1} \Bigr ). \end{aligned}$$
(5.13)

Proof

Assuming the matrix \(\alpha \mathbf {I} - \mathbf {A}_0 - \mathbf {W}_0(\alpha )\) is invertible, Eq. (5.13) follows from a one-step analysis argument. To show that \(\alpha \mathbf {I} - \mathbf {A}_0 - \mathbf {W}_0(\alpha )\) is indeed invertible, define the \(c \times c\) matrix \(\mathbf {H}(\alpha )\) with elements

$$\begin{aligned} \bigl ( \mathbf {H}(\alpha ) \bigr )_{i,j} :=\mathbb {E}_{ (c,i) }\!\Bigl [\int _0^{\tau _{(L_{c} \cup U_{c})^c}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (c,j)\} \, d t \Bigr ], \quad 0 \le i,j \le c - 1, \end{aligned}$$
(5.14)

and use again one-step analysis to establish

$$\begin{aligned} \bigl ( \alpha \mathbf {I} - \mathbf {A}_0 - \mathbf {W}_0(\alpha ) \bigr ) \mathbf {H}(\alpha ) = \mathbf {I}, \end{aligned}$$
(5.15)

which proves the claim. \(\square \)

The next proposition shows that through successive substitutions one can obtain \(\mathbf {G}(\alpha )\) from (5.13).

Proposition 3

Suppose the sequence of matrices \(\{ \mathbf {Z}(n,\alpha ) \}_{n \ge 0}\) satisfies the recursion

$$\begin{aligned} \mathbf {Z}(n + 1,\alpha )&= \bigl ( \alpha \mathbf {I} - \mathbf {A}_0 - \mathbf {W}_0(\alpha ) \bigr )^{-1} \nonumber \\&\quad \cdot \Bigl ( \mathbf {A}_{-1} + \mathbf {A}_1 \mathbf {Z}(n,\alpha )^2 + \sum _{l = 1}^\infty \mathbf {W}_l(\alpha ) \mathbf {Z}(n,\alpha )^{l + 1} \Bigr ) \end{aligned}$$
(5.16)

with initial condition \(\mathbf {Z}(0,\alpha ) = \mathbf {0}\). Then,

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbf {Z}(n,\alpha ) = \mathbf {G}(\alpha ). \end{aligned}$$
(5.17)

Proof

This proof makes use of Proposition 2, and is completely analogous to the proofs of [18, Theorems 3.1 and 4.1] and [17, Theorem 3.4]. It is therefore omitted. \(\square \)
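Proposition 3 translates directly into code. The sketch below assumes the caller supplies \(\mathbf {A}_0\), \(\mathbf {A}_1\), \(\mathbf {A}_{-1}\) and a truncated list \([\mathbf {W}_0(\alpha ), \ldots , \mathbf {W}_{\kappa }(\alpha )]\) (the truncation is discussed in Sect. 5.5). The closing illustration is a hypothetical single-phase instance with all \(\mathbf {W}_l(\alpha ) = \mathbf {0}\), for which the limit solves \((\alpha + \lambda _1 + \mu _1) G = \mu _1 + \lambda _1 G^2\).

```python
import numpy as np

def solve_G(alpha, A0, A1, Am1, W, tol=1e-12, max_iter=100000):
    """Successive substitutions (5.16): Z(0) = 0 and
    Z(n+1) = (alpha*I - A0 - W[0])^{-1} (Am1 + A1 Z^2 + sum_{l>=1} W[l] Z^{l+1}),
    with the infinite sum truncated at len(W) - 1 terms."""
    c = A0.shape[0]
    M = np.linalg.inv(alpha * np.eye(c) - A0 - W[0])
    Z = np.zeros((c, c))
    for _ in range(max_iter):
        S = Am1 + A1 @ Z @ Z
        Zpow = Z @ Z                    # Z^{l+1}, starting at l = 1
        for Wl in W[1:]:
            S = S + Wl @ Zpow
            Zpow = Zpow @ Z
        Z_new = M @ S
        if np.max(np.abs(Z_new - Z)) < tol:
            return Z_new
        Z = Z_new
    return Z

# hypothetical single-phase instance (c = 1, lambda_2 = 0, so all W_l = 0)
lam1, mu1, alpha = 1.0, 2.0, 0.5
G = solve_G(alpha, np.array([[-(lam1 + mu1)]]), np.array([[lam1]]),
            np.array([[mu1]]), [np.zeros((1, 1))])
```

As expected for this kind of recursion, the iterates increase monotonically to the minimal nonnegative solution of the fixed-point equation.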

Now that we have a method for approximating \(\mathbf {G}(\alpha )\), it remains to find a method for computing \(\mathbf {G}_{i + 1,i}(\alpha ), ~ 0 \le i \le c - 2\). The next proposition shows that these matrices can be computed recursively.

Proposition 4

For each integer i satisfying \(0 \le i \le c - 2\), we have

$$\begin{aligned} \mathbf {G}_{i + 1,i}(\alpha )&= \Bigl ( \alpha \mathbf {I} - \mathbf {A}^{(i + 1)}_0 - \mathbf {A}_1 \mathbf {G}_{i + 2,i + 1}(\alpha ) \nonumber \\&\quad - \sum _{l = i + 1}^\infty \mathbf {W}_{l - (i + 1)}(\alpha ) \mathbf {G}_{l,i + 1}(\alpha ) \Bigr )^{-1} \mathbf {A}^{(i + 1)}_{-1}. \end{aligned}$$
(5.18)

Proof

Similar to Proposition 2, (5.18) follows by a one-step analysis. The inverse in (5.18) exists because the matrix \(\mathbf {A}^{(i + 1)}_{-1}\) is a diagonal matrix whose diagonal elements are all positive. \(\square \)

We now have an iterative procedure for computing all \(\{ \mathbf {G}_{i + 1,i}(\alpha ) \}_{0 \le i \le c - 1}\) matrices: first compute \(\mathbf {G}(\alpha )\) from Proposition 3, then use Proposition 4 to compute \(\mathbf {G}_{c - 1,c - 2}(\alpha )\), then \(\mathbf {G}_{c - 2,c - 3}(\alpha )\), and so on, stopping at \(\mathbf {G}_{1,0}(\alpha )\).

5.3 Computing the \(\mathbf {N}_{i}(\alpha )\) matrices

The matrices \(\{ \mathbf {N}_{i}(\alpha ) \}_{1 \le i \le c}\) can be expressed in terms of \(\{ \mathbf {G}_{i,j}(\alpha ) \}_{i \ge j \ge 0}\).

Proposition 5

For each integer i satisfying \(1 \le i \le c\), we have

$$\begin{aligned} \mathbf {N}_i(\alpha ) = \Bigl ( \alpha \mathbf {I} - \mathbf {A}^{(i)}_0 - \mathbf {A}_1 \mathbf {G}_{i + 1,i}(\alpha ) - \sum _{l = i}^\infty \mathbf {W}_{l - i}(\alpha ) \mathbf {G}_{l,i}(\alpha ) \Bigr )^{-1}, \end{aligned}$$
(5.19)

where we use the convention \(\mathbf {A}^{(c)}_0 = \mathbf {A}_0\).

Proof

The proof uses one-step analysis and is analogous to the proofs of Propositions 2 and 4. It is therefore omitted. \(\square \)

5.4 Computing \(\varvec{\pi }_{0}(\alpha )\)

It remains to devise a method for computing the vector \(\varvec{\pi }_0(\alpha )\) so that the Ramaswami-like recursion from Theorem 4 can be properly initialized. The following is an adaptation of [17, Section 3.3]. We define the \(c \times c\) matrix \(\mathbf {N}_0(\alpha )\) whose elements are given by

$$\begin{aligned} \bigl ( \mathbf {N}_0(\alpha ) \bigr )_{i,j} :=\mathbb {E}_{ (0,i) }\!\Bigl [\int _0^{\tau _{(0,0)}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (0,j) \} \, d t\Bigr ], \quad 0 \le i,j \le c - 1. \end{aligned}$$
(5.20)

In the derivation to follow, we require the notation \((\mathbf {A})^{[i,j]}\) which represents the matrix \(\mathbf {A}\) with row i and column j removed (meaning it is a \((c - 1) \times (c - 1)\) matrix), while keeping the indexing of entries exactly as in \(\mathbf {A}\). Similarly, \((\mathbf {A})^{[i,\cdot ]}\) has row i removed from \(\mathbf {A}\) (meaning it is a \((c - 1) \times c\) matrix) and \((\mathbf {A})^{[\cdot ,j]}\) has column j removed from \(\mathbf {A}\) (meaning it is a \(c \times (c - 1)\) matrix).

Proposition 6

We have

$$\begin{aligned} \bigl ( \mathbf {N}_0(\alpha ) \bigr )^{[0,0]}&= \Bigl ( \alpha \bigl ( \mathbf {I} \bigr )^{[0,0]} - \bigl ( \mathbf {A}_0^{(0)} \bigr )^{[0,0]} - \bigl ( \mathbf {A}_1 \bigr )^{[0,\cdot ]} \bigl ( \mathbf {G}_{1,0}(\alpha ) \bigr )^{[\cdot ,0]} \nonumber \\&\quad - \sum _{l = 0}^\infty \bigl ( \mathbf {W}_l(\alpha ) \bigr )^{[0,\cdot ]} \bigl ( \mathbf {G}_{l,0}(\alpha ) \bigr )^{[\cdot ,0]} \Bigr )^{-1}. \end{aligned}$$
(5.21)

Proof

Similar to the proofs of Propositions 2, 4 and 5, we use a one-step analysis and the strong Markov property to prove the result. \(\square \)

We employ (3.2) and Proposition 6 to determine the elements of the row vector \(\varvec{\pi }_0(\alpha )\). Set \(A = \{ (0,0) \}\) in Theorem 1: then, for \(1 \le j \le c - 1\) and \(c \ge 2\),

$$\begin{aligned} \pi _{(0,j)}(\alpha )&= \sum _{z \in A} \pi _z(\alpha ) \sum _{z' \in A^c} q(z,z') \mathbb {E}_{ z' }\!\Bigl [ \int _0^{\tau _A} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (0,j)\} \, d t\Bigr ] \nonumber \\&= \pi _{(0,0)}(\alpha ) \Bigl ( \lambda _1 \mathbb {E}_{ (1,0) }\!\Bigl [ \int _0^{\tau _{(0,0)}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (0,j)\} \, d t\Bigr ] \nonumber \\&\qquad \qquad + \lambda _2 \mathbb {E}_{ (0,1) }\!\Bigl [ \int _0^{\tau _{(0,0)}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{X(t) = (0,j)\} \, d t\Bigr ] \Bigr ) \nonumber \\&= \pi _{(0,0)}(\alpha ) \Bigl ( \lambda _1 \sum _{l = 1}^{c - 1} \bigl ( \mathbf {G}_{1,0}(\alpha ) \bigr )_{0,l} \bigl ( \mathbf {N}_0(\alpha ) \bigr )_{l,j} + \lambda _2 \bigl ( \mathbf {N}_0(\alpha ) \bigr )_{1,j} \Bigr ). \end{aligned}$$
(5.22)

Hence, the transforms \(\pi _{(0,j)}(\alpha ), ~ 1 \le j \le c - 1\), can be expressed in terms of \(\pi _{(0,0)}(\alpha )\). In Sect. 5.5, we describe a numerical procedure to determine \(\pi _{(0,0)}(\alpha )\).

Remark 1

(An alternative method for determining \(\varvec{\pi }_0(\alpha )\)) The Kolmogorov forward equations can also be used to derive \(\varvec{\pi }_0(\alpha )\). The transition functions are known to satisfy these equations, since \(\sup _{x \in \mathbb {S}} q(x) < \infty \). Taking the Laplace transform of the Kolmogorov forward equations for the states in \(L_{0}\) yields, after using (3.5),

$$\begin{aligned} \varvec{\pi }_{0}(\alpha ) \bigl ( \alpha \mathbf {I} - \mathbf {A}^{(0)}_0 \bigr ) - \varvec{\pi }_0(\alpha ) \mathbf {W}_0(\alpha ) - \varvec{\pi }_{1}(\alpha ) \mathbf {A}^{(1)}_{-1} = \mathbf {e}_{0}, \end{aligned}$$
(5.23)

where \(\mathbf {e}_{i}\) is a row vector with all elements equal to zero except for the i-th element which is unity. Finally, using Theorem 4 to express \(\varvec{\pi }_{1}(\alpha )\) in terms of \(\varvec{\pi }_{0}(\alpha )\) yields

$$\begin{aligned}&- \varvec{\pi }_{0}(\alpha ) \Bigl ( - \alpha \mathbf {I} + \mathbf {A}^{(0)}_0 + \mathbf {W}_0(\alpha ) \nonumber \\&\quad + \bigl ( \mathbf {A}_1 + \sum _{l = 1}^\infty \mathbf {W}_l(\alpha ) \mathbf {G}_{l,1}(\alpha ) \bigr ) \mathbf {N}_1(\alpha ) \mathbf {A}^{(1)}_{-1} \Bigr ) = \mathbf {e}_{0}. \end{aligned}$$
(5.24)

If one is instead interested in the stationary distribution and in particular the stationary probabilities of \(L_{0}\), i.e., \(\lim _{\alpha \downarrow 0} \alpha \, \varvec{\pi }_{0}(\alpha )\), (5.24) results in a homogeneous system of equations, but it is not clear that this system still has a unique solution. On the other hand, the approach outlined earlier for determining \(\varvec{\pi }_{0}(\alpha )\) can straightforwardly be employed to obtain \(\lim _{\alpha \downarrow 0} \alpha \, \varvec{\pi }_{0}(\alpha )\).

5.5 Numerical implementation

In order to compute the vectors \(\{ \varvec{\pi }_i(\alpha ) \}_{i \ge 0}\), we first need to compute the matrices \(\{ \mathbf {G}_{i + 1,i}(\alpha ) \}_{0 \le i \le c - 1}\), \(\{ \mathbf {N}_i(\alpha ) \}_{1 \le i \le c}\), and \(\bigl ( \mathbf {N}_0(\alpha ) \bigr )^{[0,0]}\).

The first step is to compute \(\mathbf {G}(\alpha ) = \mathbf {G}_{c,c - 1}(\alpha )\). Proposition 3 shows that this matrix can be approximated by using the recursion (5.16). Using this recursion requires us to truncate the infinite sum appearing within the recursion. One way of applying this truncation is as follows: given a fixed tolerance \(\epsilon \), pick an integer \(\kappa _\epsilon \) large enough so that

$$\begin{aligned} \sum _{l = \kappa _\epsilon + 1}^\infty | w^{(\lambda _2,c\mu _2,\lambda _1)}_{l}(\alpha ) | \le \epsilon / \lambda _2. \end{aligned}$$
(5.25)

Once \(\kappa _\epsilon \) has been found, we can use the approximation

$$\begin{aligned} \sum _{l = 1}^{\kappa _\epsilon } \mathbf {W}_l(\alpha ) \mathbf {Z}(n,\alpha )^{l + 1} \approx \sum _{l = 1}^\infty \mathbf {W}_l(\alpha ) \mathbf {Z}(n,\alpha )^{l + 1}, \end{aligned}$$
(5.26)

since the modulus of each element of the matrix on the left-hand side of (5.26) can be shown to be within \(\epsilon \) of the corresponding element of the infinite sum. Here we use the facts that each matrix \(\mathbf {W}_l(\alpha )\) has only one nonzero element and that the absolute value of each element of \(\mathbf {Z}(n,\alpha )\) (and of \(\mathbf {G}(\alpha )\)) is at most 1. Hence, we propose using the recursion

$$\begin{aligned} \mathbf {Z}(n + 1,\alpha )&= \bigl ( \alpha \mathbf {I} - \mathbf {A}_0 - \mathbf {W}_0(\alpha ) \bigr )^{-1} \nonumber \\&\quad \cdot \Bigl ( \mathbf {A}_{-1} + \mathbf {A}_1 \mathbf {Z}(n,\alpha )^2 + \sum _{l = 1}^{\kappa _\epsilon } \mathbf {W}_l(\alpha ) \mathbf {Z}(n,\alpha )^{l + 1} \Bigr ) \end{aligned}$$
(5.27)

to approximate \(\mathbf {G}(\alpha )\). Notice that we can determine \(\kappa _\epsilon \) satisfying (5.25) by writing the left-hand side of (5.25) as

$$\begin{aligned}&\sum _{l = \kappa _\epsilon + 1}^\infty | w^{(\lambda _2,c\mu _2,\lambda _1)}_{l}(\alpha ) | = \sum _{l = 1}^\infty | w^{(\lambda _2,c\mu _2,\lambda _1)}_{l}(\alpha ) | - \sum _{l = 1} ^{\kappa _\epsilon } | w^{(\lambda _2,c\mu _2,\lambda _1)}_{l}(\alpha ) |. \end{aligned}$$
(5.28)

An explicit expression for the infinite sum on the right-hand side of (5.28) can be derived with the help of Lemma 8 of Appendix C and the generating function of the Catalan numbers; the finite sum can be computed numerically.
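The search for \(\kappa _\epsilon \) via (5.25) and (5.28) can be sketched generically. Here the routine `w_abs(l)`, returning \(| w^{(\lambda _2,c\mu _2,\lambda _1)}_{l}(\alpha ) |\), and the closed-form value `total` of the infinite sum are assumed to be supplied by the caller; the geometric tail in the illustration is purely hypothetical.

```python
def truncation_level(w_abs, total, eps, lam2, max_kappa=1000000):
    """Smallest kappa with sum_{l > kappa} |w_l(alpha)| <= eps / lambda_2,
    computed as total minus a running partial sum, cf. (5.25) and (5.28)."""
    partial = 0.0
    for kappa in range(1, max_kappa + 1):
        partial += w_abs(kappa)
        if total - partial <= eps / lam2:
            return kappa
    raise RuntimeError("max_kappa reached without meeting the tolerance")

# illustration: hypothetical |w_l| = 0.5**l with total sum 1.0
kappa = truncation_level(lambda l: 0.5 ** l, 1.0, 1e-3, lam2=1.0)
```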

Once \(\mathbf {G}(\alpha )\) has been found, we can use Proposition 4 to compute each \(\mathbf {G}_{i + 1,i}(\alpha )\) matrix, for \(0 \le i \le c - 2\). For this computation, we use the same truncation procedure as outlined above.

The next step is to compute the matrices \(\{ \mathbf {N}_i(\alpha ) \}_{1 \le i \le c}\) and \(\bigl ( \mathbf {N}_0(\alpha ) \bigr )^{[0,0]}\) using Propositions 5 and 6, respectively. For both computations, we again use the above truncation procedure.

It remains to recursively determine \(\{ \varvec{\pi }_i(\alpha ) \}_{i \ge 0}\) using Theorem 4, where we again use the truncation procedure for the infinite sum. As we have seen in Sect. 5.4, this recursion should be properly initialized by the value of \(\pi _{(0,0)}(\alpha )\). The random-product representation in Theorem 6 of Appendix B shows that all Laplace transforms \(\pi _x(\alpha )\) satisfy, for each \(x \in \mathbb {S}\),

$$\begin{aligned} \pi _x(\alpha ) = \pi _{(0,0)}(\alpha ) \psi _x(\alpha ), \end{aligned}$$
(5.29)

where \(\pi _{(0,0)}(\alpha )\) is an unknown transform and \(\psi _{(0,0)}(\alpha ) = 1\). It is clear that the \(\psi _x(\alpha )\) can be computed using the same procedure as for \(\pi _x(\alpha )\), and (5.22) shows that \(\psi _{(0,0)}(\alpha ),\psi _{(0,1)}(\alpha ),\ldots ,\psi _{(0,c - 1)}(\alpha )\) are computable expressions.

We can calculate \(\pi _{(0,0)}(\alpha )\) from the normalization condition

$$\begin{aligned} \sum _{x \in \mathbb {S}} \pi _x(\alpha ) = \pi _{(0,0)}(\alpha ) \sum _{x \in \mathbb {S}} \psi _x(\alpha ) = \frac{1}{\alpha }, \end{aligned}$$
(5.30)

which yields

$$\begin{aligned} \pi _{(0,0)}(\alpha ) = \frac{1}{\alpha \sum _{x \in \mathbb {S}} \psi _x(\alpha )}. \end{aligned}$$
(5.31)

Since we cannot compute this infinite sum, we determine \(\psi _{(i,j)}(\alpha )\) for all (ij) in a sufficiently large bounding box \(\mathbb {S}_k :=\{ (i,j) \in \mathbb {S}: 0 \le i \le k \}\) for some \(k \ge 0\). Notice that (4.15) allows \(\mathbb {S}_k\) to be an infinitely large rectangle. The choice of k in \(\mathbb {S}_k\) clearly influences the quality of the approximation. A simple procedure to choose k is the following. Define

$$\begin{aligned} \Psi _k :=\sum _{x \in \mathbb {S}_k} \psi _x(\alpha ). \end{aligned}$$
(5.32)

Pick \(\epsilon \) small and positive and continue increasing k until \(\frac{|\Psi _{k + 1} - \Psi _k|}{|\Psi _k|} < \epsilon \). Then, set \(\pi _{(0,0)}(\alpha ) = 1/(\alpha \Psi _{k + 1})\) to normalize the Laplace transforms \(\pi _x(\alpha ) = \pi _{(0,0)}(\alpha ) \psi _x(\alpha )\).
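The stopping rule above can be sketched as follows. The routine `level_sum(i)`, returning \(\sum _j \psi _{(i,j)}(\alpha )\) for level \(i\) (computable via (4.15)), is assumed to be supplied by the caller, and the geometric level sums in the illustration are hypothetical.

```python
def pi_00(level_sum, alpha, eps=1e-8, max_k=100000):
    """Increase k until the relative change of Psi_k (5.32) drops below eps,
    then return pi_{(0,0)}(alpha) = 1 / (alpha * Psi_{k+1})."""
    Psi = level_sum(0)
    for k in range(max_k):
        Psi_next = Psi + level_sum(k + 1)
        if abs(Psi_next - Psi) < eps * abs(Psi):
            return 1.0 / (alpha * Psi_next)
        Psi = Psi_next
    raise RuntimeError("max_k reached without convergence")

# illustration: hypothetical level sums 0.5**i, so Psi_infty = 2
p = pi_00(lambda i: 0.5 ** i, alpha=2.0)   # approximately 1 / (2 * 2) = 0.25
```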

For \(c = 1\) we can normalize the solution as outlined above, or we can explicitly determine the value of \(\pi _{(0,0)}(\alpha )\); see the next section.

6 The single-server case

We now turn our attention to the case where \(c = 1\), i.e., the case where the system consists of a single server. In this case, the analysis of the Laplace transforms \(\pi _{(i,0)}(\alpha ), ~ i \ge 0\), simplifies considerably. The expressions for the Laplace transforms for the states in the interior and on the vertical boundary are identical to the multi-server case.

From [12, Corollary 2.1]—see also Theorem 6 in Appendix B—we have

$$\begin{aligned} \pi _{(0,0)}(\alpha ) = \frac{1}{(q({(0,0)}) + \alpha )(1 - \mathbb {E}_{ {(0,0)}}[ \mathrm {e}^{-\alpha \tau _{(0,0)}}])}. \end{aligned}$$
(6.1)

In light of (6.1), to evaluate \(\pi _{(0,0)}(\alpha )\) the only thing that needs to be determined is the expectation \(\mathbb {E}_{{(0,0)}}[\mathrm {e}^{-\alpha \tau _{(0,0)}}]\). This quantity is the Laplace–Stieltjes transform of the sum of two independent exponential random variables: One is the exponential random variable \(E_{\lambda }\) having rate \(\lambda \), the other is the busy period B of an M / G / 1 queue having arrival rate \(\lambda \) and hyperexponential service times having cumulative distribution function \(F(\cdot )\). More specifically,

$$\begin{aligned} F(t) = \frac{\lambda _1}{\lambda }(1 - \mathrm {e}^{-\mu _1 t}) + \frac{\lambda _2}{\lambda }(1 - \mathrm {e}^{-\mu _2 t}), \quad t \ge 0. \end{aligned}$$
(6.2)

The Laplace–Stieltjes transform \(\varphi (\alpha )\) of B is known to satisfy the Kendall functional equation

$$\begin{aligned} \varphi (\alpha ) = \frac{\lambda _1}{\lambda } \frac{\mu _1}{\mu _1 + \alpha + \lambda (1 - \varphi (\alpha ))} + \frac{\lambda _2}{\lambda } \frac{\mu _2}{\mu _2 + \alpha + \lambda (1 - \varphi (\alpha ))}. \end{aligned}$$
(6.3)

Furthermore, \(\varphi (\alpha )\) can be determined numerically through successive substitutions of (6.3), starting with \(\varphi (\alpha ) = 0\); see [1, Section 1] for details. Using independence of \(E_\lambda \) and B,

$$\begin{aligned} \mathbb {E}_{{(0,0)}}[\mathrm {e}^{-\alpha \tau _{(0,0)}}] = \mathbb {E}[\mathrm {e}^{-\alpha (E_\lambda + B)}] = \frac{\lambda }{\lambda + \alpha } \varphi (\alpha ), \end{aligned}$$
(6.4)

meaning that (see also [2, Eq. (36)]) for \(c = 1\),

$$\begin{aligned} \pi _{(0,0)}(\alpha ) = \frac{1}{\lambda (1 - \varphi (\alpha )) + \alpha }. \end{aligned}$$
(6.5)
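For \(c = 1\), Eqs. (6.3) and (6.5) thus give a complete, self-contained recipe for \(\pi _{(0,0)}(\alpha )\). A minimal sketch of the successive substitutions (the parameter values are arbitrary):

```python
def kendall_phi(alpha, lam1, lam2, mu1, mu2, tol=1e-14, max_iter=1000000):
    """Busy-period LST of the M/G/1 queue with hyperexponential services (6.2),
    via successive substitutions of the Kendall equation (6.3), starting at 0."""
    lam = lam1 + lam2
    x = 0.0
    for _ in range(max_iter):
        x_new = (lam1 / lam) * mu1 / (mu1 + alpha + lam * (1.0 - x)) \
              + (lam2 / lam) * mu2 / (mu2 + alpha + lam * (1.0 - x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

lam1, lam2, mu1, mu2, alpha = 0.3, 0.2, 1.0, 2.0, 0.7
p = kendall_phi(alpha, lam1, lam2, mu1, mu2)
pi_00 = 1.0 / ((lam1 + lam2) * (1.0 - p) + alpha)   # (6.5)
```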

We now turn our attention to the horizontal boundary. When \(c = 1\), the matrices \(\mathbf {G}(\alpha )\) and \(\mathbf {N}(\alpha )\) become scalars, which we denote as \(G(\alpha )\) and \(N(\alpha )\), respectively. More precisely,

$$\begin{aligned} G(\alpha ) :=\mathbb {E}_{ (i + 1,0) }[ \mathrm {e}^{-\alpha \tau _{L_{i}}} ], \quad N(\alpha ) :=\mathbb {E}_{ (i,0) }\!\Bigl [ \int _0^{\tau _{L_{i - 1}}} \mathrm {e}^{-\alpha t} {\mathbbm {1}}\{ X(t) = (i,0) \} \, d t\Bigr ], \end{aligned}$$
(6.6)

which are independent of \(i \ge 1\).

Proposition 7

The scalar \(G(\alpha )\) is a solution to

$$\begin{aligned} (\lambda + \mu _1 + \alpha )G(\alpha ) = \mu _1 + \lambda _1 G(\alpha )^2 + \lambda _2 G(\alpha ) \phi _{\lambda _2,\mu _2}(\lambda _1(1 - G(\alpha )) + \alpha ). \end{aligned}$$
(6.7)

Proof

From Proposition 2, we easily find that when \(c = 1\),

$$\begin{aligned} (\lambda + \mu _1 + \alpha ) G(\alpha ) = \mu _1 + \lambda _1 G(\alpha )^2 + \lambda _2 \sum _{l = 0}^\infty w^{(\lambda _2,\mu _2,\lambda _1)}_{l}(\alpha ) G(\alpha )^{l + 1}. \end{aligned}$$
(6.8)

The infinite series appearing in (6.8) can be simplified using Lemma 9 of Appendix C; doing so yields (6.7). \(\square \)

Even though we cannot use (6.7) to write down an explicit expression for \(G(\alpha )\), we can still use it to devise an iterative scheme for computing \(G(\alpha )\). The next result shows that \(N(\alpha )\) can be expressed in terms of \(G(\alpha )\).

Proposition 8

We have

$$\begin{aligned} N(\alpha ) = \frac{1}{\alpha + \lambda + \mu _1 - \lambda _1 G(\alpha ) - \lambda _2 \phi _{\lambda _2,\mu _2}(\lambda _1(1 - G(\alpha )) + \alpha )}. \end{aligned}$$
(6.9)

Proof

Using Proposition 5, we observe that when \(c = 1\),

$$\begin{aligned} N(\alpha ) = \Bigl ( \alpha + \lambda + \mu _1 - \lambda _1 G(\alpha ) - \lambda _2 \sum _{l = 0}^\infty w^{(\lambda _2,\mu _2,\lambda _1)}_{l}(\alpha ) G(\alpha )^l \Bigr )^{-1}. \end{aligned}$$
(6.10)

The proof is completed by applying Lemma 9 of Appendix C to (6.10). \(\square \)
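The iterative scheme suggested by (6.7), combined with (6.9), can be sketched as follows. We assume here that \(\phi _{\lambda _2,\mu _2}\) denotes the busy-period LST of an M/M/1 queue with arrival rate \(\lambda _2\) and service rate \(\mu _2\), so that it admits the familiar closed form used in `phi_mm1`; this reading is an assumption on our part and should be checked against Lemma 9 of Appendix C.

```python
import math

def phi_mm1(s, lam2, mu2):
    # assumed closed form of phi_{lambda_2,mu_2}: the root in [0, 1] of
    # lam2 * x**2 - (lam2 + mu2 + s) * x + mu2 = 0 (M/M/1 busy-period LST)
    b = lam2 + mu2 + s
    return (b - math.sqrt(b * b - 4.0 * lam2 * mu2)) / (2.0 * lam2)

def G_scalar(alpha, lam1, lam2, mu1, mu2, tol=1e-14, max_iter=1000000):
    """Successive substitutions of the fixed-point equation (6.7) for G(alpha)."""
    lam = lam1 + lam2
    G = 0.0
    for _ in range(max_iter):
        G_new = (mu1 + lam1 * G * G
                 + lam2 * G * phi_mm1(lam1 * (1.0 - G) + alpha, lam2, mu2)) \
                / (lam + mu1 + alpha)
        if abs(G_new - G) < tol:
            return G_new
        G = G_new
    return G

# arbitrary illustrative parameters
G = G_scalar(0.5, 0.3, 0.2, 1.0, 2.0)
N = 1.0 / (0.5 + 0.5 + 1.0 - 0.3 * G
           - 0.2 * phi_mm1(0.3 * (1.0 - G) + 0.5, 0.2, 2.0))   # (6.9)
```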

We now focus on the recursion for the horizontal boundary. When \(c = 1\), Theorem 4 reduces to

$$\begin{aligned} \pi _{(i + 1,0)}(\alpha )&= \lambda _1 \pi _{(i,0)}(\alpha ) N(\alpha ) \nonumber \\&\quad + \lambda _2 \sum _{k = 0}^i \pi _{(k,0)}(\alpha ) \sum _{l = i + 1}^\infty w^{(\lambda _2,\mu _2,\lambda _1)}_{l - k}(\alpha ) G(\alpha )^{l - (i + 1)} N(\alpha ). \end{aligned}$$
(6.11)

For the innermost sum over \(l\), we have

$$\begin{aligned} \sum _{l = i + 1}^\infty w^{(\lambda _2,\mu _2,\lambda _1)}_{l - k}(\alpha ) G(\alpha )^{l - (i + 1)} = G(\alpha )^{k - (i + 1)} \sum _{m = i + 1 - k}^\infty w^{(\lambda _2,\mu _2,\lambda _1)}_{m}(\alpha ) G(\alpha )^m. \end{aligned}$$
(6.12)

Since \(0 \le k \le i\), we have \(i + 1 - k \ge 1\). We therefore evaluate the tail of the generating function of \(\{ w^{(\lambda _2,\mu _2,\lambda _1)}_{m}(\alpha ) \}_{m \ge 0}\) at the point \(G(\alpha )\), i.e.,

$$\begin{aligned}&\sum _{m = i + 1 - k}^\infty w^{(\lambda _2,\mu _2,\lambda _1)}_{m}(\alpha ) \, G(\alpha )^m \nonumber \\&= \phi _{\lambda _2, \mu _2}(\lambda _1(1 - G(\alpha )) + \alpha ) - \sum _{m = 0}^{i - k} w^{(\lambda _2,\mu _2,\lambda _1)}_{m}(\alpha ) \, G(\alpha )^m, \end{aligned}$$
(6.13)

which follows from an application of Lemma 9 of Appendix C. The remaining finite summation is easy to compute since each \(w^{(\lambda _2,\mu _2,\lambda _1)}_{m}(\alpha )\) term, by Lemma 8 of Appendix C, can be stated in terms of \(b_{K}(\cdot )\) functions, and these satisfy the recursion found in Lemma 4 of Appendix A. These observations allow us to state the following theorem, which yields a practical method for recursively computing Laplace transforms of the form \(\pi _{(i,0)}(\alpha )\).

Theorem 5

(Horizontal boundary, single server) When \(c = 1\), the Laplace transforms of the transition functions on the horizontal boundary satisfy the following recursion: for \(i \ge 0\),

$$\begin{aligned} \pi _{(i + 1,0)}(\alpha )&= \lambda _1 \pi _{(i,0)}(\alpha ) N(\alpha ) \nonumber \\&\quad + \lambda _2 \sum _{k = 0}^i \pi _{(k,0)}(\alpha ) G(\alpha )^{k - (i + 1)} \Bigl ( \phi _{\lambda _2,\mu _2}(\lambda _1(1 - G(\alpha )) + \alpha ) \nonumber \\&\quad - \sum _{l = 0}^{i - k} w^{(\lambda _2,\mu _2,\lambda _1)}_{l}(\alpha ) G(\alpha )^l \Bigr ) N(\alpha ). \end{aligned}$$
(6.14)

7 Conclusion

In this paper, we analyzed an M / M / c priority system with two customer classes, class-dependent service rates and a preemptive resume priority rule. This queueing system can be modeled as a two-dimensional Markov process for which we analyzed the time-dependent behavior. More precisely, we obtained expressions for the Laplace transforms of the transition functions under the condition that the system is initially empty.

Using a slight modification of the CAP method, we showed that the Laplace transforms for the states with at least c high-priority customers can be expressed as a finite sum of the Laplace transforms for the states with exactly \(c - 1\) high-priority customers. This expression contained coefficients satisfying a recursion, which we solved to obtain an explicit expression for each coefficient. As a result, each Laplace transform for the states on the vertical boundary and in the interior can easily be calculated from the Laplace transforms for the states on the horizontal boundary.

Next, we developed a Ramaswami-like recursion for the Laplace transforms for the states on the horizontal boundary. The recursion required the collections of matrices \(\{ \mathbf {G}_{i + 1,i} \}_{0 \le i \le c - 1}\) and \(\{ \mathbf {N}_i(\alpha ) \}_{1 \le i \le c}\), which we showed can be determined iteratively. We demonstrated two ways in which the initial value of the recursion, i.e., the vector \(\varvec{\pi }_0(\alpha )\), can be calculated. Finally, we discussed the numerical implementation of our approach for the horizontal boundary.

In the single-server case, the expressions for the Laplace transforms for the states on the vertical boundary and in the interior were identical to those in the multi-server case. The expressions for the horizontal boundary, however, simplified considerably. Specifically, the initial value \(\varvec{\pi }_{(0,0)}(\alpha )\) of the recursion could be determined by comparing the queueing system to an M / G / 1 queue with hyperexponentially distributed service times. Moreover, the calculation of \(G(\alpha )\) and \(N(\alpha )\), which are now scalars, simplified greatly.

We now comment on how our expressions for the Laplace transforms of the transition functions can be used to determine the stationary distribution. It is clear from the transition rate diagram in Fig. 3 that the Markov process X is irreducible. Moreover, it is well known that X is positive recurrent if and only if \(\rho < 1\). In that case, X has a unique stationary distribution \(\mathbf {p} :=[ p_{x} ]_{x \in \mathbb {S}}\). To compute each \(p_x\) term from \(\pi _x(\alpha )\), simply note that

$$\begin{aligned} p_{x} = \lim _{\alpha \downarrow 0} \alpha \, \pi _{x}(\alpha ). \end{aligned}$$
(7.1)

Using this observation, we see that the procedure for finding \(\mathbf {p}\) is highly analogous to the one we presented for finding the Laplace transforms of the transition functions.
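As a small standalone illustration of the limit (7.1), not part of the method above, consider the ordinary M/M/1 queue started empty (essentially the special case \(\lambda_2 = 0\)). A regenerative-cycle argument gives the classical formula \(\pi_{00}(\alpha) = 1/(\alpha + \lambda(1 - \beta(\alpha)))\), where \(\beta\) is the busy-period LST, and \(\alpha\,\pi_{00}(\alpha) \to 1 - \rho\) as \(\alpha \downarrow 0\).

```python
import math

def busy_lst(alpha, lam, mu):
    # LST of the M/M/1 busy period (classical closed form).
    return (lam + mu + alpha
            - math.sqrt((lam + mu + alpha) ** 2 - 4.0 * lam * mu)) / (2.0 * lam)

def pi00(alpha, lam, mu):
    # Laplace transform of P(system empty at time t | empty at time 0)
    # for the M/M/1 queue, via a regenerative-cycle argument.
    return 1.0 / (alpha + lam * (1.0 - busy_lst(alpha, lam, mu)))

lam, mu = 0.6, 1.0
for alpha in (1e-2, 1e-4, 1e-6):
    print(alpha * pi00(alpha, lam, mu))  # tends to 1 - rho = 0.4
```

The same numerical limiting procedure applies verbatim to the transforms \(\pi_x(\alpha)\) produced by the recursions of this paper.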