Queueing Systems, Volume 73, Issue 3, pp 295–316

Repair systems with exchangeable items and the longest queue mechanism

Authors

  • R. Ravid
    • Department of Statistics, University of Haifa
    • College of Engineering, ORT Braude
  • O.J. Boxma
    • EURANDOM
    • Department of Mathematics and Computer Science, Technische Universiteit Eindhoven
  • D. Perry
    • Department of Statistics, University of Haifa

Open Access Article

DOI: 10.1007/s11134-012-9319-5

Cite this article as:
Ravid, R., Boxma, O.J. & Perry, D. Queueing Syst (2013) 73: 295. doi:10.1007/s11134-012-9319-5

Abstract

We consider a repair facility consisting of one repairman and two arrival streams of failed items, from bases 1 and 2. The arrival processes are independent Poisson processes, and the repair times are independent and identically exponentially distributed. The item types are exchangeable, and a failed item from base 1 could just as well be returned to base 2, and vice versa. The rule according to which backorders are satisfied by repaired items is the longest queue rule: At the completion of a service (repair), the repaired item is delivered to the base that has the largest number of failed items.

We point out a direct relation between our model and the classical longer queue model. We obtain simple expressions for several probabilities of interest, and show how all two-dimensional queue length probabilities may be obtained. Finally, we derive the sojourn time distributions.

Keywords

Repair system · Longest queue · Queue lengths · Sojourn time

Mathematics Subject Classification

60K25 90B22

1 Introduction

In this paper, we consider a repair facility consisting of one repairman and two arrival streams of failed items, from bases 1 and 2. The arrival processes are independent Poisson processes with rates λ 1 and λ 2. The repair times are independent and identically distributed, with exp(μ) distribution regardless of the type of failed item. The item types are exchangeable, and a failed item from base 1 could just as well be returned to base 2, and vice versa. The rule according to which backorders are satisfied by repaired items is the longest queue rule: At the completion of a service (repair), the repaired item is delivered to the base that has the largest number of failed items. In case of a tie, the item will be delivered to base 1 or base 2 with probability \(\frac{1}{2}\).

We are interested in key performance measures of this repair facility, such as the (joint) queue length distribution of failed items of both types, and their sojourn time distribution (the time between arrival and departure of a failed item). In the literature, several studies have appeared about the so-called longest queue system. That is a queueing system with one server and (typically) two queues with customers of two different types; the server choosing a customer from the longest queue upon service completion. Cohen [2] has studied the case of two customer types with Poisson arrival streams with rates λ 1 and λ 2, having service time distributions B 1(⋅) and B 2(⋅). If the server has completed a service, then the next customer to be served is the one at the head of the longest queue if the queue lengths are not equal; if both queues have equal length, then the next customer in service is of type i with some probability α i . He determines the generating function of the joint steady-state queue length distribution right after service completions, by solving a boundary value problem of Riemann–Hilbert type.

Zheng and Zipkin [9] consider the completely symmetric exponential case (λ 1=λ 2; \(\alpha_{1} = \alpha_{2} = \frac{1}{2}\); \(B_{1}(x)=B_{2}(x)=1-\mathrm{e}^{-\mu x}\)). They calculate the steady-state distribution of the difference between the two queue lengths, and they provide a recursive scheme for the calculation of the joint queue length distribution and the marginal distributions. They also briefly consider the case λ 1≠λ 2. Flatto [5] also considers the symmetric exponential case. He allows preemption, and derives an expression for the probability generating function of the joint queue length distribution. He uses this expression to derive asymptotic results.

Van Houtum et al. [7] also focus on the completely symmetric exponential model. They consider two variants: a longest queue system with threshold rejection of customers and one with threshold addition of customers. They show that these systems can be analyzed in detail using matrix-geometric methods, and that this provides lower and upper bounds for the longest queue system.

The repair facility with exchangeable items, that is the subject of our paper, has already been studied by Daryanani and Miller [4]. Using taboo sets and taboo probabilities, they derive various relations between the steady-state queue length probabilities; however, they do not solve those equations.

Remark 1

While the above described classical longer queue model is closely related to the model studied in [4] and the present paper, there are significant differences. To demonstrate these, let us consider the classical system and our repair system, receiving exactly the same input and having exactly the same service times. Suppose both systems start empty, and then a type-1 item arrives. The server starts serving. During the service, there are no type-1 arrivals and three type-2 arrivals. In the classical system, the type-1 customer leaves and the server starts serving the first of the three type-2 customers. The state of the classical system now is (0,3): one type-2 customer is just entering service and the other two type-2 customers are waiting.

In our repair system, the items are exchangeable. The repaired item is assigned to the first type-2 customer, even though it was brought as a type-1 item. Hence, the state of the system right after the repair is (1,2): there is still one waiting type-1 customer and there are two waiting type-2 customers.

Motivation

The longer queue model is related to the join-the-shortest-queue model: both models feature a mechanism that tends to equate the queue lengths. The longer queue model is a very natural one, but it has received much less attention than the join-the-shortest-queue model. We believe our paper yields valuable new insight into the longer queue model and a variant of it.

Contributions

Our main contributions are: (i) We point out a direct relation between our model and the classical longer queue model of Cohen [2]; (ii) we obtain simple expressions for several probabilities of interest, and we show how all two-dimensional probabilities may be obtained; (iii) we derive the sojourn time distributions—this performance measure was not studied in the papers mentioned above; and (iv) we present some methodological ideas which might be more broadly applicable; one example is the use of the “difference busy period.”

Organization of the paper

In Sect. 2, we first give the balance equations for the joint steady-state queue length distribution. We then study its generating function (GF), deriving various special results like the distribution of the difference of the two queue lengths and the probability that there are n 1 customers of one type and none of the other type. Then we point out a direct relation between Cohen’s model and our model, which in principle allows us to use his results for obtaining the GF of the joint queue length distribution. However, we are interested in providing explicit results for the probabilities P(i,i) of having i customers of either type, i=1,2,…, which will also give us the marginal distributions explicitly. These probabilities are studied in Sect. 3; in that section, we also give an iterative method for obtaining all queue length probabilities. Finally, Sect. 4 is devoted to the determination of the sojourn time distribution of a customer of either type.

2 Queue lengths—a generating function approach

Let N i (t) denote the number of failed items of type i=1,2 at time t. Clearly, {(N 1(t),N 2(t)),t≥0} is a Markov process. We restrict ourselves to the case that the total load \(\rho := \frac{\lambda_{1} + \lambda_{2}}{ \mu} < 1\). Then the limiting distribution P(n 1,n 2)=P(N 1=n 1,N 2=n 2):=lim t→∞ P(N 1(t)=n 1,N 2(t)=n 2|N 1(0)=k 1,N 2(0)=k 2) exists and is independent of the initial state; indeed, notice that the total number of customers has the same distribution as the number of customers in an M/M/1 queue with arrival rate λ 1+λ 2 and service rate μ. The P(n 1,n 2) satisfy the following balance equations: for n 1,n 2≥1, with I(⋅) denoting an indicator function,
$$ \begin{aligned} (\lambda_1+\lambda_2+\mu)P(n_1,n_2) =\;& \lambda_1 P(n_1-1,n_2) + \lambda_2 P(n_1,n_2-1) \\ &+ \mu\bigl[I(n_1+1>n_2)+\tfrac{1}{2}I(n_1+1=n_2)\bigr]P(n_1+1,n_2) \\ &+ \mu\bigl[I(n_2+1>n_1)+\tfrac{1}{2}I(n_2+1=n_1)\bigr]P(n_1,n_2+1). \end{aligned} $$
(1)
For n 1=0 and/or n 2=0, the same equations hold with minor adaptations. Multiplying these equations with \(z_{1}^{n_{1}}z_{2}^{n_{2}}\) and summing, one gets:
$$ \begin{aligned} &\bigl(\lambda_1(1-z_1)+\lambda_2(1-z_2)+\mu\bigr)E\bigl[z_1^{N_1}z_2^{N_2}\bigr] \\ &\quad = \mu P(0,0) + \frac{\mu}{z_1}E\bigl[z_1^{N_1}z_2^{N_2}I(N_1>N_2)\bigr] + \frac{\mu}{z_2}E\bigl[z_1^{N_1}z_2^{N_2}I(N_2>N_1)\bigr] \\ &\qquad + \frac{\mu}{2}\biggl(\frac{1}{z_1}+\frac{1}{z_2}\biggr)E\bigl[(z_1z_2)^{N_1}I(N_1=N_2\geq 1)\bigr]. \end{aligned} $$
(2)
We delay the solution of this equation, first showing how one can derive various special probabilities from this equation.

2.1 P(N 1+N 2=n)

Taking z 1=z 2=z in (2), it easily follows that
$$ \biggl((\lambda_1 + \lambda_2) (1-z) + \mu \biggl(1 - \frac{1}{z}\biggr)\biggr) E\bigl[z^{N_1 + N_2}\bigr] = P(0,0) \biggl(\mu \biggl(1 - \frac{1}{z}\biggr)\biggr), $$
(3)
so
$$ E\bigl[z^{N_1+N_2}\bigr] = \frac{P(0,0)}{1-\rho z}. $$
(4)
This implies (by taking z=1) that P(0,0)=1−ρ, and that the total number of customers is geom(ρ) distributed. This confirms that the total number of customers behaves as if the system is an M/M/1 queue with arrival rate λ 1+λ 2 and service rate μ.
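This can be double-checked symbolically (a small sympy sketch of ours; the symbol names are not from the paper): substituting the candidate solution (4) into (3) leaves a residual that simplifies to zero.

```python
# Sketch: verify with sympy that (4) solves (3), i.e. that
# E[z^{N1+N2}] = P(0,0)/(1 - rho*z) satisfies
# ((l1+l2)(1-z) + mu(1-1/z)) E[z^{N1+N2}] = P(0,0) mu (1-1/z).
import sympy as sp

z, l1, l2, mu, P00 = sp.symbols('z lambda1 lambda2 mu P00', positive=True)
rho = (l1 + l2) / mu              # total load
E = P00 / (1 - rho * z)           # candidate generating function, cf. (4)

lhs = ((l1 + l2) * (1 - z) + mu * (1 - 1 / z)) * E
rhs = P00 * mu * (1 - 1 / z)
residual = sp.simplify(sp.cancel(lhs - rhs))   # identically 0
print(residual)
```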

2.2 The probabilities P(n 1,0) and P(0,n 2)

Taking z 1=z and z 2=0 in (2), we obtain by carefully considering the 1/z 2 terms:
$$ \bigl(\lambda_1(1-z)+\lambda_2+\mu\bigr)E\bigl[z^{N_1}I(N_2=0)\bigr] = \mu P(0,0) + \frac{\mu}{z}E\bigl[z^{N_1}I(N_1>0,N_2=0)\bigr] + \mu P(0,1) + \frac{\mu}{2}zP(1,1). $$
(5)
Hence,
$$ E\bigl[z^{N_1}I(N_1>0,N_2=0)\bigr] = - z \frac{\mu P(0,1) -(\lambda_1(1-z) +\lambda_2)P(0,0) + \frac{\mu}{2} z P(1,1)}{\lambda_1 z^2 -(\lambda_1+\lambda_2+\mu)z +\mu} . $$
(6)
Consider the denominator of the right-hand side of (6). Partial fraction yields
$$ E\bigl[z^{N_1}I(N_1>0,N_2=0)\bigr] = \frac{C_1 z}{1-z/z_+} + \frac{C_2 z}{1-z/z_-}, $$
(7)
where C 1 and C 2 remain to be determined. From M/M/1 theory (cf. Cohen [1], Chap. II.4), it follows that the zero \(z_{-}\) of the denominator of (6) obtained by taking “minus the square root” is the Laplace–Stieltjes transform (LST) with argument λ 2 of the length of the busy period P 1 in an M/M/1 queue with arrival rate λ 1 and service rate μ: \(z_{-} = E[\mathrm{e}^{-\lambda_{2} P_{1}}]\). It may also be interpreted as the probability of zero arrivals from base 2 during a busy period of type 1. Since \(z_{-}\) has absolute value less than one, and the left-hand side of (6) is analytic inside the unit circle, C 2 must be zero. The product of \(z_{+}\) and \(z_{-}\) is μ/λ 1>1, so \(z_{+}\) has absolute value larger than one. Hence, we conclude that the probabilities P(N 1=n 1,N 2=0), for n 1>0, are geometric with parameter \(\frac{1}{z_{+}} = \frac{\lambda_{1} z_{-}}{\mu}\):
$$ P(j,0) = C_1 \biggl(\frac{1}{z_+}\biggr)^{j-1}, \quad j=1,2,\ldots. $$
(8)
Similarly, \(P(0,j) = \tilde{C}_{1} (\frac{1}{\tilde{z}_{+}})^{j-1}\), j=1,2,…, where \(\tilde{z}_{+}\) is obtained from z + by interchanging λ 1 and λ 2. It remains to determine the constants C 1 and \(\tilde{C}_{1} \). We shall return to this in Sect. 3.2, where an expression for all P(i+1,i) and P(i,i+1) is derived (cf. (27)). An alternative approach to determining these two constants is to use the two equations P(1,0)+P(0,1)=ρ(1−ρ) and P(2,0)+P(1,1)+P(0,2)=ρ 2(1−ρ) (cf. Sect. 2.1), in combination with an expression for P(1,1) which will be obtained in Sect. 3.1. One thus gets two equations for P(1,0)=C 1 and \(P(0,1) = \tilde{C}_{1}\).
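Both the busy-period interpretation of \(z_{-}\) and the alternative route to the two constants are easy to check numerically. The following Python sketch (our illustration, not the paper's MATLAB implementation) uses the parameter values of the numerical example in Sect. 3.4 (λ 1=2, λ 2=1, μ=4) and takes P(1,1) from Table 1.

```python
# Sketch: z_- equals the M/M/1 busy-period LST at s = lambda2, and the two
# linear equations of Sect. 2.1/3.1 determine C1 = P(1,0), C1~ = P(0,1).
import math
import numpy as np

lam1, lam2, mu = 2.0, 1.0, 4.0
rho = (lam1 + lam2) / mu
S = lam1 + lam2 + mu

def roots(a):
    # zeros of a z^2 - S z + mu = 0, cf. the denominator of (6)
    d = math.sqrt(S**2 - 4 * a * mu)
    return (S - d) / (2 * a), (S + d) / (2 * a)   # z_-, z_+

z_minus, z_plus = roots(lam1)
zt_minus, zt_plus = roots(lam2)       # lambda1 and lambda2 interchanged

# standard M/M/1 busy-period LST psi(s), arrival rate lam1, service rate mu,
# evaluated at s = lambda2: should reproduce z_-
s = lam2
psi = (lam1 + mu + s - math.sqrt((lam1 + mu + s)**2 - 4 * lam1 * mu)) / (2 * lam1)

# P(1,0)+P(0,1) = rho(1-rho) and P(2,0)+P(1,1)+P(0,2) = rho^2 (1-rho),
# with P(2,0) = C1/z_+ and P(0,2) = C1~/z~_+ by (8)
P11 = 0.086662                        # from Table 1
A = np.array([[1.0, 1.0], [1.0 / z_plus, 1.0 / zt_plus]])
b = np.array([rho * (1 - rho), rho**2 * (1 - rho) - P11])
C1, C1_t = np.linalg.solve(A, b)
print(z_minus, psi, C1, C1_t)   # C1, C1~ close to 0.121068 and 0.066432
```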

2.3 P(N 1−N 2=n|N 1>N 2) and P(N 2−N 1=n|N 2>N 1)

Taking z 1=z=1/z 2 in (2), we get a relation between the generating functions \(E[z^{N_{1}-N_{2}}I(N_{1}>N_{2})]\) and \(E[z^{N_{1}-N_{2}}I(N_{2}>N_{1})]\):
$$ (\lambda_2+\mu-\lambda_1 z)E\bigl[z^{N_1-N_2}I(N_1>N_2)\bigr] + (\lambda_2-\lambda_1 z)P(N_1=N_2) - \frac{\mu}{2}(z-1)P(N_1=N_2\geq 1) = \bigl((\lambda_1+\mu)z-\lambda_2\bigr)E\bigl[z^{N_1-N_2}I(N_2>N_1)\bigr]. $$
(9)
Now observe that the terms in the left-hand side are analytic in z for |z|<1, whereas the term in the right-hand side is analytic in z for |z|>1. Application of Liouville’s theorem, using the fact that the right-hand side has a finite limit for |z|→∞, yields that both sides are equal to a constant, say C, respectively, for |z|<1 and for |z|>1. In particular, taking y=1/z,
$$ E\bigl[y^{N_2-N_1}I(N_2>N_1)\bigr] = \frac{Cy}{\lambda_1 +\mu - \lambda_2y}, $$
(10)
implying that N 2−N 1, when positive, is geom(\(\frac{\lambda_{2}}{\lambda_{1}+\mu}\)) distributed. By symmetry (or by studying the left-hand side of (9), which equals C for |z|<1, and by considering the coefficients of z^n in the left-hand side) one may conclude that N 1−N 2, when positive, is geom(\(\frac{\lambda_{1}}{\lambda_{2}+\mu}\)) distributed. In the symmetric case λ 1=λ 2, this was already observed in [9]. Below we would like to interpret this result. Consider the system from the moment N 2 reaches the value N 1+k for some positive k, until the level N 1+k−1 is reached again for the first time. In between, the server will invariably be giving repaired items back to base 2, and never to base 1. The value k, when positive, plays no role in this. Hence, N 2−N 1, when positive, is memoryless: P(N 2−N 1≥k+l∣N 2−N 1≥k)=P(N 2−N 1≥l+1∣N 2−N 1≥1). In fact, all the time that N 2−N 1≥1, the system behaves like an M/M/1 queue with arrival rate λ 2 and service rate λ 1+μ: the difference N 2−N 1 increases with rate λ 2 and decreases with rate λ 1+μ. The events when both queues are of equal length will be particularly important; in the next section, we shall derive P(N 1=N 2=i), i=0,1,….
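The geometric law of the difference can also be observed in simulation. The sketch below (ours; the parameter values are merely illustrative) simulates the repair system via its embedded jump chain, weighting each visit by its expected holding time, and compares the occupation times of the levels N 2−N 1=2 and N 2−N 1=1; their ratio should be close to λ 2/(λ 1+μ)=1/6.

```python
# Sketch: embedded-chain simulation of the longest-queue repair system.
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, mu = 2.0, 1.0, 4.0
n1 = n2 = 0
occ = {}   # occupation-time estimate for each value of n2 - n1

for _ in range(200_000):
    rates = np.array([lam1, lam2, mu if n1 + n2 > 0 else 0.0])
    total = rates.sum()
    # weight each embedded-chain visit by its expected holding time 1/total
    occ[n2 - n1] = occ.get(n2 - n1, 0.0) + 1.0 / total
    ev = rng.choice(3, p=rates / total)
    if ev == 0:
        n1 += 1                     # arrival of a failed item from base 1
    elif ev == 1:
        n2 += 1                     # arrival from base 2
    elif n1 > n2:
        n1 -= 1                     # repaired item to the longest queue
    elif n2 > n1:
        n2 -= 1
    elif rng.random() < 0.5:        # tie: base 1 or base 2 with prob. 1/2
        n1 -= 1
    else:
        n2 -= 1

ratio = occ[2] / occ[1]
print(ratio)   # close to lambda2/(lambda1+mu) = 1/6
```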

So far, we have not yet tackled the general problem of finding \(E[z_{1}^{N_{1}}z_{2}^{N_{2}}]\); we only showed that various relevant performance measures have a geometric distribution. The natural approach to the general solution of (2) seems to be to translate the problem into a boundary value problem, like a Riemann–Hilbert problem (cf. [3]). That was also the approach chosen by Cohen [2] in his analysis of the two-dimensional queue length process right after departures in the case of Poisson arrivals and generally distributed service times; see also Flatto [5] for the case of exponential service times.

When we compared (2) with formula (1.7) of Cohen [2], we came to the conclusion that his formula for exp(μ) service times reduces to our formula (2). This is surprising in view of Remark 1, where it is explained that the exchangeability feature of our repair system leads to different queue length behavior in both models. Below we shall show that, despite that different behavior, the steady-state joint queue length distributions in both models are the same.

Cohen [2] studies \((x_{n}^{(1)},x_{n}^{(2)})\), where these are the numbers of customers of types 1 and 2 right after a service completion. He states in his formula (1.6): If \(x_{n}^{(1)} > x_{n}^{(2)}\), then
$$ x_{n+1}^{(1)} = x_n^{(1)} - 1 + \nu_{n+1}^{(1)} ,\quad x_{n+1}^{(2)} = x_n^{(2)} + \nu_{n+1}^{(2)}, $$
(11)
where the \(\nu_{n+1}^{(i)}\) are numbers of type-i arrivals during the (n+1)th service (actually, he distinguishes between arrivals during a service of type 1 and type 2, but we assume all service times are exp(μ)). He has similar equations for the other cases. In particular, if \(x_{n}^{(1)} = x_{n}^{(2)} = 0\), then \(x_{n+1}^{(i)} = \nu_{n+1}^{(i)}\), i=1,2.
In our repair system, we study (N 1,N 2), where N i is the steady-state number of type-i requests. Let us, however, instead look at subsequent departure epochs (i.e., repair completion epochs), with one significant difference: Remove the idle periods of the system. Moreover, ignore the customer who is the first to arrive after such an idle period (both his arrival and his departure). Call the numbers of requests waiting just before the nth departure epoch: \((N_{n}^{(1)},N_{n}^{(2)})\). Since we now view the system at independent exp(μ) intervals, PASTA (Poisson Arrivals See Time Averages) applies [8]. PASTA states that the distribution of (N 1,N 2) equals the distribution of numbers of requests just before those exp(μ) departure intervals. We claim that those satisfy exactly the same recursion as Cohen’s \(x_{n}^{(i)}\). Indeed, one may easily verify that, exactly as for the \(x_{n}^{(i)}\) in (11), one has: If \(N_{n}^{(1)} > N_{n}^{(2)}\) then
$$ N_{n+1}^{(1)} = N_n^{(1)} - 1 + \nu_{n+1}^{(1)} ,\qquad N_{n+1}^{(2)} = N_n^{(2)} + \nu_{n+1}^{(2)}, $$
(12)
and other similar equations hold; in particular, if \(N_{n}^{(1)} = N_{n}^{(2)} = 0\), then \(N_{n+1}^{(i)} = \nu_{n+1}^{(i)}\), i=1,2. The tricky case is when we have (1,0) or (0,1) just before a service completion. Now an idle period will start. It will be ended with an arrival. As said before, we ignore the idle period and the arrival that ends it. Just before the end of the next exp(μ) interval, we shall have \((\nu_{n+1}^{(1)},\nu_{n+1}^{(2)})\) plus that one ignored customer. However, as he is ignored, we have exactly the same recursion relations for \((N_{n}^{(1)},N_{n}^{(2)})\) as Cohen [2] obtains for \((x_{n}^{(1)},x_{n}^{(2)})\).

So, although Cohen [2] and we study different quantities (cf. Remark 1), the above reasoning shows that in the case of exp(μ) service times, his \((x_{n}^{(1)},x_{n}^{(2)})\) and our (N 1(t),N 2(t)) have the same limiting distribution. This is confirmed by the fact that Cohen’s formula (1.7) for the generating function, when taking exp(μ) service times, agrees with our formula (2).

For general service times, this reasoning fails because then successive service times do not generate a Poisson process, and PASTA cannot be applied.

3 Queue lengths—a probabilistic approach

In this section, we shall first determine the probabilities P(i,i), and then present a procedure to obtain all P(i,j).

3.1 Determination of P(i,i)

We use an argument from Markov renewal theory to derive an expression for (the generating function of) P(i,i).

Step 1: relate P(i,i) to the steady-state probabilities π i of an underlying Markov chain.

The successive busy cycles, where a busy cycle BC is the sum of an idle period and the subsequent busy period of the server, constitute renewal cycles. Let θ i denote the mean number of visits to state (i,i) during a cycle. Since the mean visit time to state (i,i) equals \(\frac{1}{\lambda_{1} +\lambda_{2} +\mu}\) for i≥1, we have
$$ P(i,i) = \frac{\frac{\theta_i}{\lambda_1+\lambda_2+\mu}}{E[\mathit{BC}]}, $$
(13)
where \(E[\mathit{BC}] = \frac{1}{\lambda_{1}+\lambda_{2}} + \frac{1}{\mu - \lambda_{1} - \lambda_{2}} = \frac{1}{\lambda_{1} + \lambda_{2}}\frac{1}{1-\rho}\).
Now consider a discrete-time Markov chain K:={K n , n=0,1,…}, with state space {(0,0),(1,1),…,(i,i),…}, and with transition probabilities P (j,j),(k,k) which will be determined in step 2. This is a Markov chain where we only consider the states where both queue lengths are equal. Its limiting distribution is π i :=lim n→∞ P(K n =i). Clearly, π i is proportional to θ i for i≥1. More specifically, since the state (0,0) is visited exactly once per busy cycle, we have
$$ \theta_i = \frac{\pi_i}{\pi_0},\quad i \geq 1. $$
(14)
Step 2: determination of the π i .
The steady-state solution of the Markov chain K satisfies the normalizing condition \(\sum_{i=0}^{\infty} \pi_{i} = 1\) and the balance equations
$$ \pi_i = \sum_{k=0}^{\infty} \pi_k P_{(k,k),(i,i)} ,\quad i=0,1,\ldots . $$
(15)
Let us now determine the transition probabilities P (k,k),(i,i). Suppose that the original queue length process is in state (k,k). There are three possible events: an arrival from base 1, an arrival from base 2, and a service completion.
(i) An arrival from base 1 occurs first. As indicated in the previous section, one may argue that this arrival starts a busy period, call it B 1, with arrival rate λ 1 and service rate λ 2+μ. Indeed, all the time until equality of the two queue lengths occurs again for the first time (at some level (i,i)), repaired items will be handed back to base 1 and not to base 2, since queue 1 is the longer queue. Notice that the stability condition λ 1+λ 2<μ implies that λ 1<λ 2+μ.

(ii) Similarly, if an arrival from base 2 occurs first when the queue length process is in state (k,k), then it takes a busy period B 2 with arrival rate λ 2 and service rate λ 1+μ until the system is back at some state (i,i) with equal queue lengths.

(iii) Finally, if a service completion occurs first, then with probability \(\frac{1}{2}\) the queue length process moves to (k,k−1), respectively, to (k−1,k), and again a busy period B 1 respectively B 2 occurs. We shall sometimes speak of a ‘difference busy period’.
We now determine the probability that the underlying Markov chain jumps from (k,k) to (i,i), in each of these three possible events.

Let us define L (m) as the number of arrivals from base 3−m during B m , m=1,2. Furthermore, define K (m) as the number of services in the busy period B m , m=1,2; it is the sum of the number of arrivals L (m) and the number of item departures during B m . Since K (m) is the number of customers served in a busy period of an M/M/1 queue with arrival rate λ m and service rate λ 3−m +μ, it follows from M/M/1 theory (cf. Chap. II.4 of Cohen [1]) that, for m=1,2,
$$ E\bigl[z^{K^{(m)}}\bigr]=\frac{\lambda_{1}+\lambda_{2}+\mu -\sqrt{\{(\lambda_{1}+\lambda_{2}+\mu )^{2}-4\lambda_{m}(\lambda_{3-m}+\mu )z\}}}{2\lambda_{m}}. $$
(16)
\(E[z^{L^{(m)}}]\) now follows, since given K (m)=j, we have that \(L^{(m)}= \mathrm{bin}(j,\frac{\lambda_{3-m}}{\lambda_{3-m}+\mu })\):
$$ E\bigl[z^{L^{(m)}}\bigr]=\frac{\lambda_{1}+\lambda_{2}+\mu -\sqrt{(\lambda_{1}+\lambda_{2}+\mu )^{2}-4\lambda_{m}(\lambda_{3-m}z+\mu )}}{2\lambda_{m}}. $$
(17)
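A quick numerical consistency check (a sketch of ours; the rates are illustrative): by the binomial thinning argument just given, (17) must coincide with (16) evaluated at the compound argument \((\lambda_{3-m}z+\mu)/(\lambda_{3-m}+\mu)\).

```python
# Sketch: E[z^{L(m)}] of (17) equals the busy-period GF (16) composed with
# the binomial-thinning argument (lambda_{3-m} z + mu)/(lambda_{3-m} + mu).
import math

lam = {1: 2.0, 2: 1.0}   # illustrative arrival rates
mu = 4.0
S = lam[1] + lam[2] + mu

def gf_K(m, z):
    # (16): GF of the number of services in the busy period B_m
    return (S - math.sqrt(S**2 - 4 * lam[m] * (lam[3 - m] + mu) * z)) / (2 * lam[m])

def gf_L(m, z):
    # (17): GF of the number of base-(3-m) arrivals during B_m
    return (S - math.sqrt(S**2 - 4 * lam[m] * (lam[3 - m] * z + mu))) / (2 * lam[m])

for m in (1, 2):
    for z in (0.0, 0.3, 0.7, 1.0):
        thinned = (lam[3 - m] * z + mu) / (lam[3 - m] + mu)
        assert abs(gf_L(m, z) - gf_K(m, thinned)) < 1e-12
print("OK")
```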
The key observation in determining the transition probabilities P (k,k),(i,i) in case (i) (so via a difference busy period B 1) is the following. During the difference busy period there is no equality of queue lengths. If the busy period starts from (k+1,k) and L (1)=ik, then there have been ik arrivals from base 2 during B 1, and none of these is served in B 1. Hence, the next level at which there are equal queue lengths is level i (in state (i,i)). Using similar reasonings in cases (ii) and (iii) finally results in the following transition probabilities of the Markov chain K:
$$ \begin{aligned} P_{(k,k),(i,i)} ={}& \frac{\lambda_1}{\lambda_1+\lambda_2+\mu}P\bigl(L^{(1)}=i-k\bigr) + \frac{\lambda_2}{\lambda_1+\lambda_2+\mu}P\bigl(L^{(2)}=i-k\bigr) \\ &+ \frac{\mu}{2(\lambda_1+\lambda_2+\mu)}\bigl[P\bigl(L^{(1)}=i-k+1\bigr)+P\bigl(L^{(2)}=i-k+1\bigr)\bigr], \quad k\geq 1,\ i\geq k-1, \\ P_{(0,0),(i,i)} ={}& \frac{\lambda_1}{\lambda_1+\lambda_2}P\bigl(L^{(1)}=i\bigr) + \frac{\lambda_2}{\lambda_1+\lambda_2}P\bigl(L^{(2)}=i\bigr), \quad i\geq 0. \end{aligned} $$
(18)
Introducing a i :=P (0,0),(i,i), and b l :=P (k,k),(k+l−1,k+l−1), l≥0, k≥1, we notice that we have an M/G/1-type Markov chain that satisfies (cf. (15)) the following balance equations:
$$ \pi_{i}=\pi_{0}a_{i}+\sum _{k=1}^{i+1}\pi_{k}b_{i-k+1},\quad i=0,1, \ldots. $$
(19)
Introducing the GF \(A(z):=\sum_{i=0}^{\infty }a_{i}z^{i}\), \(B(z):=\sum_{i=0}^{\infty }b_{i}z^{i}\), and \(\varPi (z):=\sum_{i=0}^{\infty }\pi_{i}z^{i}\), it is easily seen that
$$ \varPi (z)=\pi_{0}\frac{zA(z)-B(z)}{z-B(z)}. $$
(20)
Here,
$$ A(z)=\frac{\lambda_{1}}{\lambda_{1}+\lambda_{2}}E\bigl[z^{L^{(1)}}\bigr]+\frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}E \bigl[z^{L^{(2)}}\bigr], $$
$$ B(z)=\frac{(\lambda_{1}z+\frac{\mu }{2})E[z^{L^{(1)}}]+(\lambda_{2}z+\frac{\mu }{2})E[z^{L^{(2)}}]}{\lambda_{1}+\lambda_{2}+\mu }. $$
π 0 follows from the normalization Π(1)=1, by applying l’Hôpital’s rule to (20):
$$ \pi_{0}=\frac{B^{\prime }(1)-1}{B^{\prime }(1)-1-A^{\prime }(1)}, $$
(21)
where
$$ A^{\prime }(1)=\frac{\lambda_{1}}{\lambda_{1}+\lambda_{2}}E\bigl[L^{(1)}\bigr]+\frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}E\bigl[L^{(2)}\bigr], $$
$$ B^{\prime }(1)=\frac{\lambda_{1}+\lambda_{2}+(\lambda_{1}+\frac{\mu }{2})E[L^{(1)}]+(\lambda_{2}+\frac{\mu }{2})E[L^{(2)}]}{\lambda_{1}+\lambda_{2}+\mu }. $$
Finally, let us determine the GF of P(i,i) using (13) and (14):
$$ \sum_{i=0}^{\infty}P(i,i)z^{i} = 1-\rho + \frac{\rho (1-\rho )}{1+\rho}\,\frac{\varPi (z)-\pi_{0}}{\pi_{0}} = 1-\rho + \frac{\rho (1-\rho )}{1+\rho}\,\frac{z(A(z)-1)}{z-B(z)}. $$
(22)
In particular, it follows that
$$ P(N_{1}=N_{2})=\sum_{i=0}^{\infty }P(i,i)=1- \rho +\frac{\rho (1-\rho )}{1+\rho }\frac{A^{\prime }(1)}{1-B^{\prime }(1)}. $$
(23)
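As an illustration (a Python sketch of ours; the parameter values are those of the numerical example in Sect. 3.4, λ 1=2, λ 2=1, μ=4), (23) can be evaluated by differentiating (17) numerically at z=1.

```python
# Sketch: evaluate P(N1 = N2) via (23), with E[L^(m)] obtained from (17)
# by a central finite difference at z = 1.
import math

lam = {1: 2.0, 2: 1.0}
mu = 4.0
S = lam[1] + lam[2] + mu
rho = (lam[1] + lam[2]) / mu

def gf_L(m, z):
    # (17): GF of the number of base-(3-m) arrivals during B_m
    return (S - math.sqrt(S**2 - 4 * lam[m] * (lam[3 - m] * z + mu))) / (2 * lam[m])

h = 1e-7
EL = {m: (gf_L(m, 1 + h) - gf_L(m, 1 - h)) / (2 * h) for m in (1, 2)}  # E[L^(m)]

A1 = (lam[1] * EL[1] + lam[2] * EL[2]) / (lam[1] + lam[2])             # A'(1)
B1 = (lam[1] + lam[2] + (lam[1] + mu / 2) * EL[1]
      + (lam[2] + mu / 2) * EL[2]) / S                                  # B'(1)

P_equal = 1 - rho + rho * (1 - rho) / (1 + rho) * A1 / (1 - B1)         # (23)
print(P_equal)   # about 0.4318, consistent with the diagonal of Table 1
```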

Remark 2

Zheng and Zipkin [9], studying the fully symmetric case with λ 1=λ 2, give a relation between the probabilities P(i,i) and the marginal queue lengths P(N 1=i) in that symmetric case:
$$ P(N_{1}=i)=\frac{\rho }{2}P(N_{1}=i-1)+ \frac{1}{2}P(i,i)+\frac{1}{\rho }P(i+1,i+1),\quad i=1,2,\ldots , $$
(24)
and \(P(N_{1}=0)=(1-\rho )(1+\sqrt{1+\rho^{2}})/(1-\rho +\sqrt{1+\rho^{2}})\). It is trivial to use (24) to establish a relation between \(\sum_{i=0}^{\infty }z^{i}P(i,i)\) and \(\sum_{i=0}^{\infty }z^{i}P(N_{1}=i)\); the latter GF now follows from (22).

3.2 Determination of P(i+1,i) and P(i,i+1)

In Sects. 2.1, 2.2, and 2.3, we successively derived simple geometric expressions for P(N 1+N 2=n), P(n 1,0) and P(0,n 2), and for P(N 1−N 2=n|N 1>N 2) and P(N 2−N 1=n|N 2>N 1). In Sect. 3.1, we obtained a slightly more complicated expression for the (GF of) P(i,i). In the present subsection, we shall determine P(i+1,i) and P(i,i+1); that is an important building block for obtaining all P(i,j). Once more, there is a crucial role for the idea of having a “difference busy period” that starts at the moment the queue length vector leaves the state (i,i), and that lasts until equality is reached again. In the next subsection, we shall subsequently show how all P(i,j) can be determined once we know the above mentioned probabilities.

Consider the discrete-time Markov chain denoting the state at the times in which the difference between the two queue lengths becomes 0, 1 or −1. Its state space S consists of the states (i,i), (i+1,i) and (i,i+1), i=0,1,…. It has the following one-step transition probabilities:
$$ P_{(0,0),(1,0)} = \frac{\lambda_1}{\lambda_1 + \lambda_2}, \quad P_{(0,0),(0,1)} = \frac{\lambda_2}{\lambda_1 + \lambda_2}, $$
and for i=1,2,… (the transitions out of the states (i+1,i) and (i,i+1) below in fact hold for i=0,1,… as well):
$$ P_{(i,i),(i+1,i)} = \frac{\lambda_1}{\lambda_1+\lambda_2+\mu}, \qquad P_{(i,i),(i,i+1)} = \frac{\lambda_2}{\lambda_1+\lambda_2+\mu}, \qquad P_{(i,i),(i-1,i)} = P_{(i,i),(i,i-1)} = \frac{\mu}{2(\lambda_1+\lambda_2+\mu)}, $$
$$ P_{(i+1,i),(i,i)} = P_{(i,i+1),(i,i)} = \frac{\mu}{\lambda_1+\lambda_2+\mu}, \qquad P_{(i+1,i),(i+1,i+1)} = \frac{\lambda_2}{\lambda_1+\lambda_2+\mu}, \qquad P_{(i,i+1),(i+1,i+1)} = \frac{\lambda_1}{\lambda_1+\lambda_2+\mu}, $$
$$ P_{(i+1,i),(i+l+1,i+l)} = \frac{\lambda_1}{\lambda_1+\lambda_2+\mu}P\bigl(L^{(1)}=l\bigr), \qquad P_{(i,i+1),(i+l,i+l+1)} = \frac{\lambda_2}{\lambda_1+\lambda_2+\mu}P\bigl(L^{(2)}=l\bigr), \quad l=0,1,\ldots. $$
The last line perhaps requires an explanation. If a type-1 arrival occurs in state (i+1,i), then the difference becomes 2. It now takes a “difference busy period” until the difference returns to 1. With probability P(L (1)=l), l customers of type 2 arrive during this busy period, so in the discrete-time Markov chain under consideration we then move from state (i+1,i) to state (i+l+1,i+l).
Let us denote the limiting probabilities of the Markov chain by π(i+1,i), π(i,i+1) and π(i,i), i=0,1,…. Since 1/π(0,0) is the expected number of visits to states of S during a busy cycle, we get that \(\frac{\pi(i,i)}{\pi(0,0)}\), \(\frac{\pi(i+1,i)}{\pi(0,0)}\) and \(\frac{\pi(i,i+1)}{\pi(0,0)}\), i=1,2,… are, respectively, the expected numbers of visits to states (i,i), (i+1,i), and (i,i+1) during a busy cycle. Notice that, hence, π(i,i)/π(0,0) is identical to the π i /π 0 of the previous subsection; they both represent the mean number of visits to state (i,i) during a busy cycle. Let us concentrate on P(i+1,i); P(i,i+1) then follows by symmetry (interchanging λ 1 and λ 2). We find for i=0,1,…:
$$ \pi(i+1,i) = \frac{\lambda_1}{\lambda_1+\lambda_2}\pi(0,0)I(i=0) + \frac{\lambda_1}{\lambda_1+\lambda_2+\mu}\pi(i,i)I(i\geq 1) + \frac{\mu}{2(\lambda_1+\lambda_2+\mu)}\pi(i+1,i+1) + \frac{\lambda_1}{\lambda_1+\lambda_2+\mu}\sum_{j=0}^{i}\pi(j+1,j)P\bigl(L^{(1)}=i-j\bigr). $$
(25)
Dividing both sides by π(0,0) and introducing \(F_{+}(z) := \sum_{i=0}^{\infty} \frac{\pi(i+1,i)}{\pi(0,0)} z^{i}\) and \(\varTheta(z) := \sum_{i=0}^{\infty} \theta_{i} z^{i}\) where, as before, θ i =π i /π 0=π(i,i)/π(0,0), we find after a straightforward calculation:
$$ F_+(z) = \sum_{i=0}^{\infty} \frac{\pi(i+1,i)}{\pi(0,0)} z^i = \frac{\frac{\lambda_1}{\lambda_1+\lambda_2} + \frac{\lambda_1}{\lambda_1+\lambda_2+\mu} (\varTheta(z) -1) + \frac{\mu}{2(\lambda_1+\lambda_2+\mu)} \frac{\varTheta(z) -1}{z}}{1 - \frac{\lambda_1}{\lambda_1+\lambda_2+\mu} E[z^{L^{(1)}}]} . $$
(26)
The GF F +(z) of the \(\frac{\pi(i+1,i)}{\pi(0,0)}\) now follows from the known expressions for \(E[z^{L^{(1)}}]\) (cf. (17)) and for Θ(z) (via Π(z), cf. (20)).
We finally obtain P(i+1,i). Remembering that \(\frac{\pi(i+1,i)}{\pi(0,0)}\) is the expected number of visits to state (i+1,i) during the server busy period, and using Markov renewal theory, it follows that, like in (13),
$$ P(i+1,i) = \frac{\frac{1}{\lambda_1 + \lambda_2 +\mu}}{E[\mathit{BC}]} \frac{\pi(i+1,i)}{\pi(0,0)} = \frac{\rho(1-\rho)}{1+\rho} \frac{\pi(i+1,i)}{\pi(0,0)} , \quad i=0,1,\ldots. $$
(27)
Using (26), which gives the GF of P(i+1,i), this determines all P(i+1,i) for i=0,1,…, and by symmetry the P(i,i+1) also follow. Notice that P(j,0) and P(0,j) were given in Sect. 2.2 up to a multiplicative constant; that constant now follows since P(1,0) also is given by (27).
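Numerically (a Python sketch of ours, for the parameters of the example in Sect. 3.4), the GF of the \(\pi(i+1,i)/\pi(0,0)\) can be evaluated near z=0 and combined with (27) to produce P(1,0); the empty state is weighted by λ 1/(λ 1+λ 2), in accordance with the transition probabilities P (0,0),(1,0) given at the start of this subsection.

```python
# Sketch: P(1,0) from the difference-busy-period machinery,
# for lambda1 = 2, lambda2 = 1, mu = 4.
import math

lam1, lam2, mu = 2.0, 1.0, 4.0
S = lam1 + lam2 + mu
rho = (lam1 + lam2) / mu

def EzL(a, b, z):
    # (17) with "own" arrival rate a and other-base arrival rate b
    return (S - math.sqrt(S**2 - 4 * a * (b * z + mu))) / (2 * a)

def A(z):   # cf. the expression for A(z) above
    return (lam1 * EzL(lam1, lam2, z) + lam2 * EzL(lam2, lam1, z)) / (lam1 + lam2)

def B(z):   # cf. the expression for B(z) above
    return ((lam1 * z + mu / 2) * EzL(lam1, lam2, z)
            + (lam2 * z + mu / 2) * EzL(lam2, lam1, z)) / S

def Theta(z):
    return (z * A(z) - B(z)) / (z - B(z))     # = Pi(z)/pi_0, cf. (20)

z = 1e-6                                      # numerical z -> 0 limit
theta1 = (Theta(z) - 1.0) / z                 # theta_1
num = (lam1 / (lam1 + lam2)                   # weight of the empty state
       + (lam1 / S) * (Theta(z) - 1.0)
       + (mu / (2 * S)) * theta1)
den = 1.0 - (lam1 / S) * EzL(lam1, lam2, z)
F_plus_0 = num / den                          # pi(1,0)/pi(0,0)
P10 = rho * (1 - rho) / (1 + rho) * F_plus_0  # (27) with i = 0
print(P10)   # close to P(1,0) = 0.121068 in Table 1
```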

3.3 Determination of P(i,j)

We now describe a recipe to find the remaining P(i,j), j≥i+2 and i≥j+2. By symmetry, we can concentrate on the former case. First, we indicate how to obtain all P(1,j), j≥3; P(1,2) follows from the results of the previous subsection. From (1), we have
$$ (\lambda_1 + \lambda_2 + \mu) P(1,j) = \lambda_1 P(0,j) + \lambda_2 P(1,j-1) + \mu P(1,j+1) + \frac{\mu}{2} P(2,2) I(j=2) , \quad j \geq 2 . $$
(28)
For j=2, the balance equation (which then also contains the inflow term \(\frac{\mu}{2}P(2,2)\) from state (2,2)) immediately determines P(1,3), as we know all other terms: P(0,2) from Sect. 2.2, P(1,1) and P(2,2) from Sect. 3.1, and P(1,2) from Sect. 3.2. For j≥3, one can now use (28) to get P(1,j+1) once we have P(1,j−1) and P(1,j). We refrain from giving details, but we would like to observe the following. Equation (28) is an inhomogeneous second-order difference equation. The general solution of the homogeneous second-order difference equation reads \(K_{1} a_{+}^{j} + K_{2} a_{-}^{j}\), j≥2, where \(1/a_{+}\) and \(1/a_{-}\) are the two zeros of the equation \(\lambda_{2} x^{2}-(\lambda_{1}+\lambda_{2}+\mu)x+\mu =0\). We already encountered this expression, with λ 1 and λ 2 interchanged, in the denominator of (6). There we concluded that one of the two roots has absolute value smaller than one. Accordingly, we may conclude that the summable general solution of the homogeneous second-order difference equation reads \(K_{1} a_{+}^{j}\), j≥2. Turning to the inhomogeneous equation, it should be observed that \(P(0,j) = \tilde{C}_{1} a_{+}^{j}\), with exactly the same parameter \(a_{+}\) as for the homogeneous equation. This follows, by symmetry, from (8). The theory of inhomogeneous difference equations now implies that the general solution of (28) is given by \((L_{0} + L_{1} j) a_{+}^{j-1}\). This would also readily follow via an approach with generating functions, expressing ∑P(1,j)z j in terms of ∑P(0,j)z j .
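For the parameters of the numerical example in Sect. 3.4, the first steps of this recursion can be carried out directly (a Python sketch of ours; the input values are rounded Table 1 entries, and the j=2 balance includes the tie inflow \(\frac{\mu}{2}P(2,2)\) from state (2,2)).

```python
# Sketch: stepping the recursion for P(1,j), lambda1 = 2, lambda2 = 1, mu = 4.
lam1, lam2, mu = 2.0, 1.0, 4.0
S = lam1 + lam2 + mu

# entries taken from Table 1 (rounded to six decimals)
P02, P03 = 0.010425, 0.001636
P11, P22 = 0.086662, 0.043391
P12 = 0.030848

# j = 2: mu P(1,3) = S P(1,2) - lam1 P(0,2) - lam2 P(1,1) - (mu/2) P(2,2)
P13 = (S * P12 - lam1 * P02 - lam2 * P11 - (mu / 2) * P22) / mu
# j = 3: plain form, mu P(1,4) = S P(1,3) - lam1 P(0,3) - lam2 P(1,2)
P14 = (S * P13 - lam1 * P03 - lam2 * P12) / mu
print(P13, P14)   # Table 1 lists 0.005411 and 0.000938
```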
Next, we indicate how to obtain all P(i,j), i≥2, j≥i+2. From (1), we have
$$ (\lambda_1 + \lambda_2 + \mu) P(i,j) = \lambda_1 P(i-1,j) + \lambda_2 P(i,j-1) + \mu P(i,j+1) . $$
(29)
After having obtained P(i,j−1) and P(i,j), and the (lower level) P(i−1,j), the P(i,j+1) follow. Again observe that (29) is an inhomogeneous second-order difference equation of exactly the same form as (28). By induction, one may show that its solution has the form:
$$ P(i,j) = \sum_{m=0}^{i} h_m j^m a_+^{j-i}, \quad j=i+1,i+2,\ldots. $$
(30)
In fact, in the completely symmetric case this was proven in [9], and an algorithm was provided to determine the h m .
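The same kind of check works for (29) (again a sketch of ours, with rounded Table 1 entries for λ 1=2, λ 2=1, μ=4 as input).

```python
# Sketch: one step of (29), solved for P(i, j+1), at (i, j) = (2, 4).
lam1, lam2, mu = 2.0, 1.0, 4.0
S = lam1 + lam2 + mu

P14, P23, P24 = 0.000938, 0.015993, 0.002840    # Table 1 entries
P25 = (S * P24 - lam1 * P14 - lam2 * P23) / mu  # (29) solved for P(2,5)
print(P25)   # Table 1 lists 0.000502
```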

Remark 3

An interesting feature of the present model is that, for general service times, it has only been solved via the boundary value method (cf. Cohen [2]). In almost all two-dimensional queueing problems which have been solved via the boundary value method, taking exponential service times does not simplify the problem to such an extent that one no longer needs to rely on that method. In the present problem, though, there seems to be so much structure that, in the exponential case, the P(i,j) have the nice form indicated in (30).

3.4 Numerical example

We have implemented the formulas and algorithms for calculating P(i,j), as outlined above, in MATLAB. We have first computed the P(i,j) in a completely symmetric case (λ 1=λ 2) that was already studied in [9], and we verified that we obtained the same numbers as in their Table III. Next, we took an asymmetric case: λ 1=2, λ 2=1 and μ=4; so the load ρ=0.75. The results are presented in Table 1. First observe that P(0,0)=1−ρ=0.25. Next, one may observe that \(\sum_{j=0}^{i}P(j,i-j)=(1-\rho )\rho^{i}\) (Sect. 2.1) and that the P(i,0) and P(0,j) decrease geometrically fast (Sect. 2.2). We have computed the P(i,i) using (22). Notice that P(1,1) was used in calculating the constants C 1=P(1,0) and \(\tilde{C}_{1}=P(0,1)\) as outlined at the end of Sect. 2.2. Next, we determined P(i+1,i) and P(i,i+1) using (25) and (27). Finally, we have computed P(i,j) for |ij|≥2 using (28)–(30). Notice that P(i,j)>P(j,i) for i>j; this makes sense as λ 1>λ 2.
Table 1

Queue length probabilities P(i,j) for the case λ 1=2, λ 2=1, μ=4

i\j         0         1         2         3         4         5         6         7

0    0.250000  0.066432  0.010425  0.001636  0.000257  0.000040  0.000006  0.000001
1    0.121068  0.086662  0.030848  0.005411  0.000938  0.000161  0.000028  0.000005
2    0.043537  0.057328  0.043391  0.015993  0.002840  0.000502  0.000088  0.000015
3    0.015657  0.024413  0.030185  0.023189  0.008623  0.001531  0.000271  0.000048
4    0.005630  0.010145  0.013431  0.016414  0.012686  0.004734  0.000838  0.000149
5    0.002025  0.004139  0.005875  0.007430  0.009052  0.007018  0.002624  0.000464
6    0.000728  0.001665  0.002532  0.003326  0.004131  0.005028  0.003906  0.001462
7    0.000262  0.000662  0.001076  0.001473  0.001871  0.002305  0.002805  0.002182
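The structural identities quoted above are easy to re-check on the Table 1 entries. The following sketch (in Python; the paper's own MATLAB implementation is not reproduced here, so this is only an independent sanity check) verifies P(0,0)=1−ρ, the diagonal identity of Sect. 2.1, and the interior balance equation (29); the test region j≥i+2, i≥1 for (29) is our reading of Sect. 3.3:

```python
# Queue length probabilities P(i,j) from Table 1 (lambda1=2, lambda2=1, mu=4);
# rows i = 0..7, columns j = 0..7.
P = [
    [0.250000, 0.066432, 0.010425, 0.001636, 0.000257, 0.000040, 0.000006, 0.000001],
    [0.121068, 0.086662, 0.030848, 0.005411, 0.000938, 0.000161, 0.000028, 0.000005],
    [0.043537, 0.057328, 0.043391, 0.015993, 0.002840, 0.000502, 0.000088, 0.000015],
    [0.015657, 0.024413, 0.030185, 0.023189, 0.008623, 0.001531, 0.000271, 0.000048],
    [0.005630, 0.010145, 0.013431, 0.016414, 0.012686, 0.004734, 0.000838, 0.000149],
    [0.002025, 0.004139, 0.005875, 0.007430, 0.009052, 0.007018, 0.002624, 0.000464],
    [0.000728, 0.001665, 0.002532, 0.003326, 0.004131, 0.005028, 0.003906, 0.001462],
    [0.000262, 0.000662, 0.001076, 0.001473, 0.001871, 0.002305, 0.002805, 0.002182],
]
lam1, lam2, mu = 2.0, 1.0, 4.0
rho = (lam1 + lam2) / mu

# P(0,0) = 1 - rho.
assert abs(P[0][0] - (1.0 - rho)) < 1e-6

# Total-queue-length marginal (Sect. 2.1): sum_{j=0}^{i} P(j,i-j) = (1-rho) rho^i.
for i in range(8):
    diag = sum(P[j][i - j] for j in range(i + 1))
    assert abs(diag - (1.0 - rho) * rho ** i) < 1e-5

# Interior balance equation (29), tested on the region j >= i+2, i >= 1:
# (lam1+lam2+mu) P(i,j) = lam1 P(i-1,j) + lam2 P(i,j-1) + mu P(i,j+1).
for i in range(1, 6):
    for j in range(i + 2, 7):
        lhs = (lam1 + lam2 + mu) * P[i][j]
        rhs = lam1 * P[i - 1][j] + lam2 * P[i][j - 1] + mu * P[i][j + 1]
        assert abs(lhs - rhs) < 2e-5  # entries are rounded to 6 decimals
print("Table 1 checks passed")
```

All three families of identities hold to within the rounding of the published six-decimal entries.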

Remark 4

In the case of equal arrival rates, the computation of the steady-state probabilities simplifies somewhat. In particular, one now has P(i,j)=P(j,i). The P(j,0)=P(0,j) follow as before from (8). The P(i,i) follow from (22). The P(i,i+1)=P(i+1,i) follow using the relations (2λ+μ)P(i,i)=2λP(i−1,i)+2μP(i+1,i). Note that P(1,0)=P(0,1) were already determined. Finally, the P(i,j)=P(j,i) follow for |i−j|≥2 as in Sect. 3.3.

4 Sojourn times

The main purpose of this section is to express the LST of the sojourn time distribution of a customer in terms of the joint queue length distribution that was derived in the previous sections. We focus on a customer who has brought a failed item of type 1; by interchanging indices 1 and 2 (in particular, the arrival rates), we then also obtain the sojourn time LST for items of type 2.

4.1 The LST of the sojourn time distribution

Let X k,j := sojourn time of a type-1 customer who increases the number of customers waiting in line 1 from j−1 to j, and whose arrival increases the difference between numbers of waiting customers in lines 1 and 2 from k−1 to k, k≥1, j≥1. We similarly define X k,j for k≤0; in that case, too, the arrival increases the difference between lines 1 and 2 from k−1 to k. Define \(\varPsi_{k,j}(\alpha) := E[\mathrm{e}^{-\alpha X_{k,j}}]\).

Case I: k≥0

Let us first concentrate on the case k≥1, j≥1. Notice that X k,j lasts until the moment that j items have been returned to base 1. Conditioning on the amount of time until the first event occurs, we can write for k≥1, j≥1:
$$ \varPsi_{k,j}(\alpha )=\frac{\lambda_{1}\varPsi_{k+1,j}(\alpha )+\lambda_{2}\varPsi_{k-1,j}(\alpha )+\mu \varPsi_{k-1,j-1}(\alpha )}{\mu +\lambda_{1}+\lambda_{2}+\alpha },\quad k\geq 1,\ j\geq 1. $$
(31)
Here, we define Ψ k,0:=1 for all k: a customer whose item has already been returned has zero residual sojourn time. Similarly, we obtain for k=0, j≥1:
$$ \varPsi_{0,j}(\alpha )=\frac{\lambda_{1}\varPsi_{1,j}(\alpha )+\lambda_{2}\varPsi_{-1,j}(\alpha )+\frac{\mu }{2} [ \varPsi_{-1,j-1}(\alpha )+\varPsi_{1,j}(\alpha ) ] }{\mu +\lambda_{1}+\lambda_{2}+\alpha }. $$
(32)
We shall solve this set of recurrence relations using generating functions. Let
$$ G(z_{1},z_{2};\alpha ):=\sum_{k=1}^{\infty }\sum_{j=1}^{\infty }\varPsi_{k,j}(\alpha )z_{1}^{k}z_{2}^{j}, $$
(33)
$$ G_{0}(z_{2};\alpha ):=\sum_{j=1}^{\infty }\varPsi_{0,j}(\alpha )z_{2}^{j}, $$
(34)
$$ G_{1}(z_{2};\alpha ):=\sum_{j=1}^{\infty }\varPsi_{1,j}(\alpha )z_{2}^{j}. $$
(35)
Multiplying both sides of (31) by \(z_{1}^{k}z_{2}^{j}\) and summing over k≥1, j≥1 yields:
$$ G(z_{1},z_{2};\alpha )=\frac{\lambda_{1} \bigl[ \frac{1}{z_{1}}G(z_{1},z_{2};\alpha )-G_{1}(z_{2};\alpha ) \bigr] +\lambda_{2}z_{1} \bigl[ G_{0}(z_{2};\alpha )+G(z_{1},z_{2};\alpha ) \bigr] +\mu z_{1}z_{2} \bigl[ \frac{1}{1-z_{1}}+G_{0}(z_{2};\alpha )+G(z_{1},z_{2};\alpha ) \bigr] }{\mu +\lambda_{1}+\lambda_{2}+\alpha }. $$
(36)
We need to determine G 0(z 2;α) and G 1(z 2;α). One relation between these two functions is obtained via (32). First, we rewrite that equation by observing that
$$ \varPsi_{-1,j}(\alpha) = \varGamma(\alpha) \varPsi_{0,j}(\alpha), $$
(37)
where \(\varGamma(\alpha) = E[\mathrm{e}^{-\alpha B_{2}}]\), the LST of the difference busy period corresponding to an M/M/1 queue with arrival rate λ 2 and service rate λ 1+μ. The idea behind (37) is the following. If the tagged type-1 customer arrives to find j−1 customers in Q 1 and j+1 customers in Q 2, leading to a state (j,j+1), then it first takes a difference busy period B 2 until the two queue lengths are again equal. No type-1 items are returned during that busy period. At the end of B 2, the state has become (j+m,j+m) for some m≥0, with the tagged customer still in position j of Q 1. For the sojourn time of the tagged customer, it makes no difference whether the system is in state (j+m,j+m) or in state (j,j).
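For concreteness, Γ(α) admits the standard M/M/1 busy-period closed form with arrival rate λ 2 and service rate λ 1+μ; the closed form is not displayed in the text above, so the sketch below is a hedged supplement rather than a quotation of the paper. It checks numerically that Γ(0)=1 and that E[B 2]=−Γ′(0)=1/(λ 1+μ−λ 2):

```python
import math

lam1, lam2, mu = 2.0, 1.0, 4.0  # rates of the numerical example of Sect. 3.4

def gamma_lst(alpha):
    """LST of the difference busy period B2: standard busy-period LST of an
    M/M/1 queue with arrival rate lam2 and service rate lam1 + mu.
    (This closed form is an assumption of this sketch, not printed in the text.)"""
    s = lam2 + (lam1 + mu) + alpha
    return (s - math.sqrt(s * s - 4.0 * lam2 * (lam1 + mu))) / (2.0 * lam2)

# Gamma(0) = 1, since B2 is finite w.p. 1 (lam2 < lam1 + mu) ...
assert abs(gamma_lst(0.0) - 1.0) < 1e-12

# ... and E[B2] = -Gamma'(0) = 1/(lam1 + mu - lam2) = 0.2 here
# (checked by a forward finite difference).
h = 1e-6
deriv = (gamma_lst(h) - gamma_lst(0.0)) / h
assert abs(-deriv - 1.0 / (lam1 + mu - lam2)) < 1e-4
print("Gamma(alpha) checks passed")
```

The value E[B 2]=1/(λ 1+μ−λ 2) reappears below in (53)–(54) and in the constant of (58).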

Remark 5

To see that it indeed makes no difference for the sojourn time of the tagged customer whether the system is in state (j+m,j+m) or in state (j,j), suppose that a type-1 customer (just for convenience we refer to this customer as the red customer), arrives to find the system at state (j−1,j). That means that he finds j−1 type-1 customers in front of him in line and also j type-2 customers; however, it is not yet determined how many of them are served before him. The reason for that is that in principle, the sojourn time of the red customer depends on future arrivals of both types of customers. After the admittance of the red customer, the state of the system becomes (j,j), and the LST of the sojourn time of the red customer is Ψ 0,j (α). However, the latter dependence on future arrivals has a special regenerative property. For example, suppose that m type-1 customers and m type-2 customers were admitted to the system after the arrival of the red customer and before the service completion of the item being served. Then, by the memoryless property of the service, the residual sojourn time of the red customer (the time it takes from the arrival of the above 2mth customer until the red customer leaves the system), is stochastically equal to the sojourn time of the red customer. Thus, the LST of the above residual sojourn time is also Ψ 0,j (α).

Thus, rewriting (32) yields for j=1:
$$ (\mu +\lambda_{1}+\lambda_{2}+\alpha )\varPsi_{0,1}(\alpha )=\lambda_{1}\varPsi_{1,1}(\alpha )+\lambda_{2}\varGamma (\alpha )\varPsi_{0,1}(\alpha )+\frac{\mu }{2} \bigl[ 1+\varPsi_{1,1}(\alpha ) \bigr] , $$
(38)
and for j≥2:
$$ (\mu +\lambda_{1}+\lambda_{2}+\alpha )\varPsi_{0,j}(\alpha )=\lambda_{1}\varPsi_{1,j}(\alpha )+\lambda_{2}\varGamma (\alpha )\varPsi_{0,j}(\alpha )+\frac{\mu }{2} \bigl[ \varGamma (\alpha )\varPsi_{0,j-1}(\alpha )+\varPsi_{1,j}(\alpha ) \bigr] , $$
(39)
or equivalently, for j≥2:
$$ \bigl( \mu +\lambda_{1}+\lambda_{2}+\alpha -\lambda_{2}\varGamma (\alpha ) \bigr) \varPsi_{0,j}(\alpha )= \biggl( \lambda_{1}+\frac{\mu }{2} \biggr) \varPsi_{1,j}(\alpha )+\frac{\mu }{2}\varGamma (\alpha )\varPsi_{0,j-1}(\alpha ). $$
(40)
Multiplying these equations by \(z_{2}^{j}\) ((38) for j=1, (40) for j≥2) and summing over j≥1 gives:
$$ \biggl( \mu +\lambda_{1}+\lambda_{2}+\alpha -\lambda_{2}\varGamma (\alpha )-\frac{\mu }{2}z_{2}\varGamma (\alpha ) \biggr) G_{0}(z_{2};\alpha )= \biggl( \lambda_{1}+\frac{\mu }{2} \biggr) G_{1}(z_{2};\alpha )+\frac{\mu }{2}z_{2}. $$
(41)
Subsequently, we derive a second relation between G 0(z 2;α) and G 1(z 2;α). We first rewrite (36), by grouping all G(z 1,z 2;α) terms, and multiplying all terms by μ+λ 1+λ 2+α:
$$ \biggl( \mu +\lambda_{1}+\lambda_{2}+\alpha -\frac{\lambda_{1}}{z_{1}}-\lambda_{2}z_{1}-\mu z_{1}z_{2} \biggr) G(z_{1},z_{2};\alpha )=\lambda_{2}z_{1}G_{0}(z_{2};\alpha )+\mu z_{1}z_{2} \biggl[ \frac{1}{1-z_{1}}+G_{0}(z_{2};\alpha ) \biggr] -\lambda_{1}G_{1}(z_{2};\alpha ). $$
(42)
Consider the factor in front of G(z 1,z 2;α) in the left-hand side of (42). Multiplying this factor by z 1 and equating the result to zero yields the following quadratic equation in z 1:
$$ (\lambda_{2}+\mu z_{2})z_{1}^{2}-( \mu +\lambda_{1}+\lambda_{2}+\alpha )z_{1}+ \lambda_{1}=0. $$
(43)
We shall show that this equation has one root \(z_{1}^{+}(z_{2})\) inside the unit circle, and one root \(z_{1}^{-}(z_{2})\) outside the unit circle. Notice that \(1/z_{1}^{+}(z_{2})\) and \(1/z_{1}^{-}(z_{2})\) are the two roots of the equation
$$ \lambda_{1}y^{2}-(\mu +\lambda_{1}+ \lambda_{2}+\alpha )y+(\lambda_{2}+\mu z_{2})=0. $$
(44)
Following the reasoning that led to (16) and (17), it can be shown that the minus root of (44) is given by \(E[z_{2}^{K^{(1)}-L^{(1)}}\mathrm{e}^{-\alpha B_{1}}]\), where K (1)L (1) equals the number of item departures during the difference busy period B 1. This interpretation immediately shows that this y lies inside the unit circle. The sum of the two y-roots equals \(\frac{\mu +\lambda_{1}+\lambda_{2}+\alpha }{\lambda_{1}}\), which in absolute value is at least 2 since λ 1+λ 2<μ. Hence, one y-root is inside the unit circle and the other one lies outside, implying that the same holds for the roots of (43).
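The root classification can be illustrated numerically. The sketch below (the test points z 2 and α are arbitrary choices, not taken from the paper) solves (43) by the quadratic formula and confirms that exactly one root lies inside the unit circle:

```python
import cmath

lam1, lam2, mu = 2.0, 1.0, 4.0

def z1_roots(z2, alpha):
    """Both roots of (lam2 + mu*z2) z1^2 - (mu+lam1+lam2+alpha) z1 + lam1 = 0,
    i.e. of Eq. (43), computed with the quadratic formula."""
    a = lam2 + mu * z2
    b = -(mu + lam1 + lam2 + alpha)
    c = lam1
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# For these test points with |z2| < 1 (or alpha > 0), one root z1+ lies
# strictly inside the unit circle and the other, z1-, strictly outside.
for z2 in [0.0, 0.3, 0.7 + 0.2j, 0.99]:
    for alpha in [0.0, 0.5, 2.0]:
        r_small, r_large = sorted(z1_roots(z2, alpha), key=abs)
        assert abs(r_small) < 1.0 < abs(r_large)
print("root classification confirmed on all test points")
```

At the boundary point z 2=1, α=0 the outside root reaches the unit circle (it equals 1), consistent with the analyticity argument being applied for |z 2|≤1 and α>0.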
Since G(z 1,z 2;α) is analytic in z 1 for |z 1|<1, the right-hand side of (42) must be zero for \(z_{1} = z_{1}^{+}\). This yields a second relation between G 0(z 2;α) and G 1(z 2;α):
$$ \lambda_{1}G_{1}(z_{2};\alpha )=(\lambda_{2}+\mu z_{2})z_{1}^{+}(z_{2})G_{0}(z_{2};\alpha )+\frac{\mu z_{1}^{+}(z_{2})z_{2}}{1-z_{1}^{+}(z_{2})}. $$
(45)
We are thus able to determine those functions, and finally G(z 1,z 2;α) follows from (42).

In this way, we find the double GF of the sojourn time LST for a customer of type 1 who arrives to find j−1 customers waiting at Q 1, and whose arrival increases the difference between numbers of waiting customers in lines 1 and 2 from k−1 to k, for both k≥1 and k=0 (see G 0(z 2;α)).

Case II: k<0

In handling the case k<0, we use the same argument as in (37): if a customer arrives at base 1 and finds there j−1 waiting customers while the difference with the queue length in Q 2 decreases to k<0, then
$$ \varPsi_{k,j}(\alpha) = \varGamma(\alpha)^{-k} \varPsi_{0,j}(\alpha). $$
(46)
Indeed, \(\varGamma(\alpha)^{-k}= (E[\mathrm{e}^{- \alpha B_{2}}])^{-k}\) is the LST of the time it takes to reduce the difference between the two queue lengths to zero. Of course, during that process the queue lengths move to some state (j+m,j+m), but for the tagged customer in position j of Q 2, the value of m≥0 is irrelevant.

We thus obtain all conditional sojourn time LST’s. Multiplying by the probabilities P(j−1,jk) as seen by an arriving customer (use PASTA to conclude that these arrival probabilities are exactly the probabilities which were calculated in Sects. 2 and 3) and summing yields the unconditional sojourn time LST.

4.2 Further results for the sojourn time distribution

One might use the results of the previous subsection to obtain mean conditional sojourn times E(X k,j ). In the present subsection, we present an alternative approach for obtaining those means.

Let T j be the number of arrivals into base 1 minus the number of arrivals into base 2 that occur between the (j−1)th and the jth departure. Clearly, {T j ;j≥1} is a sequence of i.i.d. random variables; let T be the generic random variable of the sequence.

Lemma 1

$$ P(T=n)=\mu \biggl( \frac{\lambda_{1}}{\lambda_{2}} \biggr)^{\frac{n}{2}} \bigl[ 2 \sqrt{\lambda_{1}\lambda_{2}\bigl(a^{2}-1\bigr)} \bigr]^{-1} \bigl\{ a-\sqrt{a^{2}-1} \bigr \}^{\vert n\vert },\quad n=0,\pm 1,\pm 2,\ldots $$
where
$$ a=\frac{\lambda_{1}+\lambda_{2}+\mu }{2\sqrt{\lambda_{1}\lambda_{2}}}>1. $$

Proof

Let V i be the number of arrivals into base i (i=1,2) between two successive departures and let X be the generic service time of an item. Conditional on X=x, V i is a Poisson random variable with mean λ i x. Conditioning on X, we have for n=0,±1,±2,…
$$ P(T=n)=\int_{0}^{\infty }\mathrm{e}^{-(\lambda_{1}+\lambda_{2})x} \biggl( \frac{\lambda_{1}}{\lambda_{2}} \biggr)^{\frac{n}{2}} I_{\vert n\vert }\bigl(2\sqrt{\lambda_{1}\lambda_{2}}\,x\bigr)\, \mu \mathrm{e}^{-\mu x}\,dx, $$
(47)
where \(I_{n}(y)=\sum_{k=0}^{\infty }\frac{(y/2)^{n+2k}}{k!(k+n)!}\) is the Bessel function (of order n) of a purely imaginary argument. Using [6], we get
$$ \int_{0}^{\infty }e^{-ay}I_{n}(y)\,dy= \bigl( \sqrt{a^{2}-1} \bigr)^{-1} \bigl( a- \sqrt{a^{2}-1} \bigr)^{n}, \quad n=0,1,2,\ldots,\ a>1, $$
(48)
where
$$ a=\frac{\lambda_{1}+\lambda_{2}+\mu }{2\sqrt{\lambda_{1}\lambda_{2}}}. $$
 □
Note that
$$ P(T=n)=\left\{ \begin{array}{c@{\quad }c} cb_{1}^{n}, & n\geq 0, \\[3pt] cb_{2}^{-n}, & n<0,\end{array}\right. $$
(49)
where
$$ c=\mu \bigl[ 2\sqrt{\lambda_{1}\lambda_{2}\bigl(a^{2}-1\bigr)} \bigr]^{-1},\qquad b_{1}=(\lambda_{1}/\lambda_{2})^{1/2}\bigl(a-\bigl(a^{2}-1\bigr)^{1/2}\bigr), $$
and
$$ b_{2}=(\lambda_{2}/\lambda_{1})^{1/2} \bigl(a-\bigl(a^{2}-1\bigr)^{1/2}\bigr). $$
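As a sanity check on Lemma 1, the two-sided geometric form (49) can be compared with the generating function E[z^T]=μ/(μ+λ 1(1−z)+λ 2(1−1/z)), valid for b 2<z<1/b 1, which follows by conditioning on the exp(μ) service time (a standard computation, included here only as a cross-check, not quoted from the paper):

```python
import math

lam1, lam2, mu = 2.0, 1.0, 4.0
a = (lam1 + lam2 + mu) / (2.0 * math.sqrt(lam1 * lam2))
root = a - math.sqrt(a * a - 1.0)
c = mu / (2.0 * math.sqrt(lam1 * lam2 * (a * a - 1.0)))
b1 = math.sqrt(lam1 / lam2) * root          # geometric ratio for n >= 0
b2 = math.sqrt(lam2 / lam1) * root          # geometric ratio for n < 0

def p_T(n):
    """P(T = n) in the two-sided geometric form (49)."""
    return c * b1 ** n if n >= 0 else c * b2 ** (-n)

# The distribution sums to 1 ...
total = c / (1.0 - b1) + c * b2 / (1.0 - b2)
assert abs(total - 1.0) < 1e-12

# ... and its generating function matches E[z^T] computed directly from
# T = V1 - V2 with (V1, V2) conditionally Poisson given the exp(mu) service time.
for z in [0.4, 0.5, 0.8, 0.95]:
    series = c / (1.0 - b1 * z) + c * (b2 / z) / (1.0 - b2 / z)
    direct = mu / (mu + lam1 * (1.0 - z) + lam2 * (1.0 - 1.0 / z))
    assert abs(series - direct) < 1e-10
print("Lemma 1 / (49) cross-check passed")
```

Equivalently, b 2 and 1/b 1 are the two roots of λ 1 z²−(μ+λ 1+λ 2)z+λ 2=0, the poles of E[z^T]; the partial-fraction expansion of the generating function is exactly the two-sided geometric (49).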
Using Lemma 1, we now derive recursive relations for Ψ k,j , k≥0, j≥1; these relations underlie a recursive algorithm for computing E k,j :=E(X k,j ).
$$ E\bigl(e^{-\alpha X_{0,1}}\mid T=n\bigr)=\left\{ \begin{array}{l@{\quad }l} \frac{\mu }{\mu +\alpha }, & n\geq 1, \\[8pt] \frac{1}{2}\frac{\mu }{\mu +\alpha }[1+\varPsi_{1,1}(\alpha )], & n=0, \\[8pt] \frac{\mu }{\mu +\alpha }[\varGamma (\alpha )]^{-(n+1)}\varPsi_{0,1}(\alpha ), & n\leq -1,\end{array} \right . $$
(50)
so that by Lemma 1
$$ \varPsi_{0,1}(\alpha )=\frac{\mu }{\mu +\alpha } \biggl[ \frac{cb_{1}}{1-b_{1}}+\frac{c}{2}\bigl(1+\varPsi_{1,1}(\alpha )\bigr)+\frac{cb_{2}}{1-b_{2}\varGamma (\alpha )}\varPsi_{0,1}(\alpha ) \biggr] . $$
(51)
For k≥1,
$$ E\bigl(e^{-\alpha X_{k,1}}\mid T=n\bigr)= \left\{ \begin{array}{l@{\quad }l} \frac{\mu }{\mu +\alpha }, & n\geq 1-k, \\[6pt] \frac{1}{2}\frac{\mu }{\mu +\alpha }[1+\varPsi_{1,1}(\alpha )], & n=-k, \\[6pt] \frac{\mu }{\mu +\alpha }[\varGamma (\alpha )]^{-(n+k+1)}\varPsi_{0,1}(\alpha ), & n\leq -k-1,\end{array}\right. $$
and
$$ \varPsi_{k,1}(\alpha )=\frac{\mu }{\mu +\alpha } \biggl[ \frac{cb_{1}}{1-b_{1}}+c\,\frac{1-b_{2}^{k}}{1-b_{2}}+\frac{cb_{2}^{k}}{2}\bigl(1+\varPsi_{1,1}(\alpha )\bigr)+\frac{cb_{2}^{k+1}}{1-b_{2}\varGamma (\alpha )}\varPsi_{0,1}(\alpha ) \biggr] , $$
(52)
Substituting α=0 in the first derivative of (51) and (52), we obtain
$$ E_{0,1}=\frac{1}{\mu }+\frac{c}{2}E_{1,1}+ \frac{cb_{2}}{1-b_{2}}E_{0,1}+ \frac{cb_{2}^{2}}{(1-b_{2})^{2}(\lambda_{1}+\mu -\lambda_{2})} $$
(53)
and for k≥1
$$ E_{k,1}=\frac{1}{\mu }+\frac{cb_{2}^{k}}{2}E_{1,1}+\frac{cb_{2}^{k+1}}{1-b_{2}}E_{0,1}+\frac{cb_{2}^{k+2}}{(1-b_{2})^{2}(\lambda_{1}+\mu -\lambda_{2})},\quad k\geq 1. $$
(54)
Note that in the first phase we compute E 0,1 and E 1,1. Then, in the second phase, the E k,1 for k≥2 are obtained recursively.
From (53) and (54), we get
$$ E_{k,1}=b_{2}^{k}E_{0,1}+ \frac{1-b_{2}^{k}}{\mu }, $$
(55)
where obviously,
$$ \lim_{k\longrightarrow \infty }E_{k,1}=\frac{1}{\mu }. $$
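The first phase can be carried out by solving (53) together with (54) for k=1 as a 2×2 linear system. The sketch below does this for the numerical example of Sect. 4.3 (λ 1=2, λ 2=1, μ=4) and recovers the Table 2 values E 0,1=0.379555 and E 1,1=0.269332; the written-out coefficients are our transcription of (53)–(54), and E[B 2]=1/(λ 1+μ−λ 2) is used for the mean difference busy period:

```python
import math

lam1, lam2, mu = 2.0, 1.0, 4.0
a = (lam1 + lam2 + mu) / (2.0 * math.sqrt(lam1 * lam2))
c = mu / (2.0 * math.sqrt(lam1 * lam2 * (a * a - 1.0)))
b2 = math.sqrt(lam2 / lam1) * (a - math.sqrt(a * a - 1.0))
EB2 = 1.0 / (lam1 + mu - lam2)            # E[B2], mean difference busy period

# Phase 1: (53) and (54) with k=1 are two linear equations in (E01, E11):
#   E01 = 1/mu + (c/2) E11      + (c b2  /(1-b2)) E01 + c b2^2 EB2/(1-b2)^2
#   E11 = 1/mu + (c b2/2) E11   + (c b2^2/(1-b2)) E01 + c b2^3 EB2/(1-b2)^2
d = c * b2 ** 2 * EB2 / (1.0 - b2) ** 2
a11, a12, r1 = 1.0 - c * b2 / (1.0 - b2), -c / 2.0, 1.0 / mu + d
a21, a22, r2 = -c * b2 ** 2 / (1.0 - b2), 1.0 - c * b2 / 2.0, 1.0 / mu + b2 * d
det = a11 * a22 - a12 * a21               # solve A x = r by Cramer's rule
E01 = (r1 * a22 - a12 * r2) / det
E11 = (a11 * r2 - a21 * r1) / det
assert abs(E01 - 0.379555) < 1e-5         # Table 2, entry (k=0, j=1)
assert abs(E11 - 0.269332) < 1e-5         # Table 2, entry (k=1, j=1)

# Phase 2 closed form (55): E_{k,1} = b2^k E01 + (1 - b2^k)/mu -> 1/mu.
for k in range(2, 8):
    Ek1 = b2 ** k * E01 + (1.0 - b2 ** k) / mu
    assert abs(Ek1 - 1.0 / mu) <= b2 ** k
print("E01 =", round(E01, 6), " E11 =", round(E11, 6))
```

The recovered values agree with the j=1 column of Table 2 to six decimals, which also serves as a numerical confirmation of (53)–(55).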
Similarly, we create recursive equations for Ψ 0,j (α), j≥1:
$$ E\bigl(e^{-\alpha X_{0,j}}\mid T=n\bigr)=\left \{ \begin{array}{l@{\quad }l} \frac{\mu }{\mu +\alpha }\varPsi_{n-1,j-1}(\alpha ) , & n\geq 1, \\[6pt] \frac{ \mu }{\mu +\alpha }\frac{1}{2}[\varGamma (\alpha )\varPsi_{0,j-1}(\alpha )+\varPsi_{1,j}(\alpha )], & n=0, \\[6pt] \frac{\mu }{\mu +\alpha }[\varGamma (\alpha )]^{-(n+1)}\varPsi_{0,j}(\alpha ), & n\leq -1.\end{array}\right. $$
(56)
By Lemma 1, we get (after substituting α=0 in the first derivative)
$$ E_{0,j}=\frac{1}{\mu }+c\sum_{n=1}^{\infty }b_{1}^{n}E_{n-1,j-1}+\frac{c}{2} \biggl[ \frac{1}{\lambda_{1}+\mu -\lambda_{2}}+E_{0,j-1} \biggr] +\frac{c}{2}E_{1,j}+\frac{cb_{2}}{1-b_{2}}E_{0,j}+\frac{cb_{2}^{2}}{(1-b_{2})^{2}(\lambda_{1}+\mu -\lambda_{2})},\quad j\geq 2. $$
(57)
Substituting α=0 in the first derivative of (40), we get for j≥2:
$$ E_{0,j}(\mu +\lambda_{1})=\frac{3\mu +2\lambda_{1}}{2(\mu +\lambda_{1}-\lambda_{2})}+ \frac{\mu }{2}E_{0,j-1}+\frac{\mu +2\lambda_{1}}{2}E_{1,j}. $$
(58)
To compute E k,2 for k≥0, substitute (55) in (57) and use (58).
By substituting α=0 in the first derivative of (31), we get
$$ (\mu +\lambda_{1}+\lambda_{2})E_{k,j}=1+\lambda_{1}E_{k+1,j}+\lambda_{2}E_{k-1,j}+\mu E_{k-1,j-1},\quad k\geq 1,\ j\geq 1. $$
(59)
Via (59), E 0,2, E 1,2, and the E k,1, k≥0, yield all E k,2 recursively; the recursion is complete.

4.3 Numerical example

We have implemented the formulas and algorithms for calculating E k,j , as outlined above, in MATLAB. We have taken λ 1=2, λ 2=1 and μ=4. The results are displayed in Table 2.
Table 2

Conditional expected sojourn time E k,j for the case λ 1=2, λ 2=1, μ=4

k\j         1         2         3         4         5         6         7

0    0.379555  0.811411  1.224560  1.631373  2.035292  2.437686  2.839205
1    0.269332  0.627366  1.031134  1.434780  1.837252  2.238883  2.639965
2    0.252885  0.531077  0.873868  1.256923  1.649990  2.046662  2.444902
3    0.250430  0.506422  0.788239  1.119573  1.486777  1.869373  2.259407
4    0.250064  0.501170  0.759749  1.042309  1.364878  1.719494  2.092150
5    0.250009  0.500022  0.752156  1.011815  1.294536  1.609989  1.954075
6    0.250001  0.500000  0.750332  1.000703  1.263822  1.545459  1.854200
7    0.250000  0.500000  0.750039  1.000000  1.252476  1.515039  1.792684

One can see that for fixed j, as k increases, the expected sojourn time E k,j tends to j/4, which is the expectation of an Erlang(j,4) random variable. This is not surprising: as k gets larger, the probability that repaired items are released only to base 1 during the customer's sojourn time gets closer to 1.
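These observations can be checked programmatically on the Table 2 entries. The sketch below verifies the closed form (55) on the column j=1 and the monotone approach of each column to the Erlang mean j/μ (again an independent Python re-check, not the authors' MATLAB code):

```python
import math

# Conditional expected sojourn times E_{k,j} from Table 2
# (lambda1=2, lambda2=1, mu=4); rows k = 0..7, columns j = 1..7.
E = [
    [0.379555, 0.811411, 1.224560, 1.631373, 2.035292, 2.437686, 2.839205],
    [0.269332, 0.627366, 1.031134, 1.434780, 1.837252, 2.238883, 2.639965],
    [0.252885, 0.531077, 0.873868, 1.256923, 1.649990, 2.046662, 2.444902],
    [0.250430, 0.506422, 0.788239, 1.119573, 1.486777, 1.869373, 2.259407],
    [0.250064, 0.501170, 0.759749, 1.042309, 1.364878, 1.719494, 2.092150],
    [0.250009, 0.500022, 0.752156, 1.011815, 1.294536, 1.609989, 1.954075],
    [0.250001, 0.500000, 0.750332, 1.000703, 1.263822, 1.545459, 1.854200],
    [0.250000, 0.500000, 0.750039, 1.000000, 1.252476, 1.515039, 1.792684],
]
lam1, lam2, mu = 2.0, 1.0, 4.0
a = (lam1 + lam2 + mu) / (2.0 * math.sqrt(lam1 * lam2))
b2 = math.sqrt(lam2 / lam1) * (a - math.sqrt(a * a - 1.0))

# Column j=1 follows the closed form (55): E_{k,1} = b2^k E_{0,1} + (1 - b2^k)/mu.
E01 = E[0][0]
for k in range(8):
    assert abs(E[k][0] - (b2 ** k * E01 + (1.0 - b2 ** k) / mu)) < 1e-5

# For each fixed j, E_{k,j} decreases in k toward j/mu, the Erlang(j, mu) mean.
for j in range(1, 8):
    col = [E[k][j - 1] for k in range(8)]
    assert all(x >= y - 1e-9 for x, y in zip(col, col[1:]))   # monotone in k
    assert abs(col[-1] - j / mu) < 0.06                        # near the limit at k=7
print("Table 2 checks passed")
```

For larger j the convergence in k is slower (at k=7 the j=7 entry is still about 0.04 above 7/4), consistent with the interpretation that a longer line 1 needs a larger head start k before base 2 can be ignored.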

Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Copyright information

© The Author(s) 2012