Markovian bulk-arrival and bulk-service queues with general state-dependent control

We study a modified Markovian bulk-arrival and bulk-service queue incorporating general state-dependent control. The stopped bulk-arrival and bulk-service queue is first investigated, and the relationship between this stopped queue and the full queueing model is examined and exploited. Using this relationship, the equilibrium behaviour for the full queueing process is studied and the probability generating function of the equilibrium distribution is obtained. Queue length behaviour is also examined, and the Laplace transform of the queue length distribution is presented. The important questions regarding hitting times and busy period distributions are answered in detail, and the Laplace transforms of these distributions are presented. Further properties regarding the busy period distributions including expectation and conditional expectation of busy periods are also explored.


Introduction
Markovian queues occupy a significant niche in applied probability. Indeed, Markovian queues play a very important role both in the development of general queueing models and in the theory and applications of continuous-time Markov chains. Good references, among many others, for the former are Asmussen [4], Gross and Harris [18], Kleinrock [23] and Medhi [27], and, for the latter, Anderson [1], Chung [12], Freedman [15] and Syski [36].
Within the framework of queueing theory, two topics in particular have attracted much attention. The first is bulk queues, which have wide and very important applications. The theory of bulk queues (including bulk arrivals and/or bulk service) has attracted extensive attention and is well developed. Note that bulk-arrival (sometimes called batch-arrival) and bulk-service queues are commonplace in scenarios such as industrial assembly lines, road traffic flow and the movement of aircraft passengers, and thus the related models have extensive and important applications. One important reference on this topic is the monograph by Chaudhry and Templeton [7]. For advances in this topic, see Armero and Conesa [2], Arumuganathan and Ramaswami [3], Chang, Choi and Kim [6], Fakinos [14], Lucantoni [25], Ramaswami [31], Srinivasan, Renganathan and Kalyanaraman [33], Stadje [34], Sumita and Masuda [35] and Ushakumari and Krishnamoorthy [37], among many others. The second topic is state-dependent input and output mechanisms, usually called state-dependent controls, which have also attracted considerable attention. For example, Chen and Renshaw [10,11] considered models which allow the possibility of clearing the entire workload.
It seems that Chen et al. [8] were the first to combine these two modifications, interweaving the bulk-arrival and bulk-service queue with state-dependent control applied either at idle time or when the waiting line is empty, thus generalising the Chen–Renshaw models [10,11] and making them more relevant and applicable. See also Chen et al. [9].
The model discussed in Chen et al. [8] has close links with so-called negative arrivals. It seems to us that Gelenbe [16] and Gelenbe, Glynn and Sigman [17] first introduced the particularly useful concept of negative arrivals, which was then followed up by many other authors, including Bayer and Boxma [5], Harrison and Pitel [19], Henderson [20] and Jain and Sigman [22]. The model also has close theoretical links with the versatile Markovian arrival processes introduced by Neuts [28], which include several kinds of batch-arrival processes. Additionally, Neuts [29] described a number of interesting batch-arrival models together with useful methods for analysing them. For further developments, see, for example, Lucantoni and Neuts [26], Nishimura and Sato [30] and Dudin and Nishimura [13]. However, the model discussed in Chen et al. [8] has the serious limitation that the control effect takes place only when the queue is empty or has only one customer, which prevents the model from having extensive applications. The main aim of this paper is to overcome this limitation. That is, the current paper combines the bulk-arrival and bulk-service mechanism with general state-dependent control. More specifically, based on the bulk-arrival and bulk-service structure, the control effect may take place at arbitrarily (but finitely) many states. This gives the model a much wider range of applications.
We now give a formal definition of our model. Our model is a Markovian one and is therefore specified by an infinitesimal q-matrix; see Asmussen [4] or Anderson [1]. The model discussed in this paper has an infinitesimal q-matrix Q = {q_ij; i, j ∈ Z+}, where Z+ = {0, 1, 2, . . .}, which takes the following form: there exists a positive integer N ≥ 1 such that

q_ij = h_ij (0 ≤ i ≤ N − 1, j ≥ 0), q_ij = b_{j−i+N} (i ≥ N, j ≥ i − N), and q_ij = 0 otherwise, (1.1)

where

b_j ≥ 0 (j ≠ N) and 0 < −b_N = Σ_{j≠N} b_j < +∞, (1.2)

and

h_ij ≥ 0 (j ≠ i) and 0 ≤ −h_ii = Σ_{j≠i} h_ij < +∞ (0 ≤ i ≤ N − 1). (1.3)

By the above definition, we see that the underlying structure of our model is something like an M^X/M^Y/1 queue. However, mainly with applications in mind, state-dependent input and output mechanisms have been incorporated into this underlying structure. Intuitively speaking, this extra structure indicates that when the queue length is less than some specific level, N say, the manager may wish, randomly, to move some "customers" or "workload", stored somewhere else, into the system in order to speed up working and thus make the work more effective. However, again because of applications, we shall not only consider increasing workloads but also some other aspects, and thus leave the extra structure arbitrary. This is exactly why we name this extra structure "state-dependent control", even though the terminology may not be ideal. It should be noted that, due to this arbitrary effect, the underlying structure is seriously affected; in particular, the original "arrival process" and "service process" are closely interwoven and correlated with each other. This makes some traditional methods and techniques, including the powerful matrix-analytic method, less effective in analysing our current model. It should also be noted that our model is a Markovian one and thus has deeper properties than, for example, the M/G/1-type Markov chains.
Being a Markovian model, it also admits more powerful methods and techniques, such as the Kolmogorov backward and forward equations and Itô's excursion law, which enable us to obtain many more fruitful results than for, say, the M/G/1-type Markov chains.
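As a concrete illustration of this structure, the following sketch builds a finite truncation of a q-matrix of the shape (1.1)–(1.3) for a small hypothetical instance; the choices N = 2, single-arrival rate alpha, batch-service rate mu and a control row sending states 0 and 1 to N are illustrative assumptions only, not data from the paper.

```python
import numpy as np

# Hypothetical instance of a q-matrix with the structure of (1.1)-(1.3):
# states {0, ..., M} (truncated), N = 2 controlled states, single arrivals
# at rate alpha, service of batches of N at rate mu.  All rates illustrative.
N, alpha, mu, M = 2, 1.0, 1.5, 50

Q = np.zeros((M + 1, M + 1))
for i in range(N, M + 1):            # "free" states i >= N
    Q[i, i - N] += mu                # bulk service: i -> i - N
    if i + 1 <= M:
        Q[i, i + 1] += alpha         # arrival: i -> i + 1
for i in range(N):                   # controlled states 0 <= i < N
    Q[i, N] += 1.0                   # assumed control: jump straight to N
np.fill_diagonal(Q, -Q.sum(axis=1))  # conservative: every row sums to zero
```

Any other choice of control rows {h_ij} would fit the same template; only the first N rows change.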
Another reason for using the current approach is that the results obtained in this paper open the door and pave the way to the study of another advanced and extremely important topic, that of quasi-limiting distributions, including determining the decay parameter and invariant measures, which reveal deep properties of the transient behaviour of our current queueing models. It is remarkable that the quasi-limiting behaviour is quite different for continuous-time and discrete-time processes. Therefore, as far as quasi-limiting distributions are concerned, the behaviour of our current model is also remarkably different from that of, say, the M/G/1-type Markov chains. We shall discuss this topic in a couple of subsequent papers.
The paper is organised as follows. In Sect. 2, we present a fundamental construction theorem together with some key lemmas; all later developments depend heavily on these results. Section 3 concentrates on the so-called stopped queue with both bulk arrivals and bulk service, and a delicate structure is revealed. The results obtained in this section regarding stopped queues are not only of interest in their own right but also crucial in our later analysis. Section 4 concerns questions of recurrence and ergodicity, as well as equilibrium distributions; the generating function of the equilibrium distribution is derived, as is the queue length distribution. In Sect. 5, the bulk-arrival and bulk-service queue stopped at the idle state is analysed in detail, which paves the way to the study of the busy period distributions. In Sect. 6, the important hitting time distributions and busy period distributions are discussed; many important properties of these distributions are revealed and many deep results are presented. In the final Sect. 7, an example is provided to illustrate the conclusions obtained in the previous sections.

Preliminaries
We first pack the known data specified in (1.1)–(1.3) into a few generating functions. Define

B(s) = Σ_{j=0}^∞ b_j s^j (2.1)

and

H_k(s) = Σ_{j=0}^∞ h_kj s^j (0 ≤ k ≤ N − 1), (2.2)

and, for λ ≥ 0, write B_λ(s) = B(s) − λs^N.

Theorem 2.1 Let {r_ij(λ); i, j ∈ Z+} denote the Feller minimal Q-resolvent. Then, for every i ∈ Z+ and |s| < 1,

Σ_{j=0}^∞ r_ij(λ)s^j = [s^{i+N} + Σ_{k=0}^{N−1} r_ik(λ)(s^N H_k(s) − s^k B(s))] / (λs^N − B(s)), (2.3)

where B(s) and H_k(s) (0 ≤ k ≤ N − 1) are defined in (2.1) and (2.2), respectively.
Proof By the Kolmogorov forward equation λR(λ) − I = R(λ)Q, together with the form of Q given in (1.1), we immediately obtain that (2.4) holds for any i, j ∈ Z+. Multiplying both sides of (2.4) by s^j, where |s| < 1, and summing over j from 0 to ∞, then noting the definitions given in (2.1) and (2.2), we immediately obtain (2.3).

We now provide a couple of fundamental lemmas which will be our stepping stones for further analysis. To give these lemmas, as well as the conclusions obtained thereafter, a probabilistic meaning, let m_b and m_d denote the mean "arrival" and "service" rates, respectively. Note that 0 < m_d < +∞ and 0 < m_b ≤ +∞. For the full proofs of these important conclusions, readers may consult Li and Chen [24].
The most important root is the positive root u_0(λ) of the equation B_λ(z) = 0 on [0, 1). Hence, in the following lemma, we concentrate on the important properties of u_0(λ). First note that, at this stage, u_0(λ) is defined only for λ > 0.
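Numerically, u_0(λ) is easy to locate for a concrete B(s). The sketch below assumes the hypothetical rates b_0 = mu, b_{N+1} = alpha and b_N = −(mu + alpha) with N = 2 (so B(s) = mu − (mu + alpha)s² + alpha·s³), and finds the positive root of B_λ(s) = B(s) − λs^N on (0, 1) by bisection, which applies because B_λ(0) = b_0 > 0 and B_λ(1) = −λ < 0.

```python
N, alpha, mu = 2, 1.0, 1.5   # hypothetical rates; m_b = alpha, m_d = N * mu

def B_lam(s, lam):
    # B(s) - lam * s**N with B(s) = mu - (mu + alpha) * s**2 + alpha * s**3
    return mu - (mu + alpha) * s**N + alpha * s**(N + 1) - lam * s**N

def u0(lam, tol=1e-13):
    # B_lam(0, lam) = mu > 0 and B_lam(1, lam) = -lam < 0, and the positive
    # root on (0, 1) is unique, so plain bisection converges to u_0(lam)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if B_lam(mid, lam) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

With these illustrative rates m_d = N·mu > m_b = alpha, so the computed root is decreasing in λ and, for small λ, matches the expansion u_0(λ) ≈ 1 − λ/(m_d − m_b) discussed around (2.10).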

Lemma 2.3
The root u_0(λ) given in Lemma 2.2 has the following properties: (iv) (2.10) holds; (v) if m_d < m_b ≤ +∞ (and hence u < 1), then, for any positive integer k, (2.11) holds; (vi) if m_b = m_d (and hence u = 1), then, for any positive integer k, (2.12) holds. To prove the other parts, first note, by the proven (2.9) and on writing u_0 = u_0(0), that u_0(λ) − u_0 = λu_0′(ξ) for some ξ ∈ (0, λ), since u_0(λ) is differentiable for λ > 0. Also, since u_0(λ) is a decreasing function of λ > 0, we thus have that u_0′(ξ) < 0 for any ξ ∈ (0, λ). Now, since m_b > m_d implies u < 1 (refer to (2.9)), we see that (2.10) holds true if m_b > m_d. Hence we need only consider the case m_b ≤ m_d, in which case (2.13) can be written as (2.14). Whence, on considering u_0(λ) as the root of B_λ(s) = 0, we get B(u_0(λ)) = λ(u_0(λ))^N. Differentiating with respect to λ > 0 then yields

B′(u_0(λ))u_0′(λ) = (u_0(λ))^N + Nλ(u_0(λ))^{N−1}u_0′(λ).

Letting λ → 0, noting that u_0(λ) and B′(u_0(λ)) are continuous functions of λ on [0, ∞) (see Remark 2.1 and note that we have already proved (i)–(iii)) and that λu_0^N(λ) tends to 0 as λ → 0, leads us to conclude that

lim_{λ↓0} B′(u_0(λ))u_0′(λ) = u_0^N, (2.16)

which is true in all cases. Note that the right-hand side of (2.16) is a finite positive value, and thus so is the left-hand side of (2.16). However, B′(s) is a continuous function of s, at least for 0 ≤ s < 1, and lim_{λ↓0} B′(u_0(λ)) = B′(u_0), while if m_d < m_b ≤ +∞, then, again by (2.16), u_0′(λ) tends to a finite limit as λ ↓ 0 (see (2.18)); and thus, since B′(u_0(λ)) tends to B′(1) = 0, we know that if m_d = m_b then u_0′(λ) → −∞ as λ ↓ 0, which is a consequence of (2.19) together with the fact that B′(u_0(λ)) is negative when λ ↓ 0. Now, m_b ≤ m_d implies u_0 = 1, and combining this fact with (2.14) and noting that B′(1) = m_b − m_d then yields (2.21), and so (2.10) holds true for k = 1. Now, for m_b < m_d, we may rewrite (2.21) as u_0(λ) = 1 − λ/(m_d − m_b) + o(λ). Hence, for any positive integer k, (u_0(λ))^k = 1 − kλ/(m_d − m_b) + o(λ), and so (2.10) follows. This completes the proof of (iv). Turning to (v), since m_d < m_b ≤ +∞ and hence u_0 < 1, both B′(u_0) and B″(u_0) are finite.
Now, by (2.18), we see that the finiteness of B′(u) implies that u_0′(0) is a nonzero and finite value, and thus we have (2.22); by using (2.22) together with (2.18), we get that, for any positive integer k, the corresponding expansion of (u_0(λ))^k holds, from which (2.11) follows on also noting that u_0 = u. This ends the proof of (v). We now proceed to the subtle case m_d = m_b. Recall that if m_d = m_b then B(1) and B′(1) are both zero, and thus, if we assume that B″(1) is finite, we have, for 0 < s < 1, the corresponding second-order expansion of B(s) about s = 1. Letting s = u_0(λ) for λ > 0 in this expansion and noting that 0 < u_0(λ) < 1, we obtain an identity from which (2.23) follows, by noting that 0 < B″(1) < +∞. Now, letting λ → 0 in (2.23), we obtain the required limits, on noting that lim_{λ↓0} u_0(λ) is just one; see (2.19).
Hence we obtain (2.24), and we have thus proved (2.12) for the case k = 1. For the general case k > 1, we may use (2.23) together with some easy algebra to show that (2.12) holds true for any k ≥ 1, which finishes the proof of (vi) and thus of the whole of Lemma 2.3.

Remark 2.2
Note that, in proving the properties of u_0(λ) in Lemma 2.3, the only condition we have used is that u_0(λ) is a zero of B_λ(s). Hence all the conclusions, particularly statements analogous to (ii)–(iii), hold true for all the other zeros of B_λ(s), i.e. for u_k(λ), 1 ≤ k ≤ N − 1.

The stopped bulk-arrival and bulk-service queue
In this section, we assume that all of the first N states are absorbing; that is, we assume that h_kj = 0 for all 0 ≤ k ≤ N − 1 and j ≥ 0 (equivalently, H_k(s) ≡ 0 for 0 ≤ k ≤ N − 1). The corresponding q-matrix is denoted by Q*. There are two main reasons for studying the Q*-process first. On the one hand, the properties of the Q*-process will serve as a tool in investigating the main models which will be discussed in detail in the following sections; on the other hand, the Q*-process is of interest in its own right, since it can be viewed as a generalised Markov branching process rather than a queueing model. Because of the latter, we shall use some notation and terminology, such as extinction probabilities, within this section. For both reasons, we are interested in obtaining closed forms for the Feller minimal Q*-resolvent {φ*_ik(λ)}. By Lemma 2.2, the equation B_λ(s) = 0 has exactly N roots in the open unit disc {s; |s| < 1}, denoted by {u_0(λ), u_1(λ), . . . , u_{N−1}(λ)}, with u_0(λ) the unique positive root on (0, 1). We use the N-dimensional column vector U(λ) = (u_0(λ), u_1(λ), . . . , u_{N−1}(λ))^T to denote these N roots, where T denotes the transpose. Also, for any non-negative integer k ≥ 0, denote by U^k(λ) the vector of kth powers of the components of U(λ). Of course, U^1(λ) = U(λ) and U^0(λ) = 1; here 1 denotes the column vector whose components are all one. Similarly, we let U(0) denote the vector of the N roots of the equation B(s) = 0 in the closed unit disc {s; |s| ≤ 1}, as Lemma 2.1 shows. In fact, we have lim_{λ→0} U(λ) = U(0). In many cases, we shall simply write U = U(0) or use the component form u_k = u_k(0). Note, however, that if m_d < m_b ≤ +∞, then u_0(0) = u < 1 and the trivial root 1 is not included in (3.3); as a result, all the components of U(0) ≡ U are distinct. It is also convenient to write Û(λ) for the vector obtained from U(λ) by deleting its first component; similarly, Û(0) = Û is the column vector obtained by deleting the first component of U. Finally, the determinant of the N × N matrix (1, U(λ), U^2(λ), . . .
, U^{N−1}(λ)) will be denoted by Δ(λ); see (3.6). By applying properties of determinants, it is easy to see that the Δ(λ) defined in (3.6) can be rewritten in the forms (3.7). Also, for any 0 ≤ i ≤ N − 1, denote by Δ^(i)(λ) the determinant defined in (3.8). By comparing the forms of Δ(λ) and Δ^(i)(λ) given in (3.6) and (3.8), we see that they are the same except that the kth column of the former is replaced by the ith column of the latter. Keeping the above notation in mind, we may state the following simple yet interesting result.
Proof (3.10) is simply a consequence of (2.3) in Theorem 2.1, together with the fact that all the H_k(s) vanish for the Q*-process. Hence we only need to show that (3.11) is true. However, this is easy. Indeed, the left-hand side of (3.10) is well defined for s = u_0(λ), and hence so is the right-hand side. But u_0(λ) is a zero of B_λ(s), and thus the denominator of the right-hand side of (3.10) is zero; it follows that u_0(λ) must also be a zero of the numerator of the right-hand side of (3.10). Therefore, we have (3.12). But u_0(λ) is a root of B_λ(s) = 0, and thus B(u_0(λ)) = λ(u_0(λ))^N; then, noting that we use u_0^i(λ) to denote (u_0(λ))^i, we obtain (3.13). By (3.12) and (3.13), we immediately obtain the first equality in (3.11); the second equality follows from the notation in (3.6) and (3.8). This ends the proof. It is interesting to note that (3.10) and (3.11) provide a perfect solution for our process, since the whole resolvent can be simply expressed in terms of the N roots of the known equation B(s) − λs^N = 0. Therefore, the expressions (3.10) and (3.11) will be extremely helpful in analysing the properties of the Q*-process. In particular, let {Z*_t, t ≥ 0} be the Q*-process. Define τ*_k, the hitting time of state k (0 ≤ k ≤ N − 1), and let τ* = min_{0≤k≤N−1} τ*_k denote the "overall" hitting time (the hitting time of the absorbing set {0, 1, . . . , N − 1}). We now set about obtaining these important quantities, and first state the following simple lemma.

Lemma 3.1 For any n ≥ N we have
and, in particular,

(3.16)
Proof First note that (3.14) is just a rewritten form of the proven (3.11), obtained by letting i = n and k = 0. We now use mathematical induction to prove (3.15). By using (3.14) and (3.11), we obtain the case k = 1, and hence (3.15) is true for k = 1. Now suppose that (3.15) is true for some k with 1 ≤ k ≤ N − 2. Then, by using properties of determinants together with some easy algebra, it is easily seen that (3.15) is true for k + 1. Therefore (3.15) is true for any n ≥ N and any 0 ≤ k ≤ N − 1. In particular, (3.16) is true, since it is a consequence of (3.15).
As a consequence of Lemma 3.1, we have the following corollary which can be more easily applied to our later analysis.
We are now ready to reveal the properties of the Q*-process. Let F*_nk(t) = P_n{τ*_k ≤ t} denote the cumulative distribution function of τ*_k given that the Q*-process starts at state n ≥ N, and let a*_nk = P_n{τ*_k < ∞}; thus a*_nk is the hitting probability of state k (0 ≤ k ≤ N − 1), given that the process {Z*_t(ω); t ≥ 0} starts at the state n ≥ N. We shall use the terms "extinction probabilities", etc., as though the Q*-process were some kind of branching process. Similarly, let a*_n = P_n{τ* < ∞} be the overall "extinction probability" to the "extinction set" {0, 1, 2, . . . , N − 1}, given that the Q*-process starts at state n ≥ N. We may now state the following important conclusion, obtained by noting that Σ_{j=0}^∞ φ*_ij(λ)s^j is the Laplace transform of the generating function of the transition probabilities of the Q*-process. Let {Z*_t; t ≥ 0} be the Q*-process. Then the overall extinction probabilities {a*_n; n ≥ N} and the extinction probabilities {a*_nk; n ≥ N, 0 ≤ k ≤ N − 1} are given as follows, where, for example, U^2 = U^2(0) = lim_{λ→0} U^2(λ), etc., as stated before. Moreover, for the overall extinction probabilities a*_n = P_n(τ* < ∞) (n ≥ N), we have: (i) if m_d ≥ m_b, then a*_n = 1 for all n ≥ N; (ii) if m_d < m_b ≤ +∞, then, for any n ≥ N, a*_n < 1 and the value of a*_n is given by (3.20). We now show that (i) and (ii), including (3.20), are true. Indeed, (3.21) holds for any n ≥ N, and by using the notation introduced in (3.5) we may write (3.21) as (3.22). If m_d ≥ m_b, we see that the numerator and the denominator of (3.23) are exactly the same and also nonzero, and thus a*_n = 1 for all n ≥ N. Here the fact that the numerator and the denominator in (3.23) are the same is clear, while the fact that they are nonzero follows, by using the properties of determinants, from (3.24). If m_d < m_b, it is easy to see, by noting (3.24), that a*_n < 1 (n ≥ N). Indeed, the difference between the denominator and the numerator of (3.24) is clearly nonzero and thus strictly greater than zero, so that a*_n < 1 and the value of a*_n is just (3.24) (i.e., (3.20)). This completes the proof.
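The extinction dichotomy just proved can be checked by a quick Monte Carlo experiment on the embedded jump chain of a hypothetical stopped queue (N = 2, single arrivals at rate alpha, batches of N served at rate mu, states 0 and 1 absorbing; all rates are illustrative assumptions). When m_d = N·mu ≥ m_b = alpha the estimated overall extinction probability is essentially 1, while for large alpha it drops strictly below 1.

```python
import random

def extinct_prob(alpha, mu, n=2, N=2, trials=2000, barrier=300, seed=1):
    # Estimate a*_n = P_n(tau* < infinity) for the stopped Q*-process by
    # simulating the embedded jump chain; reaching the high "barrier" level
    # is treated as escape to infinity (a negligible-error proxy here).
    rng, hits = random.Random(seed), 0
    for _ in range(trials):
        state = n
        while N <= state < barrier:
            # next jump: batch service (down N) w.p. mu/(mu+alpha), else arrival
            state = state - N if rng.random() < mu / (mu + alpha) else state + 1
        if state < N:
            hits += 1
    return hits / trials

a_rec = extinct_prob(alpha=1.0, mu=1.5)   # m_d = 3 > m_b = 1: extinction certain
a_tra = extinct_prob(alpha=8.0, mu=1.5)   # m_d = 3 < m_b = 8: a*_n < 1
```

Only hitting probabilities are needed, so the exponential holding times can be ignored and the jump chain suffices.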
By Theorem 3.2, we see that if m_d ≥ m_b then the Q*-process will go to extinction with certainty. We are therefore interested in obtaining the mean extinction time E(τ*|Z*_0(ω) = n), where n ≥ N. We shall denote this important quantity by E_n(τ*); it is given in (3.25), where the Δ in (3.25) is given by (3.26), and E_n(·) denotes the mathematical expectation under P_n(·), i.e., under the condition that the Q*-process starts at state n ≥ N.
Proof First, if m_d < m_b ≤ +∞, then, by Theorem 3.2, a*_n < 1 and thus, trivially, E_n(τ*) = ∞ (n ≥ N). Hence we only need to consider the case m_d ≥ m_b. For this case, applying the Tauberian theorem yields (3.27), where Δ(λ) is given in (3.6) or (3.7) and Δ_n(λ) is defined in (3.28). By (3.6) and (3.7), and noting the notation Û(λ) introduced in (3.5), it is easy to obtain (3.30). Letting λ → 0 in (3.30) and using (2.10), proved in Lemma 2.3, immediately yields (3.31), where we recall that 1 − Û^k (k a positive integer) is nothing but the column vector with components 1 − u_i^k (1 ≤ i ≤ N − 1). On the other hand, it is trivial to see that lim_{λ→0} Δ(λ) = Δ(0), which is finite and strictly positive. By (3.6) we see that, if we let Δ = Δ(0), then (3.33) holds. Substituting (3.31)–(3.33) into (3.29) completes the proof.

Remark 3.2
Although somewhat similar, the notations Δ^(i)(λ) and Δ_n(λ), defined in (3.8) and (3.28), respectively, are not the same. In particular, for the former we have 0 ≤ i ≤ N − 1 and the position of the column vector U^i(λ) varies, while for the latter we have n ≥ N and the column U^n always occupies the last position.
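Since Δ(λ) = det(1, U(λ), . . . , U^{N−1}(λ)) is a Vandermonde determinant in the N roots, it is nonzero whenever the roots are distinct. The sketch below illustrates this for a hypothetical cubic B(s) − λs^N (N = 2, with illustrative rates alpha = 1, mu = 1.5): exactly N of its roots lie in the open unit disc, and the determinant equals the usual Vandermonde product.

```python
import numpy as np

N, alpha, mu, lam = 2, 1.0, 1.5, 0.5   # hypothetical rates
# B(s) - lam * s**N = alpha*s**3 - (mu + alpha + lam)*s**2 + mu
coeffs = [alpha, -(mu + alpha + lam), 0.0, mu]
roots = np.roots(coeffs)
inside = np.sort_complex(roots[np.abs(roots) < 1])   # the N roots u_k(lam)
assert len(inside) == N                              # exactly N in |s| < 1

# Delta(lam): matrix with rows (1, u_k, ..., u_k^{N-1}) -- a Vandermonde matrix
V = np.vander(inside, N, increasing=True)
delta = np.linalg.det(V)
# Vandermonde identity: det = prod over i < j of (u_j - u_i)
vander_prod = np.prod([inside[j] - inside[i]
                       for i in range(N) for j in range(i + 1, N)])
```

The third root lies outside the unit disc and plays no role in Δ(λ), mirroring Lemma 2.2.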
By Theorem 3.3 we see that, if m_d < m_b ≤ +∞, the mean time to overall extinction E_n(τ*) is infinite, which is not very informative. The main reason is that, if m_d < m_b ≤ +∞, the overall extinction probability a*_n is strictly less than 1. This prompts us to consider the expectation of τ* conditional on τ* < ∞, which is much more informative.
In order to state our results more simply, we give the following non-standard definition. Suppose A = (a_1, a_2, . . . , a_n)^T and B = (b_1, b_2, . . . , b_n)^T are two n-dimensional column vectors. We then define a new n-dimensional column vector as follows:
where a*_n is given in (3.20) and Σ_{k=0}^{N−1} λφ*_nk(λ) is given in (3.17). More specifically, for n ≥ N, the explicit expressions below hold, where Δ is defined in (3.33) and Δ_n is defined as below; in fact, Δ_n = lim_{λ→0} Δ_n(λ), where Δ_n(λ) is defined in (3.28). Also, for 1 ≤ k ≤ N − 1, the corresponding expressions hold. Proof First note that (3.40) holds for any n ≥ N. The inner expression is easily evaluated, and substituting it into (3.40) yields (3.41). However, P{τ* ≤ t|Z*_0 = n} is just p*_n0(t) for the Q*-function of the Q*-process {Z*_t; t ≥ 0}, whose Laplace transform is just φ*_n0(λ) (n ≥ N), which is given in (3.11). It follows that this Laplace transform is known, and thus applying the Tauberian theorem in (3.41) yields (3.42). Noting that P{τ* < ∞|Z*_0 = n} is nothing but a*_n (n ≥ N), which is given in (3.20) (since in our case m_d < m_b), the crucial thing is to calculate the limit in (3.43). Applying L'Hôpital's rule in (3.43) shows that this limit is just (3.44). Hence (3.35) is proven. Moreover, substituting (3.17) into (3.44) shows that the limit in (3.44) takes the stated form. Expressions (3.9)–(3.11) in Theorem 3.1 can also be used to obtain information about the mean function of the Q*-process. Let m*_i(t) = E_i(Z*_t(ω)) and denote its Laplace transform by ξ*_i(λ). Then, since m*_i(t) = Σ_{j≥0} j p*_ij(t), we have ξ*_i(λ) = Σ_{j≥0} j φ*_ij(λ), from which we can obtain the following result.

The full bulk-arrival and bulk-service queues
After studying the properties of the Q*-process in the previous section, we are now ready to study the full queueing model with bulk arrivals, bulk service and general state-dependent control; this is the key section of this paper. Our q-matrix Q now takes the general form (1.1)–(1.3). For convenience, we shall further assume that the q-matrix Q is irreducible. It follows that the (unique Feller minimal) Q-function and Q-resolvent are also irreducible. Denote the corresponding queueing process by {X_t; t ≥ 0}. This section consists of three subsections. In the first, we consider the structure of the Q-resolvent of the queueing process {X_t; t ≥ 0}, which is the main tool for investigating the properties of our full queueing model. Based on the conclusions obtained in the first subsection, the ergodic properties, particularly the equilibrium distribution, which is always one of the most important topics for any queueing model, are fully discussed in the second subsection. In the final subsection, the queue length distribution, which is also one of the important topics in queueing models, is investigated in detail. Another extremely important topic, the busy period distribution, will be investigated in Sects. 5 and 6.

Construction of the Q-resolvent
We now consider the structure of the Q-resolvent of the process {X t ; t ≥ 0}. By using Theorem 2.1 and similar methods to those used in the last section, we can immediately obtain the following important construction theorem.
Proof (4.1) is a rewriting of (2.3) in Theorem 2.1. For the other parts, note again that, since B(s) − λs^N = 0 has N roots {u_0(λ), u_1(λ), . . . , u_{N−1}(λ)} in the open unit disc {s; |s| < 1} and the right-hand side of (4.1) is finite at these roots, the numerator of the left-hand side of (4.1) must vanish at these N roots. It follows that, for 0 ≤ l ≤ N − 1 and i ≥ 0, (4.6) holds. Now, using the fact that B(u_l(λ)) = λ(u_l(λ))^N (0 ≤ l ≤ N − 1), we obtain that (4.7) holds for all i ≥ 0. Solving the linear equations (4.7) and noting the notation (4.5), we immediately obtain (4.2)–(4.4). This completes the proof of Theorem 4.1.

Recurrence, ergodicity and equilibrium properties
We are now ready to consider the ergodic and equilibrium behaviour of the full queueing model. Recall that we have assumed that the q-matrix Q and thus the Feller minimal Q-process is irreducible. By applying the conclusions obtained in the previous Sect. 3, we immediately obtain the following important theorem.
We use a probabilistic approach to prove this theorem, considering the relationship between the Q-process and the Q*-process discussed in Sect. 3. It is obvious that the Q-process is recurrent if and only if the overall extinction probability of the Q*-process is 1, and the latter is equivalent to m_d ≥ m_b. Next, it is also easy to see that the Q-process is positive recurrent if and only if, for the Q*-process, the mean extinction time is finite and the mean return time to the states {N, N + 1, . . .} from any state k (0 ≤ k ≤ N − 1) is finite. However, it is easily seen that the former is equivalent to m_d > m_b, and the latter is equivalent to (4.8) holding true.
Under the positive recurrence conditions, we know that there exists a unique equilibrium distribution. We are now interested in obtaining a closed form for this unique equilibrium distribution. Interestingly, the closed form can be easily obtained.
To determine the form of π_k for 0 ≤ k ≤ N − 1, we turn our attention to the proven expressions (4.2)–(4.4). Letting i = 0 and then multiplying both sides of (4.2) by λ > 0 yields (4.15). Letting λ → 0 in (4.15) and noting that lim_{λ→0} λr_00(λ) is just π_0 yields the stated expression, in which κ is given by (4.13); this also shows that (4.10) is true. Similarly, for 1 ≤ k ≤ N − 1, letting i = 0 and multiplying both sides of either (4.3) or (4.4) by λ > 0, together with some similar algebra, it is quite easy to show, on letting λ → 0, that (4.11) and (4.12) hold true. This finishes the proof.
The particularly interesting forms presented in Theorem 4.3 above make it very convenient to obtain other important queueing quantities for the equilibrium distribution Π = {π_j; j ≥ 0}. For example, we may obtain the mean equilibrium queue size E(Π), as the following corollary shows. Note that H″_k(1) < +∞ implies that H′_k(1) < +∞.

Corollary 4.1 The expectation of the equilibrium queue size distribution is given by
Proof By using (4.9), we obtain (4.17). In order to consider E(Π), we first consider the following two limits. Clearly, the former is just B′(1), whose finiteness is guaranteed by our conditions m_d > m_b and H′_k(1) < ∞ (0 ≤ k ≤ N − 1), while the latter, by using L'Hôpital's rule twice and after some easy algebra, is finite if and only if B″(1) < +∞ and H″_k(1) < +∞. Now, letting s ↑ 1 in (4.17) immediately yields an expression which can be rewritten as (4.16). The higher moments, including the variance of Π, can be obtained similarly, but we shall omit the details here.
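As a numerical sanity check on the equilibrium results, one can solve πQ = 0 with Σ_j π_j = 1 directly on a truncated generator for a small hypothetical instance (N = 2, single arrivals at rate 1, batches of 2 served at rate 1.5, control sending states 0 and 1 to 2 at rate 1; all rates illustrative) and read off the mean queue size.

```python
import numpy as np

# Hypothetical truncated instance; M is the truncation level
N, alpha, mu, M = 2, 1.0, 1.5, 300
Q = np.zeros((M + 1, M + 1))
for i in range(N, M + 1):
    Q[i, i - N] += mu                # bulk service
    if i + 1 <= M:
        Q[i, i + 1] += alpha         # arrival
for i in range(N):
    Q[i, N] += 1.0                   # assumed control rate
np.fill_diagonal(Q, -Q.sum(axis=1))

# Equilibrium distribution: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(M + 1)])
b = np.zeros(M + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
mean_queue = float(np.arange(M + 1) @ pi)   # truncated approximation of E(Pi)
```

Since m_d = 3 > m_b = 1 here, the chain is positive recurrent and the truncated solution converges quickly as M grows.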

Queue length distributions
We now turn to another important topic, the queue length distribution, which is of considerable practical interest. The construction Theorem 4.1 presented in Sect. 4.1 is also particularly convenient for analysing the related properties. We again assume that, for all k (0 ≤ k ≤ N − 1), Σ_j h_kj = 0 and thus H_k(1) = 0. Now let m_i(t) be the mean length of the queueing process {X_t} at time t > 0, starting from X_0 = i, and let ξ_i(λ) denote the Laplace transform of m_i(t).

Theorem 4.4
The Laplace transform ξ_i(λ) of the mean queue length function, starting from X_0 = i ≥ 0, is given by (4.18), where the quantities {r_ik(λ); 0 ≤ k ≤ N − 1} are given in (4.2)–(4.4). Moreover, the mean queue length function at time t ≥ 0 is given by (4.19). Proof We first rewrite (4.1) in the form given above. Differentiating both sides of this equation with respect to s yields (4.20). Letting s ↑ 1 in (4.20) and noting that B(1) = 0 and H_k(1) = 0 (0 ≤ k ≤ N − 1), together with Σ_{j=0}^∞ r_ij(λ) = 1/λ, immediately yields an identity which, on writing ξ_i(λ) = Σ_{j=1}^∞ j r_ij(λ), can be rewritten as (4.18). The higher moments of the queue length can be obtained similarly. As an example, we consider the second moment as follows.
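Rather than inverting ξ_i(λ), the mean queue length function m_i(t) can also be evaluated directly on a truncated chain by uniformization. This is a standard numerical technique, not the paper's method, and the instance below (N = 2, illustrative rates, an assumed control row) is hypothetical.

```python
import numpy as np

# Hypothetical truncated instance (illustrative rates)
N, alpha, mu, M = 2, 1.0, 1.5, 200
Q = np.zeros((M + 1, M + 1))
for i in range(N, M + 1):
    Q[i, i - N] += mu
    if i + 1 <= M:
        Q[i, i + 1] += alpha
for i in range(N):
    Q[i, N] += 1.0
np.fill_diagonal(Q, -Q.sum(axis=1))

def mean_queue_length(i, t, K=400):
    # Uniformization: with Lam >= max_j |q_jj| and P = I + Q/Lam,
    #   m_i(t) = E_i X_t = sum_k e^{-Lam t} (Lam t)^k / k! * (P^k x)_i,
    # where x_j = j.  K terms of the Poisson sum suffice here.
    Lam = float(max(-Q.diagonal())) + 1e-9
    P = np.eye(M + 1) + Q / Lam
    x = np.arange(M + 1, dtype=float)
    total, weight = 0.0, np.exp(-Lam * t)
    for k in range(K):
        total += weight * x[i]
        x = P @ x
        weight *= Lam * t / (k + 1)
    return total

m0_early, m0_late = mean_queue_length(0, 0.1), mean_queue_length(0, 50.0)
```

For large t, m_0(t) approaches the equilibrium mean of the truncated chain, as expected under positive recurrence.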

The bulk-arrival and bulk-service queue stopped at idle state
In this section, as well as the following one, we concentrate on the very important topic of the busy period distribution. As a preparation, the probabilistic laws governing the hitting time of the idle state zero are first revealed in this section. To achieve this aim, we need to study a special structure of our queueing model. More specifically, we need to examine the structure of the regular process {Y_t; t ≥ 0} whose (regular) q-matrix Q̃ is defined by (1.1)–(1.3) but with the special feature that h_0j = 0 for all j ≥ 0, so that state 0 is absorbing. We shall immediately see that there exists a close relationship between this process and our full queueing model. In particular, the hitting properties of the process {Y_t; t ≥ 0} are directly related to the busy period properties of our full queueing process {X_t}. For convenience, as in Sect. 3, let us view {Y_t; t ≥ 0} as a branching process and thus use terms such as extinction probability. In fact, the process {Y_t; t ≥ 0} can indeed be viewed as a generalised Markov branching process with state-dependent immigration, and is thus of interest in its own right. As in Sect. 3, we now provide a special structure for the Q̃-resolvent of the process {Y_t; t ≥ 0}. This is easy: it is simply a direct consequence of Theorem 2.1. Indeed, letting H_0(s) = 0 in Theorem 2.1 immediately yields the following conclusion.
possesses the form (5.2), where the known functions B(s) and H_k(s) (1 ≤ k ≤ N − 1) are given in (2.1) and (2.2), respectively. Moreover, the {φ_ij(λ); i ≥ 1, 0 ≤ j ≤ N − 1} in the numerator of the right-hand side of (5.2) take the form

(5.3)
and, for 1 ≤ k ≤ N − 2, the analogous expressions hold. Proof Arguments similar to those used in proving Theorem 4.1 immediately yield this theorem.

(5.7)
We now use Theorem 5.1 to investigate the properties of the branching process (so named above) {Y_t; t ≥ 0}. Denote by τ the extinction time, i.e. the hitting time of state 0, and let q_i0(t) = P(τ ≤ t | Y_0 = i) and q_i0 = P(τ < ∞ | Y_0 = i) denote the conditional distribution of the extinction time and the extinction probability, respectively, conditional on the process starting at state i ≥ 1. By noting that the Laplace transforms of q_i0(t) are actually given in (5.3), we can immediately obtain the following important result.

Theorem 5.2
If m_d ≥ m_b, then q_i0 = 1 (i ≥ 1). On the other hand, if m_d < m_b (including m_b = +∞), then q_i0 < 1 (i ≥ 1), in which case q_i0 is given by (5.8), where u < 1 is the smallest positive root of B(s) = 0 on (0, 1).
Proof By using (5.3) and noting that U(λ), U^k(λ) and H_k(·) are all continuous functions of λ ≥ 0, together with the fact that lim_{λ→0} U^k(λ) is finite, we obtain (5.9). Applying the Tauberian theorem, we immediately get (5.10) for i ≥ 1. Note that, in obtaining (5.10), we have used the fact that |u_k(λ)| ≤ 1 for every u_k(λ) (1 ≤ k ≤ N − 1), and thus lim_{λ→0} λU^k(λ) = 0 for any non-negative integer k ≥ 0, together with the rules for taking limits in determinants. By noting the notation introduced in (3.5), using (4.6) and (5.9), and applying the properties of determinants, we can rewrite (5.10) as (5.11). Now, if m_d ≥ m_b, then u_0 = u = 1 and thus, for all 1 ≤ k ≤ N − 1, we have H_k(u_0) = H_k(1) = 0. It follows, by applying H_k(1) = 0 in (5.11), that q_i0 = 1 (i ≥ 1). On the other hand, if m_d < m_b ≤ ∞, then u_0 = u < 1 and thus not all of the H_k(u) are zero. Hence (5.12) holds for i ≥ 1, and the resulting q_i0 is less than 1. Indeed, the determinants in the numerator and denominator of (5.12) are exactly the same except for the first two columns. This finishes the proof.
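A simulation sketch of this dichotomy, for a hypothetical instance of {Y_t} in which only state 0 absorbs and state 1 jumps to N at an assumed control rate: the estimated extinction probability q_10 is essentially 1 when m_d ≥ m_b and strictly below 1 when m_d < m_b. All rates are illustrative, and only the embedded jump chain is needed since we estimate probabilities rather than times.

```python
import random

def q_hat(i, alpha, mu, N=2, trials=2000, barrier=300, seed=7):
    # Estimate q_i0 = P(tau < infinity | Y_0 = i): state 0 absorbs, states
    # 1..N-1 jump to N (assumed control), and states >= N serve a batch of N
    # w.p. mu/(mu+alpha) per jump, else receive one arrival.  Reaching the
    # high barrier is treated as escape to infinity (negligible-error proxy).
    rng, absorbed = random.Random(seed), 0
    for _ in range(trials):
        s = i
        while 0 < s < barrier:
            if s < N:
                s = N
            else:
                s = s - N if rng.random() < mu / (mu + alpha) else s + 1
        if s == 0:
            absorbed += 1
    return absorbed / trials

q_rec = q_hat(1, alpha=1.0, mu=1.5)   # m_d = 3 >= m_b = 1: extinction certain
q_tra = q_hat(1, alpha=8.0, mu=1.5)   # m_d = 3 < m_b = 8: q_10 < 1
```

Unlike the stopped Q*-process of Sect. 3, a visit to state 1 here does not end the excursion; the control jump restarts it, which is why q_i0 involves the H_k terms.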
By the above theorem, we see that if m_d ≥ m_b, then, starting from any state i ≥ 1, the process will go to extinction with probability one. Therefore we are interested in obtaining the mean extinction times E_i(τ) (i ≥ 1) as well as the conditions under which these mean extinction times are finite.

Theorem 5.3 E_i(τ) (i ≥ 1) is finite if and only if m_d > m_b and H.

) are finite and given by
, (5.20) where q_{i0} is given in (5.12) and φ_{i0}(λ) is given in (5.3). More specifically, for i ≥ 1, (5.24) and, for 1 ≤ k ≤ N − 1,

Proof First note that, for any i ≥ 1, substituting the above into (5.27) yields (5.28). However, P{τ ≤ t | Y_0 = i} is just p_{i0}(t) for the Q-function of the Q-process {Y_t; t ≥ 0}, whose Laplace transform is the φ_{i0}(λ) (i ≥ 1) given in (5.3). It follows that, applying the Tauberian Theorem in (5.28), we obtain (5.29). Noting that P{τ < ∞ | Y_0 = i} is nothing but the q_{i0} (i ≥ 1) given in (5.8) (since in our case m_d < m_b), the crucial step is to calculate the limit in (5.30). Applying L'Hôpital's rule in (5.30) shows that this limit is just (5.31), where φ_{k0}(λ) (k ≥ 1) is given in (5.3). Hence (5.20) is proven. Moreover, substituting (5.3) into (5.31) evaluates this limit explicitly. Now, by applying the differentiation rules for determinants together with some lengthy but elementary algebra, we can obtain all the conclusions stated in Theorem 5.4.

Busy period distributions
We are now ready to consider the busy period distributions of our full queueing process. As in Sect. 4, the full irreducible queueing process will be denoted by {X_t; t ≥ 0}, where X_t is the number of customers in the queue (including those being served) at time t ≥ 0. Without loss of generality, we assume that X_0 = 0. We now define a sequence of random variables {σ_n; n ≥ 0} as follows: and, for all n ≥ 1, It is clear that {σ_n; n ≥ 0} is an increasing sequence of stopping times. Note that the random variables {σ_{2n} − σ_{2n−1}; n ≥ 1} are exactly the busy periods. By Itô's excursion law (see [21]), we know that {σ_{2n} − σ_{2n−1}; n ≥ 1} are independent, identically distributed random variables. For more details regarding this important excursion law, see also Rogers and Williams [32]. Our main aim now is to find this common distribution. Now, by the strong Markov property, we have, for all n ≥ 1, However, conditional on X_{σ_1} = k (k ≥ 1), the distribution of σ_2 − σ_1 is just the extinction distribution of the process {Y_t; t ≥ 0} discussed in Sect. 5. In other words, the Laplace transform of the conditional distribution P(σ_2 − σ_1 ≤ t | X_{σ_1} = k) is just the φ_{k0}(λ) (k ≥ 1) given in (5.3). More specifically, let T denote the length of the first busy period (recall that all busy periods are independent, identically distributed random variables, so it is enough to consider T only), and denote the cumulative distribution function of T by F_k(t). We can then find the Laplace transform of F_k(t), i.e.
∫_0^∞ e^{−λt} dF_k(t), using the results obtained in the previous section. To this end, we state the following conclusion.
and if these conditions are satisfied, then this finite E_k(T) is given by

Proof Let {f_{ij}(t); i, j ∈ Z_+} and {φ_{ij}(λ); i, j ∈ Z_+} denote the Q-function and Q-resolvent, respectively, of the process {Y_t; t ≥ 0} discussed in Sect. 5. Their hitting-time properties were obtained in the previous section.
In particular, the Laplace transform of F_i(t) = f_{i0}(t) is given in (5.3), and thus we obtain (6.2) directly from (5.3). Moreover, again by the arguments in Sect. 5, in particular those surrounding Theorem 5.3, we obtain (6.3) by using (5.13). The finiteness conditions for E_k(T) follow likewise.
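The stopping-time construction of the busy periods σ_{2n} − σ_{2n−1} can also be explored by simulation. The sketch below is our own illustration, with all parameter choices assumed: it takes the simplest special case of the model, unit arrival and service batches with no state-dependent control (an M/M/1-type queue), extracts the busy periods from one long path via the σ_n construction, and compares their empirical mean with the classical value 1/(μ − λ).

```python
import random

def mm1_busy_periods(lam, mu, horizon, seed=0):
    """Simulate an M/M/1 path started empty and extract the busy periods
    sigma_{2n} - sigma_{2n-1}: the time from each arrival at an empty
    queue until the queue next empties.  Take lam < mu so every busy
    period is almost surely finite."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    busy_start, periods = 0.0, []
    while t < horizon:
        rate = lam + (mu if n > 0 else 0.0)
        t += rng.expovariate(rate)
        if n == 0 or rng.random() < lam / rate:
            n += 1
            if n == 1:
                busy_start = t          # sigma_{2n-1}: queue leaves state 0
        else:
            n -= 1
            if n == 0:
                periods.append(t - busy_start)   # sigma_{2n}: queue empties
    return periods

# Illustrative rates (our choice): lam = 1, mu = 2;
# the classical M/M/1 mean busy period is 1 / (mu - lam) = 1.
periods = mm1_busy_periods(lam=1.0, mu=2.0, horizon=50_000.0)
mean_T = sum(periods) / len(periods)
```

With these rates the empirical mean settles near 1, matching 1/(μ − λ); the full bulk model reduces to this case when all batch sizes equal one.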
We can now state the following important conclusion.
Theorem 6.2 For the queueing process {X_t; t ≥ 0} with the irreducible q-matrix Q given in (1.1)-(1.3), the busy periods {σ_{2n} − σ_{2n−1}} are independent, identically distributed random variables, and the Laplace transform of their common distribution, denoted by g_T(λ), say, is given by
Proof By (6.1) and the remarks made before Theorem 5.3, we have (6.5). Note that |U(λ)| ≤ 1, so that ∑_{k=1}^∞ h_{0k} U^k(λ) is well defined, and that (6.6) holds, where h = −h_{00} > 0. Substituting (6.6) into (6.5), together with some simple algebra regarding determinants, immediately yields (6.4), which ends the proof.

By (6.4), we may obtain many properties of the busy periods. For example, we may derive an elegant form for the mean busy period, as follows. The importance of such conclusions in queueing theory is well known.

Theorem 6.3 The mean busy period is finite if and only if m_d > m_b and H. Moreover, under these conditions, the (finite) mean busy period is given by (6.7), where, again, h = −h_{00} > 0.
Proof We only need to show (6.7), since the first part is obvious. Although (6.7) is a direct consequence of the proven (6.3), it can be obtained more simply. In fact, by (6.1) we have a representation in which E_i(τ) (i ≥ 1), given in (5.13), appears. Substituting (5.13) into this expression, together with some simple algebra, immediately yields (6.7).
If m_d < m_b ≤ +∞ then, although the mean busy period is infinite, we can still extract useful information by deriving an expression for the conditional mean busy period. By the i.i.d. property, it is enough to consider E(σ_2 − σ_1 | σ_2 < ∞) only.
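This conditioning can be illustrated by simulation in the same unit-batch (M/M/1-type) special case, now with the arrival rate exceeding the service rate so that a busy period is infinite with positive probability; the parameters below are our own assumptions. For M/M/1 the completion probability of a busy period is μ/λ and, by a classical duality (conditioning on return to 0 swaps the roles of λ and μ), the conditional mean busy period is 1/(λ − μ).

```python
import random

def busy_period_stats(lam, mu, trials, t_cap=100.0, seed=1):
    """Start a busy period with one customer, arrival rate lam > service
    rate mu (transient case): each busy period ends with probability
    mu/lam and is otherwise infinite.  Paths still alive at t_cap are
    treated as never returning (the error is negligible at this t_cap,
    since such paths have drifted far above 0)."""
    rng = random.Random(seed)
    finished = []
    for _ in range(trials):
        t, n = 0.0, 1
        while n > 0 and t < t_cap:
            t += rng.expovariate(lam + mu)
            n += 1 if rng.random() < lam / (lam + mu) else -1
        if n == 0:
            finished.append(t)
    return len(finished) / trials, sum(finished) / len(finished)

# lam = 2, mu = 1 (our choice): completion probability mu/lam = 1/2,
# and the conditional mean busy period is 1/(lam - mu) = 1 by duality.
frac, cond_mean = busy_period_stats(lam=2.0, mu=1.0, trials=5_000)
```

Both empirical quantities are finite even though the unconditional mean busy period is infinite, mirroring the phenomenon described below.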

Theorem 6.4
If m_d < m_b ≤ +∞, then the conditional expectation of the busy period, given that the queue returns to the idle period, is given by

Proof Just note that where E_i(τ | τ < ∞) (i ≥ 1) are given in (

It is interesting to note that, as Theorem 6.4 shows, as long as m_d < m_b ≤ +∞, the conditional expectation of the busy period is always finite, even if B′(1) and all H_k(1) (0 ≤ k ≤ N − 1) are infinite. Note, however, that neither Theorem 6.3 nor Theorem 6.4 covers the case m_d = m_b. In fact, this critical case is more subtle. Indeed, although the busy period is then almost surely finite, its expected value is infinite. Nevertheless, we are still able to provide interesting information regarding the asymptotic behaviour of the busy period in this subtle case. For this purpose, we first provide the following lemma. Recall that the {φ_{k0}(λ); k ≥ 1} given in (5.3) are the Laplace transforms of the p_{k0}(t) for the absorbing process {Y_t; t ≥ 0} constructed in Theorem 5.1.
and, for any 1 ≤ k ≤ N − 1, · H_k(1). (6.12)

(6.11) is easy and has been proved in Lemma 2.3 (see (2.12)). We now show that (6.12) is also true. Indeed, for any k with 1 ≤ k ≤ N − 1, since, as λ → 0, u_k(λ) remains finite. Moreover, for the right-hand side of (6.13) above, we first note that which can be written as Dividing the above by √λ, letting λ → 0 and using (6.11), together with noting that (6.12) is trivially true for k = 0, yields the required limit, and hence (6.12) is proved.
We are now ready to prove (6.10). By (5.3), we know that

Now, moving 1/√λ into the first row of the numerator of the above, then letting λ → 0 and using the proved (6.11) and (6.12), together with the trivial results that, for any 1 ≤ k ≤ N − 1, we immediately obtain that, for all i ≥ 1, which is the same as (6.10). Lemma 6.1 is thus proven.
We are now able to tackle the subtle case of m_d = m_b.

and hence, by (7.5), all the other components of the sequence {b_j; j ≥ 0} are zero. Hence this example is a queueing model with a Poisson arrival rate b > 0 together with a full-load service rate a > 0. That is, if the service is performed by vans, say, then a van begins service only when it is fully loaded. For this example, we know that B(s) = a − (a + b)s^k + bs^{k+1} (7.6) with m_d = ka and m_b = b, and, for any λ > 0, By the results obtained before, we know that B_λ(s) has a unique positive zero in ( Similarly, B(s) is also a polynomial of degree (k + 1) and thus has exactly (k + 1) zeros in the complex plane. By noting that B′(1) = b − ka, we obtain the following simple conclusion.

Proof Obvious and thus omitted.
By the same arguments, combined with either (4.11) or (4.12), we immediately obtain (7.16) or (7.17), respectively. This ends the proof.
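The zero structure of B(s) = a − (a + b)s^k + bs^{k+1} discussed above is easy to examine numerically. The sketch below (parameter values are our own) lists all (k + 1) zeros via numpy's companion-matrix root finder and picks out the smallest positive real zero, which equals 1 precisely in the recurrent regime m_d = ka ≥ m_b = b and drops below 1 otherwise.

```python
import numpy as np

def B_roots(a, b, k):
    """All zeros of B(s) = a - (a + b) s^k + b s^(k+1), a degree-(k+1)
    polynomial; coefficients are listed highest power first, as
    np.roots expects."""
    coeffs = [b, -(a + b)] + [0.0] * (k - 1) + [a]
    return np.roots(coeffs)

def u_smallest(a, b, k):
    """Smallest positive real zero of B (the u of Theorem 5.2)."""
    r = B_roots(a, b, k)
    real = r[np.abs(r.imag) < 1e-8].real
    return min(x for x in real if x > 1e-8)

# m_d = ka = 3 > m_b = b = 2: the smallest positive zero is s = 1.
u1 = u_smallest(1.0, 2.0, 3)
# m_d = ka = 2 < m_b = b = 5: a zero u < 1 appears, u = (1 + sqrt(21)) / 10.
u2 = u_smallest(1.0, 5.0, 2)
```

In the first case B(s) stays positive on (0, 1) and only vanishes at s = 1 (B′(1) = −1 < 0), while in the second case the extra root u ≈ 0.5583 governs the extinction probabilities of Theorem 5.2.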
In order to give a more concrete and convincing example, we consider the further special case where λ_i ≡ λ > 0 (0 ≤ i ≤ k − 1) and μ_i ≡ μ (1 ≤ i ≤ k − 1). That is, we assume that the control sequence takes an M|M|1 structure. Then the forms (7.14)-(7.20) take particularly simple expressions. In particular, the Vandermonde determinant plays an interesting role. For simplicity, we further assume that k = 3. Then, as a direct consequence of Theorem 7.1, we obtain the following simple yet interesting corollary.

Corollary 7.1 Suppose that the control sequences {h_{ij}} are given in (7.1), together with the further assumptions that λ_i ≡ λ > 0 (0 ≤ i ≤ k − 1) and μ_i ≡ μ (1 ≤ i ≤ k − 1), and that the arrival-service sequence {b_j; j ≥ 0} is given in (7.5). Further assume that k = 3. Then we have the following conclusions:

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.