Abstract
Stochastic networks with complex structures are key modelling tools for many important applications. In this paper, we consider a specific type of network: retrial queueing systems with priority, which arise in various applications, including telecommunication and computer management networks with big data. The system considered here receives two types of customers, of which Type-1 customers (in a queue) have non-preemptive priority for service over Type-2 customers (in an orbit). For this type of system, we propose an exhaustive version of the stochastic decomposition approach, which is one of the main contributions of this paper, for the purpose of studying the asymptotic behaviour of the tail probabilities of the numbers of customers in steady state for this retrial queue with two types of customers. Under the assumption that the service times of Type-1 customers have a regularly varying tail and the service times of Type-2 customers have a tail lighter than that of Type-1 customers, we obtain tail asymptotic properties for the numbers of customers in the queue and in the orbit, respectively, conditioned on the server's status, in terms of the exhaustive stochastic decomposition results. These tail asymptotic results are new, which is another main contribution of this paper. Tail asymptotic properties are important not only in their own right but also as key tools for approximating performance metrics and constructing numerical algorithms.
References
Artalejo, J.R., Dudin, A.N., Klimenok, V.I.: Stationary analysis of a retrial queue with preemptive repeated attempts. Oper. Res. Lett. 28, 173–180 (2001)
Artalejo, J.R., Gómez-Corral, A.: Retrial Queueing Systems. Springer, Berlin (2008)
Asmussen, S., Klüppelberg, C., Sigman, K.: Sampling at subexponential times, with queueing applications. Stoch. Process. Their Appl. 79(2), 265–286 (1999)
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1989)
Borst, S.C., Boxma, O.J., Nunez-Queija, R., Zwart, A.P.: The impact of the service discipline on delay asymptotics. Perform. Eval. 54, 175–206 (2003)
Boxma, O., Denisov, D.: Sojourn time tails in the single server queue with heavy-tailed service times. Queue. Syst. 69, 101–119 (2011)
Boxma, O., Zwart, B.: Tails in scheduling. ACM Sigmetrics Perform. Eval. Rev. 34, 13–20 (2007)
Choi, B.D., Chang, Y.: Single server retrial queues with priority calls. Math. Comput. Model. 30, 7–32 (1999)
Dimitriou, I.: A mixed priority retrial queue with negative arrivals, unreliable server and multiple vacations. Appl. Math. Model. 37, 1295–1309 (2013)
de Meyer, A., Teugels, J.L.: On the asymptotic behavior of the distributions of the busy period and service time in \(M/G/1\). J. Appl. Probab. 17, 802–813 (1980)
Dudin, A.N., Lee, M.H., Dudina, O., Lee, S.K.: Analysis of priority retrial queue with many types of customers and servers reservation as a model of cognitive radio system. IEEE Trans. Commun. 65(1), 186–199 (2017)
Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events for Insurance and Finance. Springer, Heidelberg (1997)
Falin, G.I.: A survey of retrial queues. Queue. Syst. 7(2), 127–168 (1990)
Falin, G.I., Artalejo, J.R., Martin, M.: On the single server retrial queue with priority customers. Queue. Syst. 14(3–4), 439–455 (1993)
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II. Wiley, London (1971)
Foss, S., Korshunov, D.: Sampling at a random time with a heavy-tailed distribution. Markov Process. Relat. Fields 6(4), 543–568 (2000)
Foss, S., Korshunov, D., Zachary, S.: An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York (2011)
Gao, S.: A preemptive priority retrial queue with two classes of customers and general retrial times. Oper. Res. Int. J. 15, 233–251 (2015)
Gómez-Corral, A.: Analysis of a single-server retrial queue with quasi-random input and nonpreemptive priority. Comput. Math. Appl. 43, 767–782 (2002)
Grandell, J.: Mixed Poisson Processes. Chapman & Hall, London (1997)
Kim, J., Kim, B.: A survey of retrial queueing systems. Ann. Oper. Res. 247(1), 3–36 (2016)
Kim, J., Kim, J., Kim, B.: Regularly varying tail of the waiting time distribution in \(M/G/1\) retrial queue. Queue. Syst. 65(4), 365–383 (2010)
Kim, J., Kim, B., Ko, S.-S.: Tail asymptotics for the queue size distribution in an \(M/G/1\) retrial queue. J. Appl. Probab. 44(4), 1111–1118 (2007)
Lee, Y.: Discrete-time \(Geo^X/G/1\) queue with preemptive resume priority. Math. Comput. Model. 34, 243–250 (2001)
Liu, B., Min, J., Zhao, Y.Q.: Refined tail asymptotic properties for the \(M^X/G/1\) retrial queue. Submitted. (arXiv:1801.02525) (2017)
Liu, B., Zhao, Y.Q.: Analyzing retrial queues by censoring. Queue. Syst. 64(3), 203–225 (2010)
Liu, B., Zhao, Y.Q.: Second order asymptotic properties for the tail probability of the number of customers in the \(M/G/1\) retrial queue. Submitted. (arXiv:1801.09607) (2017)
Liu, B., Zhao, Y.Q.: Tail Asymptotics for a Retrial Queue with Bernoulli Schedule. Submitted. (arXiv:1804.00984) (2018)
Liu, B., Wang, J., Zhao, Y.Q.: Tail asymptotics of the waiting time and the busy period for the \(M/G/1/K\) queues with subexponential service times. Queue. Syst. 76(1), 1–19 (2014)
Liu, B., Wang, X., Zhao, Y.Q.: Tail asymptotics for \(M/M/c\) retrial queues with nonpersistent customers. Oper. Res. Int. J. 12(2), 173–188 (2012)
Müller, A., Stoyan, D.: Comparison Methods for Stochastic Models and Risks. Wiley, London (2002)
Phung-Duc, T.: Retrial queueing models: a survey on theory and applications. To appear in: Dohi, T., Ano, K., Kasahara, S. (eds.) Stochastic Operations Research in Business and Industry. World Scientific Publisher. (http://infoshako.sk.tsukuba.ac.jp/tuan/papers/Tuan_chapter_ver3.pdf) (2017)
Sutton, C., Jordan, M.I.: Bayesian inference for queueing networks and modeling of internet services. Ann. Appl. Stat. 5(1), 254–282 (2011)
Walraevens, J., Claeys, D., Phung-Duc, T.: Asymptotics of queue length distributions in priority retrial queues. Perform. Eval. 127–128, 235–252 (2018)
Wang, J.: On the single server retrial queue with priority subscribers and server break-downs. J. Syst. Sci. Complex 21, 304–315 (2008)
Wu, J., Lian, Z.: A single-server retrial G-queue with priority and unreliable server under Bernoulli vacation schedule. Comput. Ind. Eng. 64, 84–93 (2013)
Wu, J., Wang, J., Liu, Z.: A discrete-time Geo/G/1 retrial queue with preferred and impatient customers. Appl. Math. Model. 37, 2552–2561 (2013)
Acknowledgements
We thank the anonymous reviewer for her/his constructive comments and suggestions, which significantly improved the quality of this paper. This work was supported in part by the National Natural Science Foundation of China (Grant No. 71571002), the Research Project of Anhui Jianzhu University and a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Appendices
Appendix A: Definitions and useful results from the literature
Definition A.1
(for example, see Bingham, Goldie and Teugels [4]) A measurable function \(U:(0,\infty )\rightarrow (0,\infty )\) is regularly varying at \(\infty \) with index \(\sigma \in (-\infty ,\infty )\) (written \(U\in {\mathcal {R}}_{\sigma }\)) iff \(\lim _{t\rightarrow \infty }U(xt)/U(t)=x^{\sigma }\) for all \(x>0\). If \(\sigma =0\) we call U slowly varying, i.e., \(\lim _{t\rightarrow \infty }U(xt)/U(t)=1\) for all \(x>0\).
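Definition A.1 can be illustrated numerically. The following sketch (ours, not part of the paper) takes \(U(t)=t^{-1.5}\log t\), which lies in \({\mathcal {R}}_{-1.5}\) because \(\log\) is slowly varying, and checks that \(U(xt)/U(t)\) approaches \(x^{-1.5}\) as \(t\) grows; the particular function and test points are arbitrary choices:

```python
import math

# Illustration (ours): U(t) = t^(-1.5) * log(t) is regularly varying at
# infinity with index sigma = -1.5, since log is slowly varying.  The
# ratio U(x*t)/U(t) should therefore approach x^sigma as t grows.
def U(t):
    return t ** (-1.5) * math.log(t)

x = 2.0
ratios = [U(x * t) / U(t) for t in (1e3, 1e6, 1e9)]
limit = x ** (-1.5)                 # the predicted limit x^sigma
errors = [abs(r - limit) for r in ratios]
```

The errors shrink as \(t\) increases, reflecting the slow (logarithmic) convergence typical of slowly varying corrections.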
For a distribution function F, denote \(\bar{F}{\mathop {=}\limits ^\mathrm{def}}1-F\) for the remainder of the paper.
Definition A.2
(for example, see Foss, Korshunov and Zachary [17]) A distribution F on \((0,\infty )\) belongs to the class of subexponential distributions (written \(F\in {\mathcal {S}}\)) if \(\lim _{t\rightarrow \infty }\overline{F^{*2}}(t)/\overline{F}(t)=2\), where \(\overline{F}=1-F\) and \(F^{*2}\) denotes the two-fold convolution of F with itself.
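The defining ratio in Definition A.2 can be checked numerically for a concrete heavy-tailed law. The sketch below (ours, not from the paper) uses the integer-valued Pareto-type distribution \(P\{Y>m\}=(m+1)^{-2}\) and computes the exact two-fold convolution tail at a moderate point:

```python
# Illustration (ours): for the integer-valued Pareto-type distribution
# with tail P{Y > m} = (m+1)^(-2), the two-fold convolution tail
# satisfies \bar{F^{*2}}(t)/\bar{F}(t) -> 2, the defining subexponential
# property.
def F_bar(m):
    # P{Y > m}; Y takes values in {1, 2, ...}.
    return 1.0 if m < 0 else (m + 1) ** -2.0

t = 2000
pmf = [0.0] + [F_bar(k - 1) - F_bar(k) for k in range(1, t + 1)]
# P{Y1 + Y2 > t} = sum_i P{Y1 = i} P{Y2 > t - i}; the final term
# collects all i > t, for which P{Y2 > t - i} = 1.
conv_tail = sum(pmf[i] * F_bar(t - i) for i in range(1, t + 1)) + F_bar(t)
ratio = conv_tail / F_bar(t)
```

At \(t=2000\) the ratio is already very close to 2, consistent with the "principle of a single big jump" behind subexponentiality.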
Lemma A.1
(de Meyer and Teugels [10]) Under Assumption A1,
The result (A.1) follows directly from the main theorem in [10].
Lemma A.2
(pp.580–581 in [12]) Let N be a r.v. with \(P\{N=k\}=(1-\sigma )\sigma ^{k-1}\), \(0<\sigma <1\), \(k\ge 1\), and \(\{Y_k\}_{k=1}^{\infty }\) be a sequence of non-negative, i.i.d. r.v.s having a common subexponential distribution F. Define \(S_n=\sum _{k=1}^n Y_k\). Then, \(P\{S_N > t\} \sim (1-F(t))/(1-\sigma )\) as \(t\rightarrow \infty \).
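As a quick plausibility check of Lemma A.2 (our illustration, not part of the paper), the geometric random sum can be simulated directly; the Pareto tail \(\bar F(t)=t^{-2}\), the value \(\sigma =0.5\), and the sample sizes below are arbitrary choices, and the agreement is only approximate at finite \(t\):

```python
import random

# Monte Carlo check (ours) of Lemma A.2 with a Pareto tail
# F(t) = 1 - t^(-2) on [1, inf), which is subexponential, and sigma = 0.5:
# P{S_N > t} should be roughly \bar F(t) / (1 - sigma) for large t.
rng = random.Random(12345)
sigma, alpha, t = 0.5, 2.0, 30.0
n_samples = 200_000

def pareto():
    # Inverse-transform sample: P{Y > y} = y^(-alpha), y >= 1.
    return rng.random() ** (-1.0 / alpha)

def geometric():
    # P{N = k} = (1 - sigma) * sigma^(k - 1), k >= 1.
    k = 1
    while rng.random() < sigma:
        k += 1
    return k

hits = sum(1 for _ in range(n_samples)
           if sum(pareto() for _ in range(geometric())) > t)
estimate = hits / n_samples
predicted = t ** (-alpha) / (1 - sigma)   # \bar F(t) / (1 - sigma)
ratio = estimate / predicted
```

The simulated tail sits somewhat above the asymptotic prediction at this moderate \(t\), as expected from second-order correction terms.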
Lemma A.3
(Proposition 3.1 in [3], or Theorem 3.1 in [16]) Let \(N_{\lambda }(t)\) be a Poisson process with rate \(\lambda \) and let T be a positive r.v. with distribution F, which is independent of \(N_{\lambda }(t)\). If \(\bar{F}(t)=P\{T>t\}\) is heavier than \(e^{-\sqrt{t}}\) as \(t\rightarrow \infty \), then \(P(N_{\lambda }(T)>j)\sim P\{T>j/\lambda \}\) as \(j\rightarrow \infty \).
Lemma A.3 holds for any distribution F with a regularly varying tail because it is heavier than \(e^{-\sqrt{t}}\) as \(t\rightarrow \infty \).
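Lemma A.3 can likewise be probed by simulation (our illustration, not from the paper): with \(T\) Pareto, \(\bar F(t)=t^{-1.5}\), the sampled Poisson count tail \(P\{N_{\lambda }(T)>j\}\) should track \(P\{T>j/\lambda \}\). The rate, tail index, and \(j\) below are arbitrary choices:

```python
import random

# Monte Carlo check (ours) of Lemma A.3: for T Pareto with
# \bar F(t) = t^(-1.5) (regularly varying, hence heavier than
# e^(-sqrt(t))) and a Poisson process of rate lam,
# P{N_lam(T) > j} should be close to P{T > j/lam} for large j.
rng = random.Random(7)
lam, alpha, j = 2.0, 1.5, 40
n_samples = 100_000

def poisson_count(horizon):
    # Exact count of rate-lam Poisson arrivals in [0, horizon],
    # generated from exponential inter-arrival times.
    k, s = 0, rng.expovariate(lam)
    while s <= horizon:
        k += 1
        s += rng.expovariate(lam)
    return k

hits = 0
for _ in range(n_samples):
    t_sample = rng.random() ** (-1.0 / alpha)   # Pareto on [1, inf)
    if poisson_count(t_sample) > j:
        hits += 1
estimate = hits / n_samples
target = (j / lam) ** (-alpha)                  # P{T > j/lam}
ratio = estimate / target
```

Intuitively, the event \(\{N_{\lambda }(T)>j\}\) is driven almost entirely by the heavy-tailed event \(\{T>j/\lambda \}\), with Poisson fluctuations contributing only a lower-order correction.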
Lemma A.4
(p.181 in [20]) Let \(N_{\lambda }(t)\) be a Poisson process with rate \(\lambda \) and let T be a positive r.v. with distribution F, which is independent of \(N_{\lambda }(t)\). If \(\bar{F}(t) = P\{T>t\}\sim e^{-w t} t^{-h}L(t)\) as \(t\rightarrow \infty \) for \(w> 0\) and \(-\infty<h<\infty \), then
Lemma A.5
(p.48 in [17]) Let F, \(F_1\) and \(F_2\) be distribution functions. Suppose that \(F\in {\mathcal {S}}\). If \(\bar{F}_i(t)/\bar{F}(t)\rightarrow c_i\) as \(t\rightarrow \infty \) for some \(c_i\ge 0, \; i=1,2\), then \(\overline{F_1*F_2}(t)/\bar{F}(t)\rightarrow c_1+c_2\) as \(t\rightarrow \infty \), where the notation \(F_1*F_2\) stands for the convolution of \(F_1\) and \(F_2\).
Lemma A.6
(pp.162–163 in [20]) Let N be a discrete non-negative integer-valued r.v. with mean value \(\mu _N\), and let \(\{Y_k\}_{k=1}^{\infty }\) be a sequence of non-negative i.i.d. r.v.s with mean value \(\mu _Y\). Define \(S_0\equiv 0\) and \(S_n=\sum _{k=1}^n Y_k\). If \(P\{Y_k>x\}\sim c_Y x^{-h} L(x) \) as \(x\rightarrow \infty \) and \(P\{N>m\}\sim c_N m^{-h}L(m)\) as \(m\rightarrow \infty \), where \(h> 1\), \(c_Y\ge 0\) and \(c_N\ge 0\), then \(P\{S_N > x\}\sim (c_N\mu _Y^h + \mu _N c_Y) x^{-h}L(x)\) as \(x\rightarrow \infty .\)
Remark A.1
It is a convention that in Lemma A.6, \(c_Y=0\) and \(c_N=0\) means that \(\lim _{x\rightarrow \infty }P\{Y_k>x\}/(x^{-h} L(x))=0\) and \(\lim _{m\rightarrow \infty }P\{N>m\}/(m^{-h} L(m))=0\), respectively.
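Since Lemma A.6 is applied repeatedly below, a simulation sanity check may help (ours, not from the paper). We take \(h=2\), \(L(x)\equiv 1\), \(Y_k\) Pareto with \(P\{Y_k>y\}=y^{-2}\) on \([1,\infty )\) (so \(c_Y=1\), \(\mu _Y=2\)) and N integer-valued with \(P\{N>m\}=(m+1)^{-2}\) (so \(c_N=1\), \(\mu _N=\pi ^2/6\)); all of these concrete choices are ours:

```python
import math
import random

# Monte Carlo check (ours) of Lemma A.6: with the choices above,
# the lemma predicts P{S_N > x} ~ (c_N * mu_Y^2 + mu_N * c_Y) * x^(-2).
rng = random.Random(2024)
x, n_samples = 100.0, 300_000
mu_Y, mu_N = 2.0, math.pi ** 2 / 6.0

hits = 0
for _ in range(n_samples):
    n = math.ceil(rng.random() ** -0.5) - 1          # inverse transform for N
    s = sum(rng.random() ** -0.5 for _ in range(n))  # Pareto(2) summands
    if s > x:
        hits += 1
estimate = hits / n_samples
predicted = (1.0 * mu_Y ** 2 + mu_N * 1.0) * x ** -2.0
ratio = estimate / predicted
```

Both tail mechanisms contribute here: a single big summand \(Y_k\) (the \(\mu _N c_Y\) term) and an unusually large number of summands N (the \(c_N\mu _Y^h\) term).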
The following two criteria, from Feller (see p.441 in [15]), are often used to verify that a function is completely monotone.
Criterion A.1 If \(\vartheta _1(\cdot )\) and \(\vartheta _2(\cdot )\) are completely monotone, so is their product \(\vartheta _1(\cdot )\vartheta _2(\cdot )\).
Criterion A.2 If \(\vartheta _3(\cdot )\) is completely monotone and \(\vartheta _4(\cdot )\) is a positive function with a completely monotone derivative \(\vartheta '_4(\cdot )\), then \(\vartheta _3(\vartheta _4(\cdot ))\) is completely monotone.
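Criterion A.1 can be verified exactly for a small example (ours, not from the paper): \(\vartheta _1(s)=e^{-s}\) and \(\vartheta _2(s)=1/(1+s)\) are both completely monotone, so their product \(f(s)=e^{-s}/(1+s)\) should be too. The sketch represents \(f\) and its derivatives as \(e^{-s}\sum _k c_k (1+s)^{-k}\) and checks the alternating-sign pattern \((-1)^n f^{(n)}(s)\ge 0\):

```python
from fractions import Fraction

# Exact check (ours) of Criterion A.1 for f(s) = e^(-s)/(1+s).  We keep
# the coefficient map {k: c_k} of e^(-s) * sum_k c_k * (1+s)^(-k) and
# differentiate term by term using
#   d/ds [e^(-s)(1+s)^(-k)] = -e^(-s)(1+s)^(-k) - k e^(-s)(1+s)^(-k-1).
def derivative(coeffs):
    out = {}
    for k, c in coeffs.items():
        out[k] = out.get(k, Fraction(0)) - c
        out[k + 1] = out.get(k + 1, Fraction(0)) - k * c
    return out

coeffs = {1: Fraction(1)}        # f(s) = e^(-s) * (1+s)^(-1)
signs_alternate = True
for n in range(1, 13):
    coeffs = derivative(coeffs)
    # (-1)^n f^(n)(s) >= 0 for s > 0 iff every coefficient of
    # (-1)^n * coeffs is positive, since all basis terms are positive.
    if not all((-1) ** n * c > 0 for c in coeffs.values()):
        signs_alternate = False
```

The first twelve derivatives alternate in sign, as complete monotonicity of the product requires.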
To prove Lemma C.2, we first list some notation and results that will be used. Let F(x) be any distribution on \([0,\infty )\) with LST \(\phi (s)\), and denote the nth moment of F(x) by \(\phi _n\), \(n\ge 0\). It is well known that \(\phi _n<\infty \) iff
Based on (A.2), we introduce the notation \(\phi _n(s)\) and \(\widehat{\phi }_n(s)\), defined by
Lemma A.7
(pp.333–334 in [4]) Assume that \(n<d<n+1\), \(n\in \{0,1,2,\ldots \}\), then the following two statements are equivalent:
and
Lemma A.8
(Lemma 3.3 in [27]) Assume that \(n\in \{1,2,\ldots \}\), then the following two statements are equivalent:
and
In [27], Lemma A.8 was proved by applying Karamata’s theorem, the monotone density theorem and Theorem 3.9.1 presented in [4] (see, p.27, p.39 and pp.172–173, respectively).
Appendix B: Proofs for results in step 2
1.1 Proof of Proposition 3.1
By (2.14) and the definition of \(\alpha ^{(e)}(s)\), we can write \((1- h(u))/(1-u)= \lambda _2\alpha _1\cdot \alpha ^{(e)}(\lambda _2 - \lambda _2 u)\), from which, and by (2.7), (2.2) and (2.11), we have,
which leads to the stochastic decomposition given in (3.3) for the r.v. \(K_a\).
It follows from (2.8) that
which leads to the stochastic decomposition given in (3.4) for the r.v. \(K_b\).
It follows from (2.9) that
which leads to the stochastic decomposition given in (3.5) for the r.v. \(K_c\).
Finally, (3.2) follows immediately from (2.6).
1.2 Proof of Proposition 3.2
We divide the proof of Proposition 3.2 into three parts.
1.2.1 \(S_{\beta _i}(z_1,z_2)\) are PGFs with detailed probabilistic interpretations
Following Definition 3.1, for \(\mathrm{split}(N;c,1-c)\) it is easy to see that \((\sum _{k=1}^N X_k,N-\sum _{k=1}^N X_k)\) has the PGF
where \(\prod _{1}^0\equiv 1\).
Recalling (2.21), we can write, for \(i=1,2\),
which leads to (3.13) by setting \(N=N_{\lambda }(T_{\beta _i}^{(e)})\) and \(c=q\) in (B.4).
1.2.2 \(M_1(z_1,z_2)\) is a PGF with a detailed probabilistic interpretation
It follows from (2.19) that
where
Clearly, by (B.6), \((M_{11},M_{12})\) can be regarded as a random sum of two-dimensional r.v.s, provided that \(H_{\beta _1}(z_1,z_2)\) is the PGF of a two-dimensional r.v. To verify this, we rewrite (B.8) as a power series. Note that
where \(b_{\beta _1,k}\) is given in (3.9). By (2.3) and (B.9),
Substituting (B.9) and (B.10) into the numerator of the right-hand side of (B.8), we obtain
where
Note that \(q h(z_2)+p z_2=g(z_2)\) and \(q z_1+p z_2\) are the PGFs of (one or two-dimensional) r.v.s. It follows from (B.12) that for \(k\ge 1\), \(D_{k}(z_1,z_2)\) is the PGF of a two-dimensional r.v. \((D_{k,1},D_{k,2})\), which can be constructed according to (3.6).
In addition,
Namely, \((q/\rho _1)\sum _{k=1}^{\infty }kb_{\beta _1,k}=1\), which, together with (B.11), implies that \(H_{\beta _1}(z_1,z_2)\) is the PGF of a two-dimensional r.v. \((H_{\beta _1,1},H_{\beta _1,2})\), which can be constructed according to (3.7).
By (B.6), the two-dimensional r.v. \((M_{11},M_{12})\) can be constructed according to (3.14), which is a random sum of i.i.d. two-dimensional r.v.s \((H_{\beta _1,1}^{(i)},H_{\beta _1,2}^{(i)})\), \(i\ge 1\), each with the same PGF \(H_{\beta _1}(z_1,z_2)\).
1.2.3 \(M_2(z_1,z_2)\) is a PGF with a detailed probabilistic interpretation
Let
Using (B.14), (2.7) and (2.9), we can rewrite (2.20) as follows:
Similarly to (B.11), we can derive (details omitted) from (B.14) that
with \(b_{\beta _2,k}\) given in (3.10).
Similarly to (B.13), we can verify that \((p /\rho _2)\sum _{k=1}^{\infty }k b_{\beta _2,k}=1\), which, together with (B.16), implies that \(H_{\beta _2}(z_1,z_2)\) is the PGF of a two-dimensional r.v. \((H_{\beta _2,1},H_{\beta _2,2})\), which can be constructed according to (3.8).
By (B.15), the two-dimensional r.v. \((M_{21},M_{22})\) can be constructed according to (3.15).
Appendix C: Proofs of Lemmas 4.1 and 4.2 in step 3
In this section, we prove Lemma 4.1 and Lemma 4.2. Before proceeding, we provide the following facts, which will be used multiple times in our proofs.
Applying Karamata’s theorem (for example, p.28 in [4]), and using Assumption A1 and Lemma A.1, respectively, gives, as \(t\rightarrow \infty \),
and
Applying Proposition 8.5 (p.181 in [20]) to the density \(\bar{F}_{\beta _2}(t)/\beta _{2,1}\) and using Assumption A2 gives, as \(t\rightarrow \infty \),
Furthermore, since \(F_{\beta }(x)=q F_{\beta _1}(x)+p F_{\beta _2}(x)\) and based on Assumptions A1 and A2, we have \(P\{T_{\beta }>t\}=qP\{T_{\beta _1}>t\} +p P\{T_{\beta _2}>t\}\sim q t^{-a_1} L(t)\) as \(t\rightarrow \infty \), from which Karamata’s theorem implies that
1.1 Proof of Lemma 4.1
Recall (2.4), which relates the PGF of K to the PGF of \(R_0\). With this relationship, we first study the tail probability for K, which can be regarded as the sum of independent r.v.s \(K_a\), \(K_b\) and \(K_c\) (refer to (3.2)). By (3.3), (C.2) and applying Lemma A.3, we have
Recall (3.4), \(K_b=N_{\lambda , X_g}(T_{\beta }^{(e)}){\mathop {=}\limits ^\mathrm{d}}\sum _{i=1}^{N_{\lambda }(T_{\beta }^{(e)})} X_g^{(i)}\), where \(X_g^{(i)}\) has the common distribution \(X_g\). By (2.15), and then applying Lemma A.3 and using Lemma A.1, we know that
Similarly, applying Lemma A.3 and using (C.4), we have
based on which, by (2.16) and applying Lemma A.6, we have,
Next, we study \(P\{K_c>j\}\). By (3.5), we know that \(P\{K_c>j\}=\vartheta P\{\sum _{i=1}^{J}X_c^{(i)}>j\}\), where \(P(J=i)=(1-\vartheta )\vartheta ^{i-1}\), \(i\ge 1\), and \(X_c^{(i)}\) has the same distribution as that for \(X_c=K_a+N_{\lambda ,X_g}(T_{\beta _2}^{(e)})\). Note that \(N_{\lambda ,X_g}(T_{\beta _2}^{(e)}){\mathop {=}\limits ^\mathrm{d}}\sum _{i=1}^{N_{\lambda }(T_{\beta _2}^{(e)})} X_g^{(i)}\), where \(X_g^{(i)}\) has the common tail probability \(P\{X_g>j\}\sim Const\cdot j^{-a_1} L(j)\) and \(P\{N_{\lambda }(T_{\beta _2}^{(e)})>j\}\sim Const\cdot j^{-a_2+1}L(j)\). Therefore, by applying Lemma A.6 (and noticing that \(a_2 > a_1\) if \(r=0\) in Assumption A2), we have
By (C.5), (C.7), applying Lemma A.2 and Lemma A.5, we have, as \(j\rightarrow \infty \),
which, together with (C.5), (C.6) and (3.2), leads to
1.2 Proof of Lemma 4.2
By Proposition 3.2, \(S_{\beta _1,1}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _1}(T_{\beta _1}^{(e)})\), \(S_{\beta _1,2}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _2}(T_{\beta _1}^{(e)})\), \(S_{\beta _2,1}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _1}(T_{\beta _2}^{(e)})\) and \(S_{\beta _2,2}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _2}(T_{\beta _2}^{(e)})\). By (C.1) and applying Lemma A.3, we obtain
By (C.3) and applying Lemma A.3 and Lemma A.4, we obtain
Next, we will study the asymptotic tail probabilities of the r.v.s \(M_{ik},\ i,k=1,2\). By Proposition 3.2, we know that
To proceed further, we need to study the tail probabilities of the r.v.s \(H_{\beta _i,k}\) for \(i,k=1,2\).
1.2.1 Tail asymptotics for \(H_{\beta _1,1}\) and \(H_{\beta _2,1}\)
Taking \(z_2\rightarrow 1\) in (B.8) and (B.14), we can write
Therefore, \(H_{\beta _i,1}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _1}(T_{\beta _i}^{(e)}){\mathop {=}\limits ^\mathrm{d}}S_{\beta _i,1}\), \(i=1,2\), and
whose asymptotic tails are presented in (C.11) and (C.13), respectively.
1.2.2 Tail asymptotics for \(H_{\beta _1,2}\)
Unlike the other r.v.s discussed earlier, the asymptotic tail behaviour of \(H_{\beta _1,2}\) requires more effort; it will be presented in Proposition C.1. Before that, we first establish a useful bound on the tail probability of \(H_{\beta _1,2}\), which suggests an intuitive understanding of its tail property.
Taking \(z_1\rightarrow 1\) in (B.12) and (B.11), we have,
It follows from (C.21) that, for \(k\ge 1\),
where \(\{Y_n\}_{n=1}^{\infty }\) and \(\{Z_n\}_{n=1}^{\infty }\) are sequences of independent r.v.s, which are independent of each other, with \(Y_n\) and \(Z_n\) having PGFs \(q +p z_2\) and \(q h(z_2)+p z_2\), respectively.
We say that Y is stochastically smaller than Z, written as \(Y\le _{st}Z\), if \(P\{Y>t\}\le P\{Z>t\}\) for all t. It is easy to see that \(Y_{n_1}\le _{st}Z_{n_2}\) for all \(n_1,n_2\ge 1\). Define
Then, by Theorem 1.2.17 (p.7 in [31]),
Furthermore, it follows from (C.22) that \(H_{\beta _1,2}{\mathop {=}\limits ^\mathrm{d}}D_{k,2}\), with probability \((q/\rho _1)k b_{\beta _1,k}\), for \(k\ge 1\).
Now, define the r.v.s \(H^L_{\beta _1,2}\) and \(H^U_{\beta _1,2}\) as follows:
Then, by (C.23),
Note that \(H_{\beta _1,2}^{L}\) and \(H_{\beta _1,2}^{U}\) have the following PGFs:
Next, we will study the asymptotic behaviour of \(P\{H_{\beta _1,2}^{L}>j\}\) and \(P\{H_{\beta _1,2}^{U}>j\}\), respectively. Let N be a r.v. with probability distribution \(P\{N=k\}=(q/\rho _1) k b_{\beta _1,k}\), \(k\ge 1\). Therefore, (C.25) and (C.26) can be written as
where N is independent of both \(Z_k\) and \(Y_k\), \(k\ge 1\).
Then, it is immediately clear that
where \(\overline{b}_{\beta _1,k}=\sum _{n=k}^{\infty } b_{\beta _1,n}\).
Using the definition of \(b_{\beta _1,n}\) in (3.9), and applying Lemma A.3, we know that \(\overline{b}_{\beta _1,k}=P\{N_{\lambda }(T_{\beta _1})>k-1\}\sim \lambda ^{a_1}k^{-a_1}L(k)\) as \(k\rightarrow \infty \), which, together with Proposition 1.5.10 in [4], implies that
Recall the following three facts: (i) \(Y_k\) is a \(0-1\) r.v., which implies that \(P\{Y_k>j\}\rightarrow 0\) as \(j\rightarrow \infty \); (ii) \(Z_k\) has the same probability distribution as \(X_g\) defined in (2.15), which implies that \(P\{Z_k>j\}=P \{X_g>j\}\sim Const\cdot j^{-a_1} L(j)\) as \(j\rightarrow \infty \); and (iii) \(E(Y_k)=p\) and \(E(Z_k)=E(X_g)=p/(1-\rho _1)<\infty \), given in (2.16). Then, by Lemma A.6, we know that
Remark C.1
It follows from (C.24) that \(P\{H_{\beta _1,2}^{L}>j\}\le P\{H_{\beta _1,2}>j\}\le P\{H_{\beta _1,2}^{U}>j\}\), whereas the asymptotic properties of \(P\{H_{\beta _1,2}^{L}>j\}\) and \(P\{H_{\beta _1,2}^{U}>j\}\) are given in (C.29) and (C.30), respectively. This suggests that \(P\{H_{\beta _1,2}>j\}\sim c\cdot \frac{\lambda _1 \lambda _2^{a_1-1}}{\rho _1(a_1-1)}\cdot j^{-a_1+1}L(j)\) as \(j\rightarrow \infty \) for some constant \(c\in \left( a_1, a_1/(1-\rho _1)^{a_1-1}\right) \). In Proposition C.1, we will verify that this assertion is true.
Proposition C.1
As \(j\rightarrow \infty \),
To prove this proposition, we need the following two lemmas (Lemma C.1 and Lemma C.2). Setting \(z_1=1\) in (B.7) and noting \(h(z_2)=\alpha (\lambda _2-\lambda _2 z_2)\), we obtain
where
Lemma C.1
\(\gamma (s)\) is the LST of a probability distribution on \([0,\infty )\).
Proof
By Theorem 1 in [15] (see p.439), the lemma is true iff \(\gamma (0)=1\) and \(\gamma (s)\) is completely monotone, i.e., \(\gamma (s)\) possesses derivatives of all orders such that \((-1)^{n}\frac{d^n}{ds^n}\gamma (s)\ge 0\) for \(s> 0\), \(n\ge 0\). It is easy to check by (C.33) that \(\gamma (0)=1\). Next, we only need to verify that \(\gamma (s)\) is completely monotone. By (C.33) and (2.10) and using a Taylor expansion, we have
where \(\beta _1^{(n)}(\cdot )\) represents the nth derivative of \(\beta _1(\cdot )\).
It is easy to check that, for \(n\ge 1\), both \((-1)^n\beta _1^{(n)}(s+\lambda _1)\) and \(1 +\alpha (s)+\cdots + (\alpha (s))^{n-1}\) are completely monotone, and so is their product (by Criterion A.1 in Appendix A). Therefore, \(\gamma (s)\) is completely monotone.\(\square \)
Remark C.2
Let \(T_{\gamma }\) be a r.v. whose probability distribution has LST \(\gamma (s)\). Then, the expression \(Ez_2^{H_{\beta _1,2}}=\gamma (\lambda _2-\lambda _2 z_2)\) in (C.32) implies that \(H_{\beta _1,2}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _2}(T_{\gamma })\).
Lemma C.2
As \(t\rightarrow \infty \),
Proof
First, let us rewrite (C.33) as
In the following, we divide the proof of Lemma C.2 into two parts, depending on whether \(a_1\) (\(>1\)) is an integer or not.
Case 1: Non-integer \(a_1 >1\). Suppose that \(n<a_1<n+1\), \(n\in \{1,2,\ldots \}\). Since \(P\{T_{\beta _1}>t\}\sim t^{-a_1}L(t)\) and \((1-\rho _1)^{a_1+1}P\{T_{\alpha }>t\}\sim t^{-a_1}L(t)\), we know that \(\beta _{1,n}<\infty \), \(\beta _{1,n+1}=\infty \), \(\alpha _{n}<\infty \) and \(\alpha _{n+1}=\infty \). Define \(\beta _{1,n}(s)\) and \(\alpha _{n}(s)\) in a manner similar to that in (A.3). Therefore,
By Lemma A.7,
Furthermore, it follows from (C.38) that
where \(u_1,u_2,\ldots ,u_{n-1}\) are constants. By (C.36), (C.37) and (C.40), we have
where \(e_1,e_2,\ldots ,e_{n-1}\) are constants. Based on the above, we define \(\gamma _{n-1}(s)\) in a manner similar to that in (A.3). Applying (C.39), we have
Then, making use of Lemma A.7, we complete the proof of Lemma C.2 for non-integer \(a_1>1\).
Case 2: Integer \(a_{1}>1\). Suppose that \(a_1=n\in \{2,3,\ldots \}\). Since \(P\{T_{\beta _1}>t\}\sim t^{-n}L(t)\) and \((1-\rho _1)^{n+1}P\{T_{\alpha }>t\}\sim t^{-n}L(t)\), we know that \(\alpha _{n-1}<\infty \) and \(\beta _{1,n-1}<\infty \), but whether \(\alpha _{n}\) and \(\beta _{1,n}\) are finite remains uncertain; this is determined by whether \(\int _x^{\infty }t^{-1}L(t)\,\mathrm{d}t\) converges. Define \(\widehat{\beta }_{1,n-1}(s)\) and \(\widehat{\alpha }_{n-1}(s)\) in a way similar to that in (A.4). Then,
By Lemma A.8, we obtain, for \(x>0\),
Furthermore, it follows from (C.44) that
where \(u'_1,u'_2,\ldots ,u'_{n-1}\) are constants. By (C.36), (C.43) and (C.46), we have
where \(e'_1,e'_2,\ldots ,e'_{n-1}\) are constants. Based on this, we define \(\widehat{\gamma }_{n-2}(s)\) in a way similar to that in (A.4). Then,
It follows from (C.47) and (C.45) that
Applying Lemma A.8, we complete the proof of Lemma C.2 for integer \(a_1=n\in \{2,3,\ldots \}\). \(\square \)
Proof of Proposition C.1
It follows directly from Remark C.2, Lemma C.2 and Lemma A.3. \(\square \)
Referring to Remark C.1, we know from (C.31) that \(c=(1-\rho _1)\rho _1^{-1}\left[ 1/(1-\rho _1)^{a_1}-1\right] \). Now let us confirm that \(a_1<c<a_1/(1-\rho _1)^{a_1-1}\), which is equivalent to checking that \(a_1\rho _1(1-\rho _1)^{a_1-1}+(1-\rho _1)^{a_1}<1\) and \(a_1\rho _1+(1-\rho _1)^{a_1}>1\). This is true because \(a_1\rho _1(1-\rho _1)^{a_1-1}+(1-\rho _1)^{a_1}\) is decreasing in \(\rho _1\in (0,1)\) and \(a_1\rho _1+(1-\rho _1)^{a_1}\) is increasing in \(\rho _1\in (0,1)\), while both expressions equal 1 at \(\rho _1=0\).
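The two inequalities above can also be confirmed numerically over a grid (our sanity check, not part of the proof); the grid of values for \(a_1\) and \(\rho _1\) is an arbitrary choice:

```python
# Numerical confirmation (ours) of the two inequalities used above:
# for a1 > 1 and rho1 in (0, 1),
#   a1*rho1*(1-rho1)^(a1-1) + (1-rho1)^a1 < 1   and
#   a1*rho1 + (1-rho1)^a1 > 1,
# which together give a1 < c < a1/(1-rho1)^(a1-1).
ok = True
for a1 in (1.1, 1.5, 2.0, 3.7, 10.0):
    for i in range(1, 100):
        rho1 = i / 100.0
        lower = a1 * rho1 * (1 - rho1) ** (a1 - 1) + (1 - rho1) ** a1
        upper = a1 * rho1 + (1 - rho1) ** a1
        if not (lower < 1.0 < upper):
            ok = False
```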
1.2.3 Tail asymptotics for \(H_{\beta _2,2}\)
As we shall see in the next subsection, our main results do not require a detailed asymptotic expression for \(P\{H_{\beta _2,2}>j\}\). It is enough to verify that it is \(o(1)\cdot j^{-a_1+1}L(j)\) as \(j\rightarrow \infty \).
Taking \(z_1\rightarrow 1\) in (B.16), we have
It follows from (C.50) that \(H_{\beta _2,2}{\mathop {=}\limits ^\mathrm{d}}D_{k,2}\), with probability \((p/\rho _2)k b_{\beta _2,k}\), for \(k\ge 1\). Define the r.v. \(H^U_{\beta _2,2}{\mathop {=}\limits ^\mathrm{d}}D^U_{k,2}\), with probability \((p/\rho _2)k b_{\beta _2,k}\), for \(k\ge 1\). Then, by (C.23), we have, \(H_{\beta _2,2}\le _{st}H_{\beta _2,2}^{U}\). Note that \(H_{\beta _2,2}^{U}\) has the PGF
Let \(N^*\) be a r.v. with probability distribution \(P\{N^*=k\}=(p/\rho _2) k b_{\beta _2,k}\), \(k\ge 1\). Therefore, (C.51) implies \(H_{\beta _2,2}^{U}{\mathop {=}\limits ^\mathrm{d}}\sum _{k=1}^{N^*-1} Z_k\), where \(N^*\) is independent of \(Z_k\), \(k\ge 1\). Similar to (C.27), we can write
where \(\overline{b}_{\beta _2,k}=\sum _{n=k}^{\infty } b_{\beta _2,n}\). By the definition of \(b_{\beta _2,n}\) given in (3.10) and applying Lemma A.3 and Lemma A.4, we know that \(\overline{b}_{\beta _2,k}=P\{N_{\lambda }(T_{\beta _2})>k-1\}=O(1)\cdot k^{-a_2}L(k)\) as \(k\rightarrow \infty \). Furthermore, by (C.52) and applying Proposition 1.5.10 in [4], we have
As pointed out in Sect. 2, \(P\{Z_k>j\}\sim Const\cdot j^{-a_1} L(j)\) as \(j\rightarrow \infty \). By Lemma A.6, we know \(P\{H_{\beta _2,2}^{U}>j\}= O(1)\cdot \max \left( j^{-a_2+1}L(j),j^{-a_1}L(j)\right) \) as \(j\rightarrow \infty \). Since \(P\{H_{\beta _2,2}>j\}\le P\{H_{\beta _2,2}^{U}>j\}\) and \(a_2>a_1\), we have
After the above preparations, we now return to the proof of the tail asymptotic properties for \(M_{ik},\ i,k=1,2\).
By (C.15) and applying Lemma A.2, together with (C.20), we have
Immediately, from (C.16) and (C.20),
By (C.17) and applying Lemma A.5, together with (C.54),
Liu, B., Zhao, Y.Q. Tail asymptotics for the \(M_1,M_2/G_1,G_2/1\) retrial queue with non-preemptive priority. Queueing Syst 96, 169–199 (2020). https://doi.org/10.1007/s11134-020-09666-8
Keywords
- Exhaustive stochastic decomposition approach
- Tail asymptotics
- Retrial queue
- Priority queue
- Number of customers
- Stationary distribution
- Regularly varying distribution