
An M/PH/K queue with constant impatient time


Abstract

This paper is concerned with an M/PH/K queue with customer abandonment, constant impatient time, and many servers. By combining the method developed in Choi et al. (Math Oper Res 29:309–325, 2004) and Kim and Kim (Perform Eval 83–84:1–15, 2015) with the state space reduction method introduced in Ramaswami (Stoch Models 1:393–417, 1985), the paper develops an efficient algorithm for computing performance measures for the queueing system of interest. The paper establishes a number of properties of the matrices used in the algorithm, which, under certain conditions, make it possible for the algorithm to handle systems with up to one hundred servers. The paper also obtains analytical properties of performance measures that are useful in gaining insight into the queueing system of interest.



References

  • Asmussen S, O’Cinneide CA (1998) Representations for matrix-geometric and matrix-exponential steady-state distributions with applications to many-server queues. Stoch Models 14:369–387

  • Baccelli F, Boyer P, Hebuterne G (1984) Single-server queues with impatient customers. Adv Appl Probab 16:887–905

  • Barrer DY (1957a) Queuing with impatient customers and indifferent clerks. Oper Res 5:644–649

  • Barrer DY (1957b) Queuing with impatient customers and ordered service. Oper Res 5:650–656

  • Boots NK, Tijms H (1999) An M/M/c queue with impatient customers. TOP 7:213–220

  • Boxma OJ, Waal PR (1994) Multiserver queues with impatient customers. In: The fundamental role of teletraffic in the evolution of telecommunications networks (Proc. ITC 14). North-Holland, Amsterdam, pp 743–756

  • Brandt A, Brandt M (1999a) On the M(n)/M(n)/s queue with impatient calls. Perform Eval 35:1–18

  • Brandt A, Brandt M (1999b) On a two-queue priority system with impatience and its application to a call center. Methodol Comput Appl Probab 1:191–210

  • Brandt A, Brandt M (2002) Asymptotic results and a Markovian approximation for the M(n)/M(n)/s+GI system. Queueing Syst 41:73–94

  • Choi BD, Kim B, Chung J (2001) M/M/1 queue with impatient customers of high priority. Queueing Syst 38:49–66

  • Choi BD, Kim B, Zhu D (2004) MAP/M/c queue with constant impatient time. Math Oper Res 29:309–325

  • Dai JG, He S (2010) Customer abandonment in many-server queues. Math Oper Res 35:347–362

  • Dai JG, He S (2011) Queues in service systems: customer abandonment and diffusion approximation. Tutor Oper Res 2011:36–59

  • Dai JG, Tezcan T (2008) Optimal control of parallel server systems with many servers in heavy traffic. Queueing Syst 59:95–134

  • Dai JG, He S, Tezcan T (2010) Many-server diffusion limits for G/Ph/n+GI queues. Ann Appl Probab 20:1854–1890

  • Daley DJ (1965) General customer impatience in the queue GI/G/1. J Appl Probab 2:186–209

  • de Kok AG, Tijms HC (1985) A queueing system with impatient customers. J Appl Probab 22:688–696

  • Dzial T, Breuer L, da Silva Soares A, Latouche G, Remiche M (2005) Fluid queues to solve jump processes. Perform Eval 62:132–146

  • Finch PD (1960) Deterministic customer impatience in the queueing system GI/M/1. Biometrika 47(1/2):45–52

  • Garnett O, Mandelbaum A, Reiman M (2002) Designing a call center with impatient customers. Manuf Serv Oper Manag 4(3):208–227

  • Gohberg I, Lancaster P, Rodman L (1982) Matrix polynomials. Academic Press, New York

  • Harris TE (1956) The existence of stationary measures for certain Markov processes. In: Proceedings of the 3rd Berkeley symposium, vol II, pp 113–124

  • He QM (2005) Age process, workload process, sojourn times and waiting time in a discrete time SM[K]/PH[K]/1/FCFS queue. Queueing Syst 49:363–403

  • He QM (2014) Fundamentals of matrix-analytic methods. Springer, New York

  • Jurkevic OM (1970) On the investigation of many-server queueing systems with bounded waiting time. Izv Akad Nauk SSSR Techniceskaja Kibernetika (in Russian) 5:50–58

  • Jurkevic OM (1971) On many-server systems with stochastic bounds for the waiting time. Izv Akad Nauk SSSR Techniceskaja Kibernetika (in Russian) 4:39–46

  • Kawanishi K, Takine T (2016) MAP/M/c and M/PH/c queues with constant impatience times. Queueing Syst 82:381–420

  • Kim B, Kim J (2015) A single server queue with Markov modulated service rates and impatient customers. Perform Eval 83–84:1–15

  • König D, Schmidt V (1990) Extended and conditional versions of the PASTA property. Adv Appl Probab 22:510–512

  • Latouche G (1987) A note on two matrices occurring in the solution of quasi-birth-and-death processes. Stoch Models 3:251–257

  • Latouche G, Ramaswami V (1999) Introduction to matrix analytic methods in stochastic modeling. ASA-SIAM Series on Statistics and Applied Probability. SIAM, Philadelphia

  • Mandelbaum A, Zeltyn S (2013) Data-stories about (im)patient customers in tele-queues. Queueing Syst 75(2–4):115–146

  • Meini B (2013) On the numerical solution of a structured nonsymmetric algebraic Riccati equation. Perform Eval 70(9):682–690

  • Movaghar A (1998) On queueing with customer impatience until the beginning of service. Queueing Syst 29:337–350

  • Neuts MF (1981) Matrix-geometric solutions in stochastic models: an algorithmic approach. The Johns Hopkins University Press, Baltimore

  • Ramaswami V (1985) Independent Markov processes in parallel. Stoch Models 1:419–432

  • Ramaswami V, Lucantoni DM (1985) Algorithms for the multi-server queue with phase type service. Stoch Models 1:393–417

  • Stanford RE (1990) On queues with impatience. Adv Appl Probab 22:768–769

  • Van Houdt B (2012) Analysis of the adaptive MMAP[K]/PH[K]/1 queue: a multi-type queue with adaptive arrivals and general impatience. Eur J Oper Res 220:695–704

  • Xiong W, Jagerman D, Altiok T (2008) M/G/1 queue with deterministic reneging times. Perform Eval 65:308–316


Acknowledgements

The authors would like to thank three anonymous reviewers and the associate editor for their insightful comments and suggestions on this paper. The authors would also like to thank Dr. Stan Dimitrov for sharing computing resources with us.

Author information

Correspondence to Qi-Ming He.

Additional information

We thank NSERC.

Appendices

Appendix A: transition blocks for the count-server-for-phase approach

In this appendix, we construct the transition blocks in (3) explicitly. First, we need to specify how the states in \(\varOmega (0)\cup \varOmega (1)\cup \cdots \cup \varOmega (K)\) are organized. In general, we define, for \(k = 0, 1, \ldots, K\) and \(m = 1, 2, \ldots, m_{s}\),

$$\begin{aligned} \varOmega (k,m)=\left\{ (n_{1},\ldots ,n_{m}):\ n_{i}\ \text{integer},\ n_{i}\ge 0,\ i=1,2,\ldots ,m,\ \sum _{i=1}^{m}n_{i}=k\right\} . \end{aligned}$$
(49)

Note that \(\varOmega (k) = \varOmega (k,m_{s})\), for \(k = 0, 1, 2, \ldots, K\). We organize the states in \(\varOmega (k,m)\) lexicographically. Then we have

$$\begin{aligned} \varOmega (k,m) = \bigcup _{i=0}^{k}\left( \varOmega (k-i,m-1)\times \{i\}\right) . \end{aligned}$$
(50)

It is easy to see that, for \(m = 1\), we have \(\varOmega (k,1) = \{k\}\), and, for \(k = 0\), \(\varOmega (0,m) = \{(0,\ldots ,0)\}\).
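The recursion (50) also gives a convenient way to enumerate the state space. The following Python sketch (ours, not part of the paper; the function name omega is arbitrary) lists the states of \(\varOmega (k,m)\) in the block order induced by (50).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def omega(k, m):
    """Enumerate Omega(k, m): m-tuples of nonnegative integers summing to k,
    in the block order Omega(k, m) = U_{i=0..k} Omega(k-i, m-1) x {i}."""
    if m == 1:
        return ((k,),)                              # Omega(k, 1) = {k}
    states = []
    for i in range(k + 1):                          # last coordinate n_m = i
        states.extend(s + (i,) for s in omega(k - i, m - 1))
    return tuple(states)

# Omega(2, 3) has C(2+3-1, 3-1) = 6 states:
print(omega(2, 3))
# ((2, 0, 0), (1, 1, 0), (0, 2, 0), (1, 0, 1), (0, 1, 1), (0, 0, 2))
```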

We begin with \(\{S^{+}(k,m_{s}),\ k = 0, 1, \ldots, K-1\}\). The basic components used to construct these matrices are \(\lambda \) and \({\varvec{\beta }} = (\beta _{1},\ldots ,\beta _{m_{s}})\), since the corresponding transitions are triggered by the arrival of a customer; the vector \({\varvec{\beta }}\) specifies the service phase of the arriving customer. An effective way to construct these matrices is to generate them iteratively. To that end, we construct matrices \(\{S^{+}(k,m),\ k = 0, 1, \ldots, K-1,\ m = 1, 2, \ldots, m_{s}\}\) for transitions from \(\varOmega (k,m)\) to \(\varOmega (k+1,m)\). We further decompose these into transitions from \(\{\varOmega (k,m-1)\times \{0\},\ \varOmega (k-1,m-1)\times \{1\},\ \ldots ,\ \varOmega (0,m-1)\times \{k\}\}\) to \(\{\varOmega (k+1,m-1)\times \{0\},\ \varOmega (k,m-1)\times \{1\},\ \ldots ,\ \varOmega (0,m-1)\times \{k+1\}\}\), respectively. Specifically, for \(S^{+}(k,m)\), the construction components are \(\{\lambda \beta _{1},\ \ldots ,\ \lambda \beta _{m}\}\), and \(S^{+}(k,m)\) is given by

$$\begin{aligned} S^{+}(k,m)=\left( \begin{array}{cccccc} S^{+}(k,m-1) &{} \lambda \beta _{m}I &{} &{} &{} &{} \\ &{} S^{+}(k-1,m-1) &{} \lambda \beta _{m}I &{} &{} &{} \\ &{} &{} \ddots &{} \ddots &{} &{} \\ &{} &{} &{} S^{+}(1,m-1) &{} \lambda \beta _{m}I &{} \\ &{} &{} &{} &{} S^{+}(0,m-1) &{} \lambda \beta _{m} \end{array}\right) , \end{aligned}$$
(51)

and \(S^{+}(0,m) = \lambda (\beta _{1},\ldots ,\beta _{m})\), for \(m = 1, 2, \ldots, m_{s}\), and \(S^{+}(k,1) = \lambda \beta _{1}\), for \(k = 0, 1, \ldots, K-1\).
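For concreteness (this small instance is ours, not taken from the paper), take \(m = 2\) and \(k = 2\). The recursion (51) places the scalar blocks \(S^{+}(2,1)=S^{+}(1,1)=S^{+}(0,1)=\lambda \beta _{1}\) on the diagonal and \(\lambda \beta _{2}\) on the superdiagonal, so that

$$\begin{aligned} S^{+}(2,2)=\left( \begin{array}{cccc} \lambda \beta _{1} &{} \lambda \beta _{2} &{} 0 &{} 0 \\ 0 &{} \lambda \beta _{1} &{} \lambda \beta _{2} &{} 0 \\ 0 &{} 0 &{} \lambda \beta _{1} &{} \lambda \beta _{2} \end{array}\right) , \end{aligned}$$

which maps \(\varOmega (2,2)=\{(2,0),(1,1),(0,2)\}\) into \(\varOmega (3,2)=\{(3,0),(2,1),(1,2),(0,3)\}\): an arriving customer starts service either in phase 1 (diagonal entries) or in phase 2 (superdiagonal entries).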

Now, we construct \(\{S^{-}(k,m),\ k = 1, 2, \ldots, K,\ m = 1, 2, \ldots, m_{s}\}\), which describe transitions from \(\varOmega (k,m)\) to \(\varOmega (k-1,m)\). These transitions are triggered by a service completion. Thus, the construction is based on \(\{t_{1}^{0},\ t_{2}^{0},\ \ldots ,\ t_{m_{s}}^{0}\}\) (recall that \(T^{0} = (t_{j}^{0})_{m_{s}\times 1}\)), and we obtain \(S^{-}(k,m)\) as

$$\begin{aligned} S^{-}(k,m)=\left( \begin{array}{cccc} S^{-}(k,m-1) &{} &{} &{} \\ t_{m}^{0}I &{} S^{-}(k-1,m-1) &{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{} (k-1)t_{m}^{0}I &{} S^{-}(1,m-1) \\ &{} &{} &{} kt_{m}^{0} \end{array}\right) , \end{aligned}$$
(52)

and \(S^{-}(1,m) = (t_{1}^{0},\ t_{2}^{0},\ \ldots ,\ t_{m}^{0})'\), for \(m = 1, 2, \ldots, m_{s}\), and \(S^{-}(k,1) = kt_{1}^{0}\), for \(k = 1, 2, \ldots, K\).

Finally, we construct \(\{S(k,m),\ k = 0, 1, \ldots, K,\ m = 1, 2, \ldots, m_{s}\}\), which describe transitions from \(\varOmega (k,m)\) to \(\varOmega (k,m)\). We decompose these transitions into three types, based on the decomposition of the states in \(\varOmega (k,m)\):

  1. (i)

    Type “+”: transitions from \(\varOmega (j,m-1)\times \{k-j\}\) to \(\varOmega (j+1,m-1)\times \{k-j-1\}\);

  2. (ii)

    Type “–”: transitions from \(\varOmega (j,m-1)\times \{k-j\}\) to \(\varOmega (j-1,m-1)\times \{k-j+1\}\); and

  3. (iii)

    Transitions from \(\varOmega (j,m-1)\times \{k-j\}\) to \(\varOmega (j,m-1)\times \{k-j\}\).

Then the transition matrix \(S(k,m)\) from \(\varOmega (k,m)\) to \(\varOmega (k,m)\) can be written as

$$\begin{aligned} \begin{array}{l} \left( \begin{array}{ccccc} S(k,m-1) &{} {\hat{S}}^{-}(k,m-1) &{} &{} &{} \\ {\hat{S}}^{+}(k-1,m-1) &{} S(k-1,m-1) &{} {\hat{S}}^{-}(k-1,m-1) &{} &{} \\ &{} \ddots &{} \ddots &{} \ddots &{} \\ &{} &{} (k-1){\hat{S}}^{+}(1,m-1) &{} S(1,m-1) &{} {\hat{S}}^{-}(1,m-1) \\ &{} &{} &{} k{\hat{S}}^{+}(0,m-1) &{} S(0,m-1) \end{array}\right) \\ \quad +\left( \begin{array}{ccccc} 0 &{} &{} &{} &{} \\ &{} t_{m,m}I &{} &{} &{} \\ &{} &{} \ddots &{} &{} \\ &{} &{} &{} (k-1)t_{m,m}I &{} \\ &{} &{} &{} &{} kt_{m,m} \end{array}\right) . \end{array} \end{aligned}$$
(53)

If \(m = 1\), we have \(S(k,1) = kt_{1,1}\), for \(k = 0, 1, 2, \ldots, K\). We also need to construct two sets of matrices, \(\{{\hat{S}}^{+}(k,m),\ k = 1, 2, \ldots, K,\ m = 1, 2, \ldots, m_{s}-1\}\) and \(\{{\hat{S}}^{-}(k,m),\ k = 0, 1, \ldots, K-1,\ m = 1, 2, \ldots, m_{s}-1\}\). Note that the matrices \({\hat{S}}^{+}(k,m)\) are for transitions from phase \(m\) to phases \(\{1, 2, \ldots, m-1\}\), and the matrices \({\hat{S}}^{-}(k,m)\) are for transitions from phases \(\{1, 2, \ldots, m-1\}\) to phase \(m\). We use the construction methods for \(\{S^{+}(k,m)\}\) and \(\{S^{-}(k,m)\}\) given above in this construction.

  1. (i)

    The construction of \(\{{\hat{S}}^{+}(k,m),\ k = 1, 2, \ldots, K,\ m = 1, 2, \ldots, m_{s}-1\}\) is similar to that of \(\{S^{+}(k,m),\ k = 0, 1, \ldots, K-1,\ m = 2, 3, \ldots, m_{s}\}\), except that \(\{\lambda \beta _{1},\ \ldots ,\ \lambda \beta _{m}\}\) is replaced with \(\{t_{m+1,1},\ t_{m+1,2},\ \ldots ,\ t_{m+1,m}\}\). In addition, we have \({\hat{S}}^{+}(0,m)=(t_{m+1,1},\ \ldots ,\ t_{m+1,m})\), for \(m = 1, 2, \ldots, m_{s}-1\), and \({\hat{S}}^{+}(k,1)=t_{2,1}\), for \(k = 0, 1, \ldots, K-1\).

  2. (ii)

    The construction of \(\{{\hat{S}}^{-}(k,m),\ k = 0, 1, \ldots, K-1,\ m = 2, \ldots, m_{s}-1\}\) is similar to that of \(\{S^{-}(k,m),\ k = 0, 1, \ldots, K-1,\ m = 2, 3, \ldots, m_{s}\}\), except that \(\{t_{1}^{0},\ t_{2}^{0},\ \ldots ,\ t_{m_{s}-1}^{0}\}\) is replaced with \(\{t_{1,m+1},\ t_{2,m+1},\ \ldots ,\ t_{m,m+1}\}\). In addition, we have \({\hat{S}}^{-}(1,m)=(t_{1,m+1},\ \ldots ,\ t_{m,m+1})'\), for \(m = 1, 2, \ldots, m_{s}-1\), and \({\hat{S}}^{-}(k,1)=kt_{1,2}\), for \(k = 1, 2, \ldots, K\).

Finally, we summarize the above construction methods and outline the steps to construct \(\{S^{+}(k,m_{s}),\ k = 0, 1, \ldots, K-1\}\), \(\{S^{-}(k,m_{s}),\ k = 1, 2, \ldots, K\}\), and \(\{S(k,m_{s}),\ k = 0, 1, 2, \ldots, K\}\); a small code sketch of the first step is given after the algorithm.

Algorithm A.1

Construction of transition blocks for the count-server-for-phase approach

  1. A.1.1

    Compute \(\{S^{+}(k,m_{s}),\ k = 0, 1, \ldots, K-1\}\):

    1. (i)

      \(S^{+}(k,1) = \lambda \beta _{1}\), for \(k = 0, 1, \ldots, K-1\);

    2. (ii)

      Use Eq. (51) to construct \(\{S^{+}(k,m),\ k = 0, 1, \ldots, K-1\}\), for \(m = 2, 3, \ldots, m_{s}\).

  2. A.1.2

    Compute \(\{S^{-}(k,m_{s}),\ k = 1, 2, \ldots, K\}\):

    1. (i)

      \(S^{-}(k,1) = kt_{1}^{0}\), for \(k = 1, 2, \ldots, K\);

    2. (ii)

      Use Eq. (52) to construct \(\{S^{-}(k,m),\ k = 1, 2, \ldots, K\}\), for \(m = 2, 3, \ldots, m_{s}\).

  3. A.1.3

    Compute \(\{S(k,m_{s}),\ k = 0, 1, \ldots, K\}\):

    • If \(m = 1\), we have \(S(k,1) = kt_{1,1}\), for \(k = 0, 1, 2, \ldots, K\).

    • For \(m = 2, 3, \ldots, m_{s}\),

      1. (i)

        Construct \(\{{\hat{S}}^{+}(k,j),\ k = 1, 2, \ldots, K,\ j = 1, 2, \ldots, m-1\}\) using Eq. (51) with \((t_{m+1,1},\ t_{m+1,2},\ \ldots ,\ t_{m+1,m})\) in place of \(\lambda (\beta _{1},\ \ldots ,\ \beta _{m})\).

      2. (ii)

        Construct \(\{{\hat{S}}^{-}(k,m),\ k = 0, 1, \ldots, K-1,\ m = 2, \ldots, m_{s}-1\}\) using Eq. (52) with \((t_{1,m+1},\ t_{2,m+1},\ \ldots ,\ t_{m,m+1})'\) in place of \((t_{1}^{0},\ t_{2}^{0},\ \ldots ,\ t_{m}^{0})'\).

      3. (iii)

        Construct \(\{S(k,m),\ k = 0, 1, \ldots, K\}\) using Eq. (53).

    end
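To make step A.1.1 concrete, here is a minimal Python sketch (ours, not the authors' code) that builds the arrival blocks \(S^{+}(k,m)\) by the recursion (51); the names build_S_plus and dim, and the dictionary layout, are our own conventions. The blocks \(S^{-}(k,m)\) and \(S(k,m)\) can be assembled in the same way from Eqs. (52) and (53).

```python
import numpy as np
from math import comb

def dim(k, m):
    """|Omega(k, m)|: number of m-tuples of nonnegative integers summing to k."""
    return comb(k + m - 1, m - 1)

def build_S_plus(K, lam, beta):
    """Recursive construction of the arrival blocks S^+(k, m) of Eq. (51).

    beta = (beta_1, ..., beta_{m_s}) is the PH initial probability vector.
    Returns Sp with Sp[m][k] = S^+(k, m), k = 0..K-1, m = 1..m_s.
    """
    ms = len(beta)
    Sp = {1: {k: np.array([[lam * beta[0]]]) for k in range(K)}}   # S^+(k, 1) = lam*beta_1
    for m in range(2, ms + 1):
        Sp[m] = {}
        for k in range(K):
            A = np.zeros((dim(k, m), dim(k + 1, m)))
            r = c = 0                      # offsets of block row i / block column i
            for i in range(k + 1):         # block row i: Omega(k-i, m-1) x {i}
                d_row = dim(k - i, m - 1)
                d_col = dim(k + 1 - i, m - 1)
                # diagonal block: arrival starting service in one of phases 1..m-1
                A[r:r + d_row, c:c + d_col] = Sp[m - 1][k - i]
                # superdiagonal block: arrival starting service in phase m
                A[r:r + d_row, c + d_col:c + d_col + d_row] = lam * beta[m - 1] * np.eye(d_row)
                r += d_row
                c += d_col
            Sp[m][k] = A
    return Sp

# toy example: K = 3 servers, a 2-phase PH service time, arrival rate 1.5
Sp = build_S_plus(K=3, lam=1.5, beta=[0.4, 0.6])
print(Sp[2][2].shape)   # (3, 4): S^+(2, 2) maps Omega(2,2) (3 states) to Omega(3,2) (4 states)
```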

Appendix B. Matrices R and G, and Vectors \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\)

The following solution approach was introduced in Choi et al. (2004) and used in solving the M/PH/1 case in Kim and Kim (2015). The theoretical basis for this solution approach is the following theorem (Theorems 2.15 and 2.16 in Gohberg et al. 1982).

Theorem B.1

(Gohberg et al. 1982) Consider the second-order matrix differential equation

$$\begin{aligned} \frac{\text{ d }^{2} }{\text{ d }x^{2} } \mathbf{u}(x)+\frac{\text{ d }}{\text{ d }x} \mathbf{u}(x)B_{1} +\mathbf{u}(x)B_{2} = \mathbf{0}, \end{aligned}$$
(54)

where \(\mathbf{u}(x)\) is the row-vector function to be found, and \(B_{1}\) and \(B_{2}\) are matrices. Suppose that \(X_{1}\) and \(X_{2}\) are matrices that are solutions of the auxiliary equation

$$\begin{aligned} X^{2} +XB_{1} +B_{2} = \mathbf{0}. \end{aligned}$$
(55)

If \(X_{1}\) and \(X_{2}\) have no common eigenvalues, then the general solution of Eq. (54) is given by

$$\begin{aligned} \mathbf{u}(x)=\mathbf{u}_{1} \exp \{ X_{1} x\} +\mathbf{u}_{2} \exp \{ X_{2} x\} \end{aligned}$$
(56)

where \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) are two constant row vectors.
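As a numerical sanity check of Theorem B.1 (illustrative only; the matrices below are arbitrary test data, not the \(B_{1}\) and \(B_{2}\) of the paper), one can pick two solvents \(X_{1}\), \(X_{2}\), back out \(B_{1}\) and \(B_{2}\) from Eq. (55), and verify that \(\mathbf{u}(x)=\mathbf{u}_{1}\exp \{X_{1}x\}+\mathbf{u}_{2}\exp \{X_{2}x\}\) satisfies Eq. (54).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3

# Choose two solvents with well-separated spectra (X1 - X2 is then invertible here),
# and back out B1, B2 so that X^2 + X B1 + B2 = 0 holds for both of them.
X1 = np.diag([-1.0, -2.0, -3.0]) + 0.1 * rng.standard_normal((n, n))
X2 = np.diag([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal((n, n))
B1 = -np.linalg.solve(X1 - X2, X1 @ X1 - X2 @ X2)
B2 = -(X1 @ X1) - X1 @ B1

# Both matrices satisfy the auxiliary equation (55): X^2 + X B1 + B2 = 0
print(np.linalg.norm(X1 @ X1 + X1 @ B1 + B2))   # ~ 0 (machine precision)
print(np.linalg.norm(X2 @ X2 + X2 @ B1 + B2))   # ~ 0 (machine precision)

# Then u(x) = u1 exp(X1 x) + u2 exp(X2 x) solves (54) for any row vectors u1, u2
u1 = rng.standard_normal(n)
u2 = rng.standard_normal(n)
u   = lambda x: u1 @ expm(X1 * x) + u2 @ expm(X2 * x)
du  = lambda x: u1 @ expm(X1 * x) @ X1 + u2 @ expm(X2 * x) @ X2
d2u = lambda x: u1 @ expm(X1 * x) @ X1 @ X1 + u2 @ expm(X2 * x) @ X2 @ X2

x = 0.7
print(np.linalg.norm(d2u(x) + du(x) @ B1 + u(x) @ B2))   # ~ 0 (machine precision)
```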

For our problem (15), we have

   \(B_{1} = -(\lambda I + Q\otimes I + M\otimes S(K,m_{s}))\) and

   \(B_{2} = \lambda Q\otimes I + \lambda M\otimes \left( S(K,m_{s})+S^{-}(K,m_{s})S^{+}(K-1,m_{s})/\lambda \right) .\)

Let \(X_{1} = \lambda (I-R)\) and \(X_{2} = \lambda G + Q\otimes I + M\otimes S(K,m_{s})\), where \(R\) and \(G\) are defined in Sect. 3. It can be shown that these two matrices are solutions to Eq. (55). Then one can use Eq. (56) to find a solution to Eq. (15) which satisfies all boundary conditions. For that purpose, we need \(\exp \{X_{1}x\}\) and \(\exp \{X_{2}x\}\) to provide \(2m_{e}m_{s}^{K}\) independent solutions to (15). To ensure that, we need the following results.

Proposition B.2

If \(\rho \ne 1\), the matrices \(\lambda (I-R)\) and \(\lambda G + Q\otimes I + M\otimes S(K,m_{s})\) have no common eigenvalues. If \(\rho = 1\), the matrices \(\lambda (I-R)\) and \(\lambda G + Q\otimes I + M\otimes S(K,m_{s})\) have one common eigenvalue, zero, with algebraic multiplicity one.

Proof

First, we consider the case with \(\rho < 1\). Since \(\mathrm{sp}(R) = 1\), it is clear that the real parts of all eigenvalues of \(\lambda (I-R)\) are nonnegative. By Proposition 4.2, the maximal real part of the eigenvalues of \(\lambda G + Q\otimes I + M\otimes S(K,m_{s})\) is negative. Note that the eigenvalue of \(\lambda G + Q\otimes I + M\otimes S(K,m_{s})\) with the maximal real part has to be real. Thus, the two matrices have no common eigenvalues.

If \(\rho > 1\), we have \(\mathrm{sp}(R) = \mathrm{sp}(G) < 1\). Then all eigenvalues of \(\lambda (I-R)\) have a positive real part. On the other hand, by Proposition 4.2, all eigenvalues of \(\lambda G + Q\otimes I + M\otimes S(K,m_{s})\) have a nonpositive real part. Therefore, the two matrices have no common eigenvalue.

If \(\rho = 1\), zero is an eigenvalue of both \(\lambda (I-R)\) and \(\lambda G + Q\otimes I + M\otimes S(K,m_{s})\). Similar to Choi et al. (2004), it can be shown that the algebraic multiplicity of this eigenvalue is one for each of the two matrices. This completes the proof of Proposition B.2. \(\square \)

By Proposition B.2, if \(\rho \ne 1\), it can be shown that Eq. (56) gives \(2m_{e}m_{s}^{K}\) independent solutions to (15). Then all we need to do is find \(\{\mathbf{u}_{1}, \mathbf{u}_{2}\}\) for a solution to (10). To do so, we use the function \(\mathrm{d}\mathbf{p}_{K+1}(x)/\mathrm{d}x\), which can be found in two ways: using Eq. (10) and using Eq. (16). Equating the resulting expressions at \(x=0\) and \(x=\tau -\) leads to the following linear system for \(\{\mathbf{u}_{1}, \mathbf{u}_{2}\}\):

$$\begin{aligned} \mathbf{0}&=\mathbf{u}_{1} e^{\lambda (R-I)\tau } \left( D_{K}(I\otimes S^{+}(K-1,m_{s}))\frac{1}{\lambda } -R\right) \nonumber \\ &\quad +\mathbf{u}_{2}\left( G-I+\frac{1}{\lambda }\left( Q\otimes I+M\otimes S(K,m_{s})\right) \right) +\mathbf{u}_{2}\frac{1}{\lambda }\left( D_{K}(I\otimes S^{+}(K-1,m_{s}))\right) ; \nonumber \\ \mathbf{0}&=\mathbf{u}_{1}\left( Q\otimes I+M\otimes S(K,m_{s})+\lambda R\right) +\mathbf{u}_{2}\lambda e^{\left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) \tau }\left( I-G\right) . \end{aligned}$$
(57)

Finally, we use the law of total probability to normalize \(\{\mathbf{u}_{1}, \mathbf{u}_{2}\}\), i.e., \(\sum _{k=0}^{K}\mathbf{p}_{k}\mathbf{e} +\int _{0}^{\tau }\mathbf{p}_{K+1}(x)\,\mathrm{d}x\,\mathbf{e} =1\). By routine calculations, we obtain

$$\begin{aligned} \sum _{k=0}^{K}\mathbf{p}_{k}\mathbf{e} &= \mathbf{p}_{K}\left( I+\sum _{k=0}^{K-1}\left( \prod _{j=0}^{K-(k+1)}D_{K-j}\right) \right) \mathbf{e} \nonumber \\ &= \left( \mathbf{u}_{1}\exp \{\lambda (R-I)\tau \} +\mathbf{u}_{2}\right) \frac{1}{\lambda }\left( I+\sum _{k=0}^{K-1}\left( \prod _{j=0}^{K-(k+1)}D_{K-j}\right) \right) \mathbf{e}, \end{aligned}$$
(58)

and

$$\begin{aligned} \int _{0}^{\tau }\mathbf{p}_{K+1}(x)\,\mathrm{d}x =\mathbf{u}_{1}\int _{0}^{\tau }e^{\lambda (R-I)(\tau -x)}\,\mathrm{d}x +\mathbf{u}_{2}\int _{0}^{\tau }\lambda e^{\left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) x}\,\mathrm{d}x. \end{aligned}$$
(59)

Using the properties given in Sect. 4, the integrals in Eq. (59) can be evaluated in closed form.

Proposition B.3

We have, for \(\rho \) < 1,

$$\begin{aligned} \int _{0}^{\tau }\exp \{\lambda (R-I)(\tau -x)\}\,\mathrm{d}x =\frac{1}{\lambda }\left( e^{\lambda (R-I)\tau } -I+\lambda \tau {\varvec{\xi }}({\varvec{\pi }}\otimes {\varvec{\phi }})\right) \left( R-I+{\varvec{\xi }}({\varvec{\pi }}\otimes {\varvec{\phi }})\right) ^{-1}; \end{aligned}$$
(60)
$$\begin{aligned} \int _{0}^{\tau }e^{\left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) x}\,\mathrm{d}x =\left( e^{\left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) \tau } -I\right) \left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) ^{-1}, \end{aligned}$$
(61)

for \(\rho \) > 1,

$$\begin{aligned} \int _{0}^{\tau }\exp \{\lambda (R-I)(\tau -x)\}\,\mathrm{d}x =\frac{1}{\lambda }\left( e^{\lambda (R-I)\tau } -I\right) \left( R-I\right) ^{-1}; \end{aligned}$$
(62)
$$\begin{aligned} \int _{0}^{\tau }e^{\left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) x}\,\mathrm{d}x &=\left( e^{\left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})\right) \tau } -I+\tau {\varvec{\zeta }}({\varvec{\pi }}\otimes {\varvec{\phi }})\right) \nonumber \\ &\quad \times \left( \lambda G+Q\otimes I+M\otimes S(K,m_{s})+{\varvec{\zeta }}({\varvec{\pi }}\otimes {\varvec{\phi }})\right) ^{-1}, \end{aligned}$$
(63)

and, for \(\rho \) = 1, Eqs. (60) and (63) hold.

Proof

The proof is based on Propositions 4.3 and 4.4. Details are omitted. \(\square \)
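Both closed forms in Proposition B.3 follow the standard pattern for \(\int _{0}^{\tau }e^{Ax}\,\mathrm{d}x\): invert \(A\) directly when it is nonsingular (as in Eqs. (61) and (62)), and add a rank-one shift when \(A\) is singular (as in Eqs. (60) and (63)). The Python sketch below (ours; the matrices are toy data, not the paper's \(R\), \(G\), \({\varvec{\xi }}\), or \({\varvec{\zeta }}({\varvec{\pi }}\otimes {\varvec{\phi }})\)) checks both patterns against numerical quadrature.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

tau = 2.0

def int_expm_quadrature(A, tau):
    """Reference value of int_0^tau exp(A x) dx by numerical quadrature."""
    val, _ = quad_vec(lambda x: expm(A * x), 0.0, tau)
    return val

# Nonsingular case (pattern of Eqs. (61)/(62)): (e^{A tau} - I) A^{-1}
A = np.array([[-3.0, 1.0], [0.5, -2.0]])                  # nonsingular test matrix
closed = (expm(A * tau) - np.eye(2)) @ np.linalg.inv(A)
print(np.linalg.norm(closed - int_expm_quadrature(A, tau)))   # ~ 0

# Singular case (pattern of Eqs. (60)/(63)): here A = Q is a CTMC generator
# with Q e = 0 and stationary vector theta; then
#   int_0^tau e^{Qx} dx = (e^{Q tau} - I + tau * e theta)(Q + e theta)^{-1}.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
theta = np.array([2.0, 1.0]) / 3.0                        # theta Q = 0, theta e = 1
shift = np.ones((2, 1)) @ theta.reshape(1, -1)            # rank-one matrix e*theta
closed = (expm(Q * tau) - np.eye(2) + tau * shift) @ np.linalg.inv(Q + shift)
print(np.linalg.norm(closed - int_expm_quadrature(Q, tau)))   # ~ 0
```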

By Proposition B.3, the linear system consisting of (19), (20), and (21) for \(\{\mathbf{u}_{1}, \mathbf{u}_{2}\}\) is obtained for the case with \(\rho \ne 1\).

If \(\rho = 1\), by Proposition B.2, (56) gives \(2m_{e}m_{s}^{K}-1\) independent solutions to (15). We need to find one more solution to (15). By Proposition 4.2, it is easy to verify that

$$\begin{aligned} \mathbf{v}(x)=({\varvec{\pi }} \otimes {\varvec{\phi }} )\left( \lambda x I + (A_{0} -A_{2} )\left( A_{0} +A_{1} +A_{2} +\mathbf{e}{{\otimes ({\varvec{\pi }} \otimes {\varvec{\phi }})}}\right) ^{-1} \right) \end{aligned}$$
(64)

is another independent solution to (15). Then the solution to (15) can be expressed as

$$\begin{aligned} \mathbf{u}(x)=\mathbf{u}_{1} \exp \{ X_{1} x\} +\mathbf{u}_{2} \exp \{ X_{2} x\} + u_{3} \mathbf{v}(x), \end{aligned}$$
(65)

where \(u_{3}\) is a constant. Similar to the case with \(\rho \ne 1\), a linear system for \(\{\mathbf{u}_{1}, \mathbf{u}_{2}, u_{3}\}\) can be established by using the two boundary conditions (i.e., the conditions at \(x=0\) and \(x=\tau -\)) and the law of total probability. Once \(\{\mathbf{u}_{1}, \mathbf{u}_{2}, u_{3}\}\) is obtained, a solution to (15) follows. Details are omitted.


Cite this article

He, QM., Zhang, H. & Ye, Q. An M/PH/K queue with constant impatient time. Math Meth Oper Res 87, 139–168 (2018). https://doi.org/10.1007/s00186-017-0612-2
