Multi-component matching queues in heavy traffic

Abstract

We consider multi-component matching systems in heavy traffic consisting of \(K\ge 2\) distinct perishable components which arrive randomly over time, at high rates, at an assemble-to-order station and wait in their respective category queues until they are matched or their “patience” runs out. An instantaneous match occurs whenever all categories are available, and the matched components leave immediately thereafter. For a sequence of such systems parameterized by n, we establish an explicit definition of the matching completion process, and when all the arrival rates tend to infinity in concert as \(n\rightarrow \infty \), we obtain a heavy traffic limit of the appropriately scaled queue lengths under mild assumptions, which is characterized by a coupled stochastic integral equation with a scalar-valued nonlinear term. We demonstrate some crucial properties of certain coupled equations and exhibit numerical case studies. Moreover, we establish an asymptotic Little’s law, which reveals the asymptotic relationship between the queue length and its virtual waiting time. Motivated by the cost structure of blood bank drives, we formulate an infinite-horizon discounted cost functional and show that the expected value of this cost functional for the nth system converges to that of the heavy traffic limiting process as n tends to infinity.
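
To make the matching dynamics concrete, the following is a minimal simulation sketch of the Markovian special case of the pre-limit model (Poisson arrivals, exponential patience, instantaneous matching); all parameter values, function names, and the choice of K are illustrative assumptions rather than part of the article.

    import numpy as np

    def simulate_matching_queue(lam, delta, T, rng):
        """CTMC sketch of a K-category matching queue with abandonment.

        lam[i]:   Poisson arrival rate of category i
        delta[i]: abandonment (patience) rate per waiting item of category i
        A match removes one item from every queue as soon as all queues are non-empty.
        Returns the terminal queue lengths and the number of completed matches.
        """
        K = len(lam)
        Q = np.zeros(K, dtype=int)
        t, matches = 0.0, 0
        while True:
            rates = np.concatenate([lam, delta * Q])   # arrivals first, then abandonments
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            if t >= T:
                break
            e = rng.choice(2 * K, p=rates / total)
            if e < K:                                  # arrival of category e
                Q[e] += 1
                if Q.min() >= 1:                       # instantaneous match
                    Q -= 1
                    matches += 1
            else:                                      # abandonment from queue e - K
                Q[e - K] -= 1
        return Q, matches

    # Example: K = 3 categories with nearly balanced rates, scaled by n = 100
    n = 100
    lam = n * np.array([1.0, 1.02, 0.98])
    delta = np.array([0.5, 0.8, 1.0])
    print(simulate_matching_queue(lam, delta, T=10.0, rng=np.random.default_rng(0)))

With instantaneous matching, the minimum queue length is zero after every event; the diffusion-scaled queue lengths studied in the paper are obtained by dividing such sample paths by the square root of n.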


References

  1. Kashyap, B.R.: The double-ended queue with bulk service and limited waiting space. Oper. Res. 14(5), 822–834 (1966)

  2. Kaspi, H., Perry, D.: Inventory systems of perishable commodities. Adv. Appl. Probab. 15(3), 674–685 (1983)

  3. Perry, D., Stadje, W.: Perishable inventory systems with impatient demands. Math. Methods Oper. Res. 50(1), 77–90 (1999)

  4. Xie, B., Gao, Y.: On the long-run average cost minimization problem of the stochastic production-inventory model (2023). (preprint)

  5. Lee, C., Liu, X., Liu, Y., Zhang, L.: Optimal control of a time-varying double-ended production queueing model. Stoch. Syst. 11, 140–173 (2021)

  6. Bar-Lev, S.K., Boxma, O., Mathijsen, B., Perry, D.: A blood bank model with perishable blood and demand impatience. Stoch. Syst. 7(2), 237–263 (2017)

  7. Boxma, O.J., David, I., Perry, D., Stadje, W.: A new look at organ transplantation models and double matching queues. Probab. Eng. Inf. Sci. 25(2), 135–155 (2011)

  8. Khademi, A., Liu, X.: Asymptotically optimal allocation policies for transplant queueing systems. SIAM J. Appl. Math. 81(3), 1116–1140 (2021)

  9. Özkan, E., Ward, A.R.: Dynamic matching for real-time ride sharing. Stoch. Syst. 10(1), 29–70 (2020)

  10. Reed, J., Ward, A.R.: Approximating the GI/GI/1+GI queue with a nonlinear drift diffusion: hazard rate scaling in heavy traffic. Math. Oper. Res. 33(3), 606–644 (2008)

  11. Koçağa, Y.L., Ward, A.R.: Admission control for a multi-server queue with abandonment. Queueing Syst. 65, 275–323 (2010)

  12. Weerasinghe, A.: Diffusion approximations for G/M/n+GI queues with state-dependent service rates. Math. Oper. Res. 39(1), 207–228 (2014)

  13. Liu, X.: Diffusion approximations for double-ended queues with reneging in heavy traffic. Queueing Syst. 91(1), 49–87 (2019)

  14. Liu, X., Weerasinghe, A.: Admission control for double-ended queues. arXiv:2101.06893 (2021)

  15. Mairesse, J., Moyal, P.: Editorial Introduction to the Special Issue on Stochastic Matching Models, Matching Queues and Applications. Springer, Berlin (2020)

  16. Conolly, B., Parthasarathy, P., Selvaraju, N.: Double-ended queues with impatience. Comput. Oper. Res. 29(14), 2053–2072 (2002)

  17. Liu, X., Gong, Q., Kulkarni, V.G.: Diffusion models for double-ended queues with renewal arrival processes. Stoch. Syst. 5(1), 1–61 (2015)

  18. Castro, F., Nazerzadeh, H., Yan, C.: Matching queues with reneging: a product form solution. Queueing Syst. 96(3–4), 359–385 (2020)

  19. Weiss, G.: Directed FCFS infinite bipartite matching. Queueing Syst. 96(3–4), 387–418 (2020)

  20. Kohlenberg, A., Gurvich, I.: The cost of impatience in dynamic matching: Scaling laws and operating regimes. Available at SSRN 4453900 (2023)

  21. Xie, B., Wu, R.: Controlling of multi-component matching queues with buffers (2023). (Working paper)

  22. Harrison, J.M.: Assembly-like queues. J. Appl. Probab. 10(2), 354–367 (1973)

  23. Plambeck, E.L., Ward, A.R.: Optimal control of a high-volume assemble-to-order system. Math. Oper. Res. 31(3), 453–477 (2006)

  24. Gurvich, I., Ward, A.: On the dynamic control of matching queues. Stoch. Syst. 4(2), 479–523 (2015)

  25. Rahme, Y., Moyal, P.: A stochastic matching model on hypergraphs. Adv. Appl. Probab. 53(4), 951–980 (2021)

  26. Büke, B., Chen, H.: Stabilizing policies for probabilistic matching systems. Queueing Syst. 80, 35–69 (2015)

  27. Mairesse, J., Moyal, P.: Stability of the stochastic matching model. J. Appl. Probab. 53(4), 1064–1077 (2016)

  28. Nazari, M., Stolyar, A.L.: Reward maximization in general dynamic matching systems. Queueing Syst. 91, 143–170 (2019)

  29. Jonckheere, M., Moyal, P., Ramírez, C., Soprano-Loto, N.: Generalized max-weight policies in stochastic matching. Stoch. Syst. 13(1), 40–58 (2023)

  30. Green, L.: A queueing system with general-use and limited-use servers. Oper. Res. 33(1), 168–182 (1985)

  31. Adan, I., Foley, R.D., McDonald, D.R.: Exact asymptotics for the stationary distribution of a Markov chain: a production model. Queueing Syst. 62(4), 311–344 (2009)

  32. Adan, I., Bušić, A., Mairesse, J., Weiss, G.: Reversibility and further properties of FCFS infinite bipartite matching. Math. Oper. Res. 43(2), 598–621 (2018)

  33. Fazel-Zarandi, M.M., Kaplan, E.H.: Approximating the first-come, first-served stochastic matching model with ohm’s law. Oper. Res. 66(5), 1423–1432 (2018)

  34. Brémaud, P.: Point Processes and Queues: Martingale Dynamics. Springer Series in Statistics, vol. 50. Springer, New York (1981)

  35. Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley, New York (2009)

  36. Pang, G., Talreja, R., Whitt, W.: Martingale proofs of many-server heavy-traffic limits for Markovian queues. Probab. Surv. 4, 193–267 (2007)

  37. Mandelbaum, A., Momčilović, P.: Queues with many servers and impatient customers. Math. Oper. Res. 37(1), 41–65 (2012)

  38. Atar, R., Mandelbaum, A., Reiman, M.I., et al.: Scheduling a multi class queue with many exponential servers: asymptotic optimality in heavy traffic. Ann. Appl. Probab. 14(3), 1084–1134 (2004)

  39. Krichagina, E.V., Taksar, M.I.: Diffusion approximation for GI/G/1 controlled queues. Queueing Syst. 12(3), 333–367 (1992)

  40. Xie, B.: Topics of queueing theory in heavy traffic. Ph.D. Thesis, Iowa State University (2022)

  41. Protter, P.E.: General stochastic integration and local times. In: Stochastic Integration and Differential Equations, pp. 153–236. Springer, New York (2005)

  42. Whitt, W.: Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and their Application to Queues. Springer, New York (2002)

  43. Gans, N., Koole, G., Mandelbaum, A.: Telephone call centers: tutorial, review, and research prospects. Manuf. Serv. Oper. Manag. 5(2), 79–141 (2003)

  44. Chung, K.L., Williams, R.J.: Introduction to Stochastic Integration, vol. 2. Springer, New York (1990)

  45. Dai, J., He, S.: Customer abandonment in many-server queues. Math. Oper. Res. 35(2), 347–362 (2010)

Acknowledgements

The author would like to acknowledge his advisor, Ananda Weerasinghe, for his guidance, patience, enthusiasm, and inspiration throughout the research and the writing of the paper. The author also would like to acknowledge Xin Liu and Ruoyu Wu for their suggestions on the content and structure of this paper.

Author information

Corresponding author

Correspondence to Bowen Xie.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Proofs

1.1 Appendix A.1: Proof of Lemma 3

Proof of Lemma 3

Observe that \(I_i^n(t)\) is a stochastic process with continuous non-decreasing non-negative sample paths. We also know that \(\{I_i^n(t) < x\}\in \bar{{\mathcal {F}}}_x^n\) for all \(x\ge 0\) and \(t\ge 0\): to determine \(I_i^n(t)\), we need all the information of \(Q_i^n(s)\) for \(0\le s\le t\), which in turn depends on \(I_i^n(s)\) for \(0\le s\le t\) by (10); thus, to decide whether \(I_i^n(t)<x\), it suffices to observe \(\{N_i(u): 0\le u\le x\}\). We conclude that \(I_i^n(t)\) is an \(\bar{{\mathcal {F}}}_x^n\)-stopping time for each \(t\ge 0\). By (10) and the non-negativity of \(G_i^n\) in (4) and \(R^n\) in (9), we have the crude inequality \(Q_i^n(t) = Q_i^n(0) + A_i^n(t) - G_i^n(t) - R^n(t) \le Q_i^n(0) + A_i^n(t)\). Using this inequality and the fact that \(Q_i^n(0)\) is deterministic, we further have

$$\begin{aligned} \begin{aligned} E\left[ \delta _i^n\int _0^tQ_i^n(s)\textrm{d}s\right]&\le t\delta _i^n\left( Q_i^n(0) + E\left[ A_i^n(t)\right] \right) =t\delta _i^n\left( Q_i^n(0) + \lambda _i^n t\right) <\infty , \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} E\left[ N_i\left( \delta _i^n\int _0^t Q_i^n(s)\textrm{d}s\right) \right]&\le E\left[ N_i\left( t\delta _i^n(Q_i^n(0) + A_i^n(t))\right) \right] \\ {}&=t\delta _i^n(Q_i^n(0) + \lambda _i^n t)<\infty . \end{aligned} \end{aligned}$$

Since all the conditions of Lemma 3.2 in [36] are fulfilled, we conclude that

$$\begin{aligned} N_i\left( \delta _i^n\int _0^t Q_i^n(s)\textrm{d}s\right) - \delta _i^n\int _0^tQ_i^n(s)\textrm{d}s, \end{aligned}$$

is a square-integrable martingale with respect to \((\bar{{\mathcal {F}}}_{I_i^n}^n)\). Consequently, \({\hat{M}}_i^n\) is a square-integrable \({\mathcal {F}}_t^n\)-martingale with the quadratic variation process in (25), since the increments of the arrival process, \(A_i^n(t+s) - A_i^n(t)\) for \(s\ge 0\), are independent of \(Q_i^n(u)\) for \(0\le u\le t\). \(\square \)
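
The compensator structure behind \({\hat{M}}_i^n\) can be illustrated in a stripped-down special case: a single queue with Poisson arrivals and exponential abandonment (no matching), where the abandonment count minus \(\delta \int _0^t Q(s)\textrm{d}s\) should have mean zero at any fixed time. The following Monte Carlo sketch checks this; the rates, horizon, and sample sizes are illustrative assumptions, not quantities from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    lam, delta, T, n_paths = 20.0, 0.7, 5.0, 2000
    diffs = []
    for _ in range(n_paths):
        t, Q, G, integ = 0.0, 0, 0, 0.0        # time, queue length, abandonment count, int_0^t Q ds
        while True:
            rate = lam + delta * Q
            dt = rng.exponential(1.0 / rate)
            if t + dt > T:
                integ += Q * (T - t)
                break
            integ += Q * dt
            t += dt
            if rng.random() < lam / rate:
                Q += 1                          # arrival
            else:
                Q -= 1                          # abandonment
                G += 1
        diffs.append(G - delta * integ)         # compensated abandonment count at time T
    print(np.mean(diffs), np.std(diffs) / np.sqrt(n_paths))   # sample mean should be near 0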

1.2 Appendix A.2: Proof of Proposition 4

Proof of Proposition 4

Since \({\hat{M}}_i^n\) is a martingale, by Burkholder's inequality (see Theorem 45 in Protter [41]) and (25), we have

$$\begin{aligned} E\left[ \Vert {\hat{M}}_i^n\Vert _T^2\right] \le \tilde{C} E\left[ [{\hat{M}}_i^n, {\hat{M}}_i^n](T)\right] = \tilde{C} E\left[ \delta _i^n\int _0^T {\bar{Q}}_i^n(s)\textrm{d}s\right] , \end{aligned}$$

where \(\tilde{C}\) is some positive constant. Since \(A_i^n\) is a Poisson arrival process and \(\lambda _i^n/n\rightarrow \lambda _0\) by (7), we further have

$$\begin{aligned} \begin{aligned} E\left[ \delta _i^n\int _0^T {\bar{Q}}_i^n(s)\textrm{d}s\right] \le T\delta _i^n \left( {\bar{Q}}_i^n(0) + \frac{1}{\sqrt{n}}E\left[ \Vert {\hat{A}}_i^n\Vert _T\right] + \frac{\lambda _i^n}{n}T\right) \le C_1(1+T^l), \end{aligned} \end{aligned}$$

where \(C_1\) and \(l\ge 2\) are constants independent of T and n. This proves (26). Consequently, by Chebyshev's inequality, we have

$$\begin{aligned} \lim _{a\rightarrow \infty }\limsup _{n\rightarrow \infty } P\left[ \Vert {\hat{M}}_i^n\Vert _T^2>a\right] = 0. \end{aligned}$$

This completes the proof. \(\square \)

1.3 Appendix A.3: Proof of Proposition 5

Proof of Proposition 5

In view of the martingale representation (21), we set

$$\begin{aligned} B_T^n = \sum _{k=1}^K \left( {\hat{Q}}_k^n(0) + \Vert {\hat{A}}_k^n\Vert _T + \left|\frac{\lambda _k^n - \lambda _0 n}{\sqrt{n}}\right|T + \Vert {\hat{M}}_k^n\Vert _T\right) . \end{aligned}$$
(51)

By (7), (15), and (26), \(B_T^n:= B_T^n(\omega )\) is a square-integrable random variable with the second moment bound \(E\left[ (B_T^n)^2\right] \le C_2(1+T^b)\), where \(C_2\) and \(b\ge 2\) are constants independent of T and n. Next, we derive a moment bound for \(\varvec{{\hat{Q}}^n}\) as follows:

$$\begin{aligned} \begin{aligned} \sum _{k=1}^K \vert {{\hat{Q}}_k^n(t)} \vert \le B_T^n + c_0 \int _0^t \sum _{k=1}^K \vert {{\hat{Q}}_k^n(s)} \vert \textrm{d}s + K\vert {{\hat{R}}^n(t)} \vert , \end{aligned} \end{aligned}$$

where we use that \(\sup _{1\le k\le K} (\delta _k^n)\le c_0\) for some constant \(c_0>0\) and all \(n>0\), which holds since \(\lim _{n\rightarrow \infty }\delta _i^n = \delta _i\).

Now it suffices to consider the last term on the right-hand side. Notice that (22) suggests that for any \(k\in \{1, \ldots , K\}\),

$$\begin{aligned} \begin{aligned} {\hat{R}}^n(t)&\le \left|{\hat{Q}}_k^n(0) + {\hat{A}}_k^n(t) + \frac{\lambda _k^n - \lambda _0 n}{\sqrt{n}}t - {\hat{M}}_k^n(t) - \delta _k^n \int _0^t {\hat{Q}}_k^n(s)\textrm{d}s\right|\\&\le B_T^n + \sum _{k=1}^K \delta _k^n \int _0^t \vert {{\hat{Q}}_k^n(s)} \vert \textrm{d}s. \end{aligned} \end{aligned}$$

The first inequality holds for all \(k\in \{1, \ldots , K\}\) since the scalar-valued process \({\hat{R}}^n\) is defined as the minimum in (22). Moreover, (22) yields the same upper bound for \(-{\hat{R}}^n(t)\).

These imply an upper bound on \(\vert {{\hat{R}}^n(t)} \vert \), namely

$$\begin{aligned} \vert {{\hat{R}}^n(t)} \vert \le B_T^n + \sum _{k=1}^K \delta _k^n\int _0^t \vert {{\hat{Q}}_k^n(s)} \vert \textrm{d}s. \end{aligned}$$
(52)

Combining this with the previous inequality for \(\sum _{k=1}^K \vert {{\hat{Q}}_k^n(t)} \vert \), we further have

$$\begin{aligned} \begin{aligned} \sum _{k=1}^K \vert {{\hat{Q}}_k^n(t)} \vert \le (1+K)B_T^n + c_0(1+K) \int _0^t \sum _{k=1}^K \vert {{\hat{Q}}_k^n(s)} \vert \textrm{d}s. \end{aligned} \end{aligned}$$

We apply Gronwall's inequality to the function \(t\mapsto \sum _{k=1}^K \vert {{\hat{Q}}_k^n(t)} \vert \) to obtain

$$\begin{aligned} \sum _{k=1}^K\vert {{\hat{Q}}_k^n(t)} \vert \le (1+K)B_T^n \exp {\left( c_0(1+K) T \right) }, \end{aligned}$$
(53)

which further yields

$$\begin{aligned} \Vert \varvec{{\hat{Q}}^n}(t)\Vert = \left( \sum _{k=1}^K \vert {{\hat{Q}}_k^n(t)} \vert ^2\right) ^{\frac{1}{2}} \le \sum _{k=1}^K \vert {{\hat{Q}}_k^n(t)} \vert \le (1+K)B_T^n \exp {\left( c_0(1+K) T \right) }.\nonumber \\ \end{aligned}$$
(54)

Consequently, we have the moment-bound result:

$$\begin{aligned} E\left[ \Vert \varvec{{\hat{Q}}^n}\Vert _T^2\right] \le (1+K)^2 C_2(1+T^b) \exp {\left( 2c_0(1+K) T \right) }, \end{aligned}$$
(55)

which further implies (27). The stochastic boundedness follows by employing Chebyshev’s inequality. This completes the proof. \(\square \)

1.4 Appendix A.4: Proof of Theorem 6

Proof of Theorem 6

For brevity, we consider the case \(h(\varvec{x}(t)) = (\delta _1 x_1(t), \delta _2 x_2(t), \ldots , \delta _K x_K(t))^\intercal \) for \(t\ge 0\), which is a special case of the integral term in the heavy traffic limit obtained in (16). For fixed \(\varvec{y}(t)\in D^K[0, T]\), we define a functional \(M: D^K[0, T]\rightarrow D^K[0, T]\) by

$$\begin{aligned} M(\varvec{x}(t)) = \varvec{y}(t) - \int _0^t h(\varvec{x}(s))\textrm{d}s - R(t)\varvec{I}, \end{aligned}$$
(56)

where \(R(\cdot )\) is defined in (29). To demonstrate the existence of a unique solution, it suffices to show that M is a contraction mapping on \( D^K[0, T]\) equipped with the uniform topology. Consider two candidate solutions of the integral representation (28), namely \(\varvec{x}^{(1)}(t)\) and \(\varvec{x}^{(2)}(t)\) for \(t\ge 0\). Accordingly, we have

$$\begin{aligned} R^{(k)}(t) = \Psi (\varvec{x}^{(k)}, \varvec{y})(t)= \min _{1\le j \le K} \left\{ y_j(t) - \int _0^t \delta _j x^{(k)}_j(s)\textrm{d}s\right\} , \end{aligned}$$
(57)

for \(k = 1, 2\). The functional M defined in (56) suggests

$$\begin{aligned} \begin{aligned} \Vert M(\varvec{x}^{(1)}(t)) - M(\varvec{x}^{(2)}(t)) \Vert&\le \sum _{k=1}^K \delta _k \int _0^t \left|x_k^{(1)}(s) - x_k^{(2)}(s)\right|\textrm{d}s \\&+ K\left|R^{(1)}(t) - R^{(2)}(t) \right|. \end{aligned} \end{aligned}$$

The complication comes from the second term \(\vert {R^{(1)}(t) - R^{(2)}(t)} \vert \). To find an upper bound, let \(l\in \{1, \ldots , K\}\) (depending on t) be an index at which the minimum in \(R^{(2)}(t)\) is attained. By (57), we have, for \(t\in [0, T]\),

$$\begin{aligned} \begin{aligned} R^{(1)}(t) - R^{(2)}(t)&\le y_l(t) - \int _0^t \delta _l x^{(1)}_l(s)\textrm{d}s - \left( y_l(t) - \int _0^t \delta _l x^{(2)}_l(s)\textrm{d}s\right) \\&\le \left|\int _0^t \delta _l\left( x^{(1)}_l(s) - x^{(2)}_l(s)\right) \textrm{d}s\right|\\&\le \sup _{1\le j\le K} (\delta _j) \cdot \int _0^t \sum _{j=1}^K \left|x^{(1)}_j(s) - x^{(2)}_j(s)\right|\textrm{d}s. \end{aligned} \end{aligned}$$

Similarly, we can obtain an identical upper bound for \(R^{(2)}(t) - R^{(1)}(t)\). Thus,

$$\begin{aligned} \left|R^{(1)}(t) - R^{(2)}(t)\right|\le \sup _{1\le k\le K} (\delta _k) \cdot \int _0^t \sum _{k=1}^K \left|x^{(1)}_k(s) - x^{(2)}_k(s)\right|\textrm{d}s. \end{aligned}$$
(58)

Therefore, we have

$$\begin{aligned} \begin{aligned} \Vert M(\varvec{x}^{(1)}(t))- M(\varvec{x}^{(2)}(t))\Vert&\le (1+K) \sup _{1\le k \le K} (\delta _k) \int _0^t\sum _{k=1}^K \left|x^{(1)}_k(s) - x^{(2)}_k(s)\right|\textrm{d}s\\&\le \epsilon (T) \Vert x^{(1)}(t) - x^{(2)}(t)\Vert _T, \end{aligned} \end{aligned}$$

where \(\epsilon (T) = (1+K)\sqrt{K}\left( \sup _{1\le k \le K} (\delta _k)\right) T\). This yields

$$\begin{aligned} \Vert M(\varvec{x}^{(1)}(t))- M(\varvec{x}^{(2)}(t))\Vert _T \le \epsilon (T) \Vert \varvec{x}^{(1)}(t) - \varvec{x}^{(2)}(t)\Vert _T. \end{aligned}$$

One may pick \(T_1>0\) such that \(\epsilon (T_1)<1\); then the functional M is a contraction mapping for \(t\in [0, T_1]\) on \( D^K[0, T_1]\) with the uniform topology, which leads to the existence of a unique solution to (28) on \([0, T_1]\) by the Banach fixed-point theorem. Partitioning the time interval [0, T] into subintervals of length \(T_1\) and applying the above argument on each subinterval, we obtain a unique solution for all \(t\in [0, T]\). This guarantees a unique solution \(\varvec{x}\in D^K[0, T]\) to the fixed point problem \(M(\varvec{x}) = \varvec{x}\).

The continuity of f can be deduced by considering \(\Vert f(\varvec{y}(t_n)) - f(\varvec{y}(t))\Vert \). Analogously to the previous discussion, we arrive at the following inequality:

$$\begin{aligned} \begin{aligned} \Vert \varvec{x}(t_n) - \varvec{x}(t)\Vert&= \Vert f(\varvec{y}(t_n)) - f(\varvec{y}(t))\Vert \\&\le (1+K)\sqrt{K} \Vert \varvec{y}(t_n) - \varvec{y}(t)\Vert + (1+K)\sqrt{K} \sup _{1\le k\le K} (\delta _k)\\&\int _t^{t_n} \Vert \varvec{x}(s)\Vert \textrm{d}s. \end{aligned} \end{aligned}$$

Given the boundedness of \(\varvec{x}(\cdot )\), it follows that \(\varvec{x}\) is continuous whenever \(\varvec{y}\) is continuous. \(\square \)
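
In practice, the fixed point \(\varvec{x} = M(\varvec{x})\) can be approximated by Picard iteration on a time grid. The sketch below (not from the paper) does this for the special case \(h(\varvec{x}) = (\delta _1 x_1, \ldots , \delta _K x_K)^\intercal \) treated in the proof, with left-endpoint Riemann sums for the integrals; the grid size, sample input \(\varvec{y}\), and parameter values are illustrative assumptions.

    import numpy as np

    def solve_coupled_equation(y, delta, dt, n_iter=200, tol=1e-10):
        """Picard iteration for the discretized fixed-point problem
            x_i(t) = y_i(t) - delta_i * int_0^t x_i(s) ds - R(t),
            R(t)   = min_j { y_j(t) - delta_j * int_0^t x_j(s) ds },
        where y has shape (K, N) and y[:, k] approximates y(k * dt)."""
        K, N = y.shape
        x = np.zeros_like(y)
        for _ in range(n_iter):
            # cumulative integrals int_0^{t_k} delta_i x_i(s) ds (left Riemann sums)
            integ = np.concatenate(
                [np.zeros((K, 1)), np.cumsum(delta[:, None] * x[:, :-1] * dt, axis=1)],
                axis=1)
            R = (y - integ).min(axis=0)
            x_new = y - integ - R[None, :]
            if np.max(np.abs(x_new - x)) < tol:
                return x_new, R
            x = x_new
        return x, R

    # Example: K = 2 with a drifted random-walk input playing the role of y
    rng = np.random.default_rng(2)
    dt, N = 0.01, 500
    t = np.arange(N) * dt
    y = np.vstack([0.2 * t, -0.1 * t]) + np.cumsum(
        rng.normal(scale=np.sqrt(dt), size=(2, N)), axis=1)
    delta = np.array([0.5, 1.0])
    x, R = solve_coupled_equation(y, delta, dt)
    print(x[:, -1], R[-1])   # note that min_i x_i(t) = 0 at every grid point, by construction

Even when \(\epsilon (T)>1\) on the whole horizon, the iteration converges, since the mth Picard increment is bounded by a term of order \((CT)^m/m!\) for a suitable constant C; this mirrors the subinterval argument above.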

1.5 Appendix A.5: Proof of Corollary 7

Proof of Corollary 7

The proof of this extension relies on verifying the uniform integrability of a proper integrand. By (6), (7), and (14), we have \(\varvec{\xi ^n}\Rightarrow \varvec{\xi }\) in \( D^K[0, T]\) as \(n\rightarrow \infty \). By Skorokhod's representation theorem, we may assume that \(\varvec{\xi ^n}\) converges to \(\varvec{\xi }\) a.s. on some common probability space. For the given \(\varvec{\xi ^n}\) and \(\varvec{\xi }\), Theorem 6 provides \(\varvec{{\hat{Q}}^n}\) and \(\varvec{X}\) associated with the respective input processes \(\varvec{\xi ^n}\) and \(\varvec{\xi }\), each solving (28). Therefore, we have

$$\begin{aligned}{} & {} \sum _{j=1}^K \vert {{\hat{Q}}_j^n(t) - X_j(t)} \vert \le \sum _{j=1}^K \vert {\xi _j^n(t) - \xi _j(t)} \vert + \int _0^t \sum _{j=1}^K \nonumber \\{} & {} \quad \vert {\delta _j^n{\hat{Q}}_j^n(s) - \delta _jX_j(s)} \vert \textrm{d}s + K\vert {{\hat{R}}^n(t) - R(t)} \vert . \end{aligned}$$
(59)

To find an upper bound, the difficulty again comes from the last term \(\vert {{\hat{R}}^n(t) - R(t)} \vert \), so it suffices to bound \(\vert {{\hat{R}}^n(\cdot ) - R(\cdot )} \vert \). We consider the two differences without absolute values separately. Assume that there exist indices \(l_1\) and \(l_2\), depending on t, such that the minimum in \({\hat{R}}^n(t)\) is attained at \(l_1\) and the minimum in R(t) is attained at \(l_2\). Hence,

$$\begin{aligned} {\hat{R}}^n(t) - R(t){} & {} = \min _{1\le k\le K}\left\{ \xi _k^n(t) - \int _0^t \delta _k^n{\hat{Q}}_k^n(s)ds\right\} - \min _{1\le k\le K} \left\{ \xi _k(t) - \int _0^t \delta _k X_k(s)\textrm{d}s\right\} \\{} & {} \le \xi _{l_2}^n(t) - \int _0^t \delta _{l_2}^n{\hat{Q}}_{l_2}^n(s)\textrm{d}s - \left( \xi _{l_2}(t) - \int _0^t \delta _{l_2} X_{l_2}(s)\textrm{d}s\right) \\{} & {} \le \vert {\xi _{l_2}^n(t) - \xi _{l_2}(t)} \vert + \int _0^t \vert {\delta _{l_2}^n{\hat{Q}}_{l_2}^n(s) - \delta _{l_2} X_{l_2}(s)} \vert \textrm{d}s \\{} & {} \le \sum _{j=1}^K \vert {\xi _{j}^n(t) - \xi _{j}(t)} \vert + \int _0^t \sum _{j=1}^K\vert {\delta _{j}^n{\hat{Q}}_{j}^n(s) - \delta _{j} X_{j}(s)} \vert \textrm{d}s. \end{aligned}$$

Notice that the first inequality holds since \({\hat{R}}^n(t) \le \xi _k^n(t) - \int _0^t \delta _k^n{\hat{Q}}_k^n(s)\textrm{d}s\) for any \(k\in \{1, \ldots , K\}\) and \(t\ge 0\). Similarly, we can obtain an upper bound for \(R(t) - {\hat{R}}^n(t)\). Consequently, we have the following upper bound:

$$\begin{aligned} \vert {{\hat{R}}^n(t) - R(t)} \vert \le \sum _{j=1}^K \vert {\xi _{j}^n(t) - \xi _{j}(t)} \vert + \int _0^t \sum _{j=1}^K\vert {\delta _{j}^n{\hat{Q}}_{j}^n(s) - \delta _{j} X_{j}(s)} \vert \textrm{d}s. \end{aligned}$$
(60)

This fact and (59) suggest that

$$\begin{aligned}{} & {} \Vert \varvec{{\hat{Q}}^n}(t) - \varvec{X}(t)\Vert \\{} & {} \quad \le (1+K)\left( \sum _{j=1}^K \vert {\xi _{j}^n(t) - \xi _{j}(t)} \vert + \int _0^t \sum _{j=1}^K\vert {\delta _{j}^n{\hat{Q}}_{j}^n(s) - \delta _{j} X_{j}(s)} \vert \textrm{d}s\right) \\{} & {} \quad \le (1+K)\sqrt{K} \left[ \left( \sum _{j=1}^K \vert {\xi _{j}^n(t) - \xi _{j}(t)} \vert ^2\right) ^{\frac{1}{2}} + \int _0^t \left( \sum _{j=1}^K\vert {\delta _{j}^n{\hat{Q}}_{j}^n(s) - \delta _{j} X_{j}(s)} \vert ^2\right) ^{\frac{1}{2}}\textrm{d}s\right] \\{} & {} \quad \le (1+K)\sqrt{K} \left( \Vert \varvec{\xi ^n}(t) - \varvec{\xi }(t)\Vert + C_0 \int _0^t \Vert \varvec{{\hat{Q}}^n}(s) - \varvec{X}(s)\Vert \textrm{d}s\right) , \end{aligned}$$

where we assume \(\left( \sup _{1\le k\le K} \delta _k^n\right) \vee \left( \sup _{1\le k\le K} \delta _k\right) \le C_0\) for some positive constant \(C_0\). Using Gronwall's inequality, we obtain

$$\begin{aligned} \Vert \varvec{{\hat{Q}}^n}(t) - \varvec{X}(t)\Vert \le (1+K)\sqrt{K} \Vert \varvec{\xi ^n}- \varvec{\xi }\Vert _T e^{(1+K)\sqrt{K}C_0 t}. \end{aligned}$$
(61)

Now, if the right-hand side of (61) converges, the convergence of the left-hand side follows immediately. Notice that we have assumed almost sure convergence of \(\varvec{\xi ^n}\), which further yields \(\Vert \varvec{\xi ^n} - \varvec{\xi }\Vert _T \rightarrow 0\) in probability as \(n\rightarrow \infty \). We next show that the convergence also holds in \(L^p\) for \(1\le p<2l\), where \(l\ge 1\) is an arbitrary constant; that is, the convergence holds for any \(p\ge 1\). We keep the constant l for generality, since higher moment bounds for the relevant processes are needed in the proof. By Vitali's convergence theorem, uniform integrability of the pth power of the integrand, in conjunction with convergence in probability, yields the convergence in \(L^p\).

We are left to show the uniform integrability. It is trivial that

$$\begin{aligned} E\left[ \Vert \varvec{\xi ^n} - \varvec{\xi }\Vert _T^{2l}\right] \le c E\left[ \Vert \varvec{\xi ^n}\Vert _T^{2l} + \Vert \varvec{\xi }\Vert _T^{2l}\right] , \end{aligned}$$
(62)

where \(c>0\) is a generic constant. We bound the two terms separately. Since the moment bound for the second term follows from that for the first term by Fatou's lemma, it suffices to consider \(E\left[ \Vert \varvec{\xi ^n}\Vert _T^{2l}\right] \). By (7) and (15), we have

$$\begin{aligned} \begin{aligned} E\left[ \Vert \varvec{\xi ^n}\Vert _T^{2l}\right] \le c \left( 1 + T^{2l} + E\left[ \Vert \varvec{{\hat{A}}^n}\Vert _T^{2l}\right] + E\left[ \Vert \varvec{{\hat{M}}^n}\Vert _T^{2l}\right] \right) , \end{aligned} \end{aligned}$$

where \(c>0\) is a generic constant depending on K. Let \(e=\{e(t):= t, t\ge 0\}\) be the identity map. Since the arrival processes \(\{A_j^n\}_{1\le j\le K}\) are independent Poisson processes by Assumption 2, and \(A_j^n - \lambda _j^n e\) is an \(({\mathcal {F}}_t^n)_{t\ge 0}\)-adapted martingale for each \(j\in \{1, \ldots , K\}\), Burkholder's inequality (see [36]) yields

$$\begin{aligned} \begin{aligned} E\left[ \Vert {\hat{A}}_j^n\Vert _T^{2l}\right]&= \frac{1}{n^{l}}E\left[ \Vert A_j^n - \lambda _j^n e\Vert _T^{2l}\right] \le \frac{1}{n^l} E\left[ \left( [A_j^n - \lambda _j^n e, A_j^n - \lambda _j^n e](T)\right) ^l\right] . \end{aligned} \end{aligned}$$

The quadratic variation of the compensated Poisson process satisfies \([A_j^n - \lambda _j^n e, A_j^n - \lambda _j^n e](T) = A_j^n(T)\), and \(E\left[ (A_j^n(T))^l\right] \le c(\lambda _j^n T)^l\). As a consequence,

$$\begin{aligned} \sup _{n\ge 1}E\left[ \Vert {\hat{A}}_j^n\Vert _T^{2l}\right] \le c T^l, \end{aligned}$$
(63)

where \(c>0\) is a generic constant independent of T and n. Similarly, since \({\hat{M}}_j^n\) is also an \(({\mathcal {F}}_t^n)_{t\ge 0}\)-martingale for each \(j\in \{1, \ldots , K\}\), analogously to the proof of Proposition 4, Burkholder's inequality yields

$$\begin{aligned} E\left[ \Vert {\hat{M}}_j^n\Vert _T^{2l}\right] \le c E\left[ \left( [{\hat{M}}_j^n, {\hat{M}}_j^n](T)\right) ^l\right] , \end{aligned}$$

where \(c>0\) is a generic constant. Hence, since \(Q_j^n(0)\) is deterministic, using (25) and the crude inequality \(Q_j^n(s)\le Q_j^n(0) + A_j^n(s)\), we obtain

$$\begin{aligned} \begin{aligned} E\left[ \Vert {\hat{M}}_j^n\Vert _T^{2l}\right]&\le \frac{c}{n^l} E\left[ \left( N_j\left( \delta _j^n\int _0^T Q_j^n(s)ds\right) \right) ^l\right] \\&\le \frac{c}{n^l} E\left[ \left( N_j\left( \delta _j^n T (Q_j^n(0) + A_j^n(T))\right) \right) ^l\right] \\&= \frac{c}{n^l} E\left[ E\left[ \left( N_j(\delta _j^n T (Q_j^n(0) + A_j^n(T)))\right) ^l \vert Q_j^n(0) + A_j^n(T)\right] \right] \\&\le \frac{c}{n^l} T^l \left( (Q_j^n(0))^l + E\left[ (A_j^n(T))^l\right] \right) \\&\le c T^l \left( 1 + T^l\right) , \end{aligned} \end{aligned}$$

where \(c>0\) is a generic constant. Therefore, we obtain the (2l)th moment bound condition

$$\begin{aligned} \sup _{n\ge 1} E\left[ \Vert \varvec{\xi ^n}\Vert _T^{2l}\right] \le c(1+T^d), \end{aligned}$$
(64)

where \(c>0\) and \(d\ge 2l\ge 2\) are constants independent of T and n. Hence, using (62), we have

$$\begin{aligned} \sup _{n\ge 1} E\left[ \Vert \varvec{\xi ^n} - \varvec{\xi }\Vert _T^{2l}\right] \le c(1+T^d), \end{aligned}$$
(65)

where \(c>0\) and \(d\ge 2l \ge 2\) are constants independent of T and n. This implies the uniform integrability of \(\Vert \varvec{\xi ^n} - \varvec{\xi }\Vert _T^p\) for \(1\le p < 2l\). As a consequence, \(E\left[ \Vert \varvec{\xi ^n} - \varvec{\xi }\Vert _T^p\right] \rightarrow 0\) as \(n\rightarrow \infty \) on the chosen probability space. Using (61), we further obtain \(E\left[ \Vert \varvec{{\hat{Q}}^n} - \varvec{X}\Vert _T^p\right] \rightarrow 0\). This completes the proof. \(\square \)

1.6 Appendix A.6: Proof of Proposition 8

Proof of Proposition 8

To avoid redundant algebraic manipulations and for brevity, we demonstrate the component indexed by \(i=1\) in the case \(K=4\); the other components can be obtained in the same fashion.

To show the coupled process is a semimartingale, it suffices to prove that each component admits a semimartingale decomposition. The limiting processes in (18) can be rewritten as

$$\begin{aligned} X_i(t) = \xi _i(t) - \min \{\xi _1(t), \xi _2(t), \xi _3(t), \xi _4(t)\}, \end{aligned}$$
(66)

for \(i=1, 2, 3, 4\) and \(t\ge 0\), where

$$\begin{aligned} \xi _i(t) = X_i(0) + \beta _i t + \sigma _i W_i(t), \end{aligned}$$
(67)

for each i and \(t\ge 0\). Consider the case of \(i=1\). (66) further suggests that for \(t\ge 0\),

$$\begin{aligned} \begin{aligned} X_1(t)&= \xi _1(t) + \max \{-\xi _1(t), \max \{-\xi _2(t), \eta _3(t)\}\}, \end{aligned} \end{aligned}$$
(68)

where \(\eta _3(t):= \max \{-\xi _3(t), -\xi _4(t)\}\). Observe that \(\eta _3(t)\) can be further rewritten as

$$\begin{aligned} \eta _3(t) = -\xi _4(t) + (\xi _4(t) - \xi _3(t))^+. \end{aligned}$$

Applying Tanaka's formula (see Sect. 7.3 in [44]) to the function \(f(x) = x^+\) for \(x\in {\mathbb {R}}\), we obtain

$$\begin{aligned} \begin{aligned} \eta _3(t)&= \max \{-\xi _3(t), -\xi _4(t)\} = -\xi _4(t) + (\xi _4(t)-\xi _3(t))^+ \\&= - X_4(0) - \sigma _4 W_4(t) - \beta _4 t + (X_4(0)-X_3(0))^+ \\&\quad + \sqrt{\sigma _3^2 + \sigma _4^2} \int _0^t \mathbb {1}_{[Y_3(s)>0]}dB_{34}(s) + (\beta _4-\beta _3)\int _0^t \mathbb {1}_{[Y_3(s)>0]}\textrm{d}s + \frac{1}{2}L_t^{(1)}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} Y_3(t):= \xi _4(t) - \xi _3(t) = (X_4(0)- X_3(0)) + \sqrt{\sigma _3^2 + \sigma _4^2} B_{34}(t) + (\beta _4 - \beta _3)t, \end{aligned}$$
(69)

and \(L_t^{(1)}\) is the local time process of \(Y_3(t)\) at the origin, which increases only at times t when \(\xi _3(t) = \xi _4(t)\). Here, \(B_{34}(\cdot )\) is a Brownian motion that depends on the two independent standard Brownian motions \(W_3\) and \(W_4\) obtained in Proposition 2. Similarly, setting \(\eta _2(t):= \max \{-\xi _2(t), \eta _3(t)\}\) in (68), Tanaka's formula again yields a similar expression with a new local time process. In the same fashion, one moves on to the last layer \(\max \{-\xi _1, \eta _2\}\), and iteratively we obtain a semimartingale decomposition. The decomposition for general \(K \ge 2\) follows similarly. \(\square \)
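
The nested max rewriting that drives this decomposition is a pathwise identity and can be checked numerically. The sketch below (illustrative drifts, variances, and initial values; it makes no attempt to approximate the local time terms) simulates (67) for \(K=4\), builds the limit (66), and verifies (68) with \(\eta _3 = -\xi _4 + (\xi _4 - \xi _3)^+\).

    import numpy as np

    rng = np.random.default_rng(3)
    dt, N, K = 1e-3, 5000, 4
    t = np.arange(N) * dt
    beta = np.array([0.3, -0.2, 0.1, 0.0])
    sigma = np.array([1.0, 0.8, 1.2, 0.9])
    X0 = np.array([1.0, 0.5, 0.0, 2.0])

    # xi_i(t) = X_i(0) + beta_i t + sigma_i W_i(t), cf. (67)
    W = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(K, N)), axis=1)
    xi = X0[:, None] + beta[:, None] * t + sigma[:, None] * W

    X = xi - xi.min(axis=0)                                # limit process (66)

    eta3 = -xi[3] + np.maximum(xi[3] - xi[2], 0.0)         # = max{-xi_3, -xi_4}
    X1_nested = xi[0] + np.maximum(-xi[0], np.maximum(-xi[1], eta3))   # cf. (68)
    print(np.max(np.abs(X1_nested - X[0])))                # zero up to floating point error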

1.7 Appendix A.7: Proof of Theorem 9

First, we prove some results of interest, which will play an important role in the proof of Little’s law. Then, we present the proof of Theorem 9.

Corollary 12

Let \(T>0\). For each \(i\in \{1, \ldots , K\}\), \({\hat{G}}_i^n\) is stochastically bounded, and \({\hat{G}}_i^n(\cdot )\) converges weakly to \(\delta _i \int _0^{\cdot } X_i(s)\textrm{d}s\) in D[0, T] as \(n\rightarrow \infty \).

Proof

We prove the result for the ith queue; the other queues can be handled in a very similar manner. By (20), we have

$$\begin{aligned} {\hat{G}}_i^n(t) = {\hat{M}}_i^n(t) + \delta _i^n\int _0^t {\hat{Q}}_i^n(s)ds, \end{aligned}$$
(70)

for \(t\ge 0\). Using Propositions 4 and 5, we derive the second moment bound result:

$$\begin{aligned} \begin{aligned} E\left[ \Vert {\hat{G}}_i^n\Vert _T^2\right]&\le 2\left( E\left[ \Vert {\hat{M}}_i^n\Vert _T^2\right] + (\delta _i^n)^2 T^2 E\left[ \Vert {\hat{Q}}_i^n\Vert _T^2\right] \right) \\&\le 2C_1(1+T^l) + 2C^2 K(1+K)^2 C_2 T^2 (1+T^b) \exp {(2C(1+K)T)}, \end{aligned} \end{aligned}$$

where C, \(C_1\), \(C_2\), l, and b are constants independent of T and n as described in (26) and (55). Therefore, using Chebyshev's inequality, we have \(\lim _{a\rightarrow \infty }\limsup _{n\rightarrow \infty } P\left[ \Vert {\hat{G}}_i^n\Vert _T>a\right] = 0\).

Next, we show the weak convergence. Since \({\hat{M}}_i^n\) is an \({\mathcal {F}}^n\)-martingale by Lemma 3, as in the proof of Proposition 4, Burkholder's inequality gives

$$\begin{aligned} E\left[ \Vert {\hat{M}}_i^n\Vert _T^2\right] \le C E\left[ \delta _i^n \int _0^T {\bar{Q}}_i^n(s)ds\right] \le \frac{C\delta _i^n T}{\sqrt{n}}\left( E\left[ \Vert {\hat{Q}}_i^n\Vert _T^2\right] \right) ^{\frac{1}{2}}. \end{aligned}$$

Using the moment bound for \({\hat{Q}}_i^n\) in Proposition 5 and the assumption \(\lim _{n\rightarrow \infty }\delta _i^n = \delta _i>0\), we see that \(E\left[ \Vert {\hat{M}}_i^n\Vert _T^2\right] \) converges to zero as \(n\rightarrow \infty \). By Chebyshev's inequality, \(\Vert {\hat{M}}_i^n\Vert _T\) then converges to zero in probability as \(n\rightarrow \infty \). Since Theorem 1 gives the weak convergence of \({\hat{Q}}_i^n\) in D[0, T], the continuity of the integral mapping further implies that \(\delta _i^n\int _0^{\cdot } {\hat{Q}}_i^n(s)\textrm{d}s\) converges weakly to \(\delta _i\int _0^{\cdot } X_i(s) \textrm{d}s\) in D[0, T]. As a consequence, \({\hat{G}}_i^n(\cdot )\) converges weakly to \(\delta _i\int _0^{\cdot } X_i(s) \textrm{d}s\) in D[0, T] as \(n\rightarrow \infty \). \(\square \)

Now, with the facts obtained above, we are ready to establish some crucial properties of the virtual waiting time processes introduced in (39).

Proposition 13

Under the assumptions of Theorem 1 and for each \(i\in \{1, \ldots , K\}\), we have that \({\hat{V}}_i^n\) is stochastically bounded and consequently, \(\Vert V_i^n\Vert _T\rightarrow 0\) in probability as \(n\rightarrow \infty \).

Proof

This argument is similar to the proof of Proposition 4.4 in [45]. Let \(M>0\) be arbitrary. If \(0<M<{\hat{V}}_i^n(t)\) for some \(t\in [0, T]\), then \(V_i^n(t)>\frac{M}{\sqrt{n}}\), which implies that the queue of category i at time \(t+\frac{M}{\sqrt{n}}\) is not empty and

$$\begin{aligned} Q_i^n\left( t+\frac{M}{\sqrt{n}}\right) \ge A_i^n\left( t+\frac{M}{\sqrt{n}}\right) -A_i^n(t) - \mathring{G}_i^n\left( t, t+\frac{M}{\sqrt{n}}\right) , \end{aligned}$$
(71)

where \(\mathring{G}_i^n\left( t, t+\frac{M}{\sqrt{n}}\right) \) represents the number of abandoned components from the ith queue among the arrivals during \([t, t+\frac{M}{\sqrt{n}})\); that is, it counts the items that arrived after time t and abandoned before time \(t+\frac{M}{\sqrt{n}}\). We further observe that the number of abandoned components among those arrivals is at most the total number of abandoned components by time \(t+\frac{M}{\sqrt{n}}\), namely \(0\le \mathring{G}_i^n\left( t, t+\frac{M}{\sqrt{n}}\right) \le G_i^n\left( t+\frac{M}{\sqrt{n}}\right) \), since components that arrived before time t may also abandon the system during the interval \([t, t+\frac{M}{\sqrt{n}})\). Therefore, after a simple computation, we obtain the diffusion-scaled inequality:

$$\begin{aligned} {\hat{Q}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \ge {\hat{A}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) - {\hat{A}}_i^n(t) + \frac{\lambda _i^n}{n} M - {\hat{G}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) . \end{aligned}$$
(72)

Let \(0<\delta <1\). Since we assumed \(\lambda _i^n/n\rightarrow \lambda _0\) as \(n\rightarrow \infty \) by (7), we can find \(\alpha >0\) and \(N\ge 1\) such that for any \(n\ge N\), both \(0<\frac{M}{\sqrt{n}}<\delta \) and \(\frac{\lambda _i^n}{n}>3\alpha > 0\) hold. Hence, for any \(n\ge N\), we have the following inclusion:

$$\begin{aligned}{} & {} \left[ \Vert {\hat{V}}_i^n\Vert _T> M\right] \subseteq \left[ \left|{\hat{Q}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \right|+ \left|{\hat{A}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) - {\hat{A}}_i^n(t)\right|\right. \nonumber \\{} & {} \left. \quad + \left|{\hat{G}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \right|> 3\alpha M\right] . \end{aligned}$$
(73)

Therefore,

$$\begin{aligned} \begin{aligned} P\left[ \Vert {\hat{V}}_i^n\Vert _T> M\right]&\le P\left[ \left|{\hat{Q}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \right|>\alpha M\right] \\&\quad + P\left[ \left|{\hat{A}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) - {\hat{A}}_i^n(t)\right|> \alpha M\right] \\&\quad + P\left[ \left|{\hat{G}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \right|> \alpha M\right] \\&\le P\left[ \Vert {\hat{Q}}_i^n\Vert _{T+1}>\alpha M\right] \\&\quad + P\left[ \Vert {\hat{A}}_i^n(t) - {\hat{A}}_i^n(s)\Vert _{0<s<t<(s+\delta )\wedge (T+1)}> \alpha M\right] \\&\quad + P\left[ \Vert {\hat{G}}_i^n\Vert _{T+1}> \alpha M\right] . \end{aligned} \end{aligned}$$

By the weak convergence of \({\hat{A}}_i^n\) in (14), \({\hat{A}}_i^n\) is tight and satisfies \(\lim _{\delta \rightarrow 0}\limsup _{n\rightarrow \infty } P\left[ \omega ({\hat{A}}_i^n, \delta , T) > \epsilon \right] =0\). Using this fact together with Proposition 5 and Corollary 12, we obtain the stochastic boundedness of \({\hat{V}}_i^n\). Consequently, \(\lim _{n\rightarrow \infty } \Vert V_i^n\Vert _T = 0\) in probability. \(\square \)

Now, we are ready to prove Theorem 9.

Proof of Theorem 9

We prove the result for category i; the cases of the other categories are identical. Consider the state of the ith queue at time \(t+V_i^n(t)\) for any \(t\in [0, T]\). The queue length at time \(t+V_i^n(t)\) equals the number of arrivals during \([t, t+V_i^n(t))\) minus the number of abandoned items among those arrivals, and this relation is characterized by the following equality:

$$\begin{aligned} Q_i^n(t+V_i^n(t)) = A_i^n(t+V_i^n(t)) - A_i^n(t) - \mathring{G}_i^n(t, t+V_i^n(t)), \end{aligned}$$
(74)

where \(\mathring{G}_i^n(t, t+V_i^n(t))\) represents the number of components that arrived after time t and abandoned before \(t+V_i^n(t)\). Scaling both sides of (74) by \(1/\sqrt{n}\) and performing a simple algebraic manipulation, we obtain

$$\begin{aligned} {\hat{Q}}_i^n(t+V_i^n(t)) = {\hat{A}}_i^n(t+V_i^n(t)) - {\hat{A}}_i^n(t) + \frac{\lambda _i^n}{n}{\hat{V}}_i^n(t) - \hat{\mathring{G}}_i^n(t, t+V_i^n(t)),\nonumber \\ \end{aligned}$$
(75)

where the diffusion-scaled \({\hat{Q}}_i^n\) and \({\hat{A}}_i^n\) are as defined in (11), and

$$\begin{aligned} \hat{\mathring{G}}_i^n(t, t+V_i^n(t)):= \frac{1}{\sqrt{n}} \mathring{G}_i^n(t, t+V_i^n(t)). \end{aligned}$$
(76)

For the last term \(\hat{\mathring{G}}_i^n\), we observe that

$$\begin{aligned} 0\le \hat{\mathring{G}}_i^n(t, t+V_i^n(t)) \le \frac{1}{\sqrt{n}}\left( G_i^n(t+V_i^n(t)) - G_i^n(t)\right) = {\hat{G}}_i^n(t+V_i^n(t)) - {\hat{G}}_i^n(t),\nonumber \\ \end{aligned}$$
(77)

since those who arrived before time t may abandon right after time t and still before time \(t+V_i^n(t)\), and those abandoned items are not counted in \(\mathring{G}_i^n(t, t+V_i^n(t))\). With this observation, we have

$$\begin{aligned} \begin{aligned}&\Vert {\hat{Q}}_i^n(t+V_i^n(t)) - \lambda _0 {\hat{V}}_i^n(t)\Vert _T \\&\quad \le \Vert {\hat{A}}_i^n(t+V_i^n(t)) - {\hat{A}}_i^n(t)\Vert _T + \left|\frac{\lambda _i^n}{n} - \lambda _0\right|\Vert {\hat{V}}_i^n\Vert _T + \Vert {\hat{G}}_i^n(t+V_i^n(t)) - {\hat{G}}_i^n(t)\Vert _T. \end{aligned} \end{aligned}$$

Since \({\hat{A}}_i^n\) satisfies (14), and by Corollary 12, both \({\hat{A}}_i^n\) and \({\hat{G}}_i^n\) are tight and satisfy, for any \(\epsilon >0\),

$$\begin{aligned} \begin{aligned} \lim _{\delta \rightarrow 0 }\limsup _{n\rightarrow \infty } P\left[ \omega ({\hat{A}}_i^n, \delta , T)>\epsilon \right]&= 0, \\ \lim _{\delta \rightarrow 0 }\limsup _{n\rightarrow \infty } P\left[ \omega ({\hat{G}}_i^n, \delta , T)>\epsilon \right]&= 0. \end{aligned} \end{aligned}$$
(78)

Moreover, \(\lim _{n\rightarrow \infty }\vert {\lambda _i^n/n - \lambda _0} \vert = 0\) by (7). Since \({\hat{V}}_i^n\) is stochastically bounded and, as a consequence, \(\Vert V_i^n\Vert _T\rightarrow 0\) in probability by Proposition 13, the above facts imply that \(\Vert {\hat{Q}}_i^n(t+V_i^n(t)) - \lambda _0{\hat{V}}_i^n(t)\Vert _T \rightarrow 0\) in probability as \(n\rightarrow \infty \).

It remains to show that \(\Vert {\hat{Q}}_i^n(t+V_i^n(t)) - {\hat{Q}}_i^n(t)\Vert _T \rightarrow 0\) in probability. By Theorem 1, \({\hat{Q}}_i^n\) is tight and satisfies \(\lim _{\delta \rightarrow 0}\limsup _{n\rightarrow \infty } P\left[ \omega ({\hat{Q}}_i^n, \delta , T)>\epsilon \right] = 0 \) for any \(\epsilon >0\). Together with the fact that \(\Vert V_i^n\Vert _T \rightarrow 0\) in probability, as proved in Proposition 13, this yields the desired relation. This completes the proof. \(\square \)

1.8 Appendix A.8: Proof of Proposition 10

Proof of Proposition 10

We prove the case of the ith queue; the other queues are identical. Here, we first show the stochastic boundedness directly, and then we prove the moment bound condition (41) for each \(t\in [0, T]\) by utilizing the order-preserving property.

First, we show the stochastic boundedness. Fix i and let \(M>0\) be arbitrary. If \(0<M<{\hat{V}}_i^n(t)\) holds for some \(t\in [0, T]\), then the queue at time \(t+\frac{M}{\sqrt{n}}\) is not empty, namely \(Q_i^n\left( t+\frac{M}{\sqrt{n}}\right) >0\), and satisfies

$$\begin{aligned} Q_i^n\left( t+\frac{M}{\sqrt{n}}\right) \ge A_i^n\left( t+\frac{M}{\sqrt{n}}\right) - A_i^n(t). \end{aligned}$$
(79)

With a simple algebraic manipulation by centering and scaling, we obtain a diffusion-scaled inequality

$$\begin{aligned} {\hat{Q}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \ge {\hat{A}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) - {\hat{A}}_i^n(t) + \frac{\lambda _i^n}{n} M. \end{aligned}$$
(80)

Let \(0<\delta <1\). Since we assumed \(\lambda _i^n/n\rightarrow \lambda _0\) as \(n\rightarrow \infty \), we can find \(\alpha >0\) and \(N\ge 1\) such that for any \(n\ge N\), both \(0<\frac{M}{\sqrt{n}}<\delta \) and \(\frac{\lambda _i^n}{n}> 2\alpha > 0\) hold. Therefore, for any \(n\ge N\), we have

$$\begin{aligned} \begin{aligned} P\left[ \Vert {\hat{V}}_i^n\Vert _T> M\right]&\le P\left[ \left|{\hat{Q}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \right|+ \left|{\hat{A}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) - {\hat{A}}_i^n(t)\right|> 2\alpha M\right] \\&\le P\left[ \left|{\hat{Q}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) \right|>\alpha M\right] + P\left[ \left|{\hat{A}}_i^n\left( t+\frac{M}{\sqrt{n}}\right) - {\hat{A}}_i^n(t)\right|> \alpha M\right] \\&\le P\left[ \Vert {\hat{Q}}_i^n\Vert _{T+1}>\alpha M\right] + P\left[ \Vert {\hat{A}}_i^n(t) - {\hat{A}}_i^n(s)\Vert _{0<s<t<(s+\delta )\wedge (T+1)} > \alpha M\right] . \end{aligned} \end{aligned}$$

The weak convergence of \({\hat{A}}_i^n\) implies the tightness of \({\hat{A}}_i^n\) and the control of its modulus of continuity, i.e., \(\lim _{\delta \rightarrow 0}\limsup _{n\rightarrow \infty } P\left[ \omega ({\hat{A}}_i^n, \delta , T) > \epsilon \right] =0\). Using these facts and the second moment bound for \({\hat{Q}}_i^n\), we obtain the stochastic boundedness of \({\hat{V}}_i^n\), and consequently \(\lim _{n\rightarrow \infty } \Vert V_i^n\Vert _T = 0\) in probability.

It remains to show the second moment bound (41) for each \(t\in [0, T]\). Fix \(t\in [0, T]\) and condition on \(A_i^n(t) = k\); then \(t_{i, k}^n< t < t_{i, k+1}^n\) and \(V_i^n(t) = (\max _{j\ne i} \{ t_{j, k+1}^n \} - t)^+\), where \(t_{i, k}^n\) denotes the arrival time of the kth component of category i in the nth system, defined as \(t_{i, k}^n = \sum _{j=1}^k \tau _{i, j}^n\) with inter-arrival times \(\tau _{i, j}^n\). Notice that a hypothetical component of category i arriving at time t with \(t>t_{j, k+1}^n\) for all \(j\ne i\) need not wait and would be matched immediately, since each other queue has a component of index \(k+1\) waiting to be matched. However, if \(t < t_{j, k+1}^n\) for some \(j\ne i\), then its waiting time would be the maximum difference \((\max _{j\ne i} \{t_{j, k+1}^n\} - t)\). With these facts, we can compute the following conditional moments:

$$\begin{aligned} \begin{aligned} E\left[ (V_i^n(t))^2 \vert A_i^n(t) = k\right] \le E\left[ \max _{j\ne i}\left\{ \left( t_{j, k+1}^n - t\right) ^2\right\} \right] \le \sum _{j\ne i} E\left[ \left( t_{j, k+1}^n - t\right) ^2\right] . \end{aligned} \end{aligned}$$

Since we assume renewal arrivals, we let \(E[\tau _{jk}^n] = 1/\lambda _j^n\) and \(\text {Var}(\tau _{jk}^n) = c_j/(\lambda _j^n)^2\) for some \(c_j>0\), where \(\{\tau _{jk}^n\}_{k\ge 1}\) are mutually independent, which further yields

$$\begin{aligned}{} & {} E[t_{j, k+1}^n] = E\left[ \sum _{l=1}^{k+1} \tau _{jl}^n\right] = \frac{k+1}{\lambda _j^n},\\{} & {} \quad \text {Var}(t_{j, k+1}^n) = \frac{c_j(k+1)}{(\lambda _j^n)^2}. \end{aligned}$$

Therefore, adding and subtracting the term \((k+1)/\lambda _j^n\), we have

$$\begin{aligned} E\left[ \left( t_{j, k+1}^n - t\right) ^2\right]{} & {} = E\left[ \left( t_{j, k+1}^n - \frac{k+1}{\lambda _j^n} + \frac{k+1}{\lambda _j^n} - t\right) ^2\right] \nonumber \\ {}{} & {} \le 2\left( \frac{c_j(k+1)}{(\lambda _j^n)^2} + \left( \frac{k+1}{\lambda _j^n} - t\right) ^2\right) . \end{aligned}$$
(81)

Hence, combining (81) with the inequality above, we obtain

$$\begin{aligned} E\left[ (V_i^n(t))^2 \vert A_i^n(t) = k\right] \le 2\sum _{j\ne i} \left( \frac{c_j(k+1)}{(\lambda _j^n)^2} + \left( \frac{k+1}{\lambda _j^n} - t\right) ^2\right) . \end{aligned}$$
(82)

Consequently, we have

$$\begin{aligned} \begin{aligned}&E\left[ \vert {{\hat{V}}_i^n(t)} \vert ^2\right] = \sum _{k=0}^{\infty } E\left[ \vert {{\hat{V}}_i^n(t)} \vert ^2 \vert A_i^n(t)=k\right] \cdot P\left[ A_i^n(t) = k\right] \\&\quad \le 2n \sum _{k=0}^{\infty } \sum _{j\ne i} \left( \frac{c_j(k+1)}{(\lambda _j^n)^2} + \left( \frac{k+1}{\lambda _j^n} - t\right) ^2\right) \cdot P\left[ A_i^n(t) = k\right] \\&\quad = 2n \sum _{j\ne i}\left( \frac{c_j}{(\lambda _j^n)^2} E\left[ A_i^n(t)+1\right] + E\left[ \left( \frac{A_i^n(t) + 1}{\lambda _j^n} - t\right) ^2\right] \right) \\&\quad \le 2\sum _{j\ne i} \left( c_j\left( \frac{n}{\lambda _j^n}\right) ^2 E\left[ \bar{A}_i^n(t) + \frac{1}{n}\right] \right. \\&\quad \left. + \left( \frac{n}{\lambda _j^n}\right) ^2 E\left[ \left( {\hat{A}}_i^n(t) + \frac{\lambda _i^n - \lambda _j^n}{\sqrt{n}} t + \frac{1}{\sqrt{n}}\right) ^2\right] \right) , \end{aligned} \end{aligned}$$

which further yields (41). This completes the proof. \(\square \)
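
The bias–variance bound (81) is elementary, and a quick Monte Carlo sanity check may help fix the orders of magnitude. The sketch below (exponential inter-arrival times, so \(c_j = 1\); the rate, index k, and time t are illustrative assumptions) compares the simulated second moment of \(t_{j, k+1}^n - t\) with the right-hand side of (81).

    import numpy as np

    rng = np.random.default_rng(4)
    lam, k, t = 50.0, 40, 1.0          # arrival rate, conditioning index, fixed time
    c = 1.0                            # Var(tau) = c / lam^2 for Exp(lam) inter-arrival times

    # t_{k+1} is a sum of k+1 i.i.d. inter-arrival times
    samples = rng.exponential(1.0 / lam, size=(200_000, k + 1)).sum(axis=1)
    lhs = np.mean((samples - t) ** 2)
    rhs = 2.0 * (c * (k + 1) / lam**2 + ((k + 1) / lam - t) ** 2)
    print(lhs, rhs, lhs <= rhs)        # the bound (81) holds (it is exact up to the factor 2)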

1.9 Appendix A.9: Proof of Theorem 11

Proof of Theorem 11

Here, we mainly verify the uniform integrability of appropriate integrands by considering the expectations separately, under the restrictions on the cost function in (45).

First, we show that

$$\begin{aligned} \lim _{n\rightarrow \infty } E\left[ \sum _{j=1}^K\int _0^{\infty } e^{-\gamma s} C_j({\hat{Q}}_j^n(s))ds\right] = E\left[ \sum _{j=1}^K \int _0^{\infty } e^{-\gamma s} C_j(X_j(s))ds\right] . \nonumber \\ \end{aligned}$$
(83)

Since \({\hat{Q}}_j^n\) converges weakly to \(X_j\) in D[0, T], using the Skorokhod representation theorem, we may assume that \({\hat{Q}}_j^n\) converges to \(X_j\) a.s. on some common probability space. By the continuous mapping theorem, we obtain \(\lim _{n\rightarrow \infty } C_j({\hat{Q}}_j^n(t)) = C_j(X_j(t))\) a.s.

Next, we verify the uniform integrability of the integrand \(e^{-\gamma t}C_j({\hat{Q}}_j^n(t))\), which guarantees the interchange of the integral and the limit. Since the cost function \(C_j(\cdot )\) admits polynomial growth as assumed in (45), we have \(C_j({\hat{Q}}_j^n(t))\le c_j(1+\vert {{\hat{Q}}_j^n(t)} \vert ^p)\), where \(c_j>0\) and \(1\le p < 2l\) are constants independent of T and n as in (45). Since \(1\le p<2l\) for \(l\ge 1\) as assumed, let \(\delta >0\) be such that \(1+\delta = 2l/p\). We will explain the reason for involving l at the end of this section. Similarly to Proposition 5, we can derive a higher order moment bound for the random variable \(B_T^n\) introduced in (51), namely \(E\left[ (B_T^n)^{2l}\right] \le c(1+T^{d})\). Following the same proof, we can strengthen the moment bound on the queue lengths to

$$\begin{aligned} E\left[ \Vert \varvec{{\hat{Q}}^n}\Vert _T^{2l}\right] \le c(1+K)^{2l} (1+T^d) \cdot \exp {\left( 2lc_0(1+K) T \right) }, \end{aligned}$$
(84)

where \(d \ge 1\) and \(l\ge 1\) are constants independent of T and n, and \(c>0\) is a generic constant. Notice that taking \(l=1\) recovers the special case proved in (27). This result further yields

$$\begin{aligned} E\left[ \vert {{\hat{Q}}_j^n(s)} \vert ^{2l}\right] \le E\left[ \Vert {\hat{Q}}_j^n\Vert _s^{2l}\right] \le K^l E\left[ \Vert \varvec{{\hat{Q}}^n}\Vert _s^{2l}\right] \le c K^l(1+K)^{2l} (1+s^d) e^{2lc_0(1+K)s}. \end{aligned}$$

Hence, since \(\gamma > 2lc_0(1+K)\) as assumed, we obtain

$$\begin{aligned} \begin{aligned}&E\left[ \int _0^{\infty } e^{-\gamma s} \vert {C_j({\hat{Q}}_j^n(s))} \vert ^{1+\delta }ds\right] \\&\quad \le c\int _0^{\infty } e^{-\gamma s}E\left[ \left( 1 + \vert {{\hat{Q}}_j^n(s)} \vert ^p\right) ^{1+\delta }\right] \textrm{d}s \\&\quad \le c\int _0^{\infty } e^{-\gamma s}\left( 1 + E\left[ \vert {{\hat{Q}}_j^n(s)} \vert ^{2l}\right] \right) \textrm{d}s \\&\quad \le c\int _0^{\infty } e^{-\gamma s}\textrm{d}s + c K^l (1+K)^{2l}\int _0^{\infty } (1+s^d) e^{-(\gamma - 2lc_0(1+K))s} \textrm{d}s \\&\quad < \infty , \end{aligned} \end{aligned}$$

where \(c>0\) is a generic constant, and K, \(c_0\), and \(d\ge 1\) are constants independent of n. This verifies the uniform integrability. Therefore, (83) follows.

Second, we show that

$$\begin{aligned} \lim _{n\rightarrow \infty } E\left[ \sum _{j=1}^K p_j\int _0^{\infty } e^{-\gamma s}d{\hat{G}}_j^n(s)\right] = E\left[ \sum _{j=1}^K p_j \delta _j \int _0^{\infty } e^{-\gamma s} X_j(s) ds\right] . \end{aligned}$$
(85)

Using the Fubini–Tonelli theorem, we derive that \(\gamma \int _{t=0}^{\infty } \int _t^{\infty } e^{-\gamma s}\textrm{d}s \, d{\hat{G}}_j^n(t) = \gamma \int _0^{\infty } \int _{t=0}^s e^{-\gamma s}d{\hat{G}}_j^n(t)\textrm{d}s\), which further implies

$$\begin{aligned} \int _0^{\infty } e^{-\gamma t}d{\hat{G}}_j^n(t) = \gamma \int _0^{\infty }e^{-\gamma t}{\hat{G}}_j^n(t)\textrm{d}t \end{aligned}$$
(86)

a.s. Notice that this can also be verified using integration by parts and the moment bound of \({\hat{G}}_j^n\) obtained in Corollary 12. Now, it suffices to show that

$$\begin{aligned} \lim _{n\rightarrow \infty } E\left[ \sum _{j=1}^K \gamma p_j \int _0^{\infty } e^{-\gamma t} {\hat{G}}_j^n(t)\textrm{d}t \right] = E\left[ \sum _{j=1}^K p_j\delta _j\int _0^{\infty } e^{-\gamma t} X_j(t)\textrm{d}t\right] . \nonumber \\ \end{aligned}$$
(87)

As in the proof of Corollary 12, since \(\Vert {\hat{M}}_j^n\Vert _T\) converges to zero in probability and \(\delta _j^n\int _0^{\cdot }{\hat{Q}}_j^n(s)\textrm{d}s\) converges weakly to \(\delta _j\int _0^{\cdot } X_j(s)\textrm{d}s\) in D[0, T], we conclude that \({\hat{G}}_j^n(\cdot )\) converges weakly to \(\delta _j\int _0^{\cdot } X_j(s)\textrm{d}s\) in D[0, T]. Since \({\hat{G}}_j^n(\cdot ) \ge 0\) is non-decreasing, we are left to verify the uniform integrability of \({\hat{G}}_j^n(T)\) as follows:

$$\begin{aligned} \begin{aligned} E\left[ ({\hat{G}}_j^n(T))^2\right]&\le 2\left( E\left[ \Vert {\hat{M}}_j^n\Vert _T^2\right] + (\delta _j^n)^2 T^2 E\left[ \Vert {\hat{Q}}_j^n\Vert _T^2\right] \right) \\&\le 2C_1(1+T^l) + 2C^2 K(1+K)^2 C_2 T^2 (1+T^b) \exp {(2c_0(1+K)T)}, \end{aligned} \end{aligned}$$

where \(C_1\), \(C_2\), \(l\ge 1\), and \(b\ge 1\) are constants independent of T and n (see (26) and (27)). Here the first inequality follows from the definition of \({\hat{M}}_j^n(\cdot )\) introduced in (20). Consequently, \(\lim _{n\rightarrow \infty }E\left[ {\hat{G}}_j^n(T)\right] = \delta _j E\left[ \int _0^TX_j(s)ds\right] \). By this limit, the above moment bound, and the assumption \(\gamma >2c_0(1+K)\), we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty } \gamma \int _0^{\infty } e^{-\gamma t}E\left[ {\hat{G}}_j^n(t)\right] \textrm{d}t = \gamma \int _0^{\infty } e^{-\gamma t} E\left[ \int _0^t \delta _j X_j(s)ds\right] \textrm{d}t, \end{aligned}$$
(88)

by verifying the uniform integrability of the integrand, namely

$$\begin{aligned} \begin{aligned}&E\left[ \int _0^{\infty } e^{-\gamma t} \vert {{\hat{G}}_j^n(t)} \vert ^2 \textrm{d}t\right] \\&\quad \le 2\int _0^{\infty } e^{-\gamma t} E\left[ \Vert {\hat{M}}_j^n\Vert _t^2\right] \textrm{d}t + 2(\delta _j^n)^2 \int _0^{\infty } e^{-\gamma t} E\left[ \Vert {\hat{Q}}_j^n\Vert _t^2\right] t^2 \textrm{d}t \\&\quad \le 2C_1\int _0^{\infty } e^{-\gamma t} (1+t^l) \textrm{d}t + 2C^2 K(1+K)^2 C_2 \int _0^{\infty } t^2 (1+t^b) e^{-(\gamma - 2c_0(1+K)) t} \textrm{d}t \\&\quad < \infty , \end{aligned} \end{aligned}$$

since \(\gamma >2c_0(1+K)\) as assumed above. Using Fubini's theorem, we can rewrite the above conclusion as

$$\begin{aligned} \lim _{n\rightarrow \infty } \gamma E\left[ \int _0^{\infty } e^{-\gamma t}{\hat{G}}_j^n(t)\textrm{d}t \right] = E\left[ \int _0^{\infty } e^{-\gamma t}\delta _j X_j(t)\textrm{d}t\right] . \end{aligned}$$
(89)

Hence, (87) follows, and so does (85). As a consequence, (48) immediately follows from (83) and (85). \(\square \)
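
For the limiting side of the convergence, the discounted cost can be estimated by Monte Carlo directly from the representation \(X_i = \xi _i - \min _j \xi _j\). The sketch below is an illustrative assumption throughout: it takes \(K=2\), quadratic holding costs \(C_j(x) = x^2\), invented values for \(\beta _j, \sigma _j, p_j, \delta _j, \gamma \), and truncates the infinite horizon at a T with \(e^{-\gamma T}\) negligible; the second summand corresponds to the right-hand side of (85).

    import numpy as np

    def discounted_cost_limit(beta, sigma, X0, p, delta, gamma, T, dt, n_paths, rng):
        """Monte Carlo estimate of
            E[ sum_j int_0^T e^{-gamma s} ( C_j(X_j(s)) + p_j delta_j X_j(s) ) ds ]
        for the limit process X_j = xi_j - min_k xi_k, with C_j(x) = x^2."""
        K = len(beta)
        N = int(T / dt)
        t = np.arange(N) * dt
        disc = np.exp(-gamma * t) * dt                       # discounted time weights
        total = 0.0
        for _ in range(n_paths):
            W = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(K, N)), axis=1)
            xi = X0[:, None] + beta[:, None] * t + sigma[:, None] * W
            X = xi - xi.min(axis=0)                          # limiting queue lengths
            running = (X**2 + (p * delta)[:, None] * X).sum(axis=0)
            total += np.sum(running * disc)
        return total / n_paths

    rng = np.random.default_rng(5)
    est = discounted_cost_limit(
        beta=np.array([0.2, -0.1]), sigma=np.array([1.0, 1.0]),
        X0=np.array([0.0, 0.0]), p=np.array([1.0, 2.0]),
        delta=np.array([0.5, 0.8]), gamma=3.0, T=5.0, dt=1e-3,
        n_paths=200, rng=rng)
    print(est)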

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xie, B. Multi-component matching queues in heavy traffic. Queueing Syst (2024). https://doi.org/10.1007/s11134-024-09907-0
