
Throughput optimal scheduling policies in networks of constrained queues

Published in: Queueing Systems

Abstract

This paper considers a fairly general model of constrained queueing networks that allows us to represent both Markov-modulated Bernoulli process arrivals and time-varying service constraints. We derive a set of sufficient conditions for throughput optimality of scheduling policies, which encompass and generalize all the results previously obtained in the field. This leads to the definition of new classes of (non-diagonal) throughput optimal scheduling policies. We prove the stability of queues by extending the traditional Lyapunov drift criterion methodology.


Notes

  1. In this paper the terms server and queue will be used interchangeably.

  2. Contention graphs are typically defined as follows: Definition 1 The contention graph \(G_I(\mathcal{V}^I, \mathcal{E}^I)\) is an undirected graph in which (i) vertices \(v \in \mathcal{V}^I\) correspond to network (virtual) queues; (ii) an edge connects two vertices \(v\) and \(v^{\prime }\) if the corresponding queues cannot be served simultaneously.

  3. We assume that \(S^D_t\) and \(S^A_t\) evolve independently, even if this assumption is not strictly needed to obtain our results.

  4. In this paper \( \mathbb {N}\) denotes the set of non-negative integers, \( \mathbb {R}\) denotes the set of real numbers, and \( \mathbb {R}^+\) denotes the set of non-negative real numbers.

  5. \(C^k[ \mathbb {R}\rightarrow \mathbb {R}]\) denotes the class of real-valued functions that are \(k\) times continuously differentiable. Furthermore, given a sufficiently smooth function \(g(x)\): \( \mathbb {R}\rightarrow \mathbb {R}\), we denote by \(g^{\prime }(x)\) its first derivative, by \(g^{\prime \prime }(x)\) its second derivative, and by \(g^{(h)}(x)\) its \(h\)-th derivative.

  6. Given two functions \(f(x)\ge \! 0\) and \(g(x)\ge \! 0\): \(f(x)\!=o(g(x))\) means \(\lim _{x \rightarrow \infty }{f(x)}/{g(x)}=0\); \(f(x)=O(g(x))\) means \(\limsup _{x \rightarrow \infty }{f(x)}/{g(x)}=c<\infty \).

  7. For any function \(F: \mathbb {R}^{+M} \rightarrow \mathbb {R}\) we use \(\lim _{\Vert X\Vert \rightarrow \infty }F(X)=l\) with \(l\in \mathbb {R}\cup \{\infty \}\) as shorthand notation to mean that \(\lim _{\Vert \alpha \Vert \rightarrow \infty }F(\alpha X_0)=l\) for any \(X_0 \in \mathbb {R}^{+M} \) with \(\Vert X_0\Vert =1\).

  8. \( \mathbb {N}\) denotes the set of non-negative integers.

  9. We recall that for any function \(F: \mathbb {R}^{+M} \rightarrow \mathbb {R}\) we use \(\lim _{\Vert X\Vert \rightarrow \infty }F(X)=l\) with \(l\in \mathbb {R}\cup \{\infty \}\) as shorthand notation to mean that \(\lim _{\Vert \alpha \Vert \rightarrow \infty }F(\alpha X_0)=l\) for any \(X_0 \in \mathbb {R}^{+M} \) with \(\Vert X_0\Vert =1\).
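The contention graph of note 2 determines which queues may be served in the same slot: feasible schedules are exactly the independent sets of the graph. A minimal sketch (the 3-queue line graph below is our own toy example, not one from the paper):

```python
from itertools import combinations

def feasible_schedules(num_queues, edges):
    """Enumerate all feasible schedules (independent sets) of a
    contention graph: subsets of queues no two of which share an edge."""
    conflict = set(map(frozenset, edges))
    schedules = []
    for r in range(num_queues + 1):
        for subset in combinations(range(num_queues), r):
            if all(frozenset(pair) not in conflict
                   for pair in combinations(subset, 2)):
                schedules.append(set(subset))
    return schedules

# A 3-queue line: queue 1 conflicts with both queue 0 and queue 2.
scheds = feasible_schedules(3, [(0, 1), (1, 2)])
```

For this line graph the feasible schedules are the empty set, the three singletons, and {0, 2}: queues 0 and 2 do not contend and may be served together.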

References

  1. Ajmone Marsan, M., Leonardi, E., Mellia, M., Neri, F.: On the stability of isolated and interconnected input-queueing switches under multi-class traffic. IEEE Trans. Inform. Theory 51(3), 1167–1174 (2005)


  2. Chaporkar, P., Kar, K., Sarkar, S.: Throughput Guarantees in Maximal Scheduling in Wireless Networks. In: Allerton Conference on Communication, Control and Computing (2005)

  3. Dai, J.G., Lin, W.: Maximum pressure policies in stochastic processing networks. Oper. Res. 53, 197–218 (2005)


  4. Dimakis, A., Walrand, J.: Sufficient conditions for stability of longest-queue-first scheduling: second-order properties using fluid limits. Adv. Appl. Probab. 38(2), 505–521 (2006)


  5. Eryilmaz, A., Srikant, R., Perkins, J.R.: Stable scheduling policies for fading wireless channels. IEEE/ACM Trans. Networking 13(2), 411–424 (2005)


  6. Giaccone, P., Prabhakar, B., Shah, D.: Randomized-scheduling algorithms for high-aggregate bandwidth switches. IEEE J. Sel. Areas Commun. 21(4), 546–559 (2003)


  7. Gupta, G., Sanghavi, S., Shroff, N.: Node Weighted Scheduling. ACM SIGMETRICS, 2009, June (2009)

  8. Keslassy, I., McKeown, N.: Analysis of Scheduling Algorithms that Provide 100% Throughput in Input-Queued Switches. In: Allerton Conference on Communication, Control and Computing, October (2001)

  9. Kushner, H.J.: Stochastic Stability and Control. Academic Press, New York (1967)


  10. Leonardi, E.: Throughput Optimal Scheduling Policies in Networks of Constrained Queues. Technical report. http://arxiv.org/abs/1304.2554 (2013)

  11. Leonardi, E., Mellia, M., Neri, F., Ajmone Marsan, M.: Bounds on delays and queue lengths in input-queued cell switches. J. ACM 50(4), 520–550 (2003)


  12. Mekkittikul, A., McKeown, N.: A Practical Scheduling Algorithm to Achieve 100 % Throughput in Input-Queued Switches. INFOCOM, April (1998)

  13. Meyn, S.: Stability and asymptotic optimality of generalized maxweight policies. J. Control Optim. 47(6), 3259–3294 (2009)


  14. Meyn, S.P., Tweedie, R.L.: Markov Chains and Stochastic Stability, 2nd edn. Cambridge University Press, Cambridge (2009)

  15. Modiano, E., Shah, D., Zussman, G.: Maximizing Throughput in Wireless Networks via Gossiping. ACM SIGMETRICS (2006)

  16. Neely, M.J., Modiano, E., Rohrs, C.E.: Dynamic power allocation and routing for time varying wireless networks. IEEE J. Sel. Areas Commun. 23(1), 89–103 (2005)


  17. Ross, K., Bambos, N.: Projective cone scheduling (PCS) algorithms for packet switches of maximal throughput. IEEE/ACM Trans. Networking 17(3), 976–989 (2009)


  18. Shah, D., Wischik, D. J.: Optimal Scheduling Algorithms for Input-Queued Switches. INFOCOM, April (2006)

  19. Shah, D., Wischik, D.: Switched networks with maximum weight policies: fluid approximation and multiplicative state space collapse. Ann. Appl. Probab. 22(1), 70–127 (2012)


  20. Shah, D., Tsitsiklis, J., Zhong, Y.: Qualitative Properties of \(\alpha \)-Weighted Scheduling Policies. ACM SIGMETRICS (2010)

  21. Tassiulas, L.: Scheduling and performance limits of networks with constantly changing topology. IEEE Trans. Inf. Theory 43(3), 1067–1073 (1997)


  22. Tassiulas, L.: Linear Complexity Algorithms for Maximum Throughput in Radio Networks and Input Queued Switches. INFOCOM, April (1998)

  23. Tassiulas, L., Ephremides, A.: Stability properties of constrained queuing systems and scheduling policies for maximum throughput in multi-hop radio networks. IEEE Trans. Autom. Control 37, 1936–1948 (1992)


  24. Wu, X., Srikant, R.: Scheduling Efficiency of Distributed Greedy Scheduling Algorithms in Wireless Networks. INFOCOM, April (2006)

Corresponding author

Correspondence to E. Leonardi.

Appendix: Proofs of Theorems in Sect. 5

Before proceeding with the proofs of the Theorems in Sect. 5, we recall some standard consequences of Taylor’s Theorem, of which we will make extensive use, and we prove four useful lemmas.

Proposition 1

Let \(G(X)\) with \(G(X): [ \mathbb {R}^{+M}\rightarrow \mathbb {R}]\) be \(h\)-times continuously differentiable over an open ball \(\mathcal{B}\) centered at a vector \(X\). Then, for any \(Y\) such that \(X+Y \in \mathcal{B}\),

$$\begin{aligned} G(X + Y) = \sum _{i=0}^{h-1} \frac{1}{i!}Y^i(\partial ^i G)(X) + R^{(h)}_G(X,Y), \end{aligned}$$
(29)

where the \(h\)-th order remainder \( R^{(h)}_G(X,Y)\) is given by \(R^{(h)}_G(X,Y)= \frac{1}{h!} Y^h (\partial ^h G)(X+\beta Y)\), for some \(\beta \in [0,1]\).

In particular if \(G(X)\) is twice continuously differentiable over an open ball \(\mathcal{B}\) centered at a vector \(X\), recalling that \(\nabla G(X)\) denotes the gradient of G at \(X\), and \(H_G(X)\) denotes the Hessian of the function \(G\) at \(X\), for any \(Y\) such that \(X+Y \in \mathcal{B}\), we have

$$\begin{aligned} G(X + Y) = G(X) + R^{(1)}_G(X,Y) \end{aligned}$$

with \(R^{(1)}_G(X,Y) = \langle \nabla G(X+ \beta Y) \cdot Y\rangle \) for some \(\beta \in [0, 1]\), and

$$\begin{aligned} G(X + Y) = G(X) + \langle \nabla G(X) \cdot Y\rangle + R^{(2)}_G(X,Y) \end{aligned}$$
(30)

with \(R^{(2)}_G(X,Y)= \frac{1}{2} Y H_G(X+\beta Y) Y^\mathrm{T} \) for some \(\beta \in [0,1]\). The above Taylor expansion can be generalized to vector-valued functions by applying (29) component-wise. In particular, we will make use of the following result. Given \(G(X)\) twice continuously differentiable over an open ball \(\mathcal{B}\) centered at a vector \(X\), for any \(Y\) such that \(X+Y \in \mathcal{B}\), and any \(Z \in \mathbb {R}^{M} \), we have

$$\begin{aligned} \langle \nabla G(X+Y)\cdot Z \rangle = \langle \nabla G(X) \cdot Z \rangle + R^{(1)}_{\nabla G}(X,Y,Z) \end{aligned}$$
(31)

with \(R^{(1)}_{\nabla G}(X,Y,Z)= \langle ( \nabla \langle \nabla G(X+\beta Y)\cdot Z \rangle ) \cdot Y\rangle = Z H_G(X+\beta Y) Y^\mathrm{T} \) for some \(\beta \in [0, 1]\).
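Proposition 1 is used throughout the appendix to control remainders. A quick numerical sanity check of (30) (the test function \(G\) below is an arbitrary smooth choice of ours, not one from the paper): scaling the increment \(Y\) by \(1/10\) should shrink the second-order remainder by roughly \(1/100\).

```python
import numpy as np

def G(x):
    """A smooth (C^2) test function R^M -> R (illustrative choice)."""
    return np.log(1.0 + np.sum(x**2))

def grad_G(x):
    return 2.0 * x / (1.0 + np.sum(x**2))

def second_order_remainder(x, y):
    """R^(2)_G(x, y) = G(x+y) - G(x) - <grad G(x), y>, as in (30)."""
    return G(x + y) - G(x) - grad_G(x) @ y

x = np.array([1.0, 2.0])
d = np.array([1.0, -1.0])
r_big = second_order_remainder(x, 0.1 * d)
r_small = second_order_remainder(x, 0.01 * d)
# R^(2) is quadratic in Y: shrinking Y by 1/10 divides it by about 100.
```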

Lemma 1

If \(G(X)\) satisfies the conditions of Theorem 5, then

$$\begin{aligned} {\mathop {\lim }_{\Vert X\Vert \rightarrow \infty }} \langle \nabla G(X) \cdot \tilde{X} \rangle =\infty \end{aligned}$$

where \(\tilde{X}\) denotes the normalized vector parallel to \(X\).

Proof

The proof can be immediately obtained by applying l’Hôpital’s rule to the indeterminate form (18):

$$\begin{aligned} {\mathop {\lim }_{\Vert X\Vert \rightarrow \infty }} \frac{G(X)}{\Vert X\Vert }= {\mathop {\lim }_{\alpha \rightarrow \infty }} \frac{G( \alpha \tilde{X})}{\alpha } \end{aligned}$$

and recalling that \(\lim _{\alpha \rightarrow \infty } \langle \nabla G(\alpha \tilde{X}) \cdot \tilde{X} \rangle =\lim _{\Vert X \Vert \rightarrow \infty } \langle \nabla G(X) \cdot \tilde{X} \rangle \) exists in light of (19). Observe that, as an immediate consequence of the previous statement, we get

$$\begin{aligned} {\mathop {\lim }_{\Vert X \Vert \rightarrow \infty }} \Vert \nabla G(X) \Vert =\infty . \end{aligned}$$

\(\square \)
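Lemma 1 can be checked numerically for a concrete sub-exponential candidate. Below, \(G(X)=\sum _i \left[ (1+x_i)\log (1+x_i)-x_i\right] \) is our own illustrative choice, not one from the paper; its gradient components \(\log (1+x_i)\) are unbounded but grow sub-linearly, and the inner product \(\langle \nabla G(\alpha \tilde{X}) \cdot \tilde{X} \rangle \) diverges along every ray.

```python
import numpy as np

def grad_G(x):
    # Gradient of G(X) = sum((1+x_i)log(1+x_i) - x_i), a sub-exponential
    # Lyapunov candidate (our illustrative choice).
    return np.log1p(x)

x0 = np.array([3.0, 4.0])
x_tilde = x0 / np.linalg.norm(x0)           # unit vector parallel to X
vals = [grad_G(a * x_tilde) @ x_tilde for a in (1e1, 1e3, 1e5)]
# <grad G(alpha X~), X~> keeps growing as alpha increases along the ray.
```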

Lemma 2

If \(G(X)\) satisfies the conditions of Theorem 5 then

$$\begin{aligned} G(X+Y)-G(X)=R^{(1)}(X,Y) = \left\{ \begin{array}{l} O(\Vert \nabla G(X)\Vert )\\ o(G(X)) \end{array} \right. \qquad \text{ as } \Vert X\Vert \rightarrow \infty , \end{aligned}$$
(32)

whenever \(Y\) is an arbitrary bounded vector. If \(G(X)\) satisfies the conditions of Theorem 6, then

$$\begin{aligned} \mathop {\mathbb {E}}\limits [G(X+Y)]- G(X)= \mathop {\mathbb {E}}\limits [R^{(1)}_G(X,Y)] =\left\{ \begin{array}{l} O(\Vert \nabla G(X)\Vert )\\ o(G(X)) \end{array} \right. \qquad \text{ as } \Vert X\Vert \rightarrow \infty , \end{aligned}$$
(33)

whenever \(Y\) is a random vector with finite polynomial moments \(\mathop {\mathbb {E}}\limits [\Vert Y\Vert ^h]<\infty \) \(\forall h\). Similarly, if \(G(X)\) satisfies the conditions of Theorem 5, then

$$\begin{aligned} \langle \nabla G(X\!+\!Y),Z \rangle \!-\! \langle \nabla G(X), Z \rangle \!=\! R^{(1)}_{\nabla G}(X,Y,Z)\!=\!\left\{ \begin{array}{l} O(\Vert H_G(X)\Vert )\\ o(\Vert \nabla G(X)\Vert ) \end{array} \right. \qquad \text{ as } \Vert X\Vert \rightarrow \infty , \end{aligned}$$
(34)

whenever \(Z\) and \(Y\) are two arbitrary bounded vectors. If \(G(X)\) satisfies the conditions of Theorem 6, then

$$\begin{aligned} \mathop {\mathbb {E}}\limits \langle \nabla G(X\!+\!Y),Z \rangle \!-\! \langle \nabla G(X), Z \rangle \!=\! \mathop {\mathbb {E}}\limits [R^{(1)}_{\nabla G}(X,Y,Z)]\!=\!\left\{ \begin{array}{l} O(\Vert H_G(X)\Vert )\\ o(\Vert \nabla G(X)\Vert ) \end{array} \right. \qquad \text{ as } \Vert X\Vert \rightarrow \infty , \end{aligned}$$
(35)

whenever \(Z\) is an arbitrary bounded vector, and \(Y\) is a random vector with finite polynomial moments (i.e., \(\mathop {\mathbb {E}}\limits [\Vert Y\Vert ^h]<\infty \) \(\forall h)\). Finally, if \(G(X)\) satisfies the conditions of Theorem 5, then

$$\begin{aligned} R^{(2)}(X,Y)= O( \Vert H_G(X)\Vert ) \qquad R^{(2)}(X,Y) =o( \Vert \nabla G(X)\Vert ) \end{aligned}$$
(36)

for any vector \(Y\). If \(G(X)\) satisfies the conditions of Theorem 6, then

$$\begin{aligned} \mathop {\mathbb {E}}\limits [R^{(2)}(X,Y)]= O( \Vert H_G(X)\Vert ) \qquad \mathop {\mathbb {E}}\limits [R^{(2)}(X,Y)]=o( \Vert \nabla G(X)\Vert ), \end{aligned}$$
(37)

whenever \(Y\) is a random vector with finite polynomial moments (i.e., \(\mathop {\mathbb {E}}\limits [\Vert Y\Vert ^h]<\infty \) \(\forall h\)).

Proof

Properties (32) and (34) are an immediate consequence of the sub-exponential behavior of \(G(X)\), i.e., of (19). Focusing now on (33), observe that, expanding \(G(X+Y)\) in a Taylor series around \(X\), we obtain

$$\begin{aligned} \mathop {\mathbb {E}}\limits [G(X + Y)] = G(X) + \mathop {\mathbb {E}}\limits [\sum _{i=1}^{h_0-1} \frac{1}{i!}Y^i(\partial ^i G)(X)] + \mathop {\mathbb {E}}\limits [R^{(h_0)}_G(X,Y)], \end{aligned}$$

where \(\mathop {\mathbb {E}}\limits [R^{(h_0)}_G(X,Y)]=\frac{1}{h_0!} \mathop {\mathbb {E}}\limits [Y^{h_0} (\partial ^{h_0} G)(X+\beta Y)] \le \frac{1}{h_0!} \mathop {\mathbb {E}}\limits [\Vert Y^{h_0}\Vert ] \sup _{Z\in \mathbb {R}^{+M}}\Vert (\partial ^{h_0} G)(Z)\Vert < \infty \), because by assumption \(\mathop {\mathbb {E}}\limits [Y^{h_0}]\) is bounded and \(\sup _{Z\in \mathbb {R}^{+M}} \Vert (\partial ^{h_0} G)(Z)\Vert < \infty \) (recalling (22)). Thus the last term is negligible with respect to both \(G(X)\) and \(\Vert \nabla G(X)\Vert \), since \(G(X) \rightarrow \infty \) (by hypothesis) and \(\Vert \nabla G(X)\Vert \rightarrow \infty \) (by Lemma 1) as \(\Vert X\Vert \rightarrow \infty \). Furthermore, \(\mathop {\mathbb {E}}\limits [\sum _{i=1}^{h_0-1} \frac{1}{i!}Y^i(\partial ^i G)(X)]= \sum _{i=1}^{h_0-1} \frac{1}{i!} \mathop {\mathbb {E}}\limits [Y^i](\partial ^i G)(X)= O(\Vert \nabla G(X) \Vert )= o( G(X))\), since \(\mathop {\mathbb {E}}\limits [Y^i]<\infty \) and \(\Vert (\partial ^i G)(X)\Vert = o(\Vert (\partial ^{i-1} G)(X)\Vert )\) for any \(1\le i < h_0\), from (23). Thus (33) is proved. Property (35) can be proved by repeating the same arguments for every component of \(\nabla G(X)\).

The proof of (36) and (37) is reported in the companion technical report [10]. \(\square \)

Lemma 3

If \(G(X)\) satisfies the conditions of Theorem 5 (and in particular condition (20)), then:

$$\begin{aligned} {\mathop {\max }_{\mathcal{D}_\mathcal{F}(X_t)}} \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D \rangle \;\;\ge \; {\mathop {\max }_{ D \in \mathcal{D}}} \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D \rangle + \left\{ \begin{array}{c} o(\Vert \nabla G(X_t)\Vert )\\ O(\Vert H_G(X_t)\Vert ) \end{array} \right. \end{aligned}$$
(38)

i.e., there is always an “almost” optimal feasible departure vector satisfying the condition \(D_t\le X_t\).

Proof

We denote by \(\tilde{D}= {{\mathrm{arg\,max}}}_{ D \in \mathcal{D}} \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D \rangle \), and by \(D^{*}= \min (\tilde{D}, X_t )\). Observe that \(\tilde{D}\) can always be assumed to be feasible, since by assumption every vertex of \(\mathcal{D}\) corresponds to a feasible vector. As a consequence, \(D^{*}\) is also feasible by construction. Note that

$$\begin{aligned} \langle \nabla G (X_t)(I-R)^\mathrm{T} \cdot D^{*} \rangle \; \;\le \;\; \langle \nabla G (X_t)(I-R)^\mathrm{T} \cdot D_t^{\nabla G\mathrm{max}} \rangle \end{aligned}$$
(39)

since by construction \(D^*\) is feasible, \(D^{*} \in \mathcal{D}\), and \(D^{*} \le X_t\), i.e., \(D^{*}\) belongs to the set \(\mathcal{D}_\mathcal{F}(X_t)\) over which \(D_t^{\nabla G\mathrm{max}}\) is optimal.

Furthermore, note that by construction \(X^{*}_t =X_t- D^{*}\) and \(\tilde{D}- D^{*}\) are orthogonal, since the non-null components of \(\tilde{D}- D^*=\max (\tilde{D}-X_t, 0)\) coincide with the null components of \(X^{*}_t= \max (X_t- \tilde{D}, 0)\). Thus, according to (20),

$$\begin{aligned} \langle \nabla G (X^{*}_t)(I-R)^\mathrm{T}\cdot \tilde{D}- D^*\rangle \;\le 0 \end{aligned}$$
(40)

Now, expanding \( \nabla G (X)\) in a Taylor series around the point \(X_t\), we obtain

$$\begin{aligned} \nabla G(X^{*}_t)= \nabla G(X_{t}) + R^{(1)}_{\nabla G}(X_t,-D^{*}). \end{aligned}$$

Since \(D^{*}\) is bounded in norm, from (34) we can conclude that the remainder \(R^{(1)}_{\nabla G}(X_t,-D^{*})\) is \(o(\Vert \nabla G (X_t)\Vert )\) and \(O(\Vert H_G (X_t)\Vert )\), and thus

$$\begin{aligned} \langle \nabla G (X_t)(I-R)^\mathrm{T} \cdot (\tilde{D}- D^{*})\rangle = \langle \nabla G (X^{*}_t)(I-R)^\mathrm{T} \cdot (\tilde{D}- D^{*}) \rangle +\left\{ \begin{array}{c} o(\Vert \nabla G(X_t)\Vert )\\ O(\Vert H_G(X_t)\Vert ) \end{array} \right. \end{aligned}$$
(41)

from which the assertion follows, recalling (39) and (40). Indeed,

$$\begin{aligned} \langle \nabla G (X_t)(I-R)^\mathrm{T} \cdot (\tilde{D}- D_t^{\nabla G\mathrm{max}}) \rangle \;\; \mathop {\le }\limits ^{(39)} \langle \nabla G (X_t)(I-R)^\mathrm{T} \cdot (\tilde{D}- D^{*}) \rangle \\ \mathop {=}\limits ^{(41)}\ \langle \nabla G (X^{*}_t)(I-R)^\mathrm{T} \cdot (\tilde{D}- D^{*})\rangle +\left\{ \begin{array}{c} o(\Vert \nabla G(X_t) \Vert )\\ O(\Vert H_G(X_t) \Vert ) \end{array} \right. \mathop {\le }\limits ^{(40)} \left\{ \begin{array}{c} o( \Vert \nabla G(X_t)\Vert )\\ O(\Vert H_G(X_t)\Vert ) \end{array} \right. \end{aligned}$$

\(\square \)

Similarly we can prove

Lemma 4

If \(G(X)\) satisfies the conditions of Theorem 5 (and in particular condition (20)), then, for any vector \(D \in \mathcal{D}\), setting \(D^{*}= \min (D, X_t )\), we have

$$\begin{aligned} \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D^* \rangle \;\ = \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D \rangle + \left\{ \begin{array}{c} o(\Vert \nabla G(X_t)\Vert )\\ O(\Vert H_G(X_t)\Vert ) \end{array} \right. \end{aligned}$$
(42)

Proof

Repeating the same arguments as in the proof of Lemma 3, the assertion follows as a direct consequence of the fact that the vector \(D- D^*\) is, by construction, orthogonal to \(X_t-D^*\), so that (20) applies. \(\square \)
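Lemmas 3 and 4 both hinge on the truncation \(D^{*}=\min (D, X_t)\), which makes a schedule feasible with respect to the current backlog while losing weight only on empty queues. A minimal sketch (the schedule set and weight vector below are illustrative choices of ours, not from the paper):

```python
import numpy as np

def best_feasible_departure(w, schedules, x):
    """Pick D~ maximizing <w, D> over the schedule set, then truncate
    it to D* = min(D~, X) so that no queue is over-served (D* <= X)."""
    d_tilde = max(schedules, key=lambda d: w @ d)
    d_star = np.minimum(d_tilde, x)
    return d_tilde, d_star

schedules = [np.array(s, dtype=float)
             for s in [(0, 0), (1, 0), (0, 1), (1, 1)]]
w = np.array([5.0, 2.0])          # stands in for nabla G(X_t)
x = np.array([3.0, 0.0])          # queue 1 is empty
d_tilde, d_star = best_feasible_departure(w, schedules, x)
# d_tilde = (1, 1), but queue 1 is empty, so d_star = (1, 0); the weight
# lost, <w, d_tilde - d_star> = 2, is carried only by empty queues.
```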

Proof of Theorem 5

First, observe that since arrivals are assumed to be i.i.d. and service constraints are assumed to be static, we have \(\mathcal{H}=\mathcal{X}\).

The idea of the proof is rather simple: \(G(X)\) can be interpreted as a Lyapunov function for the system. The stability of the network of queues follows from the fact that the drift conditions of Theorem 4 are verified.

First, we evaluate the drift of \(G(X_t)\) for large values of \(X_t\). By definition

$$\begin{aligned} \Delta \mathcal{L}=\mathop {\mathbb {E}}\limits [G(X_{t+1})- G(X_t)\mid X_t]=\mathop {\mathbb {E}}\limits \left[ G\left( X_{t}+A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \big | X_t \right] - G(X_t) \end{aligned}$$

and approximating \(G(X_t+A_t-D_t^{\nabla G\mathrm{max}}(I-R))\) with its first-order Taylor polynomial centered at \(X_t\), we get

$$\begin{aligned}&G\left( X_{t}+A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \nonumber \\&\quad =G(X_{t})+ \langle \nabla G(X_t)\cdot \left( A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \rangle \nonumber \\&\qquad + R^{(2)}_{G}\left( X_t, A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \end{aligned}$$
(43)

where the remainder \(R^{(2)}_{G}(X_t, A_t-D_t^{\nabla G\mathrm{max}}(I-R))\) satisfies

$$\begin{aligned} {\mathop {\lim }_{\Vert X_t\Vert \rightarrow \infty }} \frac{\Big |\Big |R^{(2)}_{G}\left( X_t,A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \Big |\Big |}{\Vert \nabla G(X_t)\Vert }=0 \end{aligned}$$

in light of (36) (Lemma 2), since both \(A_t\) and \(D_t^{\nabla G\mathrm{max}}\) are vectors of bounded norm. Thus

$$\begin{aligned}&\mathop {\mathbb {E}}\limits \left[ G\left( X_{t}+A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \big | X_t \right] \nonumber \\&\quad =G(X_{t})+ \langle \nabla G(X_t)\cdot \mathop {\mathbb {E}}\limits \left[ \left( A_t-D_t^{\nabla G\mathrm{max}}(I-R)\right) \right] \rangle + o(\Vert \nabla G(X_t)\Vert ) \end{aligned}$$
(44)

with

$$\begin{aligned}&\langle \nabla G(X_t)\cdot \mathop {\mathbb {E}}\limits [(A_t-D_t^{\nabla G\mathrm{max}}(I-R))]\rangle = \langle \nabla G(X_t)\cdot \left( \varLambda -D_t^{\nabla G\mathrm{max}}(I-R)\right) \rangle \\&\quad =\langle \nabla G(X_t)\cdot \varLambda \rangle - \langle \nabla G(X_t)\cdot D_t^{\nabla G\mathrm{max}}(I-R)\rangle . \end{aligned}$$

Since by assumption \(\varLambda (I-R)^{-1}\) lies in the interior of \(\mathcal{D}\), an \(\epsilon ^{\prime }>0\) can be found such that \(\varLambda (I-R)^{-1}+\epsilon ^{\prime } \tilde{D}\) also lies in \(\mathcal{D}\), with \(\tilde{D}={{\mathrm{arg\,max}}}_{\mathcal{D}} \langle \nabla G(X_t)\cdot D(I-R)\rangle \). We then obtain

$$\begin{aligned} \langle \nabla G(X_t)\cdot D_t^{\nabla G\mathrm{max}}(I-R)\rangle&= {\mathop {\max }_{\mathcal{D}_\mathcal{F}(X_t)}} \langle \nabla G(X_t)(I-R)^{T} \cdot D\rangle \nonumber \\&= {\mathop {\max }_{\mathcal{D}}} \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D \rangle + o(\Vert \nabla G(X_t)\Vert ) \end{aligned}$$
(45)

where the second equality holds by virtue of Lemma 3; now

$$\begin{aligned}&{\mathop {\max }_{\mathcal{D}}} \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot D \rangle \ge \; \; \langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot \left( \varLambda (I-R)^{-1}+ \epsilon ^{\prime } \tilde{D}\right) \rangle \nonumber \\&\quad \ge \langle \nabla G(X_t) \cdot \varLambda \rangle + \epsilon \Vert \nabla G(X_t)\Vert \end{aligned}$$
(46)

for a suitable \(\epsilon >0\). In particular, the last inequality is a consequence of (21) and the definition of \(\tilde{D}\), in light of which we can claim that \(\frac{\langle \nabla G(X_t)(I-R)^\mathrm{T} \cdot \tilde{D} \rangle }{\Vert \nabla G(X_t) \Vert }\ge \frac{\epsilon }{\epsilon ^{\prime }} \) for some \(\epsilon >0\). Now, combining (45) and (46), we obtain

$$\begin{aligned} \langle \nabla G(X_t)\cdot D_t^{\nabla G\mathrm{max}}(I-R)\rangle \ge \langle \nabla G(X_t) \cdot \varLambda \rangle + \epsilon \Vert \nabla G(X_t)\Vert + o(\Vert \nabla G(X_t)\Vert )\nonumber \\ \end{aligned}$$
(47)

In conclusion

$$\begin{aligned} \mathop {\mathbb {E}}\limits \left[ G(X_{t+1})- G(X_t)\mid X_t \right] \le - \epsilon \Vert \nabla G(X_t)\Vert + o(\Vert \nabla G(X_t)\Vert ) \end{aligned}$$

for large \(X_t\), and therefore (12) is satisfied, since for any \(\epsilon ^{\prime \prime }<\epsilon \), a sufficiently large \(b>0\) can be found such that

$$\begin{aligned} \mathop {\mathbb {E}}\limits \left[ G(X_{t+1})- G(X_t)\mid X_t \right] \le - \epsilon ^{\prime \prime } \Vert \nabla G(X_t)\Vert \end{aligned}$$

for \(\Vert X_t\Vert > b\).

Furthermore, for any \(X_t:\Vert X_t\Vert \le b\), \(G(X_{t+1})-G(X_t)= G(X_{t}+A_t-D_t^{\nabla G\mathrm{max}}(I-R))-G(X_t)\) is bounded. Indeed, once again, the vector \(A_t-D_t^{\nabla G\mathrm{max}}(I-R)\) is bounded in norm; let \(\zeta \) be a bound for \(\Vert A_t-D_t^{\nabla G\mathrm{max}}(I-R)\Vert \). Then \(\Vert X_{t+1}\Vert =\Vert X_t + A_t-D_t^{\nabla G\mathrm{max}}(I-R) \Vert \le \Vert X_t \Vert + \Vert A_t-D_t^{\nabla G\mathrm{max}}(I-R) \Vert \le b+ \zeta \).

Thus, since \(G(X)\) is continuous, and hence bounded over compact domains both from above and below, \( G(X_{t+1})-G(X_t)\le \max _{X: \Vert X\Vert \le b+\zeta }G(X) - \min _{X: \Vert X\Vert \le b}G(X)\). The \(\Vert \nabla G(X)\Vert \)-stability of the system of queues immediately follows, since \(\lim _{\Vert X\Vert \rightarrow \infty } \Vert \nabla G(X)\Vert =\infty \) (as a result of Lemma 1). \(\square \)
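The drift argument of Theorem 5 can be observed empirically on a toy single-hop instance. The sketch below uses our own parameters, not ones from the paper: a \(\nabla G\)-max policy with quadratic \(G\) (i.e., plain MaxWeight) on two mutually conflicting queues, with the arrival rate vector strictly inside the capacity region; the backlog stays bounded over a long run.

```python
import numpy as np

rng = np.random.default_rng(0)
schedules = [np.array([1, 0]), np.array([0, 1])]  # one queue served per slot
lam = np.array([0.4, 0.4])   # arrival rates; interior since 0.4 + 0.4 < 1
x = np.zeros(2)
peak = 0.0
for _ in range(100_000):
    a = (rng.random(2) < lam).astype(float)       # Bernoulli arrivals
    grad = x                                      # grad G for G(X) = ||X||^2 / 2
    d = max(schedules, key=lambda s: grad @ s)    # nabla-G-max schedule
    x = np.maximum(x + a - d, 0.0)
    peak = max(peak, x.sum())
# Under the nabla-G-max policy the total backlog remains bounded.
```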

Proof of Theorem 6

The generalization to the case in which \(S_t\) is a non-trivial Markov chain can be carried out by sampling the process \(Y_t\) at the instants \(\{t_k \}\) at which \(S_{t_k}=S_0\) for some specific state \(S_0\). From the theory of DTMCs (recalling that \(S_t\) has a finite number of states) it immediately follows that \(\{t_k\}\) forms a sequence of non-defective regeneration times for the system. Thus, applying Corollary 1, we can prove the stability of the system of queues. To simplify the notation, we assume traffic to be single-hop throughout our proof; however, we wish to emphasize that the proof for the more general case goes exactly along the same lines and can easily be recovered by replacing the departure vector at time \(t\), \(D_t^{\nabla G\mathrm{max}}\), with \(D_t^{\nabla G\mathrm{max}}(I-R)\) in the following derivation.

Again we select \(G(X)\) as the Lyapunov function. Approximating \(G(X)\) with its second-order Taylor expansion, we get

$$\begin{aligned}&\mathop {\mathbb {E}}\limits [G(X_{t_{k+1}})\mid Y_{t_k}] = G(X_{t_{k}})+ \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1} \Big ( A_t-D_t^{\nabla G\mathrm{max}}\Big )\right] \rangle \nonumber \\&\quad +\mathop {\mathbb {E}}\limits \left[ R^{(2)}_G\left( X_{t_k}, \sum _{t=t_{k}}^{t_{k+1}-1}\Big ( A_t-D_t^{\nabla G\mathrm{max}}\Big )\right) \right] \end{aligned}$$
(48)

Now, since all polynomial moments of the vector \(\sum _{t=t_{k}}^{t_{k+1}-1}( A_t-D_t^{\nabla G\mathrm{max}})\) are, by construction, finite (because every vector \( A_t-D_t^{\nabla G\mathrm{max}} \) is bounded in norm and the polynomial moments of \(z_k=t_{k+1}-t_{k}\) are finite), from (37) we obtain \(\mathop {\mathbb {E}}\limits \left[ R^{(2)}_G\left( X_{t_k}, \sum _{t=t_{k}}^{t_{k+1}-1}( A_t-D_t^{\nabla G\mathrm{max}})\right) \right] = o(\Vert \nabla G(X_{t_k})\Vert )\), i.e.,

$$\begin{aligned}&\mathop {\mathbb {E}}\limits [ G(X_{t_{k+1}})\mid Y_{t_k}] =G(X_{t_{k}}) + \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\Big ( A_t-D_t^{\nabla G\mathrm{max}}\Big )\right] \rangle \nonumber \\&\quad \qquad \qquad \qquad \qquad \qquad +o(\Vert \nabla G(X_{t_k})\Vert ) \end{aligned}$$
(49)

Furthermore,

$$\begin{aligned}&\langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\Big ( A_t-D_t^{\nabla G\mathrm{max}}\Big )\right] \rangle \nonumber \\&= \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\Big ( A_t-D_{t_k}^{\nabla G\mathrm{max}}+ D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big )\right] \rangle \nonumber \\&= \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\Big ( A_t-D_{t_k}^{\nabla G\mathrm{max}}\Big )\right] \rangle \nonumber \\&\quad + \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\Big ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big )\right] \rangle \end{aligned}$$
(50)

with

$$\begin{aligned}&\langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t_{k}}^{t_{k+1}-1}( A_t-D_{t_k}^{\nabla G\mathrm{max}})\right] \rangle = \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits [z_k](\varLambda -D_{t_k}^{\nabla G\mathrm{max}})\rangle \nonumber \\&\quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \le - \epsilon \mathop {\mathbb {E}}\limits [z_k] \Vert \nabla G(X_{t_k})\Vert \end{aligned}$$
(51)

where the first equality follows from classical renewal-reward arguments, while the subsequent inequality is obtained with arguments similar to those in the proof of Theorem 5. In particular, observe that \( \langle \nabla G(X_t)\cdot (\varLambda -D_t^{\nabla G\mathrm{max}})\rangle = \langle \nabla G(X_t)\cdot \varLambda \rangle - \langle \nabla G(X_t)\cdot D_t^{\nabla G\mathrm{max}}\rangle \), and since by assumption \(\varLambda \) lies in the interior of \(\mathcal{D}\), an \(\epsilon ^{\prime }>0\) can be found such that \(\varLambda +\epsilon ^{\prime } \tilde{D}\) also lies in \(\mathcal{D}\), with \(\tilde{D}= {{\mathrm{arg\,max}}}_{\mathcal{D}} \langle \nabla G(X_{t_k})\cdot D\rangle \). Now, recalling Lemma 3, we have \( \langle \nabla G(X_t)\cdot D_t^{\nabla G\mathrm{max}}\rangle = \max _{\mathcal{D}_\mathcal{F}(X_t)} \langle \nabla G(X_t) \cdot D\rangle = \max _{ D \in \mathcal{D}} \langle \nabla G(X_t) \cdot D \rangle + o(\Vert \nabla G(X_t)\Vert )\;\; \ge \; \; \langle \nabla G(X_t) \cdot (\varLambda + \epsilon ^{\prime } \tilde{D}) \rangle + o(\Vert \nabla G(X_t)\Vert ) \ge \langle \nabla G(X_t) \cdot \varLambda \rangle + \epsilon \Vert \nabla G(X_t)\Vert \) for a suitable \(\epsilon >0\); the last inequality follows from (21). For the second term of (50), we have

$$\begin{aligned}&\left\langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1} \Big ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big )\right] \right\rangle \nonumber \\&\quad = \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\langle \nabla G(X_{t_k}) \cdot \Big ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big ) \rangle \right] \nonumber \\&\quad = \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\langle \Big ( \nabla G(X_{t_k})- \nabla G(X_{t}) + \nabla G(X_{t})\Big ) \cdot \Big ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big ) \rangle \right] \nonumber \\&\quad = \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1} \langle \Big ( \nabla G(X_{t_k})- \nabla G(X_{t})\Big ) \cdot \Big ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big )\rangle \right] \nonumber \\&\qquad + \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1}\langle \nabla G(X_{t}) \cdot \Big ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}\Big ) \rangle \right] . \end{aligned}$$
(52)

Now \( \mathop {\mathbb {E}}\limits \big [\langle ( \nabla G(X_{t_k})- \nabla G(X_{t})) \cdot ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}})\rangle \big ] =o( \Vert \nabla G(X_{t_k})\Vert ) \) as an immediate consequence of (35); in this regard, we recall that by hypothesis the polynomial moments \(\mathop {\mathbb {E}}\limits [\Vert X_{t_k}-X_{t}\Vert ^h]\) are finite for any \(h\); this, again, because \(\{t_k\}\) is a non-defective sequence of stopping times and the arrival vector is bounded. Finally, observe that the term \(\langle \nabla G(X_{t}) \cdot ( D_{t_k}^{\nabla G\mathrm{max}}- D_t^{\nabla G\mathrm{max}}) \rangle = \langle \nabla G(X_{t})\cdot D_{t_k}^{\nabla G\mathrm{max}} \rangle - \max _{\mathcal{D}} \langle \nabla G(X_{t})\cdot D \rangle + o(\Vert \nabla G(X_{t})\Vert )\) in light of Lemma 3, with \(\langle \nabla G(X_{t})\cdot D_{t_k}^{\nabla G\mathrm{max}} \rangle -\max _{\mathcal{D}} \langle \nabla G(X_{t})\cdot D \rangle \le 0\).

In conclusion, recalling (35), we have

$$\begin{aligned} \mathop {\mathbb {E}}\limits [G(X_{t_{k+1}})\mid X_{t_k}]-G(X_{t_{k}})\le - \epsilon \mathop {\mathbb {E}}\limits [z_k] \Vert \nabla G(X_{t_k})\Vert + o( \Vert \nabla G(X_{t_k})\Vert ). \end{aligned}$$

Therefore, (14) is satisfied, since for any \(\epsilon ^{\prime \prime }<\epsilon \mathop {\mathbb {E}}\limits [z_k]\), a \(b>0\) can be found such that

$$\begin{aligned} \mathop {\mathbb {E}}\limits \left[ G(X_{t_{k+1}})- G(X_{t_k})\mid X_{t_k} \right] \le - \epsilon ^{\prime \prime } \Vert \nabla G(X_{t_k})\Vert \end{aligned}$$

for \(\Vert X_{t_k}\Vert > b\).

Finally, to show that (13) is satisfied too, observe that for any \(Y_{t_k}: \Vert X_{t_k} \Vert \le b\):

$$\begin{aligned}&\mathop {\mathbb {E}}\limits \Big [G(X_{t_{k+1}})\Big ]= \mathop {\mathbb {E}}\limits \left[ G\Big (X_{t_k}+\sum _{t=t_{k}}^{t_{k+1}-1}(A_t-D_t^{\nabla G\mathrm{max}})\Big )\right] \\&\quad \mathop {=}\limits ^{(49)} G(X_{t_{k}})+ \langle \nabla G(X_{t_k})\cdot \mathop {\mathbb {E}}\limits \left[ \sum _{t=t_{k}}^{t_{k+1}-1} \Big ( A_t-D_t^{\nabla G\mathrm{max}}\Big ) \right] \rangle \\&\qquad + \sum _{i=2}^{h_0-1} \frac{1}{i!} \mathop {\mathbb {E}}\limits \left[ \left( \sum _{t=t_{k}}^{t_{k+1}-1} \Big (A_t-D_t^{\nabla G\mathrm{max}}\Big )\right) ^i \right] (\partial ^i G)(X_{t_k})\\&\qquad \quad \!+\! \frac{1}{h_0!} \mathop {\mathbb {E}}\limits \left[ \left( \sum _{t=t_{k}}^{t=t_{k+1}-1} \Big (A_t-D_t^{\nabla G\mathrm{max}}\Big )\right) ^{h_0} (\partial ^{h_0} G)\left( X_{t_k}\!+\!\ \alpha \sum _{t=t_{k}}^{t_{k+1}-1} \Big (A_t\!-\!D_t^{\nabla G\mathrm{max}}\Big )\right) \right] \end{aligned}$$

can easily be shown to be bounded by \(G(X_{t_k})+v_0\) for an appropriate \(v_0>0\), since (i) \( \mathop {\mathbb {E}}\limits \left[ \left( \sum _{t_{k}}^{t_{k+1}-1} (A_t-D_t^{\nabla G\mathrm{max}})\right) ^i\right] \) is bounded for every \(i\), as before; (ii) \(G(X_{t_k})\) and its derivatives \((\partial ^i G)(X_{t_k})\) for \(i\le h_0\) are by assumption bounded over compact domains (in particular, over the domain \(X: \Vert X\Vert \le b\)), because \(G(X)\in C^{h_0}[ \mathbb {R}^M\rightarrow \mathbb {R}]\); (iii) \( (\partial ^{h_0} G)(X_{t_k}+\alpha \sum _{t_{k}}^{t_{k+1}-1} (A_t-D_t^{\nabla G\mathrm{max}}) )\) is bounded as before in light of (22). The \(\Vert \nabla G(X)\Vert \)-stability of the system of queues immediately follows from Corollary 1, since \(\lim _{\Vert X\Vert \rightarrow \infty } \Vert \nabla G(X)\Vert =\infty \) (as a result of Lemma 1). \(\square \)
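
For concreteness (the constants \(c_i\) and \(g_i\) below are introduced here purely for illustration and do not appear in the paper): if \(c_i\) denotes a bound on the \(i\)-th moment \(\mathop {\mathbb {E}}\limits \big [\big \Vert \sum _{t_{k}}^{t_{k+1}-1} (A_t-D_t^{\nabla G\mathrm{max}})\big \Vert ^i\big ]\), \(g_1=\sup _{\Vert X\Vert \le b}\Vert \nabla G(X)\Vert \), \(g_i=\sup _{\Vert X\Vert \le b}|(\partial ^i G)(X)|\) for \(2\le i<h_0\), and \(g_{h_0}\) bounds the \(h_0\)-th derivative along the bounded displacement (via (22)), then one may take

$$\begin{aligned} v_0 = \sum _{i=1}^{h_0} \frac{c_i\, g_i}{i!}. \end{aligned}$$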

Proof of Corollary 2

Consider the Lyapunov function \(\mathcal{L}(X)= \frac{1}{h+1}G(X)^{h+1}\), and denote \(Z_t=A_t-D_t(I-R)\); then

$$\begin{aligned} G(X_{t+1})= G(X_{t}+Z_t)= G(X_t)+ \langle \nabla G(X_t) \cdot Z_t \rangle + R^{(2)}_G(X_t, Z_t). \end{aligned}$$

Now, recalling (32) and (36), since by construction \(Z_t\) is bounded in norm, we can claim that \(\langle \nabla G(X_t) \cdot Z_t \rangle =o(G(X_t)) \) and \(R^{(2)}_G(X_t, Z_t)=o(\Vert \nabla G(X_t)\Vert )\) as \(\Vert X_t\Vert \rightarrow \infty \). Thus

$$\begin{aligned}&\mathop {\mathbb {E}}\limits \left[ \mathcal{L}(X_{t+1})\mid X_t\right] = \frac{1}{h+1} \mathop {\mathbb {E}}\limits \big [\big (G(X_t+Z_t)\big )^{h+1}\mid X_t\big ]\\&\quad = \frac{1}{h+1} \mathop {\mathbb {E}}\limits \left[ \Big ( G(X_t)+ \langle \nabla G(X_t) \cdot Z_t \rangle + o(\Vert \nabla G(X_t) \Vert ) \Big )^{h+1} \,\Big |\, X_t \right] \\&\quad = \frac{1}{h+1} \left[ \big (G(X_t)\big )^{h+1}+ (h+1)\langle \nabla G(X_t) \cdot \mathop {\mathbb {E}}\limits [Z_t ]\rangle \big ( G(X_t)\big )^h + o\Big ( \Vert \nabla G(X_t) \Vert \big (G(X_t)\big )^h \Big ) \right] . \end{aligned}$$

Now, considering \(X_t\) sufficiently large that \(G(X_t)\) is positive (recall that \(G(X)\rightarrow \infty \) as \(\Vert X\Vert \rightarrow \infty \), and thus it must be positive outside some compact set), from (47) we have

$$\begin{aligned} (G(X_t))^h \langle \nabla G(X_t)\cdot \varLambda _t-D_t^{\nabla G\mathrm{max}}(I-R)\rangle \;\; \le -\epsilon (G(X_t))^h\Vert \nabla G(X_t) \Vert \end{aligned}$$

for some \(\epsilon >0\).
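
Combining the last two displays (a restatement using the same symbols as above, not verbatim from the paper), the conditional drift of \(\mathcal{L}\) becomes, for \(\Vert X_t\Vert \) large enough,

$$\begin{aligned} \mathop {\mathbb {E}}\limits \left[ \mathcal{L}(X_{t+1})-\mathcal{L}(X_t)\mid X_t\right] \le -\epsilon \big (G(X_t)\big )^h \Vert \nabla G(X_t) \Vert + o\Big (\big (G(X_t)\big )^h \Vert \nabla G(X_t) \Vert \Big ) \le -\frac{\epsilon }{2} \big (G(X_t)\big )^h \Vert \nabla G(X_t) \Vert . \end{aligned}$$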

The \(\Vert X\Vert ^h\)-stability immediately follows, observing that (i) by construction \(\lim _{ \Vert X\Vert \rightarrow \infty } \frac{(G(X))^h \Vert \nabla G(X) \Vert }{\Vert X\Vert ^h}=\infty \); (ii) for any \(X_t\) with \(\Vert X_t\Vert \le b\), \(\frac{1}{h+1}\Big [\big (G(X_t+Z_t)\big )^{h+1}-\big (G(X_t)\big )^{h+1}\Big ]\) can be bounded by an appropriate constant \(v_0\), because \(Z_t\) is bounded and \(G(\cdot )\) is bounded (from above and below) over compact sets.

The extension to the more general case can be carried out by observing that, by Corollary 3, \(\mathcal{L}(X)=\frac{1}{h+1}G(X)^{h+1}\) is a strong potential provided that \(G(X)\) is a strong potential. \(\square \)
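
Although the paper is purely analytical, the drift mechanism behind these stability results can be illustrated numerically on a toy instance. The two-queue system, the arrival rates, and the function below are our own illustrative assumptions, not taken from the paper: two queues subject to a mutual-exclusion service constraint (at most one packet served per slot), scheduled longest-queue-first, which is the \(\nabla G\)-max policy for the quadratic potential \(G(X)=\frac{1}{2}\Vert X\Vert ^2\).

```python
import random

def simulate_longest_queue_first(lam=(0.4, 0.4), slots=100_000, seed=1):
    """Toy 2-queue system with a mutual-exclusion constraint:
    at most one packet is served per slot. Serving the longer
    queue is the gradient-max policy for G(X) = ||X||^2 / 2,
    whose gradient is X itself.

    Returns the time-averaged total backlog; under the stability
    condition lam[0] + lam[1] < 1 it should remain small.
    """
    rng = random.Random(seed)
    x = [0, 0]
    cumulative = 0
    for _ in range(slots):
        # Bernoulli arrivals to each queue
        for i in (0, 1):
            if rng.random() < lam[i]:
                x[i] += 1
        # serve one packet from the longer queue (ties -> queue 0)
        j = 0 if x[0] >= x[1] else 1
        if x[j] > 0:
            x[j] -= 1
        cumulative += x[0] + x[1]
    return cumulative / slots

avg_backlog = simulate_longest_queue_first()
```

Since the total load here is \(0.8<1\), the time-averaged backlog stays bounded; raising the arrival rates so that their sum exceeds 1 makes the backlog grow without bound, in line with the throughput-optimality results above.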

Cite this article

Leonardi, E. Throughput optimal scheduling policies in networks of constrained queues. Queueing Syst 78, 197–223 (2014). https://doi.org/10.1007/s11134-014-9407-9
