Abstract
We investigate the impact of heterogeneity in the amount of incoming traffic routed by dispatchers in a non-cooperative load balancing game. For a fixed amount of total incoming traffic, we show that for a broad class of cost functions the worst-case social cost occurs when each dispatcher routes the same amount of traffic, that is, the game is symmetric. Using this result, we give lower bounds on the Price of Anarchy for (i) cost functions that are polynomial on server loads; and (ii) cost functions representing the mean delay of the shortest remaining processing time service discipline.
Notes
Provided that the Nash mapping, which associates to a vector \({\varvec{\lambda }}\) the strategy profile \({\mathcal {N}}({\varvec{\lambda }}) \in {\mathcal {X}}\), is continuous. This can be proven by a straightforward generalization of Theorem 3 in Ayesta et al. (2011) to a general cost function \(\phi \) [see Brun and Prabhu (2012) for details].
References
Altman, E., Başar, T., Jiménez, T., & Shimkin, N. (2002). Competitive routing in networks with polynomial costs. IEEE Transactions on Automatic Control, 47(1), 92–96.
Altman, E., Boulogne, T., El-Azouzi, R., Jiménez, T., & Wynter, L. (2006). A survey on networking games in telecommunications. Computers & Operations Research, 33(2), 286–311.
Anselmi, J., & Gaujal, B. (2010). Optimal routing in parallel, non-observable queues and the price of anarchy revisited. In 22nd International teletraffic congress (ITC), Amsterdam.
Ayesta, U., Brun, O., & Prabhu, B. J. (2011). Price of anarchy in non-cooperative load balancing games. Performance Evaluation, 68, 1312–1332. Extended version available as LAAS Research Report: http://www.laas.fr/brun/loadbalancing.pdf
Brun, O., & Prabhu, B. (2012). Worst-case analysis of non-cooperative load balancing. Tech. rep., http://hal.archives-ouvertes.fr/hal-00747244
Burnetas, A. (2013). Customer equilibrium and optimal strategies in Markovian queues in series. Annals of Operations Research, 208(1), 515–529.
Chen, H. L., Marden, J., & Wierman, A. (2009). The effect of local scheduling in load balancing designs. In Proceedings of IEEE Infocom.
Cominetti, R., Correa, J. R., & Stier-Moses, N. E. (2009). The impact of oligopolistic competition in networks. Operations Research, 57(6), 1421–1437.
Han, Z., Niyato, D., Saad, W., Başar, T., & Hjørungnes, A. (2011). Game theory in wireless and communication networks. Cambridge: Cambridge University Press.
Harchol-Balter, M. (2013). Performance modeling and design of computer systems: Queueing theory in action. Cambridge: Cambridge University Press.
Haviv, M., & Roughgarden, T. (2007). The price of anarchy in an exponential multi-server. Operations Research Letters, 35, 421–426.
Korilis, Y., Lazar, A., & Orda, A. (1997). Capacity allocation under noncooperative routing. IEEE Transactions on Automatic Control, 42(3), 309–325.
Koutsoupias, E., & Papadimitriou, C. H. (1999). Worst-case equilibria. In STACS 1999.
Lu, Y., Xie, Q., Kliot, G., Geller, A., Larus, J. R., & Greenberg, A. (2011). Join-idle-queue: A novel load balancing algorithm for dynamically scalable web services. Performance Evaluation, 68(11), 1056–1071.
Orda, A., Rom, R., & Shimkin, N. (1993). Competitive routing in multi-user communication networks. IEEE/ACM Transactions on Networking, 1, 510–521.
Osborne, M., & Rubinstein, A. (1994). A course in game theory. Cambridge: MIT Press.
Qiu, L., Yang, Y. R., Zhang, Y., & Shenker, S. (2003). On selfish routing in internet-like environments. In Proceedings of the 2003 conference on applications, technologies, architectures, and protocols for computer communications. ACM, New York, NY, USA, SIGCOMM ’03, pp. 151–162.
Roughgarden, T. (2003). The price of anarchy is independent of the network topology. Journal of Computer and System Sciences, 67(2), 341–364.
Zheng, Q., Tham, C. K., & Veeravalli, B. (2008). Dynamic load balancing and pricing in grid computing with communication delay. Journal of Grid Computing, 6(3), 239–253.
Acknowledgments
The authors would like to thank U. Ayesta for useful discussions.
Additional information
A preliminary version of this paper was presented during the workshop AlgoGT 2010, Bordeaux, France.
Appendices
Proof of results in Sect. 3.1
1.1 Proof of Proposition 1
We first prove a series of technical lemmata before proving Proposition 1.
Lemma 7
\({\mathcal {S}}_i\cap {\mathcal {S}}_k\ne \emptyset \).
Proof
Assume the contrary, i.e., if \(m\in {\mathcal {S}}_i\) then \(m\notin {\mathcal {S}}_k\), and if \(n\in {\mathcal {S}}_k\) then \(n\notin {\mathcal {S}}_i\). For one such pair \(m\) and \(n\), from (4), we can conclude that \(\mu _i > \psi ^\prime _m(0,y_m) \ge \mu _k\) and \(\mu _k > \psi ^\prime _n(0,y_n) \ge \mu _i\), which is a contradiction. \(\square \)
Since \({\mathcal {S}}_i\cap {\mathcal {S}}_k\ne \emptyset \), from (2), we have
Lemma 8
\(\mu _i < \mu _k \iff \exists j\in {\mathcal {S}}_k : x_{i,j} < x_{k,j}\).
Proof
Straight part: From Lemma 7, \({\mathcal {S}}_i\cap {\mathcal {S}}_k\ne \emptyset \). If \(\mu _i < \mu _k\), then, from (19), \(\exists j\in {\mathcal {S}}_k : \psi ^\prime _j(x_{i,j},y_j) < \psi ^\prime _j(x_{k,j},y_j)\) which implies that \(\exists j\in {\mathcal {S}}_k : x_{i,j} < x_{k,j}\).
Converse part: Assume \(\exists j\in {\mathcal {S}}_k : x_{i,j} < x_{k,j}\). Either \(j\in {\mathcal {S}}_i\) or \(j\notin {\mathcal {S}}_i\). If \(j\in {\mathcal {S}}_i\) then, from (19), \(\psi ^\prime _j(x_{i,j},y_j) < \psi ^\prime _j(x_{k,j},y_j)\) implies \(\mu _i < \mu _k\). If \(j\notin {\mathcal {S}}_i\), then, from (4), \(\mu _i\le \psi ^\prime _j(0,y_j) < \psi ^\prime _j(x_{k,j},y_j) < \mu _k\). \(\square \)
Lemma 9
If \(\mu _i < \mu _k\), then \({\mathcal {S}}_i\subset {\mathcal {S}}_k\).
Proof
If \(j\in {\mathcal {S}}_i\), then, from (4), \(\psi ^\prime _j(0,y_j) < \mu _i\). If \(\mu _i < \mu _k\) then \(\psi ^\prime _j(0,y_j) < \mu _k\). Hence, from (4) we can conclude that \(j\in {\mathcal {S}}_k\). Therefore, \({\mathcal {S}}_i\subset {\mathcal {S}}_k\). \(\square \)
Lemma 10
\(\exists m\in {\mathcal {S}}_k : x_{i,m} < x_{k,m} \iff x_{i,j} < x_{k,j},\quad \forall j\in {\mathcal {S}}_k\).
Proof
Straight part: If \(\exists m\in {\mathcal {S}}_k : x_{i,m} < x_{k,m}\), then, from Lemmata 8 and 9, we have \(\mu _i < \mu _k\) and \({\mathcal {S}}_i\subset {\mathcal {S}}_k\). For \(j\in {\mathcal {S}}_i\), from (19), we have \(\psi ^\prime _j(x_{i,j},y_j) < \psi ^\prime _j(x_{k,j},y_j)\), which implies \(x_{i,j} < x_{k,j}\). For \(j\in {\mathcal {S}}_k\setminus {\mathcal {S}}_i\), \(x_{i,j} = 0\) and \(0 < x_{k,j}\). Hence, \(x_{i,j} < x_{k,j}\), \(\forall j\in {\mathcal {S}}_k\).
Converse part: This direction is immediate from the statement. \(\square \)
We are now in position to prove Proposition 1.
Proof (Proof of Proposition 1)
The equivalences 1 \(\iff \) 2 \(\iff \) 3 follow from Lemmata 8 and 10. Now, we show 3 \(\iff \) 4.
Straight part: If \(x_{i,j} < x_{k,j}, \;\forall j\in {\mathcal {S}}_k\), then, from the fact that 3 \(\iff \) 1 and Lemma 9, we can conclude that \(\lambda _i = \sum _{j\in {\mathcal {S}}_i}x_{i,j} = \sum _{j\in {\mathcal {S}}_k} x_{i,j} < \sum _{j\in {\mathcal {S}}_k}x_{k,j} = \lambda _k\).
Converse part: Since \(\lambda _k = \sum _{j\in {\mathcal {S}}_k}x_{k,j}\), if \(\lambda _i < \lambda _k\), then \(\exists j\in {\mathcal {S}}_k : x_{i,j} < x_{k,j}\). Since 2 \(\iff \) 3, if \(\lambda _i < \lambda _k\), then \(x_{i,j} < x_{k,j}, \;\forall j\in {\mathcal {S}}_k\). \(\square \)
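The ordering established in Proposition 1 can be observed numerically. The sketch below is illustrative only: it computes a Nash equilibrium by best-response dynamics for three dispatchers sharing two parallel servers with a linear marginal cost \(u_j(y_j + x_{i,j})\); the weights \(u_j\) and demands \(\lambda _i\) are hypothetical, and the linear cost stands in for the general \(\psi \) of the paper. At the computed equilibrium, a dispatcher with a larger demand sends strictly more traffic to every server in use.

```python
# Illustrative Nash equilibrium for atomic-splittable load balancing on
# parallel servers with linear marginal cost u_j * (y_j + x_ij).
# All parameters below are hypothetical.

def best_response(lam_i, u, y_others):
    """Water-filling: pick x_j >= 0 with sum(x) = lam_i, equalizing the
    marginal u_j * (y_others[j] + 2*x_j) across the servers in use."""
    def alloc(mu):
        return [max(0.0, 0.5 * (mu / uj - yj)) for uj, yj in zip(u, y_others)]
    lo, hi = 0.0, 1e9
    for _ in range(200):  # bisection on the common marginal value mu
        mid = 0.5 * (lo + hi)
        if sum(alloc(mid)) < lam_i:
            lo = mid
        else:
            hi = mid
    return alloc(0.5 * (lo + hi))

def nash(lams, u, sweeps=500):
    servers = len(u)
    x = [[lam / servers] * servers for lam in lams]  # uniform start
    for _ in range(sweeps):  # sequential (Gauss-Seidel) best responses
        for i, lam in enumerate(lams):
            y_others = [sum(x[k][j] for k in range(len(lams)) if k != i)
                        for j in range(servers)]
            x[i] = best_response(lam, u, y_others)
    return x

u = [1.0, 2.0]          # hypothetical server weights u_j
lams = [1.0, 2.0, 3.0]  # dispatcher demands lambda_i
x = nash(lams, u)
# Proposition 1's ordering: lambda_i < lambda_k forces x_ij < x_kj on
# every server that dispatcher k uses.
for j in range(len(u)):
    if x[2][j] > 1e-6:
        assert x[0][j] < x[1][j] < x[2][j]
```

With these numbers the equilibrium splits are \(x_i = (2\lambda _i/3, \lambda _i/3)\), so the ordering is strict on both servers; the sweeps converge here because this linear game admits an exact potential.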
1.2 Proof of Proposition 2
As before, we first prove a series of technical lemmata.
Lemma 11
Proof
See the proof of Lemma 2 in Brun and Prabhu (2012). \(\square \)
Lemma 12
\({\mathcal {C}}_m\cap {\mathcal {C}}_n\ne \emptyset \).
Proof
Assume the contrary, i.e., if \(i\in {\mathcal {C}}_m\), then \(i\notin {\mathcal {C}}_n\), and if \(k\in {\mathcal {C}}_n\), then \(k\notin {\mathcal {C}}_m\). For one such pair \(i\) and \(k\), from (4), we can conclude that \(\psi ^\prime _m(0,y_m)<\mu _i \le \psi ^\prime _n(0,y_n)\) and \(\psi ^\prime _n(0,y_n) < \mu _k \le \psi ^\prime _m(0,y_m)\), which is a contradiction. \(\square \)
Lemma 13
If \(\psi ^\prime _n(0,y_n) \le \psi ^\prime _m(0,y_m)\), then \({\mathcal {C}}_m\subseteq {\mathcal {C}}_n\).
Proof
If \(i\in {\mathcal {C}}_m\), then, from (4), \(\mu _i > \psi ^\prime _m(0,y_m)\). If \(\psi ^\prime _n(0,y_n) \le \psi ^\prime _m(0,y_m)\), then \(\mu _i>\psi ^\prime _n(0,y_n)\). Hence, from (4) we can conclude that \(i\in {\mathcal {C}}_n\). Therefore, \({\mathcal {C}}_m\subseteq {\mathcal {C}}_n\). \(\square \)
Since \({\mathcal {C}}_m\cap {\mathcal {C}}_n\ne \emptyset \), from (2), we have
Lemma 14
We have \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\) if and only if
Proof
Straight part: From Lemma 12, \({\mathcal {C}}_m\cap {\mathcal {C}}_n\ne \emptyset \). If \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\), then, from (20), \(\exists i \in {\mathcal {C}}_n: \psi ^\prime _m(x_{i,m},y_m) - \psi ^\prime _m(0,y_m) < \psi ^\prime _n(x_{i,n},y_n)-\psi ^\prime _n(0,y_n)\), i.e. \(\frac{u_m \, x_{i,m}}{r_m} \, \phi ^\prime (\rho _m) < \frac{u_n \, x_{i,n}}{r_n} \, \phi ^\prime (\rho _n)\).
Converse part: Assume \(\exists i\in {\mathcal {C}}_n : \frac{u_m \, x_{i,m}}{r_m} \, \phi ^\prime (\rho _m) < \frac{u_n \, x_{i,n}}{r_n} \, \phi ^\prime (\rho _n)\). Then \(\psi ^\prime _m(x_{i,m},y_m) - \psi ^\prime _m(0,y_m) < \psi ^\prime _n(x_{i,n},y_n)-\psi ^\prime _n(0,y_n)\). Either \(i\in {\mathcal {C}}_m\) or \(i\notin {\mathcal {C}}_m\). If \(i\in {\mathcal {C}}_m\), then, from (20), \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\). If \(i\notin {\mathcal {C}}_m\), then, from (3), \(\psi ^\prime _m(0,y_m) \ge \mu _i = \psi ^\prime _n(x_{i,n},y_n) > \psi ^\prime _n(0,y_n)\). Hence, \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\). \(\square \)
Lemma 15
We have \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\) if and only if
Proof
Straight part: If \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\), then from Lemma 13, \({\mathcal {C}}_m\subset {\mathcal {C}}_n\). For \(i\in {\mathcal {C}}_m\), from (20), \(\psi ^\prime _m(x_{i,m},y_m) -\psi ^\prime _m(0,y_m) < \psi ^\prime _n(x_{i,n},y_n)-\psi ^\prime _n(0,y_n)\), i.e., \(\frac{u_m \, x_{i,m}}{r_m} \, \phi ^\prime (\rho _m)<\frac{u_n \, x_{i,n}}{r_n} \, \phi ^\prime (\rho _n)\). For \(i\in {\mathcal {C}}_n\setminus {\mathcal {C}}_m\), \(x_{i,m} = 0\) and \(0 < x_{i,n}\). Hence, since \(\phi \) is strictly increasing, \(\frac{u_m \, x_{i,m}}{r_m} \, \phi ^\prime (\rho _m) =0 <\frac{u_n \, x_{i,n}}{r_n} \, \phi ^\prime (\rho _n)\).
Converse part: the proof is a direct consequence of Lemma 14. \(\square \)
We are now in position to prove Proposition 2.
Proof (Proof of Proposition 2)
The equivalences 1 \(\iff \) 2 \(\iff \) 3 follow from Lemmata 14 and 15. Next, we show 3 \(\iff \) 4.
Straight part: If \(\frac{u_m \, x_{i,m}}{r_m} \, \phi ^\prime (\rho _m) < \frac{u_n \, x_{i,n}}{r_n} \, \phi ^\prime (\rho _n), \;\forall i\in {\mathcal {C}}_n\), then from the fact that 3 \(\iff \) 1 and Lemma 13, we can conclude that
Observe that this implies \(\rho _n>0\). Assume first that \(\rho _m>0\). Note that \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\) can also be written as \(u_n \phi (\rho _n) < u_m \phi (\rho _m)\). Together with inequality (21), this implies that \(\rho _n \, \frac{\phi ^\prime (\rho _n)}{\phi (\rho _n)} > \rho _m \, \frac{\phi ^\prime (\rho _m)}{\phi (\rho _m)}\), which is equivalent to \(\rho _n>\rho _m\) according to Lemma 11. Clearly, \(\rho _n>\rho _m\) also holds if \(\rho _m=0\). We thus get \(u_n \, \phi (\rho _m) < u_n \, \phi (\rho _n) < u_m \, \phi (\rho _m)\) in both cases, which implies that \(u_n < u_m\), as claimed.
Converse part: To prove \(4 \Longrightarrow 1\), we prove that \(\lnot 1 \Longrightarrow \lnot 4\). Assume that \(u_n \phi (\rho _n) \ge u_m \phi (\rho _m)\). Since \(\lnot 1 \iff \lnot 2\), we get \(\frac{u_m \, x_{i,m}}{r_m} \, \phi ^\prime (\rho _m) \ge \frac{u_n \, x_{i,n}}{r_n} \, \phi ^\prime (\rho _n)\), \(\forall i\in {\mathcal {C}}_n\). According to Lemma 13, \(u_n \phi (\rho _n) \ge u_m \phi (\rho _m)\) implies \({\mathcal {C}}_n \subseteq {\mathcal {C}}_m\), and thus
Let us first assume that \(\rho _n>0\) and \(\rho _m>0\). Together with \(u_n \phi (\rho _n) \ge u_m \phi (\rho _m)\), it implies that \(\rho _n \,\frac{\phi ^\prime (\rho _n)}{\phi (\rho _n)} \le \rho _m \, \frac{\phi ^\prime (\rho _m)}{\phi (\rho _m)}\), which is equivalent to \(\rho _n \le \rho _m\) according to Lemma 11. Clearly, \(\rho _n \le \rho _m\) still holds if \(\rho _n=0\). The case \(\rho _m=0<\rho _n\) is impossible because \({\mathcal {C}}_n \subseteq {\mathcal {C}}_m\). We thus obtain that \(\rho _n \le \rho _m\) in all cases. But from \(u_n \phi (\rho _n) \ge u_m \phi (\rho _m)\) and \(\phi (\rho _m)\ge \phi (\rho _n)\), we deduce that \(u_n \ge u_m\). We thus conclude that if \(u_n < u_m\), then \(u_n\phi (\rho _n) < u_m \phi (\rho _m)\), i.e., \(\psi ^\prime _n(0,y_n) < \psi ^\prime _m(0,y_m)\). \(\square \)
1.3 Proof of Lemma 1
We give below the proof of Lemma 1.
Proof (Proof of Lemma 1)
From (2), if \(x_{i,j} > 0\), then
from which we conclude that
Similarly, we have
Now,
Thus,
From Proposition 2, we have \(u_{j+1} \phi (\rho _{j+1}) \ge u_j \phi (\rho _j)\). Moreover \(\phi ^\prime (\rho _j)>0\) because \(\phi \) is strictly increasing. Since the second term on the RHS is strictly positive if \({\mathcal {C}}_j\setminus {\mathcal {C}}_{j+1}\ne \emptyset \), we can conclude that \(\frac{\partial D_K}{\partial y_j}(\mathbf{x}) \ge \frac{\partial D_K}{\partial y_{j+1}}(\mathbf{x})\), with strict inequality if \({\mathcal {C}}_j\setminus {\mathcal {C}}_{j+1}\ne \emptyset \). \(\square \)
Proof of results in Sect. 3.2
1.1 Proof of the results in Sect. 3.2.1
We give below the proofs of Lemmata 2–5.
Proof (Proof of Lemma 2)
Proof of part \(1\) : Since \(\psi ^\prime _j(x, y)\) is strictly increasing in each of its two arguments, \(\hat{y}_j \le y_j\) and \(\hat{x}_{i,j} \le x_{i,j}\) imply that \(\hat{\mu }_i = \psi ^\prime _j(\hat{x}_{i,j},\hat{y}_j) \le \psi ^\prime _j(x_{i,j},y_j) =\mu _i\). The proofs of parts \(2\), \(3\) and \(4\) follow similarly. \(\square \)
Proof (Proof of Lemma 3)
Assume the contrary. From Lemma 2.\(4\), \(\hat{y}_m > y_m\) and \(\hat{x}_{i,m} \ge x_{i,m}\) imply \(\hat{\mu }_i > \mu _i\). However, from Lemma 2.\(1\), \(\hat{y}_n \le y_n\) and \(\hat{x}_{i,n} \le x_{i,n}\) imply \(\hat{\mu }_i \le \mu _i\), which is a contradiction. \(\square \)
Proof (Proof of Lemma 4)
From (2), if \(i \in {\mathcal {C}}_j\), then
Thus,
Since \(N_j = \hat{N}_j\) (from Assumption 1) and \(\psi ^\prime _j(x, y)\) is strictly increasing in each of its two arguments, we can conclude that \(\sum _{i\in {\mathcal {C}}_j}\mu _i\) is a strictly increasing function of \(y_j\). \(\square \)
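The monotonicity in Lemma 4 is easy to see in a concrete special case. Under an illustrative linear marginal \(\psi ^\prime _j(x,y) = (u_j/r_j)(y+x)\) (hypothetical; the lemma itself only needs \(\psi ^\prime _j\) strictly increasing in both arguments), summing \(\mu _i = \psi ^\prime _j(x_{i,j}, y_j)\) over the \(N_j\) classes using server \(j\) gives \((u_j/r_j)(N_j+1)\,y_j\), which is strictly increasing in \(y_j\):

```python
# Lemma 4 sanity check under an illustrative linear marginal
# psi'_j(x, y) = (u/r) * (y + x); u, r, and the splits are hypothetical.
def sum_mu(u, r, splits):
    y = sum(splits)                      # total load y_j on server j
    return sum(u / r * (y + x) for x in splits)

# Scaling every split up raises y_j and, with it, the sum of the mu_i:
# (u/r)*(N+1)*y with N = 3 classes.
low  = sum_mu(2.0, 1.0, [0.5, 1.0, 1.5])   # y = 3.0 -> 2*(3+1)*3.0 = 24.0
high = sum_mu(2.0, 1.0, [0.6, 1.2, 1.8])   # y = 3.6 -> 2*(3+1)*3.6 = 28.8
assert high > low
```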
Proof (Proof of Lemma 5)
From Lemma 4, \(\hat{y}_m > y_m\) is equivalent to \(\sum _{i\in {\mathcal {C}}_m}\hat{\mu }_i > \sum _{i\in {\mathcal {C}}_m}\mu _i\), which, since \({\mathcal {C}}_m = {\mathcal {C}}_n\), is equivalent to \(\sum _{i\in {\mathcal {C}}_n}\hat{\mu }_i > \sum _{i\in {\mathcal {C}}_n}\mu _i\). Again, from Lemma 4, we can conclude that \(\hat{y}_n > y_n\). \(\square \)
1.2 Proof of the results in Sect. 3.2.2
We give below the proofs of Propositions 3, 4 and 5.
We first prove a lemma showing that \({\mathcal {S}}^+\) is empty if and only if the load of every server is unchanged under the transformation.
Lemma 16
\(y_j=\hat{y}_j, \forall j\in {\mathcal {S}}\iff {\mathcal {S}}^+=\emptyset \).
Proof
If \({\mathcal {S}}^+=\emptyset \) then \({\mathcal {S}}^-={\mathcal {S}}\). That is, \(\hat{y}_j \le y_j, \forall j\in {\mathcal {S}}\). We also have \(\sum _{j\in {\mathcal {S}}}\hat{y}_j = \sum _{j\in {\mathcal {S}}}y_j\). This is possible only if \(\hat{y}_j=y_j,\forall j\in {\mathcal {S}}\).
The converse is true by definition of \({\mathcal {S}}^+\). \(\square \)
Proof (Proof of Proposition 3)
Assume by contradiction that we can find a server \(s \in {\mathcal {S}}_1\) such that \(s \in {\mathcal {S}}^-\). Then, according to Corollary 1, \({\mathcal {S}}_1 \subset {\mathcal {S}}^-\). Since \({\mathcal {S}}^+ \ne \emptyset \) and \(\hat{y}_j > y_j\) for all \(j \in {\mathcal {S}}^+\), we have \(\sum _{j \in {\mathcal {S}}^+}{\hat{y}_j} > \sum _{j \in {\mathcal {S}}^+}{y_j}\), i.e.,
from which we conclude that there exists \(i\) such that \(\sum _{j \in {\mathcal {S}}^+}{\hat{x}_{i,j}} > \sum _{j \in \mathcal{S}^+}{x_{i,j}}\). Since \({\mathcal {S}}_k = {\mathcal {S}}_1 \subset {\mathcal {S}}^-\) for all \(k \in {\mathcal {C}}_{min}\), we necessarily have \(i \not \in {\mathcal {C}}_{min}\) and thus \(\hat{\lambda }_i \le \lambda _i\). Therefore,
Thus,
We therefore conclude that class \(i\) is such that \(\sum _{j \in {\mathcal {S}}^+}{\hat{x}_{i,j}} > \sum _{j \in {\mathcal {S}}^+}{x_{i,j}}\) and \(\sum _{j \in {\mathcal {S}}^-}{\hat{x}_{i,j}} <\sum _{j \in {\mathcal {S}}^-}{x_{i,j}}\). Therefore, we can find a server \(m \in {\mathcal {S}}^+\) and a server \(n \in {\mathcal {S}}^-\) such that \(\hat{x}_{i,m}> x_{i,m}\) and \(\hat{x}_{i,n} < x_{i,n}\). But according to Lemma 3, this is impossible. We therefore conclude that \({\mathcal {S}}_1 \subset {\mathcal {S}}^+\). \(\square \)
Proof (Proof of Proposition 4)
We first prove that if \({\mathcal {S}}^+=\emptyset \) then \({\mathcal {S}}_1={\mathcal {S}}_K\). From Lemma 16, this is equivalent to proving that if \(y_j=\hat{y}_j\), \(\forall j \in {\mathcal {S}}\), then \({\mathcal {S}}_1 = {\mathcal {S}}_K\). Assume the contrary, that is, \({\mathcal {S}}_1\subsetneq {\mathcal {S}}_K\). Then, \(\exists m: m\in {\mathcal {S}}_K, m\notin {\mathcal {S}}_1\).
Since \(y_m=\hat{y}_m\), from Lemma 4, we get \(\sum _{i\in {\mathcal {C}}_m} \mu _i = \sum _{i\in {\mathcal {C}}_m} \hat{\mu }_i\), which we can rewrite as
We shall show that the above equality is not possible, which then proves the claim.
For \(i \in {\mathcal {C}}_{max}\), since \(\lambda _i > {\hat{\lambda }}_i\), \(\sum _{j\in {\mathcal {S}}_i}x_{i,j} > \sum _{j\in {\mathcal {S}}_i}\hat{x}_{i,j}\). Thus, there exists an \(n\in {\mathcal {S}}_i\) such that \(x_{i,n}>\hat{x}_{i,n}\). Since \(y_n=\hat{y}_n\), from Lemma 2.\(3\), we can conclude that \(\mu _i > \hat{\mu }_i\), and that \(\sum _{i \in {\mathcal {C}}_{max}}{\mu _i}>\sum _{i \in {\mathcal {C}}_{max}}{\hat{\mu }_i}\), which, upon substitution in (22), leads to
If \({\mathcal {C}}_m\setminus {\mathcal {C}}_{max}=\emptyset \), then the above inequality cannot hold, which proves the claim. So, assume \({\mathcal {C}}_m\setminus {\mathcal {C}}_{max}\ne \emptyset \). Then the above inequality implies that \(\exists i\notin {\mathcal {C}}_{min}\cup {\mathcal {C}}_{max} : \mu _i < \hat{\mu }_i\). Since \(y_j=\hat{y}_j, \forall j\in {\mathcal {S}}_i\), application of Lemma 2.4 leads to \(x_{i,j} < \hat{x}_{i,j}, \forall j\in {\mathcal {S}}_i\), and consequently to \(\lambda _i = \sum _{j\in {\mathcal {S}}_i}x_{i,j} < \sum _{j\in {\mathcal {S}}_i}\hat{x}_{i,j} = {\hat{\lambda }}_i\). However, for \(i\notin {\mathcal {C}}_{min}\cup {\mathcal {C}}_{max}\), \(\lambda _i={\hat{\lambda }}_i\). This is a contradiction, and we can conclude that \({\mathcal {S}}_1={\mathcal {S}}_K\).
We now show that if \({\mathcal {S}}^+ \ne \emptyset \) then \({\mathcal {S}}_1\ne {\mathcal {S}}_K\). Assume the contrary, that is \({\mathcal {S}}_1 ={\mathcal {S}}_K\). From Proposition 3, if \({\mathcal {S}}^+ \ne \emptyset \) then \({\mathcal {S}}_1 = {\mathcal {S}}_K \subset {\mathcal {S}}^+\) which implies that \({\mathcal {S}}^-= \emptyset \), i.e., a contradiction. \(\square \)
Proof (Proof of Proposition 5)
If \({\mathcal {S}}^+ =\emptyset \) then the proposition is true. So, assume \({\mathcal {S}}^+\ne \emptyset \). Then, from Proposition 3, \({\mathcal {S}}_1\subset {\mathcal {S}}^+\). Similarly, if \(S_1+1=S_K\) then the proposition is true, so assume that \(S_1+1<S_K\). In order to prove the proposition, assume by contradiction that there exists a server \(j \in \{ S_1+1, \ldots , S_K-1 \}\) such that \(j \in {\mathcal {S}}^-\) and \(j+1 \in {\mathcal {S}}^+\).
Since \(j\in {\mathcal {S}}^-\) and \(j+1\in {\mathcal {S}}^+\), from Lemma 5,
and
Moreover, from the contrapositive of Lemma 5, we can conclude that \({\mathcal {C}}_j{\setminus }{\mathcal {C}}_{j+1}\ne \emptyset \). Note that since \(j<S_K\), classes \(i \in {\mathcal {C}}_{max}\) do not belong to \({\mathcal {C}}_j{\setminus }{\mathcal {C}}_{j+1}\). Similarly, since \(j>S_1\), classes \(i \in {\mathcal {C}}_{min}\) do not belong to \({\mathcal {C}}_j{\setminus }{\mathcal {C}}_{j+1}\).
Since \({\mathcal {C}}_{j+1} \subset {\mathcal {C}}_j\), we have, for all \(i \in {\mathcal {C}}_{j+1}\),
Therefore, \(\sum _{i\in {\mathcal {C}}_{j+1}}\hat{\mu }_i > \sum _{i\in {\mathcal {C}}_{j+1}}\mu _i\) is equivalent to
and since \(\hat{y}_j \le y_j\), this implies that \(\sum _{i \in {\mathcal {C}}_{j+1}}{\hat{x}_{i,j}} > \sum _{i \in {\mathcal {C}}_{j+1}}{x_{i,j}}\). Again because \(\hat{y}_j \le y_j\), necessarily \(\sum _{i \in {\mathcal {C}}_j \setminus {\mathcal {C}}_{j+1}}{\hat{x}_{i,j}} < \sum _{i \in {\mathcal {C}}_j \setminus {\mathcal {C}}_{j+1}}{x_{i,j}}\). However, since no class of \({\mathcal {C}}_{min} \cup {\mathcal {C}}_{max}\) belongs to \({\mathcal {C}}_j\setminus {\mathcal {C}}_{j+1}\), we know that \(\hat{\lambda }_i = \lambda _i\) for all \(i \in {\mathcal {C}}_j{\setminus }{\mathcal {C}}_{j+1}\), and thus
from which we obtain
and therefore
Subtracting (24) from (23), we obtain
Hence, for each server \(l<j\),
But, for \(l<j\) and \(l \in {\mathcal {S}}^+\), it implies that
and thus
From (25), we have
and using (26) it leads to
According to (24), for each server \(l<j\),
But, for \(l<j\), \(l \in {\mathcal {S}}^-\), it implies that
and thus
Now, summing (28) and (27) gives
However, for each server \(l \in {\mathcal {S}}^-\), we have \(\hat{y}_l \le y_l\) and thus \(\sum _{l<j, l \in {\mathcal {S}}^-}{\hat{y}_l} \le \sum _{l<j, l \in {\mathcal {S}}^-}{y_l}\). Since, for \(l < j\), \(y_l\) can also be written as \(y_l=\sum _{i \in {\mathcal {C}}_j}{x_{i,l}} + \sum _{i \not \in {\mathcal {C}}_j}{x_{i,l}}\), it yields
and using (29),
Therefore, there exists a class \(i \notin {\mathcal {C}}_j\) such that
This implies that, for this class \(i\), we can find a server \(n \in {\mathcal {S}}_i\) with \(n \in {\mathcal {S}}^-\) such that \(\hat{x}_{i,n} < x_{i,n}\). Since \({\mathcal {C}}_{max} \subsetneq {\mathcal {C}}_j\), we know that \(i \not \in {\mathcal {C}}_{max}\). Moreover, since \({\mathcal {S}}_k ={\mathcal {S}}_1 \subset {\mathcal {S}}^+\) for all \(k \in {\mathcal {C}}_{min}\), \(i\not \in {\mathcal {C}}_{min}\). We therefore have \(\hat{\lambda }_i =\lambda _i\). Thus,
which implies
and with (31), it yields
This implies that there exists a server \(m<j\), \(m \in {\mathcal {S}}^+\), such that \(\hat{x}_{i,m} > x_{i,m}\). But, according to Lemma 3, there cannot be two servers \(m,n \in {\mathcal {S}}\) such that \(m \in {\mathcal {S}}^+\), \(n \in {\mathcal {S}}^-\), \(\hat{x}_{i,m} > x_{i,m}\) and \(\hat{x}_{i,n} < x_{i,n}\). This is a contradiction. Therefore, if \(j \in {\mathcal {S}}^-\), then \(j+1 \in {\mathcal {S}}^-\) for all servers \(j \in {\mathcal {S}}\). \(\square \)
Brun, O., Prabhu, B. Worst-case analysis of non-cooperative load balancing. Ann Oper Res 239, 471–495 (2016). https://doi.org/10.1007/s10479-014-1747-7