Abstract
We study the following synchronous process that we call repeated balls-into-bins. The process is started by assigning n balls to n bins in an arbitrary fashion. In every subsequent round, one ball is extracted from each nonempty bin according to some fixed strategy (random, FIFO, etc.), and reassigned to one of the n bins uniformly at random. We call a configuration legitimate if its maximum load is \(\mathcal {O}(\log n)\). We prove that, starting from any configuration, the process converges to a legitimate configuration in linear time and then takes on only legitimate configurations over a period of length bounded by any polynomial in n, with high probability (w.h.p.). This implies that the process is self-stabilizing and that every ball traverses all bins within \(\mathcal {O}(n\log ^2 n)\) rounds, w.h.p.
Introduction
The process and its motivations
We study the following repeated balls-into-bins process. Given any \(n \geqslant 2\), we initially assign n balls to n bins in an arbitrary way. Then, at every round, from each nonempty bin one ball is chosen according to some strategy (random, FIFO, etc.) and reassigned to one of the n bins uniformly at random. Every ball thus performs a sort of delayed random walk over the bins, and the delays of such random walks depend on the size of the bin queues encountered during their paths. It thus follows that these random walks are correlated. We study the impact of such correlation on the maximum load.
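For concreteness, the process is easy to simulate. The sketch below (illustrative only) tracks just the load vector, since which particular ball leaves a bin does not affect the loads, and starts from the all-ones configuration as an arbitrary choice:

```python
import random

def rbb_round(load, rng):
    """One round of the repeated balls-into-bins process: every
    non-empty bin releases one ball, and each released ball is
    reassigned to a bin chosen uniformly at random.  Which ball
    leaves a bin is irrelevant for the load vector, so only
    counts are tracked."""
    n = len(load)
    releasing = [u for u in range(n) if load[u] > 0]
    for u in releasing:
        load[u] -= 1
    for _ in releasing:
        load[rng.randrange(n)] += 1

def max_load_trace(n, rounds, seed=0):
    """Start from one ball per bin (a legitimate configuration) and
    return the maximum load M^(t) observed after each round."""
    rng = random.Random(seed)
    load = [1] * n
    trace = []
    for _ in range(rounds):
        rbb_round(load, rng)
        trace.append(max(load))
    return trace
```

Running `max_load_trace` for moderate n shows the maximum load hovering at small values, in line with the logarithmic bound proved below.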
Inspired by previous notions of (load) stability [1, 2], we study the maximum load \(M^{(t)}\), i.e., the maximum number of balls inside one bin at round t, and we are interested in the largest \(M^{(t)}\) achieved by the process over a period of any polynomial length. We say that a configuration is legitimate if its maximum load is \(\mathcal {O}(\log n)\) and a process is stable if, starting from any legitimate configuration, it takes on only legitimate configurations over a period of \({\mathrm {poly}}(n)\) length, w.h.p.^{Footnote 1} We also investigate a probabilistic version of self-stabilization [3, 4]: we say that a process is self-stabilizing if it is stable and if, moreover, starting from any configuration, it converges to a legitimate configuration, w.h.p. The convergence time of a self-stabilizing process is the maximum number of rounds required to reach a legitimate configuration starting from any configuration. This natural notion of (probabilistic) self-stabilization has also been inspired by that in [5] for other distributed processes.
The stability property, i.e. visiting only legitimate configurations for a relatively long period, has consequences for other important aspects of this process. For instance, if the process is stable, every ball can be delayed for at most \(\mathcal {O}(\log n)\) rounds before leaving a bin. Hence, we can get good bounds on the progress of a ball, namely the number of rounds in which the ball is selected from its current bin queue, along a relatively long period. Furthermore, we can eventually bound the parallel cover time, i.e., the time required for every ball to visit all bins.
The process we study models a natural randomized solution to the fundamental task known as traversal [6, 7], which itself abstracts a number of crucial primitives in distributed computing, such as (parallel) resource (or task) assignment. In the basic case, the goal is to assign one resource in mutual exclusion to all processors (i.e. nodes) of a distributed system. This is typically described as a traversal process performed by a token (representing the resource or task) over the network. The process terminates when the token has visited all nodes of the system. Randomized protocols for this problem [8] are efficient approaches when, for instance, the network is prone to faults/changes and/or when there is no global labeling of the nodes, e.g., the network is anonymous.
A simple randomized protocol is the one based on random walks [5, 8, 9]: starting from any node, the token performs a random walk over the network until all nodes are visited, w.h.p. The first round in which all nodes have been visited by the token is called the cover time of the random walk [8, 10]. The expected cover time for general graphs is \(\mathcal {O}(V \cdot E)\) (see, e.g., [11]). In particular, random walks are at the basis of the first uniform self-stabilizing mutual exclusion protocol achieving graph traversal in general graphs [5].
In distributed systems, we are often in the presence of several resources or tasks that must be processed by every node in parallel. This naturally leads us to consider the multi-token traversal problem. Here, n different tokens (resources) are initially distributed over the set of nodes and every token must visit all nodes of the network.
Similarly to the basic case, we propose a randomized solution based on (parallel) random walks. In order to visit nodes, every token performs a random walk under the constraint that every node can process and release at most one token per round. Note that this constraint causes the random walks to become correlated. In particular, it is easy to see that, when the graph is complete, the above protocol—based on parallel random walks—is in fact equivalent to the repeated balls-into-bins process analyzed in this paper. In such a scenario, the maximum load becomes a critical measure: not only does it determine the required buffer size at every node, it also affects the progress of the tokens and thus the parallel cover time.
Our results
We provide a new, almost-tight analysis of the repeated balls-into-bins process that significantly departs from previous ones and show that the system is self-stabilizing. In Theorem 1, we show that, for any arbitrarily large constant c, if the process starts from a legitimate configuration, then the maximum load \(M^{(t)}\) is \(\mathcal {O}(\log n)\) for all \(t = \mathcal {O}(n^c)\), w.h.p. Moreover, starting from any configuration, the system reaches a legitimate configuration within \(\mathcal {O}(n)\) rounds, w.h.p.
The above result strongly improves over the best previous bounds [12,13,14] and it is almost tight, since the classical lower bound \(\varOmega (\log n /\log \log n)\) on the maximum load (see, e.g., [11]) clearly applies also in our repeated setting. Our result further implies that, under the FIFO queueing policy, any ball performs \(\varOmega (t/\log n)\) steps of its individual random walk over any sequence of \(t = {\mathrm {poly}}(n)\) rounds w.h.p., so the parallel cover time is \(\mathcal {O}\left( n \log ^2n\right) \) w.h.p. This is only a \(\log n\) factor away from the lower bound following from the singleball process.
As observed in Sect. 1.1, when the graph is complete, the repeated balls-into-bins process is equivalent to the protocol for the multi-token traversal task based on parallel random walks. In this setting, our results imply that, starting from any configuration, this task is completed w.h.p. with at most a logarithmic slowdown w.r.t. the case of a single token.
We can also consider the adversarial model in which, in some faulty rounds, an adversary can reassign the tokens to the nodes in an arbitrary way. Then, the self-stabilization and the linear convergence time shown in Theorem 1 imply that the \(\mathcal {O}\left( n \log ^2n\right) \) bound on the cover time still holds, provided that faulty rounds occur no more often than once every cn rounds, for a sufficiently large constant c. Thus, the random-walk-based approach offers a simple solution to multi-token traversal that has minimal overhead, works in anonymous networks and is resilient to (possibly adversarial) faults. Moreover, its performance (in terms of congestion and cover time) is provably close to optimum.
For the purpose of analyzing the original process, we need to consider another repeated balls-into-bins process, which we call Tetris and which is of independent interest (see the overview of our analysis in Sect. 3).
Related work
The repeated balls-into-bins process was first considered in [13] (see Lemma 2 in Subsection 3.1 therein), [12] (see Theorem 4.1 in Sect. 4 therein), and [14] (see Lemma 6 in Subsection 3.1 therein), since it describes the process of performing parallel random walks in the (uniform) gossip model (also known as the random phone-call model [15, 16]) when every message can contain at most one token. Maximum load (i.e., node congestion), token delays, mixing and cover times are here the most crucial aspects. We remark that the flavor of these studies is different from ours: their main goal is to keep maximum load and token delays logarithmic over some polylogarithmic period, in order to achieve a fast mixing time for every random walk in the case of good expander graphs. In particular, in [13], a logarithmic bound is shown for the complete graph when \(m=\mathcal {O}(n/\log n)\) random walks are performed over a logarithmic time interval, while a similar bound is also given for some families of almost-regular random graphs in [14]. A new analysis is given in [12] for regular graphs and time intervals of arbitrary length, yielding the bound \(\mathcal {O}\left( \sqrt{t}\right) \). Finally, after the conference version of this paper [17], a probabilistic version of the Tetris process, where the number of new balls arriving at each round is a random variable with expectation \(\lambda n\), for some \(\lambda = \lambda (n) \in [0,1]\), has been studied in [18].
Balls-into-bins processes have been extensively studied in the area of parallel and distributed computing, mainly to address balanced-allocation problems [19,20,21], PRAM simulation [22] and hashing [23]. In order to optimize the total number of random bin choices used for the allocation, further allocation strategies have been proposed and analyzed (see, e.g., [24,25,26,27,28]). As previously mentioned, our notion of stability is inspired by those studied in [1, 2], where load-balancing algorithms are analyzed in scenarios in which new tasks arrive during the run of the system, and existing jobs are executed by the processors and leave the system. An adversarial model for a sequential balls-into-bins process has been studied in [29]. We remark that, in the above previous works, the goal is different from ours: each ball/task must be allocated to a single, arbitrary bin/processor (it is not a token-traversal process).
To the best of our knowledge, the closest model to our setting in classical queuing theory is the closed Jackson network [30]. In this model, time is continuous and each node processes a single token among those in its queue; processing each token takes an exponentially distributed interval of time. As soon as its processing is completed, each token leaves the current node and enters the queue of a neighbor chosen uniformly at random. Notice that, since time is continuous, the process’s events are sequential, so that the associated Markov chain is much simpler than the one describing our parallel process. In particular, the stationary distribution of a closed Jackson network can be expressed as a product-form distribution. It is noted in [31] that “[...] virtually all of the models that have been successfully analyzed in classical queuing network theory are models having a so-called product form stationary distribution”. Because of the above considerations regarding the difficulty of our process (especially the non-reversibility of its Markov chain), the stationary distribution is instead very likely not to exhibit a product form, thus lying outside the domain where the techniques of classical queuing theory seem effective. We finally cite the seminal work [32] on adversarial queuing systems: here, new tokens (having specified source and destination nodes) are inserted in the nodes according to some adversarial strategy and a notion of edge-congestion stability is investigated.
Other closely related lines of work have investigated similar problems in the setting with an infinite number of bins [33, 34]. Finally, [35] provided a general framework to study the stabilization time of asynchronous balls-into-bins processes, and [36] investigated the generalization of the 2-choices balls-into-bins process in which, at each round, a ball is relaunched by selecting d bins u.a.r. and placing the ball in the one with the smallest load. The latter process has also been applied to handle ball deletions [37].
Preliminaries
We recall that the repeated balls-into-bins process is described as follows:
We are given any \(n \geqslant 2\) balls and bins. Initially, balls are assigned to bins in an arbitrary way. Then, in every subsequent round, one ball is chosen from each nonempty bin according to some strategy^{Footnote 2} and reassigned to one of the n bins uniformly at random.
We are interested in the maximum load \(M^{(t)}\), i.e., the maximum number of balls enqueued at the same bin at round t, and in how \(M^{(t)}\) evolves over time. For the purpose of studying the maximum load of the repeated balls-into-bins process, the state of the system is completely characterized by the load of every bin. Formally, for each bin \(u \in [n]\) let \(\mathcal {Q}_u^{(t)}\) be the r.v.^{Footnote 3} indicating the number of balls, i.e. the load, in u at round t. We write \(\mathbf {Q}^{(t)}\) for the vector of these random variables, i.e., \(\mathbf {Q}^{(t)} = \left( \mathcal {Q}_u^{(t)}: u \in [n]\right) \). We write \(\mathbf {q}= (q_1, \dots , q_n)\) for a (load) configuration, i.e., \(q_u \in \{0, 1, \dots , n\}\) for every \(u \in [n]\) and \(\sum _{u = 1}^n q_u = n\). We define the maximum load of a configuration \(\mathbf {q}= (q_1, \dots , q_n)\) as
\(M(\mathbf {q}) = \max _{u \in [n]} q_u\) and, for brevity’s sake, given any round t of the process, we define \(M^{(t)} = M\left( \mathbf {Q}^{(t)}\right) \).
According to the above definition, we say that a configuration \(\mathbf {q}\) is legitimate if \(M(\mathbf {q}) \leqslant \beta \cdot \log n\), for some absolute constant \(\beta > 0\).
Self-stabilization of repeated balls-into-bins
In this section, we prove the main result of this paper.
Theorem 1
Let c be an arbitrarily large constant and let \(\mathbf {q}\) be any legitimate configuration. Let the repeated balls-into-bins process start from \(\mathbf {Q}^{(0)} = \mathbf {q}\). Then, over any period of length \(\mathcal {O}(n^c)\), the process visits only legitimate configurations, w.h.p., i.e., \(M^{(t)} = \mathcal {O}(\log n)\) for all \(t = \mathcal {O}(n^c)\) w.h.p. Moreover, starting from any configuration, the system reaches a legitimate configuration within \(\mathcal {O}(n)\) rounds, w.h.p.
The proof relies on the analysis of the behaviour of some essential random variables describing the repeated balls-into-bins process. In the next paragraph, we informally describe the main steps of this analysis. Then, in Sects. 3.2–3.4, we prove the technical results required by such steps and, finally, in Sect. 3.5 these technical results are combined in order to prove Theorem 1.
Overview of the analysis
In the repeated balls-into-bins process, every bin can release at most one ball per round. As a consequence, the random walks performed by the balls delay each other and are thus correlated in a way that can make bin queues larger than in the independent case. Indeed, intuitively speaking, a large load observed at a bin in some round makes “any” ball more likely to spend several future rounds in that bin, because if the ball ends up in that bin in one of the next few rounds, it will undergo a large delay. This is essentially the major technical issue to cope with.
The previous approach in [12] relies on the fact that, in every round, the expected balance between the number of incoming and outgoing balls is always non-positive for every nonempty bin (notice that the expected number of incoming balls is always at most one). This may suggest viewing the process as a sort of parallel birth-death process [10]. Using this approach and some further arguments, one can (only) get the “standard-deviation” bound \(\mathcal {O}(\sqrt{t})\) of [12]. Our new analysis proving Theorem 1 proceeds along three main steps.
(i) We first show that, after the first round, the aforementioned expected balance is always negative, namely, not larger than \(-1/4\). Indeed, the number of empty bins remains at least n/4 with (very) high probability, which is extremely useful since a bin can only receive tokens from nonempty bins. This fact is shown to hold starting from any configuration and over any period of polynomial length.
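The negative balance in Step (i) follows from a short calculation: if at least n/4 bins are empty in a round, then at most (3/4)n balls are thrown in the next round, each landing in any fixed bin with probability 1/n, while every nonempty bin releases exactly one ball. Hence, for every nonempty bin u,

```latex
\mathbb{E}[\text{incoming balls at } u] \;\leqslant\; \frac{3n}{4}\cdot\frac{1}{n} \;=\; \frac{3}{4},
\qquad
\mathbb{E}[\text{incoming}] - \mathbb{E}[\text{outgoing}] \;\leqslant\; \frac{3}{4} - 1 \;=\; -\frac{1}{4}.
```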
(ii) In order to exploit the above negative balance to bound the load of the bins, we need some strong concentration bound on the number of balls entering a specific bin u along any period of polynomial size. However, it is easy to see that, for any fixed u, the random variables \(\left\{ Z^{(t)}_u\right\} _{t \geqslant 0}\) counting the number of balls entering bin u are not mutually independent, neither are they negatively associated, so that we cannot apply standard tools to prove concentration (see “Appendix B” for a counterexample).
To address this issue, we define a simpler repeated balls-into-bins process, which we call Tetris: at every round, one ball is removed from each nonempty bin and then (3/4)n new balls are assigned to bins chosen independently and uniformly at random.
Using a coupling argument and our previous upper bound on the number of empty bins, we prove that the maximum number of balls accumulating in a bin in the original process is not larger than the maximum number of balls accumulating in a bin in the Tetris process, w.h.p.
(iii) The Tetris process is simpler than the original one since, at every round, the number of balls assigned to the bins does not depend on the system’s state in the previous round. Hence, random variables \(\left\{ \hat{Z}^{(t)}_u\right\} _{t \geqslant 0}\) counting the number of balls arriving at bin u in the Tetris process are mutually independent. We can thus apply standard concentration bounds. On the other hand, differently from the approximating process considered in [12], the negative balance of incoming and outgoing balls proved in Step (i) still holds, thus yielding a much smaller bound on the maximum load than that in [12].
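As a minimal simulation sketch (assuming the Tetris round structure used in the coupling of Sect. 3.3: every non-empty bin discards one ball, and (3/4)n fresh balls are thrown uniformly at random), one Tetris round can be written as:

```python
import random

def tetris_round(load, rng):
    """One Tetris round: every non-empty bin discards one ball, then
    (3/4)n fresh balls are thrown into the n bins independently and
    uniformly at random.  Unlike the original process, the number of
    incoming balls does not depend on the previous configuration."""
    n = len(load)
    for u in range(n):
        if load[u] > 0:
            load[u] -= 1
    for _ in range(3 * n // 4):
        load[rng.randrange(n)] += 1
```

Note that Tetris does not conserve the number of balls: exactly (3/4)n balls arrive per round, regardless of how many bins released one.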
In the remainder of this section, we formally describe the above three steps, thus proving Theorem 1.
On the number of empty bins
We next show that the number of empty bins is at least a constant fraction of n over a very large time window, w.h.p. This fact could be proved by standard concentration arguments if, at every round, all balls were thrown independently and uniformly at random. A little care is instead required in our process to properly handle, at any round, “congested” bins whose load exceeds 1: these bins will surely be nonempty in the next round too. So, the number of empty bins at a given round also depends on the number of congested bins in the previous round.
Lemma 1
Let \(\mathbf {q}= (q_1, \dots , q_n)\) be a configuration in a given round and let X be the random variable indicating the number of empty bins in the next round. For any large enough n, it holds that \(\mathbf {P}_{}{\left( X \leqslant n/4 \right) } \leqslant e^{-\alpha n},\) where \(\alpha \) is a suitable positive constant.
Proof
Let \(a = a(\mathbf {q})\) and \(b = b(\mathbf {q})\) respectively denote the number of empty bins and the number of bins with exactly one token in configuration \(\mathbf {q}\). For each bin u of the \(a+b\) bins with at most one token, let \(Y_u\) be the random variable indicating whether or not bin u is empty in the next round, so that
where in the last inequality we used the fact that \(1 - x \geqslant e^{-\frac{x}{1-x}}\). Hence we have that
The crucial fact is that the number of bins with two or more tokens cannot exceed the number of empty bins, i.e. \(n - (a+b) \leqslant a\). Thus, we can bound the number of empty bins from below,^{Footnote 4} \(a \geqslant (n-b)/2\), and by using that bound in (1) we get
Now observe that, for large enough n a positive constant \(\varepsilon \) exists such that
It is easy to show that random variables \(Y_1, \dots , Y_{a+b}\) are negatively associated (e.g., see Theorem 13 in [38]). Thus we can apply (see Lemma 7 in [38]) the Chernoff bound (6) with \(\delta = \varepsilon / (1+\varepsilon )\) to X to obtain
\(\square \)
From the above lemma it easily follows that, if we look at our process over a time window \(T = T(n)\) of polynomial size, after the first round we always see at least n/4 empty bins, w.h.p. More formally, for every \(t \in \{1, \dots , T\}\), let \(\mathcal {E}_t\) be the event “The number of empty bins at round t is at least n/4”. From Lemma 1 and the union bound we get the following lemma.
Lemma 2
Let \(\mathbf {q}_0\) denote the initial configuration and let \(T = T(n) = n^c\) for an arbitrarily large constant c. For any large enough n it holds that \(\mathbf {P}_{}{\left( \bigcap _{t=1}^{T} \mathcal {E}_t \right) } \geqslant 1 - e^{-\gamma n},\) where \(\gamma \) is a suitable positive constant.
Proof
By using the union bound we obtain
By conditioning on the configuration at round \(t-1\), from the Markov property and Lemma 1 it then follows that
Hence,
for a suitable positive constant \(\gamma \). \(\square \)
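As an illustrative sanity check of Lemma 2 (a small-n simulation with a fixed seed, not a substitute for the proof), one can track the fraction of empty bins round by round:

```python
import random

def empty_fraction_trace(n, rounds, seed=0):
    """Simulate the repeated balls-into-bins process from the all-ones
    configuration and record, after each round, the fraction of bins
    that are empty."""
    rng = random.Random(seed)
    load = [1] * n
    trace = []
    for _ in range(rounds):
        releasing = [u for u in range(n) if load[u] > 0]
        for u in releasing:
            load[u] -= 1
        for _ in releasing:
            load[rng.randrange(n)] += 1
        trace.append(load.count(0) / n)
    return trace
```

In such runs the empty fraction settles well above 1/4, consistently with the lemma.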
Coupling with Tetris
Using a coupling argument and Lemma 2 we now prove that the maximum load in the original process is stochastically not larger than the maximum load in the Tetris process w.h.p.
In what follows we denote by \(W^{(t)}\) the set of nonempty bins at round t in the original process. Recall that, in the latter, at every round a ball is selected from every nonempty bin u and it is moved to a bin chosen u.a.r. Accordingly we define, for every round t, the random variables
where \(X_u^{(t+1)}\) indicates the new position reached in round \(t+1\) by the ball selected in round t from bin u. Notice that for every nonempty bin \(u \in W^{(t)}\) we have that \(\mathbf {P}_{}{\left( X_u^{(t+1)} = v \right) } = 1/n\) for every bin \(v \in [n]\). The random process \(\left\{ \mathbf {Q}^{(t)} : t \in \mathbb {N} \right\} \) is completely defined by random variables \(X_u^{(t)}\)’s, indeed we can write
and
Analogously, for each bin \(u \in [n]\) in the Tetris process, let \(\hat{\mathcal {Q}}_u^{(t)}\) be the random variable indicating the number of balls in bin u in round t. We next prove that, over any polynomiallylarge time window, the maximum load of any bin in our process is stochastically smaller than the maximum number of balls in a bin of the Tetris process w.h.p. More formally, we prove the following lemma.
Lemma 3
Let us start both the original process and the Tetris process from the same configuration \(\mathbf {q}= (q_1, \dots , q_n)\) such that \(\sum _{u = 1}^n q_u = n\) and containing at least n/4 empty bins. Let \(T = T(n)\) be an arbitrary round and let \(M_T\) and \(\hat{M}_T\) be respectively the random variables indicating the maximum loads in our original process and in the Tetris process, up to round T. Formally, \(M_T = \max _{0 \leqslant t \leqslant T} M\left( \mathbf {Q}^{(t)}\right) \) and \(\hat{M}_T = \max _{0 \leqslant t \leqslant T} M\left( \hat{\mathbf {Q}}^{(t)}\right) \).
Then, for every \(k \geqslant 0\) it holds that
for a suitable positive constant \(\gamma \).
Proof
We proceed by coupling the Tetris process with the original one round by round. Intuitively speaking the coupling proceeds as follows:

Case (i): the number of nonempty bins in the original process is \(h \leqslant \frac{3}{4} n\). For each nonempty bin u, let \(i_u\) be the ball picked from u. We throw one of the \(\frac{3}{4} n\) new balls of the Tetris process in the same bin in which \(i_u\) ends up. Then, we throw all the remaining \(\frac{3}{4} n  h\) balls independently u.a.r.

Case (ii): the number of nonempty bins is \(h >\frac{3}{4} n\). We run one round of the Tetris process independently from the original one.
By construction, if the number of nonempty bins in the original process is never larger than \(\frac{3}{4} n\), case (ii) never applies and the Tetris process “dominates” the original one, meaning that every bin in the Tetris process contains at least as many balls as the corresponding bin in the original one. Since from Lemma 2 we know that the number of nonempty bins in the original process remains at most \(\frac{3}{4} n\) over any time window of polynomial size w.h.p., the Tetris process dominates the original process for the whole time window w.h.p.
Formally, for \(t \in \{1, \dots , T\}\), denote by \(B^{(t)}\) the set of new balls in the Tetris process at round t (recall that the size of \(B^{(t)}\) is (3 / 4)n for every \(t \in \{1, \dots , T\}\)). For any round t and any ball \(i \in B^{(t)}\), let \(\hat{X}_i^{(t)}\) be the random variable indicating the bin where the ball ends up. Finally, let \(\left\{ U_i^{(t)} : t = 1, \dots , T, i \in B^{(t)} \right\} \) be a family of i.i.d. random variables uniform over [n].
At any round \(t \in \{ 1, \dots , T\}\), the following two cases may arise.
If \(|W^{(t-1)}| \leqslant (3/4)n\): Let \(B^{(t)}_W\) be an arbitrary subset of \(B^{(t)}\) with size exactly \(|W^{(t-1)}|\), let \(f^{(t)} : B^{(t)}_W \rightarrow W^{(t-1)}\) be an arbitrary bijection and set \(\hat{X}_i^{(t)} = X_{f^{(t)}(i)}^{(t)}\) for every \(i \in B^{(t)}_W\) and \(\hat{X}_i^{(t)} = U_i^{(t)}\) for every \(i \in B^{(t)} \setminus B^{(t)}_W\).
If \(|W^{(t-1)}| > (3/4)n\): Set \(\hat{X}_i^{(t)} = U_i^{(t)}\) for all \(i \in B^{(t)}\).
By construction we have that random variables
are mutually independent and uniformly distributed over [n]. Moreover, in the joint probability space for any k we have that
Finally, let \(\mathcal {E}_T\) be the event “There are at least n / 4 empty bins at all rounds \(t \in \{ 1, \dots , T\}\)” and observe that, from the coupling we have defined, the event \(\mathcal {E}_T\) implies event “\(\hat{M}_T \geqslant M_T\)”. Hence \(\mathbf {P}_{}{\left( \hat{M}_T < M_T \right) } \leqslant \mathbf {P}_{}{\left( \overline{\mathcal {E}_T} \right) }\) and the thesis follows from Lemma 2. \(\square \)
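The coupling can be sketched in code (illustrative only: it implements case (i) and reports failure when case (ii) would apply, so the domination invariant can be checked on small instances):

```python
import random

def coupled_round(load, hat_load, rng):
    """One coupled round, assuming case (i) applies (at most (3/4)n
    non-empty bins in the original process): the destination of each
    ball moved by the original process is reused for one of the (3/4)n
    new Tetris balls; the remaining Tetris balls get fresh uniform
    destinations.  Returns False if case (i) does not apply."""
    n = len(load)
    moving = [u for u in range(n) if load[u] > 0]
    if len(moving) > 3 * n // 4:
        return False
    # both processes serve one ball per non-empty bin
    for u in moving:
        load[u] -= 1
    for u in range(n):
        if hat_load[u] > 0:
            hat_load[u] -= 1
    # shared destinations for the first |moving| Tetris balls
    for _ in moving:
        v = rng.randrange(n)
        load[v] += 1
        hat_load[v] += 1
    # fresh destinations for the remaining Tetris balls
    for _ in range(3 * n // 4 - len(moving)):
        hat_load[rng.randrange(n)] += 1
    return True
```

As long as case (i) applies in every round, the update above preserves the invariant that every Tetris bin holds at least as many balls as the corresponding original bin.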
Analysis of the Tetris process
We begin by observing that, in the Tetris process, the random variables indicating the number of balls ending up in a bin in different rounds are i.i.d. binomial. This fact is extremely useful for upper-bounding the load of the bins, as we do in the next simple lemma, which will be used to prove self-stabilization of the original process.
Lemma 4
From any initial configuration, in the Tetris process every bin will be empty at least once within 5n rounds, w.h.p.
Proof
Let \(u \in [n]\) be a bin with \(k \leqslant n\) balls in the initial configuration. For \(t \in \{ 1, \dots , 5n\}\), let \(Y_t\) be the random variable indicating the number of new balls ending up in bin u at round t. Notice that in the Tetris process \(Y_1, \dots , Y_{5n}\) are i.i.d. \(B\left( (3/4)n, 1/n \right) \), hence
and by applying Chernoff bound (7) with \(\delta = 1/15\) we get
Now let \(\mathcal {E}_u\) be the event “Bin u will be nonempty for all the 5n rounds”. Since a nonempty bin loses one ball in every round, event \(\mathcal {E}_u\) implies, in particular, that
That is
Thus
The thesis then follows from the union bound over all bins \(u \in [n]\). \(\square \)
We next focus on the maximum load that can be observed in the Tetris process at any given bin within a finite interval of time. We note that this result could be proved using tools from drift analysis (e.g., see [39]). We provide here an elementary and direct proof that explicitly relies on the Markovian structure of the Tetris process.
Let \(\{X_t\}_t\) be a sequence of i.i.d. \(B\left( (3/4)n,1/n \right) \) random variables and let \(Z_t\) be the Markov chain with state space \(\{0,1,2, \dots \}\) defined by \(Z_{t+1} = Z_t - 1 + X_{t+1}\) whenever \(Z_t > 0\), and \(Z_{t+1} = 0\) whenever \(Z_t = 0\).
Observe that 0 is an absorbing state for \(Z_t\) and let \(\tau \) be the absorption time \(\tau = \inf \{t \in \mathbb {N} : Z_t = 0\}\). We first prove the following lemma.
Lemma 5
For any initial state \(k \in \mathbb {N}\) and any \(t \geqslant 8 k\), it holds that
Proof
Observe that
where in the last inequality we used the hypothesis \(k \leqslant (1/8)t\). Since the \(X_i\)’s are i.i.d. binomial \(B((3/4)n, 1/n)\), it follows that \(\sum _{i = 1}^{t} X_i\) is binomial \(B((3/4)nt, 1/n)\) and from the Chernoff bound we get that
\(\square \)
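The negative drift behind Lemma 5 can be illustrated numerically. The sketch below assumes the chain update \(Z_{t+1} = Z_t - 1 + X_{t+1}\) while \(Z_t > 0\), with 0 absorbing, which is our reading of definition (4):

```python
import random

def absorption_time(k, n, rng, cap=10**6):
    """Simulate the chain from Z_0 = k: while Z_t > 0, set
    Z_{t+1} = Z_t - 1 + X_{t+1} with X_{t+1} ~ Binomial((3/4)n, 1/n);
    state 0 is absorbing.  Returns tau = min{t : Z_t = 0} (or cap)."""
    z, t, m = k, 0, 3 * n // 4
    while z > 0 and t < cap:
        # sample X ~ Binomial(m, 1/n) by m Bernoulli trials
        x = sum(1 for _ in range(m) if rng.random() < 1.0 / n)
        z += x - 1
        t += 1
    return t
```

Since the per-round drift is \(\mathbf {E}[X] - 1 = 3/4 - 1 = -1/4\), absorption typically takes about 4k rounds, consistent with the threshold \(t \geqslant 8k\) in Lemma 5.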
Now we can easily prove the following statement on the Tetris process.
Lemma 6
Let c be an arbitrarilylarge constant, and let the Tetris process start from any legitimate configuration. The maximum load \(\hat{M}^{(t)}\) is \(\mathcal {O}(\log n)\) for all \(t = \mathcal {O}(n^c)\), w.h.p.
Proof
Consider an arbitrary bin u that is nonempty in the initial legitimate configuration. Let \(\hat{\mathcal {Q}}_u^{(0)} = \mathcal {O}(\log n)\) be its initial load^{Footnote 5} and let \(\tau = \inf \left\{ t : \hat{\mathcal {Q}}_u^{(t)} = 0 \right\} \) be the first round in which the bin becomes empty. Observe that, for any \(t \leqslant \tau \), \(\hat{\mathcal {Q}}_u^{(t)}\) behaves exactly as the Markov chain defined in (4). Hence, from Lemma 5 it follows that for every constant \(\hat{c}\) such that \(\hat{c} \log n \geqslant 8 \hat{\mathcal {Q}}_u^{(0)}\) we have
Thus, within \(\mathcal {O}(\log n)\) rounds the bin will be empty w.h.p. and, since the load of the bin decreases by at most one unit per round, the load of the bin is \(\mathcal {O}(\log n)\) in all such rounds w.h.p.
Next, define a phase as any sequence of rounds that starts when the bin becomes nonempty and ends when it becomes empty again. Notice that, by a standard balls-into-bins argument, in the first round of each phase the load of the bin is \(\mathcal {O}(\log n / \log \log n)\) w.h.p. Moreover, in any phase the load of the bin can be coupled with the Markov chain in (4). Hence, for any arbitrarily large constant c we can choose the constant \(\hat{c}\) in (5) large enough so that, by taking the union bound over all phases up to round \(n^c\), the load of the bin is \(\mathcal {O}(\log n)\) in all rounds \(t \leqslant n^c\) w.h.p.
Finally, observe that for any bin that is initially empty the same argument applies with the only difference that the first phase for the bin does not start at round 0 but at the first round the bin becomes nonempty. The thesis thus follows from a union bound over all the bins. \(\square \)
Back to the original process: proof of Theorem 1
From a standard balls-into-bins argument (see, e.g., [11]), starting from any legitimate configuration, after one round the process still lies in a legitimate configuration w.h.p. and, thanks to Lemma 1, there are at least n/4 empty bins w.h.p. From Lemma 3 with \(T = \mathcal {O}\left( n^c \right) \), we have that the maximum load of the repeated balls-into-bins process does not exceed the maximum load of the Tetris process in all rounds \(1,\dots ,T\), w.h.p. Finally, the upper bound on the maximum load of the Tetris process in Lemma 6 completes the proof of the first statement of Theorem 1.
As for self-stabilization, given an arbitrary initial configuration, Lemma 4 (combined with the coupling of Lemma 3) implies that within \(\mathcal {O}(n)\) rounds all bins have been emptied at least once, w.h.p. When a bin becomes empty, Lemma 5 ensures that its load will be \(\mathcal {O}(\log n)\) over a polynomial number of rounds. Hence, within \(\mathcal {O}(n)\) rounds, the system reaches a legitimate configuration, w.h.p. \(\square \)
On the multitoken traversal problem
As mentioned in the introduction, the repeated balls-into-bins process can also be seen as running parallel random walks of n distinct tokens (i.e. balls), each starting from a node (i.e. bin) of the complete graph of size n. This is a randomized protocol for the multi-token traversal problem, where tokens represent different resources/tasks that must be assigned to all nodes in mutual exclusion [8]. In this scenario, a critical complexity measure is the (global) cover time, i.e., the time required by every token to visit all nodes.
It is important to observe that our analysis of the maximum load works for anonymous tokens and nodes and, hence, for any queuing strategy. Under the FIFO strategy, no token spends in a bin a number of rounds exceeding the bin’s load at the time the token entered it. Theorem 1 then implies that, after an initial stabilizing phase of \(\mathcal {O}(n)\) rounds, every token spends at most a logarithmic number of rounds in any bin queue it traverses, over any period of polynomial length, w.h.p. We also know that the cover time of the single-random-walk process is w.h.p. \(\mathcal {O}(n \log n)\) (see, e.g., [11]). Combining these two facts, we easily get the following, almost tight result on the multi-token traversal problem.
Corollary 1
The random-walk protocol for the multi-token traversal problem on the clique has cover time \(\mathcal {O}\left( n \log ^2n\right) \), w.h.p.
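The process itself is easy to simulate directly. The following Python sketch (the function name and the one-ball-per-bin starting configuration are our choices) runs the repeated balls-into-bins process with FIFO queues and records the maximum load of each round:

```python
import random
from collections import deque

def repeated_balls_into_bins(n, rounds, seed=None):
    """Simulate the repeated balls-into-bins process on n bins with n balls
    and FIFO queues; return the maximum load observed in each round."""
    rng = random.Random(seed)
    bins = [deque([i]) for i in range(n)]  # start: one ball per bin (a choice)
    max_loads = []
    for _ in range(rounds):
        # Synchronously extract one ball from every nonempty bin...
        moving = [b.popleft() for b in bins if b]
        # ...and reassign each extracted ball to a uniformly random bin.
        for ball in moving:
            bins[rng.randrange(n)].append(ball)
        max_loads.append(max(len(b) for b in bins))
    return max_loads

loads = repeated_balls_into_bins(n=100, rounds=500, seed=0)
print(max(loads))
```

In such runs the recorded maximum should stay within a small multiple of \(\log n\), consistent with the legitimate-configuration bound of Theorem 1.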
Adversarial model
The self-stabilization property shown in Theorem 1 makes the random-walk protocol robust to transient faults. We can consider an adversarial model in which, in some faulty rounds, an adversary can reassign the tokens to the nodes in an arbitrary way. Then, the linear convergence time shown in Theorem 1 implies that the \(\mathcal {O}\left( n \log ^2n\right) \) bound on the cover time still holds provided that faulty rounds happen at most once every \(\gamma n\) rounds, for any constant \(\gamma \geqslant 6\). Indeed, thanks to Lemma 4, the action of an adversary manipulating the system configuration once every \(\gamma n\) rounds can affect only the successive 5n rounds, while our analysis in the nonadversarial model holds for the remaining \((\gamma - 5) n\) rounds. It follows that the overall slowdown on the cover time produced by such an adversary is at most a constant factor on the previous \(\mathcal {O}\left( n\log ^2 n\right) \) upper bound, w.h.p.
Conclusions and open questions
In this paper, we showed that the repeated balls-into-bins process is self-stabilizing when the number m of balls equals the number n of bins (obviously, this is still the case whenever \(m < n\)). An interesting open question is whether this result extends to larger values of m, i.e., to any \(m = \mathcal {O}(n \log n)\).
A more general interesting question is the study of this process over more general graph classes. This line of research is also motivated by several recent applications of parallel random walks in the (uniform) gossip model [8, 13, 14, 40]. As mentioned in the introduction, previous analysis of this process provides a bound \(\mathcal {O}\left( \sqrt{t}\right) \) on the maximum load after t rounds on regular graphs [12]. We believe this previous bound for regular graphs is far from tight and it leads to rough bounds on parallel cover times on these networks. We conjecture that the maximum load remains logarithmic for a long period in any regular graph. A possible reason for this phenomenon (if true) might be that the expected difference between (token) arrivals and departures is always nonpositive at every node in regular graphs. As highlighted in our analysis of the complete graph, this fact alone is not enough but it could be combined with a suitable bound on the number of empty bins, in order to prove our conjecture in this more general case. Unfortunately, noncomplete graphs present a further technical issue: in order to apply any argument based on the presence of empty bins, not only do we need to argue about their number, but also about their distribution across the network. This technical issue seems to be far from trivial even on simple topologies such as rings.
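As a small exploratory experiment toward this conjecture (not part of the analysis; the function names and the choice of moving each extracted ball to a uniformly random neighbor are ours), one can compare the peak load of the process on the clique with that on a ring:

```python
import random
from collections import deque

def simulate(n, neighbors, rounds, seed=0):
    """Peak load over `rounds` of the variant in which each extracted
    ball moves to a uniformly random element of neighbors(i)."""
    rng = random.Random(seed)
    bins = [deque([i]) for i in range(n)]  # one token per node initially
    peak = 0
    for _ in range(rounds):
        moves = [(b.popleft(), i) for i, b in enumerate(bins) if b]
        for ball, i in moves:
            bins[rng.choice(neighbors(i))].append(ball)
        peak = max(peak, max(len(b) for b in bins))
    return peak

n = 64
clique = lambda i: range(n)                  # complete graph: any bin
ring = lambda i: [(i - 1) % n, (i + 1) % n]  # 2-regular ring: the two neighbors
peak_clique = simulate(n, clique, 1000)
peak_ring = simulate(n, ring, 1000)
print(peak_clique, peak_ring)
```

Such simulations can only suggest, not settle, whether the ring's peak load stays logarithmic; in particular, they say nothing about the distribution of empty bins, which is exactly the technical obstacle noted above.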
Finally, a technical question concerns the tightness of our bound on the maximum load. In the classical (one-shot) balls-into-bins problem, it is well-known that the maximum load of the bins is \(\varTheta \left( \log n / \log \log n \right) \), w.h.p. One may wonder whether our \(\mathcal {O}\left( \log n \right) \) upper bound on the maximum load of the repeated process over a polynomial number of rounds is tight, or whether it can be improved to \(\mathcal {O}\left( \log n / \log \log n \right) \). We conjecture that, within any polynomial time window, the probability that the maximum load asymptotically exceeds \(\log n / \log \log n\) is nonnegligible.
Notes
We say that a sequence of events \(\mathcal E_n\), \(n = 1, 2, \ldots \), holds with high probability (w.h.p.) if \(\mathbf {P}_{}{\left( \mathcal E_n \right) } = 1 - \mathcal {O}(1/n^{\gamma })\) for some constant \(\gamma > 0\).
Our results are oblivious to the specific queueing strategy and we make no assumptions about it: it might be random, FIFO, etc.
We always use capital letters for random variables, lower case for quantities, and bold for vectors.
Observe that this argument only works to get a lower bound on the number of empty bins and not for an upper bound.
We omit the subscript u in the remainder of this proof since it is clear from context.
References
Anagnostopoulos, A., Kirsch, A., Upfal, E.: Load balancing in arbitrary network topologies with stochastic adversarial input. SIAM J. Comput. 34(3), 616–639 (2005)
Berenbrink, P., Friedetzky, T., Goldberg, L.A.: The natural workstealing algorithm is stable. SIAM J. Comput. 32(5), 1260–1279 (2003)
Dijkstra, E.W.: Self-stabilizing systems in spite of distributed control. Commun. ACM 17(11), 643–644 (1974)
Dolev, S.: Self-Stabilization. MIT Press, Cambridge (2000)
Israeli, A., Jalfon, M.: Token management schemes and random walks yield self-stabilizing mutual exclusion. In: Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pp. 119–131. ACM (1990)
Santoro, N.: Design and analysis of distributed algorithms. Wiley, Hoboken (2006)
Lynch, N.A.: Distributed algorithms. Morgan Kaufmann, Burlington (1996)
Cooper, C.: Random walks, interacting particles, dynamic networks: randomness can be helpful. In: Proceedings of the International Colloquium on Structural Information and Communication Complexity (SIROCCO), LNCS, vol. 6796, pp. 1–14. Springer (2011)
Ikeda, S., Kubo, I., Okumoto, N., Yamashita, M.: Fair circulation of a token. IEEE Trans. Parallel Distrib. Syst. 13(4), 367–372 (2002)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov chains and mixing times. American Mathematical Society, Providence (2009)
Mitzenmacher, M., Upfal, E.: Probability and computing: randomized algorithms and probabilistic analysis. Cambridge University Press, Cambridge (2005)
Becchetti, L., Clementi, A., Natale, E., Pasquale, F., Silvestri, R.: Plurality consensus in the gossip model. In: Proceedings of the ACMSIAM Symposium on Discrete Algorithms (SODA), pp. 371–390. SIAM (2015)
Berenbrink, P., Czyzowicz, J., Elsässer, R., Gasieniec, L.: Efficient information exchange in the random phonecall model. In: Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), LNCS, vol. 6198, pp. 127–138. Springer (2010)
Elsässer, R., Kaaser, D.: On the influence of graph density on randomized gossiping. In: Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 521–531 (2015)
Demers, A., Greene, D., Hauser, C., Irish, W., Larson, J., Shenker, S., Sturgis, H., Swinehart, D., Terry, D.: Epidemic algorithms for replicated database maintenance. In: Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pp. 1–12. ACM (1987)
Karp, R., Schindelhauer, C., Shenker, S., Vöcking, B.: Randomized rumor spreading. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), pp. 565–574. IEEE (2000)
Becchetti, L., Clementi, A., Natale, E., Pasquale, F., Posta, G.: Self-stabilizing repeated balls-into-bins. In: Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 332–339. ACM (2015)
Berenbrink, P., Friedetzky, T., Kling, P., Mallmann-Trenn, F., Nagel, L., Wastell, C.: Self-stabilizing balls and bins in batches: the power of leaky bins (extended abstract). In: Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pp. 83–92. ACM (2016)
Azar, Y., Broder, A.Z., Karlin, A., Upfal, E.: Balanced allocations. SIAM J. Comput. 29(1), 180–200 (1999)
Berenbrink, P., Czumaj, A., Steger, A., Vöcking, B.: Balanced allocations: the heavily loaded case. SIAM J. Comput. 35(6), 1350–1385 (2006)
Raab, M., Steger, A.: “Balls into bins” a simple and tight analysis. In: Proceedings of the International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM), LNCS, vol. 1518, pp. 159–170. Springer (1998)
Karp, R.M., Luby, M., Meyer auf der Heide, F.: Efficient PRAM simulation on a distributed memory machine. Algorithmica 16(4–5), 517–542 (1996)
Dietzfelbinger, M., Goerdt, A., Mitzenmacher, M., Montanari, A., Pagh, R., Rink, M.: Tight thresholds for cuckoo hashing via XORSAT. In: Proceedings of the International Colloquium on Automata, Languages, and Programming (ICALP), LNCS, vol. 6198, pp. 213–225. Springer (2010)
Adler, M., Chakrabarti, S., Mitzenmacher, M., Rasmussen, L.: Parallel randomized load balancing. In: Proceedings of the ACM Symposium on Theory of Computing (STOC), pp. 238–247. ACM (1995)
Berenbrink, P., Khodamoradi, K., Sauerwald, T., Stauffer, A.: Balls-into-bins with nearly optimal load distribution. In: Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 326–335. ACM (2013)
Mitzenmacher, M.: The power of two choices in randomized load balancing. IEEE Trans. Parallel Distrib. Syst. 12(10), 1094–1104 (2001)
Mitzenmacher, M., Prabhakar, B., Shah, D.: Load balancing with memory. In: Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), pp. 799–808. IEEE (2002)
Vöcking, B.: How asymmetry helps load balancing. J. ACM 50(4), 568–589 (2003)
Awerbuch, B., Scheideler, C.: Towards a scalable and robust DHT. Theory Comput. Syst. 45(2), 234–260 (2009)
Asmussen, S.: Applied Probability and Queues. Springer, Berlin (2003)
Harrison, J.M., Williams, R.J.: Brownian models of feedforward queueing networks: quasireversibility and product form solutions. Ann. Appl. Probab. 2(2), 263–293 (1992)
Borodin, A., Kleinberg, J., Raghavan, P., Sudan, M., Williamson, D.P.: Adversarial queuing theory. J. ACM 48(1), 13–38 (2001)
Berenbrink, P., Czumaj, A., Friedetzky, T., Vedenskaya, N.D.: Infinite parallel job allocation. In: Proceedings of the ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 99–108. ACM (2000)
Adler, M., Berenbrink, P., Schröder, K.: Analyzing an infinite parallel job allocation process. In: Proceedings of the European Symposium on Algorithms (ESA), LNCS, vol. 1461, pp. 417–428. Springer (1998)
Czumaj, A.: Recovery time of dynamic allocation processes. In: Proceedings of the ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 202–211. ACM (1998)
Czumaj, A., Stemann, V.: Randomized allocation processes. In: Proceedings of the Symposium on Foundations of Computer Science (FOCS), pp. 194–203 (1997)
Cole, R., Frieze, A., Maggs, B., Mitzenmacher, M., Richa, A.W., Sitaraman, R., Upfal, E.: On balls and bins with deletions. In: Randomization and Approximation Techniques in Computer Science, LNCS, pp. 145–158. Springer (1998)
Dubhashi, D.P., Ranjan, D.: Balls and bins: a study in negative dependence. Random Struct. Algorithms 13(2), 99–124 (1998)
Hajek, B.: Hittingtime and occupationtime bounds implied by drift analysis with applications. Adv. Appl. Probab. 14(3), 502–525 (1982)
Haeupler, B., Pandurangan, G., Peleg, D., Rajaraman, R., Sun, Z.: Discovery through gossip. Random Struct. Algorithms 48(3), 565–587 (2016)
Acknowledgements
Open access funding provided by Max Planck Society. We would like to thank Riccardo Silvestri and Pierre Fraigniaud for helpful discussions and important hints.
Appendices
Appendix A: Useful inequalities
Lemma 7
(Chernoff bound) Let \(\{X_t : t \in [n] \}\) be a family of independent binary random variables. Let \(X = \sum _{t = 1}^n X_t\) and let \(\mu _L \leqslant \mathbf {E}_{} \left[ X \right] \leqslant \mu _H\). For every \(\delta \in (0,1)\) it holds that \(\mathbf {P}_{}{\left( X \geqslant (1+\delta ) \mu _H \right) } \leqslant e^{- \delta ^2 \mu _H / 3}\) and \(\mathbf {P}_{}{\left( X \leqslant (1-\delta ) \mu _L \right) } \leqslant e^{- \delta ^2 \mu _L / 2}\).
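For intuition, the standard multiplicative forms of the bound, \(\mathbf {P}{\left( X \geqslant (1+\delta )\mu _H \right) } \leqslant e^{-\delta ^2\mu _H/3}\) and \(\mathbf {P}{\left( X \leqslant (1-\delta )\mu _L \right) } \leqslant e^{-\delta ^2\mu _L/2}\), can be checked numerically on i.i.d. Bernoulli variables. The following Monte Carlo sketch is only an illustration, not a proof, and all parameter values are our choices:

```python
import math
import random

# Empirically compare tail frequencies of a Binomial(n, p) sample against
# the multiplicative Chernoff bounds (here mu_L = mu_H = mu = n * p).
rng = random.Random(1)
n, p, d, trials = 200, 0.3, 0.5, 5000
mu = n * p
samples = [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]
upper_freq = sum(x >= (1 + d) * mu for x in samples) / trials
lower_freq = sum(x <= (1 - d) * mu for x in samples) / trials
# The observed tail frequencies should sit below the Chernoff bounds.
assert upper_freq <= math.exp(-d * d * mu / 3)
assert lower_freq <= math.exp(-d * d * mu / 2)
print(upper_freq, lower_freq)
```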
Appendix B: Negative association
Definition 1
(Negative association) Random variables \(X_1, \dots , X_n\) are negatively associated if, for every pair of disjoint subsets \(I, J \subseteq [n]\), it holds that \(\mathbf {E}_{} \left[ f(X_i, i \in I) \, g(X_j, j \in J) \right] \leqslant \mathbf {E}_{} \left[ f(X_i, i \in I) \right] \mathbf {E}_{} \left[ g(X_j, j \in J) \right] \)
for all pairs of functions \(f: \mathbb {R}^{I} \rightarrow \mathbb {R}\) and \(g: \mathbb {R}^{J} \rightarrow \mathbb {R}\) that are both nondecreasing or both nonincreasing.
A counterexample
As announced in the overview of the analysis in Sect. 3, we here give a simple counterexample showing that, in our balls-into-bins process, the random variables counting the number of balls arriving in a given bin in different rounds cannot be negatively associated.
Consider our random process with \(n=2\), starting with one ball in each bin, and let \(X_1\) and \(X_2\) be the random variables indicating the number of tokens arriving at the first bin in rounds 1 and 2, respectively. Let \(f \equiv g\) be the nonincreasing function defined by \(f(x) = 1\) if \(x = 0\) and \(f(x) = 0\) otherwise.
If \(X_1\) and \(X_2\) were negatively associated, we would thus have \(\mathbf {P}_{}{\left( X_1 = 0, X_2 = 0 \right) } \leqslant \mathbf {P}_{}{\left( X_1 = 0 \right) } \mathbf {P}_{}{\left( X_2 = 0 \right) }\). However, by direct calculation it is easy to see that \(\mathbf {P}_{}{\left( X_1 = 0, X_2 = 0 \right) } = 1/8\), because, in order for “\(X_1 = 0, X_2 = 0\)” to happen, at the first round both balls have to end up in the second bin (this happens with probability 1/4) and at the second round the ball chosen in the second bin has to stay there (this happens with probability 1/2). On the other hand, \(\mathbf {P}_{}{\left( X_1 = 0 \right) } = 1/4\) and, by conditioning on the three possible configurations at round 1, \(\mathbf {P}_{}{\left( X_2 = 0 \right) } = 3/8\). Thus \(\mathbf {P}_{}{\left( X_1 = 0, X_2 = 0 \right) } = 1/8 > 3/32 = \mathbf {P}_{}{\left( X_1 = 0 \right) } \mathbf {P}_{}{\left( X_2 = 0 \right) }\), contradicting negative association.
In general, intuitively speaking, the event “\(X_t = 0\)” makes it more likely that there are many empty bins in the system, which in turn makes it more likely that the bin receives no tokens at round \(t+1\) as well, i.e., that “\(X_{t+1} = 0\)”.
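The arithmetic of the counterexample can be verified exactly; the following sketch (the bookkeeping is ours) recomputes the three probabilities with rational arithmetic:

```python
from fractions import Fraction

F = Fraction
# n = 2, one ball per bin initially. After round 1 the configuration is
# (2,0) w.p. 1/4, (1,1) w.p. 1/2, (0,2) w.p. 1/4.
p_x1_zero = F(1, 4)  # X1 = 0 iff both balls land in bin 2

# X2 = 0 means no ball arrives at bin 1 in round 2:
#   from (2,0): the one extracted ball misses bin 1 w.p. 1/2
#   from (1,1): both extracted balls miss bin 1 w.p. 1/4
#   from (0,2): the one extracted ball misses bin 1 w.p. 1/2
p_x2_zero = F(1, 4) * F(1, 2) + F(1, 2) * F(1, 4) + F(1, 4) * F(1, 2)

# Jointly: X1 = 0 forces configuration (0,2); then X2 = 0 w.p. 1/2.
p_joint = p_x1_zero * F(1, 2)

print(p_x1_zero, p_x2_zero, p_joint)   # 1/4 3/8 1/8
assert p_joint > p_x1_zero * p_x2_zero  # 1/8 > 3/32: NA would require <=
```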
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Becchetti, L., Clementi, A., Natale, E. et al.: Self-stabilizing repeated balls-into-bins. Distrib. Comput. 32, 59–68 (2019). https://doi.org/10.1007/s00446-017-0320-4
Keywords
 Balls into bins
 Selfstabilizing systems
 Markov chains
 Parallel resource assignment