Minimizing message size in stochastic communication patterns: fast self-stabilizing protocols with 3 bits

Abstract

This paper considers the basic \({\mathcal {PULL}}\) model of communication, in which in each round each agent extracts information from a few randomly chosen agents. We seek to identify the smallest amount of information revealed in each interaction (message size) that nevertheless allows for efficient and robust computations of fundamental information dissemination tasks. We focus on the Majority Bit Dissemination problem, which considers a population of n agents with a designated subset of source agents. Each source agent holds an input bit and each agent holds an output bit. The goal is to let all agents converge their output bits on the most frequent input bit of the sources (the majority bit). Note that the particular case of a single source agent corresponds to the classical problem of Broadcast (also termed Rumor Spreading). We concentrate on the severe fault-tolerant context of self-stabilization, in which a correct configuration must be reached eventually, despite all agents starting the execution with arbitrary initial states. In particular, the specification of who is a source and what its initial input bit is may be set by an adversary. We first design a general compiler which can essentially transform any self-stabilizing algorithm with a certain property (called “the bitwise-independence property”) that uses \(\ell \)-bit messages to one that uses only \(\log \ell \)-bit messages, while paying only a small penalty in the running time. By applying this compiler recursively we then obtain a self-stabilizing Clock Synchronization protocol, in which agents synchronize their clocks modulo some given integer T, within \(\tilde{\mathcal {O}}(\log n\log T)\) rounds w.h.p., and using messages that contain 3 bits only. We then employ the new Clock Synchronization tool to obtain a self-stabilizing Majority Bit Dissemination protocol which converges in \(\tilde{\mathcal {O}}(\log n)\) time, w.h.p., on every initial configuration, provided that the ratio of sources supporting the minority opinion is bounded away from half. Moreover, this protocol also uses only 3 bits per interaction.
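
For intuition only, here is a minimal sketch (not taken from the paper) of one synchronous round of a \({\mathcal {PULL}}\)-style protocol. The functions message_of and update are placeholders for the few bits an agent exposes and for a protocol's state-update rule, and a single pull per agent is used for simplicity; all names are illustrative assumptions, not the paper's notation.

```python
import random

def pull_round(states, message_of, update, pulls=1):
    """One synchronous PULL round: every agent reads the small message exposed
    by `pulls` uniformly random agents (with replacement) and then applies the
    protocol-specific update rule to its own state."""
    n = len(states)
    new_states = []
    for u in range(n):
        observed = [message_of(states[random.randrange(n)]) for _ in range(pulls)]
        new_states.append(update(states[u], observed))
    return new_states
```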

Notes

  1. We note that stochastic communication patterns such as \({\mathcal {PULL}}\) or \({\mathcal {PUSH}}\) are inherently sensitive to congestion issues. Indeed, in such models it is unclear how to simulate a protocol that uses large messages by one that uses only small messages. For example, the straightforward strategy of breaking a large message into small pieces and sending them sequentially does not work, since one typically cannot ensure that the small messages reach the same destination. Hence, reducing the message size may have a profound impact on the running time, and perhaps even on the solvability of the problem at hand.

  2. With a slight abuse of notation, with \(\tilde{\mathcal {O}}(f(n)g(T))\) we refer to \(f(n)g(T) \log ^{\mathcal {O}(1)}(f(n)) \log ^{\mathcal {O}(1)}(g(T))\). All logarithms are in base 2.

  3. Specifically, it is possible to show that, as a corollary of our analysis and the fault-tolerance property of the analysis in [24], if \(T \le poly(n)\) then Syn-Clock can tolerate the presence of up to \(\mathcal {O}(n^{1/2 - \epsilon })\) Byzantine agents for any \(\epsilon >0\). In addition, Syn-Phase-Spread can tolerate \(\min \{(1-\epsilon )(k_{maj}-k_{min}),n^{1/2-\epsilon }\}\) Byzantine agents, where \(k_{maj}\) and \(k_{min}\) are the number of sources supporting the majority and minority opinions, respectively. Note that for the case of a single source (\(k=1\)), no Byzantine agents are allowed; indeed, a single Byzantine agent pretending to be the source with the opposite opinion can clearly ruin any protocol.

  4. Our protocols will use this protocol as a black box. However, we note that the constructions we outline are in fact independent of the choice of consensus protocol, and this protocol could be replaced by other protocols that achieve similar guarantees.

  5. The original statement of [24] says that if at most \(\kappa \le \sqrt{n}\) agents can be corrupted at any round, then convergence happens for all but at most \(\mathcal {O}(\kappa )\) agents. Let us explain how this implies the statement we gave, namely that we can replace \(\mathcal {O}(\kappa )\) by \(\kappa \) if \(\kappa \le n^{\frac{1}{2}-\epsilon }\). Assume that we are in the regime \(\kappa \le n^{\frac{1}{2}-\epsilon }\). It follows from [24] that all but a set of \(\mathcal {O}(\kappa )\) agents reach consensus after \(\mathcal {O}(\log n)\) rounds. This set of size \(\mathcal {O}(\kappa )\) contains both Byzantine and non-Byzantine agents. However, if the number of agents holding the minority opinion is \(\mathcal {O}(\kappa ) = \mathcal {O}(n^{1/2-\epsilon })\), then the expected number of non-Byzantine agents that disagree with the majority at the next round is \(\mathcal {O}(\kappa ^2 /n) = \mathcal {O}(n^{- 2\epsilon } )\) (this computation is spelled out after these notes). Thus, by Markov's inequality, at the next round consensus is reached among all non-Byzantine agents w.h.p. Note also that, for the same reasons, the Byzantine agents do not affect any other non-Byzantine agent for \(n^{\epsilon } \) rounds w.h.p.

  6. Observe that, once clock \(C'\) is synchronized, the bits of \(Q_{T'}\) do not change for any agent during a subphase. Thus, we may replace maj-consensus by the Min protocol, in which on each round of subphase i each agent u pulls another agent v u.a.r. and updates her i-th bit of Q to the minimum of her current i-th bit of Q and that of v (a sketch appears after these notes). However, for simplicity's sake, we reuse the already introduced maj-consensus protocol.

  7. From Theorem 2.1, we have that after \(\gamma \log n\) rounds, with \(\gamma \) large enough, the probability that consensus has not been reached is smaller than \( \frac{1}{n^2}\). Thus, after \(N\cdot \gamma \log n\) rounds, the probability that consensus has not been reached is smaller than \(\frac{1}{n^{2N}}\). Choosing N such that \(N\log n = \log n + \log \log T\) gives the claimed upper bound \(\frac{1}{n^2 \log T}\); the arithmetic is spelled out after these notes.

  8. A similar protocol was suggested during discussions with Bernhard Haeupler.
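
The computations in Notes 5 and 7 and the Min rule of Note 6 can be made concrete as follows. These are informal sketches added for illustration: the constant c, the two-choices update rule (each agent adopts the majority among its own opinion and two opinions pulled u.a.r., in the spirit of [24]), and the Python names below are expository assumptions rather than the paper's notation.

For Note 5, suppose at most \(c\kappa \) agents (Byzantine or not) display the minority opinion. Under the two-choices rule, a majority holder switches only if both pulled opinions are minority, while a minority holder keeps its opinion only if at least one pulled opinion is minority, so

$$\begin{aligned} \mathbb {E}\left[ \#\left\{ \text {non-Byzantine minority holders at the next round}\right\} \right] \le n\left( \frac{c\kappa }{n}\right) ^{2}+c\kappa \cdot \frac{2c\kappa }{n}=\mathcal {O}\left( \frac{\kappa ^{2}}{n}\right) =\mathcal {O}\left( n^{-2\epsilon }\right) , \end{aligned}$$

and by Markov's inequality the probability that this number is at least 1 is \(\mathcal {O}(n^{-2\epsilon })\).

For Note 6, a minimal sketch of one round of the Min rule during subphase i (the list of per-agent bit vectors bits is an illustrative data layout):

```python
import random

def min_rule_round(bits, i):
    """One PULL round of the Min rule in subphase i: every agent u pulls one
    agent v uniformly at random and replaces its i-th bit of Q with the
    minimum of its own i-th bit and v's i-th bit."""
    n = len(bits)
    pulled = [bits[random.randrange(n)][i] for _ in range(n)]
    return [q[:i] + [min(q[i], p)] + q[i + 1:] for q, p in zip(bits, pulled)]
```

For Note 7, with \(N\log n = \log n + \log \log T\) the claimed bound follows from

$$\begin{aligned} \frac{1}{n^{2N}}=2^{-2N\log n}=2^{-2\left( \log n+\log \log T\right) }=\frac{1}{n^{2}\left( \log T\right) ^{2}}\le \frac{1}{n^{2}\log T}, \end{aligned}$$

using \(\log T\ge 1\) for the final inequality.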

References

  1. Afek, Y., Alon, N., Barad, O., Hornstein, E., Barkai, N., Bar-Joseph, Z.: A biological solution to a fundamental distributed computing problem. Science 331, 183–185 (2011)

  2. Alistarh, D., Gelashvili, R.: Polylogarithmic-time leader election in population protocols. In: ICALP, pp. 479–491 (2015)

  3. Angluin, D., Aspnes, J., Diamadi, Z., Fischer, M.J., Peralta, R.: Computation in networks of passively mobile finite-state sensors. Distrib. Comput. 18(4), 235–253 (2006)

  4. Angluin, D., Aspnes, J., Eisenstat, D.: A simple population protocol for fast robust approximate majority. Distrib. Comput. 21(2), 87–102 (2008)

  5. Angluin, D., Aspnes, J., Fischer, M.J., Jiang, H.: Self-stabilizing population protocols. TAAS 3(4), 13 (2008)

  6. Angluin, D., Fischer, M.J., Jiang, H.: Stabilizing Consensus in Mobile Networks, pp. 37–50. Springer, Berlin (2006)

  7. Aspnes, J., Ruppert, E.: An introduction to population protocols. Bull. EATCS 93, 98–117 (2007)

  8. Attiya, H., Herzberg, A., Rajsbaum, S.: Optimal clock synchronization under different delay assumptions. SIAM J. Comput. 25(2), 369–389 (1996)

  9. Beauquier, J., Burman, J., Kutten, S.: A self-stabilizing transformer for population protocols with covering. Theor. Comput. Sci. 412(33), 4247–4259 (2011)

  10. Becchetti, L., Clementi, A., Natale, E., Pasquale, F., Posta, G.: Self-stabilizing repeated balls-into-bins. In: SPAA, pp. 332–339 (2015)

  11. Becchetti, L., Clementi, A.E.F., Natale, E., Pasquale, F., Silvestri, R.: Plurality consensus in the gossip model. In: SODA, pp. 371–390 (2015)

  12. Becchetti, L., Clementi, A.E.F., Natale, E., Pasquale, F., Trevisan, L.: Stabilizing consensus with many opinions. In: SODA, pp. 620–635 (2016)

  13. Ben-Or, M., Dolev, D., Hoch, E.N.: Fast self-stabilizing byzantine tolerant digital clock synchronization. In: PODC, pp. 385–394 (2008)

  14. Boczkowski, L., Korman, A., Natale, E.: Brief announcement: self-stabilizing clock synchronization with 3-bit messages. In: PODC (2016)

  15. Boczkowski, L., Korman, A., Natale, E.: Minimizing message size in stochastic communication patterns: fast self-stabilizing protocols with 3 bits. In: SODA (2017)

  16. Censor-Hillel, K., Haeupler, B., Kelner, J.A., Maymounkov, P.: Global computation in a poorly connected world: fast rumor spreading with no dependence on conductance. In: STOC, pp. 961–970 (2012)

  17. Chen, H.-L., Cummings, R., Doty, D., Soloveichik, D.: Speed faults in computation by chemical reaction networks. In: Distributed Computing, pp. 16–30 (2014)

  18. Chierichetti, F., Lattanzi, S., Panconesi, A.: Rumor spreading in social networks. In: ICALP, pp. 375–386 (2009)

  19. Cooper, C., Elsässer, R., Radzik, T., Rivera, N., Shiraga, T.: Fast consensus for voting on general expander graphs. In: DISC, pp. 248–262. Springer, Berlin (2015)

  20. Couzin, I., Krause, J., Franks, N., Levin, S.: Effective leadership and decision making in animal groups on the move. Nature 433, 513–516 (2005)

  21. Demers, A.J., Greene, D.H., Hauser, C., Irish, W., Larson, J., Shenker, S., Sturgis, H.E., Swinehart, D.C., Terry, D.B.: Epidemic algorithms for replicated database maintenance. Oper. Syst. Rev. 22(1), 8–32 (1988)

  22. Dijkstra, E.W.: Self-stabilizing systems in spite of distributed control. Commun. ACM 17(11), 643–644 (1974)

  23. Doerr, B., Fouz, M.: Asymptotically optimal randomized rumor spreading. Electron. Notes Discrete Math. 38, 297–302 (2011)

  24. Doerr, B., Goldberg, L.A., Minder, L., Sauerwald, T., Scheideler, C.: Stabilizing consensus with the power of two choices. In: SPAA, pp. 149–158 (2011)

  25. Dolev, D., Hoch, E.N.: On self-stabilizing synchronous actions despite byzantine attacks. In: DISC, pp. 193–207 (2007)

  26. Dolev, D., Korhonen, J.H., Lenzen, C., Rybicki, J., Suomela, J.: Synchronous counting and computational algorithm design. In: SSS, pp. 237–250 (2013)

  27. Dolev, S.: Possible and impossible self-stabilizing digital clock synchronization in general graphs. Real-Time Syst. 12(1), 95–107 (1997)

  28. Dolev, S., Welch, J.L.: Self-stabilizing clock synchronization in the presence of byzantine faults. J. ACM 51(5), 780–799 (2004)

  29. Doty, D., Soloveichik, D.: Stable leader election in population protocols requires linear time. CoRR, abs/1502.04246 (2015)

  30. Elsässer, R., Friedetzky, T., Kaaser, D., Mallmann-Trenn, F., Trinker, H.: Efficient k-party voting with two choices. CoRR, abs/1602.04667 (2016)

  31. Elsässer, R., Sauerwald, T.: On the runtime and robustness of randomized broadcasting. Theor. Comput. Sci. 410(36), 3414–3427 (2009)

  32. Emek, Y., Wattenhofer, R.: Stone age distributed computing. In: PODC, pp. 137–146 (2013)

  33. Feinerman, O., Haeupler, B., Korman, A.: Breathe before speaking: efficient information dissemination despite noisy, limited and anonymous communication. In: PODC, pp. 114–123 (2014)

  34. Feinerman, O., Korman, A.: Clock synchronization and estimation in highly dynamic networks: an information theoretic approach. In: SIROCCO, pp. 16–30 (2015)

  35. Feinerman, O., Korman, A.: Individual versus collective cognition in social insects. Submitted to Journal of Experimental Biology (2016)

  36. Harkness, R., Maroudas, N.: Central place foraging by an ant (cataglyphis bicolor fab.): a model of searching. Anim. Behav. 33(3), 916–928 (1985)

  37. Herman, T.: Phase clocks for transient fault repair. IEEE Trans. Parallel Distrib. Syst. 11(10), 1048–1057 (2000)

  38. Karp, R.M., Schindelhauer, C., Shenker, S., Vöcking, B.: Randomized rumor spreading. In: FOCS, pp. 565–574 (2000)

  39. Kempe, D., Dobra, A., Gehrke, J.: Gossip-based computation of aggregate information. In: FOCS, pp. 482–491. IEEE (2003)

  40. Kravchik, A., Kutten, S.: Time optimal synchronous self stabilizing spanning tree. In: DISC, pp. 91–105 (2013)

  41. Lamport, L.: Time, clocks, and the ordering of events in a distributed system. Commun. ACM 21(7), 558–565 (1978)

  42. Lenzen, C., Locher, T., Sommer, P., Wattenhofer, R.: Clock synchronization: open problems in theory and practice. In: SOFSEM, pp. 61–70 (2010)

  43. Lenzen, C., Locher, T., Wattenhofer, R.: Tight bounds for clock synchronization. J. ACM 57(2), 8 (2010)

  44. Lenzen, C., Rybicki, J.: Efficient counting with optimal resilience. In: DISC, pp. 16–30 (2015)

  45. Lenzen, C., Rybicki, J., Suomela, J.: Towards optimal synchronous counting. In: PODC, pp. 441–450 (2015)

  46. McDiarmid, C.: Concentration, pp. 195–248. Springer, New York (1998)

  47. Razin, N., Eckmann, J., Feinerman, O.: Desert ants achieve reliable recruitment across noisy interactions. J. R. Soc. Interface 10(20170079) (2013)

  48. Roberts, G.: Why individual vigilance increases as group size increases. Anim. Behav. 51, 1077–1086 (1996)

  49. Sumpter, D.J., et al.: Consensus decision making by fish. Curr. Biol. 22(25), 1773–1777 (2008)

Download references

Acknowledgements

The problem of self-stabilizing Bit Dissemination was introduced through discussions with Ofer Feinerman. The authors are also thankful to Omer Angel, Bernhard Haeupler, Parag Chordia, Iordanis Kerenidis, Fabian Kuhn, Uri Feige, and Uri Zwick for helpful discussions regarding that problem. The authors also thank Michele Borassi for his helpful suggestions regarding the Clock Synchronization problem.

Author information

Corresponding author

Correspondence to Lucas Boczkowski.

Additional information

A preliminary version of this work appears as a 3-page Brief Announcement in PODC 2016 [14] and as an extended abstract at SODA 2017 [15].

This work has been partly done while E.N. was visiting the Simons Institute for the Theory of Computing.

This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No 648032).

Appendices

Technical tools

Theorem A.1

([46]) Let \(X_{1},\ldots ,X_{n}\) be n independent random variables. If \(X_{i}\le M\) for each i, then

$$\begin{aligned} \Pr \left( \sum _{i}X_{i}\ge \mathbb {E}\left[ \sum _{i}X_{i}\right] +\lambda \right) \le e^{ -\frac{\lambda ^{2}}{2\left( \sqrt{\sum _{i}\mathbb {E}\left[ X_{i}^{2}\right] }+\frac{M\lambda }{3}\right) }}. \end{aligned}$$
(A.1)

Corollary A.1

Let \(\mu = \mathbb {E}\left[ \sum _{i}X_{i}\right] \). If the \(X_{i}\)s are binary then, for \(\lambda =\sqrt{ \mu \log n}\) and sufficiently large n, (A.1) gives

$$\begin{aligned} \Pr \left( \sum _{i}X_{i}\ge \mu +\sqrt{\mu \log n}\right)&\le e^{-\sqrt{ \mu \log n}},\\ \Pr \left( \sum _{i}X_{i}\le \mu -\sqrt{ \mu \log n}\right)&\le e^{-\sqrt{ \mu \log n}}. \end{aligned}$$

Proof

The fact that the \(X_{i}\)s are binary implies that \(\sum _{i}\mathbb {E}\left[ X_{i}^{2}\right] \le \sum _{i}\mathbb {E}\left[ X_{i}\right] \).

By setting \(\lambda =\sqrt{\mathbb {E}\left[ \sum _{i}X_{i}\right] \log n}\), one can check that the r.h.s. of (A.1) is upper bounded by \(e^{-\sqrt{ \mu \log n}}\) for sufficiently large n, which gives the first inequality; the second follows by applying (A.1) to \(-X_{1},\ldots ,-X_{n}\). \(\square \)
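
To fill in the computation above, here is one way to bound the exponent (a sketch: take \(M=1\), which is valid since the \(X_{i}\)s are binary, and use \(\sum _{i}\mathbb {E}\left[ X_{i}^{2}\right] \le \mu \) together with \(\lambda =\sqrt{\mu \log n}\)):

$$\begin{aligned} \frac{\lambda ^{2}}{2\left( \sqrt{\sum _{i}\mathbb {E}\left[ X_{i}^{2}\right] }+\frac{M\lambda }{3}\right) } \ge \frac{\mu \log n}{2\left( \sqrt{\mu }+\frac{\sqrt{\mu \log n}}{3}\right) } = \frac{\sqrt{\mu \log n}}{\frac{2}{\sqrt{\log n}}+\frac{2}{3}} \ge \sqrt{\mu \log n}, \end{aligned}$$

where the last inequality holds as soon as \(\frac{2}{\sqrt{\log n}}\le \frac{1}{3}\), i.e., for sufficiently large n.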

Proof of Lemma B.1

Lemma B.1

Let \(f,\tau : \mathbb {R}_{+} \rightarrow \mathbb {R}\) be functions defined by \(f(x) = \lceil \log x \rceil +1\) and

$$\begin{aligned} \tau (x) = \inf \left\{ k \in \mathbb {N}\mid f^{\circledast k}(x) \le 3 \right\} , \end{aligned}$$

where we denote by \(f^{\circledast k}\) the k-fold iteration of f. It holds that

$$\begin{aligned} \tau (T) \le \log ^{\circledast 4 }T + \mathcal {O}(1). \end{aligned}$$

Proof

We first note that \( f(T) \le T -1 \) if T is bigger than some constant c. Moreover, once the current value is at most c, only \(\mathcal {O}(1)\) further iterations are needed to reach a value at most 3. This implies that \( \tau (T) \le T + \mathcal {O}(1). \) But in fact, by definition, \(\tau (T) = \tau \left( f^{\circledast 4}(T)\right) +4\) (provided \(f^{\circledast 4}(T) > 3\), which holds if T is big enough). Hence

$$\begin{aligned} \tau (T)&\le \tau \left( f^{\circledast 4}(T)\right) +4 \le f^{\circledast 4}(T) + \mathcal {O}(1)\\&\le \log ^{\circledast 4} T + \mathcal {O}(1). \end{aligned}$$

\(\square \)
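
As an illustration (not part of the paper), \(\tau \) can also be evaluated directly; for instance, the sketch below returns \(\tau (2^{64})=4\), consistent with the \(\log ^{\circledast 4}T+\mathcal {O}(1)\) bound.

```python
import math

def tau(T):
    """Count the iterations of f(x) = ceil(log2(x)) + 1, starting from T,
    needed to reach a value at most 3 (the quantity tau(T) above)."""
    k, x = 0, T
    while x > 3:
        x = math.ceil(math.log2(x)) + 1
        k += 1
    return k

print(tau(2 ** 64))  # -> 4
```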


About this article

Cite this article

Boczkowski, L., Korman, A. & Natale, E. Minimizing message size in stochastic communication patterns: fast self-stabilizing protocols with 3 bits. Distrib. Comput. 32, 173–191 (2019). https://doi.org/10.1007/s00446-018-0330-x
