Probabilistic consensus via polling and majority rules

Abstract

In this paper, we consider lightweight decentralised algorithms for achieving consensus in distributed systems. Each member of a distributed group has a private value from a fixed set consisting of, say, two elements, and the goal is for all members to reach consensus on the majority value. We explore variants of the voter model applied to this problem. In the voter model, each node polls a randomly chosen group member and adopts its value. The process is repeated until consensus is reached. We generalise this so that each member polls a (deterministic or random) number of other group members and changes opinion only if a suitably defined super-majority has a different opinion. We show that this modification greatly speeds up the convergence of the algorithm, as well as substantially reducing the probability of it reaching consensus on the incorrect value.
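For illustration only (this sketch is ours, not part of the article), the generalised polling rule described above is easy to state in code: in one asynchronous step, a node samples \(m\) group members uniformly at random (with replacement, matching the binomial transition rates used in the Appendix) and flips its value only if at least \(d\) of the sampled values disagree with its own. The classical voter model is the special case \(m=d=1\).

    import random

    def poll_update(values: list[int], i: int, m: int, d: int,
                    rng: random.Random) -> None:
        """One asynchronous step of the (m, d) rule: node i polls m members
        chosen uniformly at random (with replacement) and flips its value
        only if at least d of the sampled values disagree with its own."""
        sample = [values[rng.randrange(len(values))] for _ in range(m)]
        if sum(v != values[i] for v in sample) >= d:
            values[i] = 1 - values[i]

    # Example: 100 nodes, 40 of them initially in state 1, the (3, 2) rule.
    rng = random.Random(0)
    values = [1] * 40 + [0] * 60
    for _ in range(10_000):
        poll_update(values, rng.randrange(len(values)), 3, 2, rng)
    print(sum(values), "nodes in state 1 after 10,000 updates")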


References

  1. Abdullah, M.A., Draief, M.: Consensus on the initial global majority by local majority polling for a class of sparse graphs (2012). arXiv:1209.5025

  2. Benezit, F., Thiran, P., Vetterli, M.: Interval consensus: from quantized gossip to voting. In: Proceedings of IEEE ICASSP, pp. 3661–3664 (2009)

  3. Cooper, C., Elsasser, R., Ono, H., Radzik, T.: Coalescing random walks and voting on connected graphs (2013). arXiv:1204.4106

  4. Donnelly, P., Welsh, D.: Finite particle systems and infection models. Math. Proc. Camb. Phil. Soc. 94, 167–182 (1983)

  5. Doyle, P.G., Snell, J.L.: Random Walks and Electrical Networks. The Mathematical Association of America, Washington, DC (1984)

  6. Draief, M., Vojnovic, M.: Convergence speed of binary interval consensus. SIAM J. Control Optim. 50, 1087–1109 (2012)

  7. Hassin, Y., Peleg, D.: Distributed probabilistic polling and applications to proportionate agreement. In: Proceedings of the 26th International Colloquium on Automata, Languages and Programming, pp. 402–411 (1999)

  8. Hendrickx, J.M., Olshevsky, A., Tsitsiklis, J.N.: Distributed anonymous discrete function computation and averaging. IEEE Trans. Automat. Control 56(10), 2276–2289 (2011)

  9. Kanoria, Y., Montanari, A.: Majority dynamics on trees and the dynamic cavity method. Ann. Appl. Probab. 21(5), 1694–1748 (2011)

  10. Kashyap, A., Basar, T., Srikant, R.: Quantized consensus. Automatica 43(7), 1192–1203 (2007)

  11. Mosk-Aoyama, D., Shah, D.: Fast distributed algorithms for computing separable functions. IEEE Trans. Inf. Theory 54(7), 2997–3007 (2008)

  12. Mossel, E., Neeman, J., Tamuz, O.: Majority dynamics and aggregation of information in social networks. Auton. Agent. Multi-Agent Syst. 28(3), 408–429 (2014)

  13. Olshevsky, A., Tsitsiklis, J.: Convergence speed in distributed consensus and averaging. SIAM J. Control Optim. 48, 33–55 (2009)

  14. Perron, E., Vasudevan, D., Vojnovic, M.: Using three states for binary consensus on complete graphs. In: Proceedings of IEEE Infocom. IEEE Communications Society, New York (2009)

  15. Shang, S., Cuff, P., Hui, P., Kulkarni, S.: An upper bound on the convergence time for quantized consensus. In: Proceedings of IEEE Infocom. IEEE Communications Society, New York (2013)

  16. Surowiecki, J.: The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday, New York (2004)

Acknowledgments

The authors would like to thank the reviewers and the associate editor for their helpful and insightful comments, which have improved this paper.

Author information

Corresponding author

Correspondence to James Cruise.

Appendix: Proof of Theorems

Throughout the proofs we adopt the convention that any summation whose lower index is larger than its upper index is taken to be zero.

Proof of Theorem 1

Let \(X(t)\) denote the number of nodes in state 1 at time \(t\). As we are considering the \((m,m)\) algorithm on a complete graph with \(N\) nodes, the dynamics of \(X\) are governed by a continuous-time Markov chain on the state space \(\{ 0,1,\ldots , N \}\), with absorbing states at \(0\) and \(N\), and with transition rates given by

$$\begin{aligned} X \rightarrow {\left\{ \begin{array}{ll} X+1 &{} \text {with rate } (N-X) \left( \frac{X}{N}\right) ^m, \\ X-1 &{} \text {with rate } X \left( \frac{N-X}{N}\right) ^m. \end{array}\right. } \end{aligned}$$

We are interested in the probability of \(X\) hitting state \(N\) before state \(0\), and hence being absorbed in state \(N\).
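This hitting probability can be estimated directly by simulation. Since it depends only on the embedded jump chain, the exponential holding times may be ignored; the following minimal sketch (ours, for intuition, not part of the proof) estimates \(h_N(0.4)\) by running the jump chain to absorption:

    import random

    def hits_N_first(N: int, m: int, x0: int, rng: random.Random) -> bool:
        """Run the embedded jump chain of the (m, m) dynamics until
        absorption; return True if state N is reached before state 0."""
        X = x0
        while 0 < X < N:
            up = (N - X) * (X / N) ** m      # rate of X -> X + 1
            down = X * ((N - X) / N) ** m    # rate of X -> X - 1
            X += 1 if rng.random() < up / (up + down) else -1
        return X == N

    rng = random.Random(1)
    runs = 10_000
    est = sum(hits_N_first(100, 3, 40, rng) for _ in range(runs)) / runs
    print(f"estimated h_N(0.4) for N = 100, m = 3: {est:.4g}")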

If the initial system state is \(X(0)=i<N/2\), then hitting state \(N\) is equivalent to the system reaching consensus on the incorrect (minority) value. Recall from (1) that we defined

$$\begin{aligned} h_N(i/N)=\mathbb {P}(\exists \, t \text { such that } X(t) = N \mid X(0) =i), \end{aligned}$$

as the hitting probability of state \(N\) from an initial state with \(i\) nodes in state 1. As discussed in Sect. 3.1, we obtain by conditioning on the first jump that the hitting probabilities \(h_N(\frac{i}{N})\), \(i=0,1,\ldots ,N\), satisfy the recursion (4).

In order to solve this recursion, we make use of an analogy with electrical resistor networks (see Doyle and Snell [5]), as outlined in Sect. 3.1. Let \(R_i\) be the resistance between nodes \(i\) and \(i-1\) in a series network of \(N\) resistors, and let \(V_i\) denote the voltage at node \(i\), i.e. the node between resistors \(R_i\) and \(R_{i+1}\). Set \(V_0=0\) and \(V_N=1\). If we normalise \(R_1=1\), and take

$$\begin{aligned} R_{i\,+\,1} = \left( \frac{1-i/N}{i/N}\right) ^{m\,-\,1} R_i, \end{aligned}$$

then the hitting probabilities \(h_N(\frac{i}{N})\), \(i=1,\ldots ,N-1\) satisfy the same Eq. (3) as the voltages \(V_i\), \(i=1,\ldots ,N-1\) in the resistor network described above. We now solve for these voltages.

We obtain from the above recursion for the resistor values that

$$\begin{aligned} R_{i} = \prod _{j=1}^{i-1} \left( \frac{N-j}{j}\right) ^{m-1} = \binom{N-1}{i-1}^{m-1}, \end{aligned}$$

where we use the convention that an empty product is equal to 1. Now, by Ohm’s law, the current through the series resistor network is given by \(1/(R_1+R_2+\ldots +R_N)\). Consequently, the voltage at node \(i\) is \(V_i=\sum _{k=1}^i R_k/\sum _{k=1}^N R_k\). Since this is the same as the hitting probability \(h_N(i/N)\), we obtain that

$$\begin{aligned} h_N(i/N) =\frac{ \sum _{j=0}^{i-1} \binom{N-1}{j}^{m-1} }{ \sum _{j=0}^{N-1} \binom{N-1}{j}^{m-1} }. \end{aligned}$$
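As a numerical cross-check (ours, not part of the proof), this closed form can be evaluated exactly with big-integer arithmetic; for \(m=1\) all resistances are equal and the formula reduces to the classical voter-model hitting probability \(i/N\).

    from math import comb

    def h(N: int, m: int, i: int) -> float:
        """Exact h_N(i/N) via the series-resistor formula; Python's big
        integers keep the binomial powers exact, and the final ratio lies
        in [0, 1], so the division is safe."""
        w = [comb(N - 1, j) ** (m - 1) for j in range(N)]
        return sum(w[:i]) / sum(w)

    print(h(100, 1, 40))   # m = 1: the voter model, returns exactly 0.4
    print(h(100, 3, 40))   # m = 3: minority consensus is far less likely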

For \(\alpha \in [0,1]\) such that \(\alpha N\) is not an integer, we defined \(h_N(\alpha )\) in (2) by linear interpolation. Since \(h_N(i/N)\) is increasing in \(i\), it follows that

$$\begin{aligned} h_N(\alpha ) \le \frac{ \sum _{i=0}^{\lceil \alpha N \rceil -1} \binom{N-1}{i}^{m-1} }{ \sum _{i=0}^{N-1} \binom{N-1}{i}^{m-1} }. \end{aligned}$$
(5)

We shall obtain an upper bound on the numerator and a lower bound on the denominator in the expression above. First observe that, if \(i\le \alpha N\), then

$$\begin{aligned} \binom{N-1}{i-1} = \frac{i}{N-i} \binom{N-1}{i} \le \frac{\alpha }{1-\alpha } \binom{N-1}{i}, \end{aligned}$$

and so

$$\begin{aligned} \binom{N-1}{\lceil \alpha N \rceil -i} \le \Bigl ( \frac{\alpha }{1-\alpha } \Bigr )^{i-1} \binom{N-1}{\lceil \alpha N \rceil -1} \end{aligned}$$

for all \(i\ge 1\). It follows that

$$\begin{aligned} \sum _{i=0}^{\lceil \alpha N \rceil -1} \binom{N-1}{i}^{m-1} \le \binom{N-1}{\lceil \alpha N \rceil -1}^{m-1} \left( \sum _{i=0}^{\lceil \alpha N \rceil -1} \Bigl ( \frac{\alpha }{1-\alpha } \Bigr )^{(m-1)i}\right) . \end{aligned}$$

Now, if \(\alpha \in (0,1/2)\) as in the statement of the theorem, then

$$\begin{aligned} \sum _{i=0}^{\lceil \alpha N \rceil -1} \Bigl ( \frac{\alpha }{1-\alpha } \Bigr )^{(m-1)i} \le \sum _{i=0}^{\infty } \Bigl ( \frac{\alpha }{1-\alpha } \Bigr )^{(m-1)i} = c<\infty , \end{aligned}$$

for a constant \(c\) that does not depend on \(N\). Combining this with Stirling’s formula, we obtain that

$$\begin{aligned} \sum _{i=0}^{\lceil \alpha N \rceil -1} \binom{N-1}{i}^{m-1} \le \Bigl ( \frac{c}{\sqrt{2\pi \alpha (1-\alpha )N}} \Bigr )^{m-1} e^{(N-1)(m-1)H(\alpha )}, \end{aligned}$$
(6)

where we recall that \(H(\cdot )\) denotes the binary entropy function. Likewise, we obtain using Stirling’s formula that

$$\begin{aligned} \sum _{i=0}^{N-1} \binom{N-1}{i}^{m-1} \ge \binom{N-1}{\lfloor N/2 \rfloor }^{m-1} \ge \Bigl ( \frac{2}{\pi N} \Bigr )^{m-1} e^{(N-1)(m-1)H(1/2)}. \end{aligned}$$
(7)

Substituting (6) and (7) in (5), we obtain that

$$\begin{aligned} h_N(\alpha ) \le c\, e^{(N-1)(m-1)[H(\alpha )-H(1/2)]} = c\exp (-(N-1)(m-1)D(\alpha ;1/2)), \end{aligned}$$

where \(c>0\) is a constant that may depend on \(\alpha \) but does not depend on \(N\). This establishes the first claim of the theorem.

In order to establish the second claim about logarithmic equivalence, we need a lower bound on \(h_N(\alpha )\) that matches the above upper bound to within a term that is subexponential in \(N\). We obtain such a bound from

$$\begin{aligned} h_N(\alpha ) \ge \frac{ \sum _{i=0}^{\lfloor \alpha N \rfloor -1} \binom{N-1}{i}^{m-1} }{ \sum _{i=0}^{N-1} \binom{N-1}{i}^{m-1} } \end{aligned}$$

by obtaining a lower bound on the numerator and an upper bound on the denominator. First, an upper bound on the denominator is as follows:

$$\begin{aligned} \sum _{i=0}^{N-1} \binom{N-1}{i}^{m-1}&\le \left( \sum _{i=0}^{N-1} \binom{N-1}{i} \right) ^{m-1} \\&\le 2^{(N-1)(m-1)} = \exp ((N-1)(m-1)H(1/2)), \end{aligned}$$

since \(H(1/2)=\log 2\). Next, we can get a lower bound on the numerator by simply replacing the sum with its last term, corresponding to \(i=\lfloor \alpha N \rfloor -1\). By Stirling’s formula, this gives a lower bound which is identical to the upper bound in (6), up to a constant. Hence, we get

$$\begin{aligned} h_N(\alpha )&\ge c \Bigl ( \frac{1}{\sqrt{2\pi \alpha (1-\alpha )N}} \Bigr )^{m-1} e^{(N-1)(m-1)[H(\alpha )-H(1/2)]} \\&= cN^{-(m-1)/2} \exp (-(N-1)(m-1)D(\alpha ;1/2)). \end{aligned}$$

Since this agrees with the upper bound on \(h_N\) up to a factor polynomial in \(N\), the claimed logarithmic equivalence follows. This completes the proof of the theorem. \(\square \)
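The logarithmic equivalence can also be seen numerically. The sketch below (ours) compares the empirical decay rate \(-\log h_N(\alpha )/((N-1)(m-1))\) with \(D(\alpha ;1/2)=\alpha \log 2\alpha +(1-\alpha )\log 2(1-\alpha )\) for increasing \(N\):

    from math import comb, log

    def D(a: float) -> float:
        """Relative entropy D(a; 1/2) = a log 2a + (1 - a) log 2(1 - a)."""
        return a * log(2 * a) + (1 - a) * log(2 * (1 - a))

    def rate(N: int, m: int, alpha: float) -> float:
        """Empirical decay rate -log h_N(alpha) / ((N - 1)(m - 1))."""
        i = round(alpha * N)
        w = [comb(N - 1, j) ** (m - 1) for j in range(N)]
        return -log(sum(w[:i]) / sum(w)) / ((N - 1) * (m - 1))

    for N in (50, 100, 200, 400):
        print(N, round(rate(N, 3, 0.4), 5), "vs", round(D(0.4), 5))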

Proof of Theorem 2

We consider the \((m,d)\) algorithm. Let \(X(t)\) denote the number of nodes in state 1 at time \(t\), and let \(Z_x\) denote a Binomial\((m,x)\) random variable. Then \(X(t)\) evolves as a continuous-time Markov chain with transition rates

$$\begin{aligned} X \rightarrow {\left\{ \begin{array}{ll} X+1 &{} \text {with rate } (N-X)\, \mathbb {P}(Z_{X/N}\ge d), \\ X-1 &{} \text {with rate } X\, \mathbb {P}(Z_{X/N}\le m-d). \end{array}\right. } \end{aligned}$$
(8)

As before, we denote the hitting probability of state \(N\) given that we start in state \(i\) as

$$\begin{aligned} h_N(i/N) = \mathbb {P}(X(\infty ) =N | X(0)=i). \end{aligned}$$

We extend the definition of \(h_N\) to \([0,1]\) by linear interpolation, as in (2).

By conditioning on the first transition, we observe that the hitting probabilities satisfy the recurrence relation (4), with the transition probabilities \(p_{i,\,i+1}\) and \(p_{i,\,i-1}\) being proportional to the rates specified above. Hence, as in the proof of Theorem 1, we can solve the recurrence using an electrical network analogy. The hitting probabilities \(h_N(i/N)\) satisfy the same equations as the voltages \(V_i\) in a series resistor network with \(V_0=0\) and \(V_N=1\), where the resistances are related by

$$\begin{aligned} R_{i+1}=\frac{p_{i,\,i-1}}{p_{i,\,i+1}} R_i = \frac{ i\mathbb {P}(Z_{i/N} \le m-d) }{ (N-i)\mathbb {P}(Z_{i/N}\ge d) } R_i. \end{aligned}$$

Using the definition of the function \(g\) in the statement of Theorem 2, we can rewrite this as \(R_{i+1}= g(i/N) R_i\). Taking \(R_1=1\) without loss of generality, and solving for the voltages in this resistor network, we obtain

$$\begin{aligned} h_N(i/N) = \frac{ \sum _{j=1}^i R_j }{ \sum _{j=1}^N R_j } = \frac{ \sum _{j=1}^i \prod _{k=1}^{j-1} g\bigl ( \frac{k}{N} \bigr ) }{ \sum _{j=1}^N \prod _{k=1}^{j-1} g\bigl ( \frac{k}{N} \bigr ) }. \end{aligned}$$
(9)

As usual, we take empty sums to be 0 and empty products to be 1. In order to obtain an upper bound on \(h_N(i/N)\), we seek an upper bound on the numerator and a lower bound on the denominator. We shall bound the numerator from above by \(N\) times its largest summand, and the denominator from below by the single summand corresponding to \(j=\lfloor N/2 \rfloor +1\).
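Before bounding (9), we note that it can be evaluated exactly for moderate \(N\); the following sketch (ours, using exact rational arithmetic to avoid overflow in the products of \(g\)) is a useful numerical check on the bounds derived below.

    from fractions import Fraction
    from math import comb

    def g(m: int, d: int, x: Fraction) -> Fraction:
        """The function g of Theorem 2, in the form given in (10)."""
        p_up = sum(comb(m, k) * x**k * (1 - x)**(m - k)
                   for k in range(d, m + 1))          # P(Z_x >= d)
        p_dn = sum(comb(m, k) * (1 - x)**k * x**(m - k)
                   for k in range(d, m + 1))          # P(Z_{1-x} >= d)
        return x * p_dn / ((1 - x) * p_up)

    def h(N: int, m: int, d: int, i: int) -> float:
        """Exact h_N(i/N) for the (m, d) algorithm via formula (9)."""
        R = Fraction(1)            # R_1 = 1
        partial = [Fraction(0)]    # partial[j] = R_1 + ... + R_j
        for k in range(1, N + 1):
            partial.append(partial[-1] + R)
            if k < N:
                R *= g(m, d, Fraction(k, N))    # R_{k+1} = g(k/N) R_k
        return float(partial[i] / partial[N])

    print(h(100, 3, 2, 40))   # the (3, 2) algorithm started from X(0) = 40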

We assumed in the statement of Theorem 2 that \(m\ge 2\) and \(d>m/2\). Now,

$$\begin{aligned} g(x) = \frac{ x\,\mathbb {P}(Z_{1-x} \ge d) }{ (1-x)\,\mathbb {P}(Z_{x}\ge d) } = \frac{ \sum _{k=d}^m \binom{m}{k} (1-x)^k x^{m-k+1} }{ \sum _{k=d}^m \binom{m}{k} x^k (1-x)^{m-k+1} }. \end{aligned}$$
(10)

Comparing the two sums term by term, the ratio of the summands with index \(k\) is \(\bigl (\frac{x}{1-x}\bigr )^{m-2k+1}\). This is at least 1 for all \(x\le 1/2\), because the exponent \(m-2k+1\) is negative or zero for all \(k\ge d>m/2\). Therefore, we conclude that \(g(x)\ge 1\) for all \(x\le 1/2\). Using this fact, we see from (9) that the summands in the numerator are non-decreasing in \(j\), and hence that the largest summand corresponds to \(j=i\). Noting, also, that the function \(h_N\) is non-decreasing, we obtain that

$$\begin{aligned} h_N(\alpha ) \le \frac{ N\prod _{k=1}^{\lfloor \alpha N\rfloor -1} g\bigl ( \frac{k}{N} \bigr ) }{ \prod _{k=1}^{\lfloor N/2 \rfloor } g\bigl ( \frac{k}{N} \bigr ) } = \frac{N}{\prod _{k=\lfloor \alpha N\rfloor }^{\lfloor N/2 \rfloor } g(k/N) }. \end{aligned}$$

Taking logarithms and letting \(N\) tend to infinity, we get

$$\begin{aligned} \limsup _{N\rightarrow \infty } \frac{1}{N}\log h_N(\alpha ) \le \limsup _{N\rightarrow \infty } \frac{-1}{N}\sum _{k=\lfloor \alpha N\rfloor }^{\lfloor N/2 \rfloor } \log g(k/N). \end{aligned}$$

In order to show that the above sum converges to the Riemann integral \(\int _{\alpha }^{1/2} \log g(x) \, dx\), as claimed in the theorem, it suffices to show that the function \(\log g\) is continuous on this compact interval. Now, from (10), the function \(g\) is a ratio of polynomials, both bounded away from zero on the interval \([\alpha ,1/2]\), and the claim follows. \(\square \)

Proof of Theorem 3

We consider the \((m,d)\) algorithm. Let \(X(t)\) denote the number of nodes in state 1 at time \(t\), and let \(Z_x\) denote a Binomial\((m,x)\) random variable. Then \(X(t)\) evolves as a continuous-time Markov chain with transition rates given by (8). We shall bound the time to consensus by introducing a simpler Markov chain whose associated hitting times stochastically dominate those of the consensus process. This new chain can be viewed as a simple random walk with a negative drift and a reflecting upper boundary.

Fix \(\epsilon \in (0,1)\). We shall define a birth–death Markov chain \(Y\) on the integers \(\{ 0,1, \ldots , N\}\) by specifying the transition probabilities of the jump chain and the holding times in each state. The definitions involve parameters \(\beta \in (0,1)\) and \(c_1,c_2 >0\) that will be specified later. The transition probabilities of the jump chain are given by

$$\begin{aligned} p_{i,\,i-1} = 1-p_{i,\,i+1} = {\left\{ \begin{array}{ll} \beta , &{} 1\le i < \frac{(1\,-\,\epsilon )N}{2}, \\ \frac{1}{2}, &{} \frac{(1\,-\, \epsilon )N}{2} \le i \le \frac{(1\,+ \,\epsilon )N}{2}, \\ 1- \beta , &{} \frac{(1\,+\, \epsilon )N}{2} < i \le N\,-\,1, \\ \end{array}\right. } \end{aligned}$$
(11)

while \(p_{0,\,0}=p_{N,\,N}=1\). The holding times are exponentially distributed, with rates \(c_1 i\) in state \(i<(1-\epsilon )N/2\), \(c_1 (N-i)\) for \(i>(1+\epsilon )N/2\), and \(c_2 N\) for \((1-\epsilon )N/2 \le i \le (1+\epsilon )N/2\). Note that the jump probabilities and holding times are symmetric about \(N/2\), and that \(0\) and \(N\) are absorbing states. The jump chain behaves as a symmetric random walk in the central section, and as a random walk with drift towards the boundaries in the outer sections.

We shall show that, for a suitable choice of \(\beta \), \(c_1\) and \(c_2\), the time to absorption of the consensus process \(X(t)\) is stochastically dominated by that of \(Y(t)\). By explicitly bounding the latter, we shall obtain a bound on the time to reach consensus. Recall that \(Z_x\) denotes a Binomial\((m,x)\) random variable. We take

$$\begin{aligned} c_1=\mathbb {P}(Z_{\frac{1\,-\,\epsilon }{2}} \le m-d), \; c_2 = \mathbb {P}(Z_{\frac{1\,-\,\epsilon }{2}} \ge d), \; \beta = \frac{ (1\,-\,\epsilon ) c_2 }{ (1\,-\,\epsilon )c_2 + (1\,+\,\epsilon )c_1 }. \end{aligned}$$
(12)
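For concreteness, the chain \(Y\) is summarised in code below (an illustrative transcription of (11) and (12), ours); it uses the identity \(\mathbb {P}(Z_{(1-\epsilon )/2} \le m-d) = \mathbb {P}(Z_{(1+\epsilon )/2} \ge d)\) to express both constants through the binomial tail.

    from math import comb

    def tail(m: int, x: float, d: int) -> float:
        """P(Binomial(m, x) >= d)."""
        return sum(comb(m, k) * x**k * (1 - x)**(m - k)
                   for k in range(d, m + 1))

    def dominating_chain(N: int, m: int, d: int, eps: float):
        """Transcription of (11)-(12): beta, the down-jump probability
        p(i) = P(i -> i - 1), and the total jump rate in each state i."""
        c1 = tail(m, (1 + eps) / 2, d)   # = P(Z_{(1-eps)/2} <= m - d)
        c2 = tail(m, (1 - eps) / 2, d)   # = P(Z_{(1-eps)/2} >= d)
        beta = (1 - eps) * c2 / ((1 - eps) * c2 + (1 + eps) * c1)
        lo, hi = (1 - eps) * N / 2, (1 + eps) * N / 2

        def p_down(i: int) -> float:
            return beta if i < lo else (0.5 if i <= hi else 1 - beta)

        def jump_rate(i: int) -> float:
            return c1 * i if i < lo else (c2 * N if i <= hi else c1 * (N - i))

        return beta, p_down, jump_rate

    beta, p_down, jump_rate = dominating_chain(100, 3, 2, 0.2)
    print(beta, p_down(10), jump_rate(10))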

Define \(\tilde{X}(t) = \min \{ X(t), N-X(t) \}\), and \(\tilde{Y}(t) = \min \{ Y(t), N-Y(t) \}\). Then \(\tilde{X}\) and \(\tilde{Y}\) are Markov processes on \(\{ 0,1,\ldots ,\lfloor N/2 \rfloor \}\), with the same jump rates and holding times as \(X\) and \(Y\), respectively, except at state \(\lfloor N/2 \rfloor \) where there is no upward jump, and where the holding time is suitably modified to account for the censored jumps. Moreover, the modified processes \(\tilde{X}\) and \(\tilde{Y}\) have a unique absorbing state at 0, and the same time to absorption as \(X\) and \(Y\), respectively.

The claim of the theorem will be immediate from the following two lemmas. In fact, they establish that \(\tau ^0_N(x)\) is no bigger than \(\tilde{\tau }^0_N(x)\) (defined below), and that the latter is \(O(\log N)\). To show that \(\tau ^0_N(x)\) is in fact \(\Theta (\log N)\), observe that for consensus to be reached, either every node in the minority state has to have updated its opinion at least once, or every node in the majority state needs to have done so. As each node updates its opinion after independent Exp(1) times, this involves the maximum of \(O(N)\) independent Exp(1) random variables, which has mean of order \(\log N\). \(\square \)

Lemma 1

For given initial conditions \(\tilde{X}(0) \le \tilde{Y}(0)\), the stochastic processes \(\tilde{X}(t)\) and \(\tilde{Y}(t)\) can be coupled in such a way that \(\tilde{X}(t) \le \tilde{Y}(t)\) for all \(t\ge 0\). In particular, the process \(\tilde{X}\) first falls below level \(\lfloor \alpha N \rfloor \), and likewise hits the absorbing state 0, no later than \(\tilde{Y}\) does the same.

Lemma 2

Let \(x\in (0,1/2)\) and \(\alpha \in (0,x)\) be given. Fix \(\epsilon \in (0,1-2x)\), and let the processes \(Y\) and \(\tilde{Y}\) be defined as above. Let

$$\begin{aligned} \tilde{\tau }^{\alpha }_N(x)&= \mathbb {E}_{\lfloor xN \rfloor } (\inf \{t>0: \tilde{Y}(t) \le \alpha N\}), \\ \tilde{\tau }^0_N(x)&= \mathbb {E}_{\lfloor xN \rfloor } (\inf \{t>0: \tilde{Y}(t) =0\}), \end{aligned}$$

where the subscript on the expectation denotes the initial state \(\tilde{Y}(0)\). Then, \(\tilde{\tau }^{\alpha }_N(x)=O(1)\) and \(\tilde{\tau }^0_N(x)=O(\log N)\).

Proof of Lemma 1

Let \(i< N/2\). The jump probability from \(i\) to \(i-1\) for the Markov process \(\tilde{X}(t)\) is the same as for \(X(t)\) and is given by

$$\begin{aligned} \frac{ i\mathbb {P}(Z_{i/N} \le m-d) }{ i\mathbb {P}(Z_{i/N} \le m-d) + (N-i)\mathbb {P}(Z_{i/N} \ge d)}. \end{aligned}$$

If \(i<(1-\epsilon )N/2\), then this quantity is no smaller than the corresponding jump probability \(\beta \) for \(\tilde{Y}(t)\), defined in (12); this is because \(\mathbb {P}(Z_{i/N} \le m-d)\) is decreasing in \(i\), while \(\mathbb {P}(Z_{i/N} \ge d)\) is increasing. Also, if \(d\) is bigger than \(m/2\), as we assume, then, for all \(i\) between \((1-\epsilon )N/2\) and \(N/2\), the jump probability of \(X\) from \(i\) to \(i-1\) is no smaller than a half, since

$$\begin{aligned} \mathbb {P}(Z_{i/N} \le m-d) \ge \mathbb {P}(Z_{(N-i)/N} \le m-d) = \mathbb {P}(Z_{i/N} \ge d). \end{aligned}$$
(13)

Next, we compare the holding times in different states for the two processes. The rate of moving out of state \(i\) for the \(\tilde{X}\) process is

$$\begin{aligned} i\mathbb {P}(Z_{i/N}\le m-d) + (N-i)\mathbb {P}(Z_{i/N} \ge d), \end{aligned}$$

whereas for the \(\tilde{Y}\) process it is

$$\begin{aligned} {\left\{ \begin{array}{ll} i\,\mathbb {P}(Z_{(1-\epsilon )/2} \le m-d), &{} i<(1-\epsilon )N/2, \\ N\,\mathbb {P}(Z_{(1-\epsilon )/2} \ge d), &{} (1-\epsilon )N/2\le i <\lfloor N/2 \rfloor . \end{array}\right. } \end{aligned}$$

As \(\mathbb {P}(Z_{i/N}\le m-d)\) is decreasing in \(i\), it is clear that this rate is greater for the \(\tilde{X}\) process if \(i<(1-\epsilon )N/2\). It can be seen using (13) that it is also greater for \((1-\epsilon )N/2 \le i < \lfloor N/2 \rfloor \).

Thus, we have shown that the jump probability from \(i\) to \(i-1\) for \(\tilde{X}\) is no smaller than that for \(\tilde{Y}\), and that the holding time in each state is no greater (in the standard stochastic order). The existence of the claimed coupling follows from these facts. Indeed, such a coupling can be constructed by letting the processes evolve independently when \(\tilde{X}(t) \ne \tilde{Y}(t)\) but, when they are equal, sampling the residual holding times and the subsequent jumps jointly so as to respect the desired ordering. \(\square \)

Proof of Lemma 2

Define \(\hat{Y}(t) = \min \{ \tilde{Y}(t), \lceil \frac{(1-\epsilon )N}{2} \rceil \}\). Then, \(\hat{Y}(t)\) is a semi-Markov process. The associated jump chain is Markovian, with the same transition probabilities as the \(\tilde{Y}(t)\) process, except that the only possible transition from \(\lceil \frac{(1-\epsilon )N}{2} \rceil \) is to \(\lceil \frac{(1-\epsilon )N}{2} \rceil -1\). The holding times in all states other than \(\lceil \frac{(1-\epsilon )N}{2} \rceil \) are exponential with the same rates as in the process \(Y(t)\), but the holding times in \(\lceil \frac{(1-\epsilon )N}{2} \rceil \) have the distribution of the exit time of the process \(Y(t)\) from the interval \(\{ \lceil \frac{(1-\epsilon )N}{2} \rceil , \ldots , \lceil \frac{(1+\epsilon )N}{2} \rceil \}\) on each visit.

Let \(t_j\) denote the mean time spent by the process \(\hat{Y}(t)\) in state \(j\) during each visit. Then \(t_j = 1/(jc_1)\) for \(j<\lceil \frac{(1-\epsilon )N}{2} \rceil \). In order to compute \(t_{\lceil (1-\epsilon )N/2 \rceil }\), we first note that, by well-known results for the symmetric random walk, the mean number of steps for \(Y(t)\) to exit the interval \(\{ \lceil \frac{(1-\epsilon )N}{2} \rceil , \ldots , \lceil \frac{(1+\epsilon )N}{2} \rceil \}\) after entering it at the boundary is \(\lceil \epsilon N \rceil \). Moreover, the mean holding time in each state in this interval is \(1/(Nc_2)\) by definition. Hence, the mean exit time for \(Y(t)\) from this interval, which is also the mean holding time for \(\hat{Y}(t)\) in \(\lceil \frac{(1-\epsilon )N}{2} \rceil \), is \(\epsilon /c_2\). This is a constant that does not depend on \(N\).

Now, \(\tilde{\tau }_N^0(x)\) is the mean time for \(\tilde{Y}\), and hence \(\hat{Y}\), to hit 0. By decomposing this into the number of visits to each intermediate state, and the expected time in each state during each visit, we can write

$$\begin{aligned} \tilde{\tau }_N^0(x) = \sum _{j\,=\,1}^{\lceil (1\,-\,\epsilon )N/2 \rceil } f_{\lfloor xN \rfloor , j} n_j t_j, \end{aligned}$$
(14)

where \(f_{ij}\) denotes the probability that \(\hat{Y}\), started in state \(i\), hits state \(j\) before 0, and \(n_j\) denotes the mean number of visits to state \(j\) conditional on ever visiting it. If we let \(f_{jj}\) denote the return probability to state \(j\) before hitting 0, then the number of visits to \(j\) is geometrically distributed with parameter \(f_{jj}\) by the Markov property, and so \(n_j = 1/(1-f_{jj})\). Using well-known results for the gambler’s ruin problem, we have

$$\begin{aligned} f_{ij}={\left\{ \begin{array}{ll} \frac{ (\frac{\beta }{1-\beta })^i-1 }{ (\frac{\beta }{1-\beta })^j-1 }, &{} i<j \\ 1, &{} i>j, \end{array}\right. } \end{aligned}$$

where we recall that \(\beta >1/2\) is the transition probability from \(i\) to \(i-1\) for the jump chain associated with \(\hat{Y}(t)\), which is a biased random walk with reflection at \(\lceil (1-\epsilon ) N/2 \rceil \). By conditioning on the first step, we also have

$$\begin{aligned} f_{jj}={\left\{ \begin{array}{ll} 1-\beta +\beta f_{j-1,\,j}, &{} j\ne \lceil (1\,-\,\epsilon ) N/2 \rceil , \\ f_{j-1,\,j}, &{} j = \lceil (1\,-\,\epsilon ) N/2 \rceil . \end{array}\right. } \end{aligned}$$

Solving for \(f_{jj}\) and substituting in \(n_j=1/(1-f_{jj})\), we obtain after some tedious calculations that

$$\begin{aligned} n_j = \frac{1-(\frac{1-\beta }{\beta })^j}{2\beta -1} \le \frac{1}{2\beta -1}. \end{aligned}$$

But \(\beta \) is a constant that does not depend on \(N\), and is strictly bigger than a half under the assumption that \(d>m/2\) made in the statement of the theorem. Hence, we see that the expected number of returns to any state is bounded uniformly by a finite constant.
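These calculations are easy to verify numerically; the snippet below (ours, with an arbitrary illustrative value \(\beta =0.7\)) rebuilds \(n_j\) from the first-step recursion and compares it with the closed form above.

    # Check of the closed form for n_j away from the reflecting boundary:
    # build f_{j-1,j} from the gambler's-ruin formula, form f_{jj} by
    # first-step analysis, and compare 1/(1 - f_{jj}) with the closed form.
    beta = 0.7                      # any beta > 1/2 will do
    r = beta / (1 - beta)
    for j in range(1, 6):
        f_prev = (r ** (j - 1) - 1) / (r ** j - 1)        # f_{j-1, j}
        n_direct = 1 / (1 - (1 - beta + beta * f_prev))
        n_closed = (1 - (1 / r) ** j) / (2 * beta - 1)
        print(j, round(n_direct, 12), round(n_closed, 12))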

Now, substituting the results obtained above for \(n_j\) and \(t_j\) in (14), and noting that \(f_{\lfloor xN \rfloor ,j} \le 1\) for all \(j\) as it is a probability, we obtain that

$$\begin{aligned} \tilde{\tau }^0_N(x) \le \frac{1}{2\beta -1} \Bigl ( \sum _{j\,=\,1}^{\lceil (1\,-\,\epsilon )N/2 \rceil -1} \frac{1}{jc_1} + \frac{\epsilon }{c_2} \Bigr ) = O(\log N). \end{aligned}$$

In fact, a slightly more careful analysis shows that the time to absorption is \(O(\log \tilde{Y}(0))\), which is a tighter bound if the initial condition grows more slowly than a constant fraction of \(N\), i.e. if the population is already close to consensus to start with.

The derivation of the upper bound on \(\tilde{\tau }^{\alpha }_N(x)\) is very similar, except that the analogue of (14) has the sum running only over \(j\ge \alpha N\). Moreover, the terms in the sum are now hitting probabilities and numbers of returns before crossing the level \(\alpha N\), which are bounded by the corresponding quantities before hitting 0. The details are omitted. \(\square \)

Proof of Theorem 4

Let \(X(t)\) be the number of nodes in state 1 at time \(t\). The dynamics of \(X\) are governed by a continuous-time Markov chain. Before stating the transition rates, we define the two functions \(p_1\) and \(p_2\) by

$$\begin{aligned} p_1(m,d,x)=\mathbb {P}( Z_{m,\,x}\le (m-d)) \end{aligned}$$

and

$$\begin{aligned} p_2(m,d,x) =\mathbb {P}( Z_{m,\,x}\ge d) \end{aligned}$$

where \(Z_{m,x}\) denotes a random variable with the Bin\((m,x)\) distribution as before, but we now make the dependence on \(m\) explicit in the notation. Then the transition rates of the Markov chain are

$$\begin{aligned} X \rightarrow {\left\{ \begin{array}{ll} X+1 &{} \text {with rate } (N-X)\, \mathbb {E}_{(M,D)}\bigl (p_2(M,D,X/N)\bigr ), \\ X-1 &{} \text {with rate } X\, \mathbb {E}_{(M,D)}\bigl (p_1(M,D,X/N)\bigr ). \end{array}\right. } \end{aligned}$$

where \(\mathbb {E}_{(M,D)}\) denotes expectation with respect to the random choice of \((M,D)\), i.e. of the algorithm to be used. Finding the hitting probability and the times to consensus follows exactly the same steps as in the deterministic case, with minor changes, and we therefore omit the details. \(\square \)
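For a concrete instance (ours, with an arbitrary illustrative mixture), the randomised transition rates take the following form; note that \(\mathbb {E}_{(M,D)}(p_1(M,D,x)) = \mathbb {E}_{(M,D)}(\mathbb {P}(Z_{M,1-x}\ge D))\) by the symmetry of the binomial distribution.

    from math import comb

    def tail(m: int, x: float, d: int) -> float:
        """P(Binomial(m, x) >= d)."""
        return sum(comb(m, k) * x**k * (1 - x)**(m - k)
                   for k in range(d, m + 1))

    # Hypothetical mixture: use (m, d) = (3, 2) or (5, 3), each w.p. 1/2.
    mixture = [((3, 2), 0.5), ((5, 3), 0.5)]

    def rates(N: int, X: int) -> tuple:
        """Up- and down-transition rates of X under the random (M, D) rule."""
        x = X / N
        up = (N - X) * sum(w * tail(m, x, d) for (m, d), w in mixture)
        down = X * sum(w * tail(m, 1 - x, d) for (m, d), w in mixture)
        return up, down

    print(rates(100, 40))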


Cite this article

Cruise, J., Ganesh, A. Probabilistic consensus via polling and majority rules. Queueing Syst 78, 99–120 (2014). https://doi.org/10.1007/s11134-014-9397-7
